
Internet

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Thu, 13 Dec 2012 05:08:40 UTC

Contents
Articles
Overview
Internet
History of the Internet
World Wide Web
History of the World Wide Web

Precursors and early development


Intergalactic Computer Network
ARPANET
CSNET
ENQUIRE
IPSS
MILNET
NSFNET
TELENET
UUCP
USENET
X.25

Today's Internet
"Internet" or "internet"?
Internet Protocol Suite
Internet access
Broadband Internet access
Languages used on the Internet
List of countries by number of Internet subscriptions
List of countries by number of broadband Internet subscriptions
Internet governance

Common uses
Timeline of popular Internet services
Email
Web content
File sharing

Search
Blogging
Microblogging
Social networking
Remote access
Collaborative software
Internet phone
Internet radio
Internet television


Social impact
Sociology of the Internet
Internet censorship
Internet censorship circumvention
Internet censorship by country

Organizations
Internet Corporation for Assigned Names and Numbers
Internet Society
Internet Architecture Board
Internet Engineering Task Force
Internet Governance Forum

People
Internet pioneers

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Overview
Internet
The Internet (or internet) is a global system of interconnected computer networks that use the standard Internet protocol suite (often called TCP/IP, although not all applications use TCP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support email.

Most traditional communications media, including telephone, music, film, and television, are being reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV). Newspaper, book and other print publishing are adapting to Web site technology, or are being reshaped into blogging and web feeds. The Internet has enabled and accelerated new forms of human interaction through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and for small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet reach back to research of the 1960s, commissioned by the United States government to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
As of June 2012, more than 2.4 billion people, nearly a third of the world's human population, have used the services of the Internet.[1] The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants with which anyone may associate by contributing technical expertise.
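The two principal name spaces meet whenever a hostname is looked up: the Domain Name System maps names into the IP address space. A minimal sketch using Python's standard library (the system resolver does the actual DNS work, so results for public names depend on local configuration):

```python
import socket

def resolve(hostname):
    """Return the IP addresses a name maps to, via the system resolver."""
    # getaddrinfo consults the Domain Name System (or local configuration
    # such as a hosts file) and yields one tuple per (address family,
    # socket type) combination; the address is in the sockaddr field.
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

# resolve("localhost") typically includes "127.0.0.1" (and "::1" where
# IPv6 is configured); public names resolve to addresses chosen by DNS.
```

The same call underlies nearly every application-level use of a hostname, from web browsers to email clients.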

Terminology
Internet is a short form of the technical term internetwork,[2] the result of interconnecting computer networks with special gateways or routers. Historically the word has been used, uncapitalized, as a verb and adjective since 1883 to refer to interconnected motions. It was also used, uncapitalized, from 1974, before the modern Internet, as a verb meaning to connect together, especially for networks.[3] The Internet is also often referred to as the Net.

The Internet, referring to the specific entire global system of IP networks, is a proper noun and written with an initial capital letter. In the media and common use it is often not capitalized: "the internet". Some guides specify that the word should be capitalized as a noun but not capitalized as an adjective.[4]

The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is common to speak of going on the Internet when invoking a browser to view Web pages. However, the Internet is a particular global computer network connecting millions of computing devices; the World Wide Web is just one of many services running on the Internet. The Web is a collection of interconnected documents (Web pages) and other resources, linked by hyperlinks and URLs.[5] In addition to the Web, a multitude of other services are implemented over the Internet, including e-mail, file transfer, remote computer control, newsgroups, and online games. Web (and other) services can be implemented on any intranet, accessible to network users.

History
Research into packet switching started in the early 1960s, and packet switched networks such as the Mark I at NPL in the UK,[6] ARPANET, CYCLADES,[7][8] Merit Network,[9] Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks; the ground-breaking work of British scientist Donald Davies on packet switching was essential to the system.[10]

The first two nodes of what would become the ARPANET were interconnected between Leonard Kleinrock's Network Measurement Center at UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969.[11] The third site on the ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, fifteen sites were already connected to the young ARPANET by the end of 1971.[12][13] These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

Professor Leonard Kleinrock with the first ARPANET Interface Message Processors at UCLA

Early international collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks.[14] Notable exceptions were the Norwegian Seismic Array (NORSAR) in June 1973,[15] followed in 1973 by Sweden with satellite links to the Tanum Earth Station, and Peter T. Kirstein's research group in the UK, initially at the Institute of Computer Science, University of London and later at University College London.
In December 1974, RFC 675 Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today.[16] Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized and the concept of a world-wide network of fully interconnected TCP/IP networks called the Internet was introduced.

T3 NSFNET Backbone, c. 1992

TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNET) provided access to supercomputer sites in the United States from research and education organizations, first at 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[17] Commercial internet service providers (ISPs) began to emerge in the late 1980s and early 1990s. The ARPANET was decommissioned in 1990. The Internet was commercialized in 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic.[18] The Internet started a rapid expansion to Europe and Australia in the mid to late 1980s[19][20] and to Asia in the late 1980s and early 1990s.[21]


Since the mid-1990s the Internet has had a tremendous impact on culture and commerce, including the rise of near instant communication by email, instant messaging, Voice over Internet Protocol (VoIP) "phone calls", two-way interactive video calls, and the World Wide Web[22] with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking.[23]

During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[24] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[25]

As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30.2% of world population).[26] It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication; by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.[27]

This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.

Technology
Protocols
The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[28] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other, less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.

As the user data is processed down through the protocol stack, each layer adds an encapsulation at the sending host. Data is transmitted "over the wire" at the link level, left to right. The encapsulation procedure is reversed by the receiving host. Intermediate relays remove and add a new link encapsulation for retransmission, and inspect the IP layer for routing purposes.

The Internet standards describe a framework known as the Internet protocol suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the application layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the transport layer connects applications on different hosts via the network (e.g., client-server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The internet layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Last, at the bottom of the architecture, is a software layer, the link layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection.

The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or implementation; many similarities exist, and the TCP/IP protocols are usually included in the discussion of OSI networking.

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and in essence establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to approximately 4.3 billion (10^9) Internet hosts.
However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011,[29] when the global address allocation pool was exhausted. A new protocol version, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.[30] IPv6 is not interoperable with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for networking devices that need to communicate on both networks. Most modern computer operating systems already support both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
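The layering described above can be made concrete with a toy sketch: each layer prepends its own header to the data handed down from the layer above, and the receiving host strips them in reverse order. The header strings, ports, and addresses below are simplified stand-ins for illustration, not real wire formats:

```python
# Toy TCP/IP-style encapsulation: each layer wraps the payload from the
# layer above with its own (here, textual) header.

def transport_layer(segment: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer (e.g., TCP): connects applications via port numbers.
    return f"TCP {src_port}->{dst_port}|".encode() + segment

def internet_layer(packet: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Internet layer: IP addresses identify and locate hosts globally.
    return f"IP {src_ip}->{dst_ip}|".encode() + packet

def link_layer(frame: bytes, src_mac: str, dst_mac: str) -> bytes:
    # Link layer: connectivity between hosts on the same local link.
    return f"ETH {src_mac}->{dst_mac}|".encode() + frame

data = b"GET / HTTP/1.1"                         # application-layer data
data = transport_layer(data, 49152, 80)
data = internet_layer(data, "192.0.2.1", "198.51.100.7")
data = link_layer(data, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
# Intermediate routers replace only the outermost (link) header while
# inspecting the IP header for routing purposes.
```

The same layering explains the IPv4/IPv6 split: IPv4's 32-bit address field allows 2^32 (about 4.3 billion) addresses, while IPv6's 128-bit field allows 2^128, which is why the two are not directly interoperable.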

Routing
Internet Service Providers connect customers (thought of as at the "bottom" of the routing hierarchy) to customers of other ISPs. At the "top" of the routing hierarchy are ten or so Tier 1 networks, large telecommunication companies that exchange traffic directly "across" to all other Tier 1 networks via unpaid peering agreements. Tier 2 networks buy Internet transit from other ISPs to reach at least some parties on the global Internet, though they may also engage in unpaid peering (especially for local partners of a similar size). ISPs can use a single "upstream" provider for connectivity, or use multihoming to provide protection from problems with individual links. Internet exchange points create physical connections between multiple ISPs, often hosted in buildings owned by independent third parties.

Internet packet routing is accomplished among various tiers of Internet Service Providers.

Computers and routers use routing tables to direct IP packets among locally connected machines. Tables can be constructed manually, or automatically via DHCP for an individual computer or a routing protocol for routers themselves. In single-homed situations, a default route usually points "up" toward an ISP providing transit. Higher-level ISPs use the Border Gateway Protocol to sort out paths to any given range of IP addresses across the complex connections of the global Internet.

Academic institutions, large companies, governments, and other organizations can perform the same role as ISPs, engaging in peering and purchasing transit on behalf of their internal networks of individual computers. Research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. These in turn are built around smaller networks (see the list of academic computer network organizations). Not all computer networks are connected to the Internet. For example, some classified United States websites are only accessible from separate secure networks.
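The routing-table lookups described above use longest-prefix matching: among all routes whose prefix contains the destination address, the most specific one wins. A small sketch with Python's ipaddress module, where the table entries and next-hop names are invented for illustration (real routers learn such entries via protocols like BGP):

```python
import ipaddress

# A toy routing table mapping prefixes to next hops.
routes = {
    ipaddress.ip_network("0.0.0.0/0"):   "upstream-isp",    # default route
    ipaddress.ip_network("10.0.0.0/8"):  "internal",
    ipaddress.ip_network("10.1.2.0/24"): "branch-office",
}

def next_hop(destination: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(destination)
    matching = [net for net in routes if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[best]

# next_hop("10.1.2.99") -> "branch-office"  (the /24 beats the /8)
# next_hop("8.8.8.8")   -> "upstream-isp"   (only the default matches)
```

The default route (/0) matches every address, which is why a single-homed network can simply point it "up" toward its transit provider.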

General structure
The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks.[31] Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system".[32] The Internet is heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity.

The principles of the routing and addressing methods for traffic in the Internet reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is being investigated.[33] The Internet structure has been found to be highly robust to random failures[34] yet very vulnerable to deliberate attacks on high-degree nodes.[35]

Governance
The Internet is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body. However, to maintain interoperability, all technical and policy aspects of the underlying core infrastructure and the principal name spaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), headquartered in Marina del Rey, California. ICANN is the authority that coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces, in which names and numbers are uniquely assigned, are essential for the global reach of the Internet.

ICANN headquarters in Marina Del Rey, California, United States

ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The government of the United States continues to have the primary role in approving changes to the DNS root zone that lies at the heart of the domain name system.[36] ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet. On 16 November 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Modern uses
The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered, and mobile data charges may be significantly higher than other access methods.

Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. For distance education, help with homework and other assignments, self-guided learning, whiling away spare time, or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education.

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas, but the wide reach of the Internet allows such groups to form more easily. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org. Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking website, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.

Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread.

The Internet allows computer users to remotely access other computers and information stores easily, wherever they may be. They may do this with or without computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice.

An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure Virtual Private Network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare,[37] because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.


Services
World Wide Web
Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs symbolically identify services, servers, and other databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web, but it is only one of the hundreds of communication protocols used on the Internet. Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.

World Wide Web browser software, such as Microsoft's Internet Explorer, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.

The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost, and many cost-free services are available. Publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.

Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.

Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and Twitter currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts. Advertising on popular web pages can be lucrative, and e-commerce, or the sale of products and services directly via the Web, continues to grow.

When the Web began in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, ready to be sent to a user's browser in response to a request. Over time, the process of creating and serving web pages has become more automated and more dynamic. Websites are often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organization, or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
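The URIs that tie the Web together can be unpicked with a few lines of Python's standard library; the example URL below is invented purely for illustration:

```python
from urllib.parse import urlsplit

# A URL (one kind of URI) encodes everything a browser needs to fetch a
# resource: which protocol to use, which host to contact, and which
# resource to request from that host.
url = "http://example.org/wiki/Internet?action=view#History"
parts = urlsplit(url)

print(parts.scheme)    # "http": use the Hypertext Transfer Protocol
print(parts.hostname)  # "example.org": resolved via DNS to an IP address
print(parts.path)      # "/wiki/Internet": the resource on that server
print(parts.query)     # "action=view": parameters sent with the request
print(parts.fragment)  # "History": handled by the browser, never sent
```

The scheme makes URIs protocol-agnostic: swapping `http` for `ftp` or `mailto` directs the same naming machinery at an entirely different service.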


Communication
Email is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Pictures, documents and other files are sent as email attachments. Emails can be cc-ed to multiple email addresses.

Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL.

VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved, and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer. Voice quality can still vary from call to call, but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialing and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the phone equipment and the Internet access devices.

VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Wii, PlayStation 3, and Xbox 360 also offer VoIP chat features.

Data transfer
File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests.

These simple features of the Internet, on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.

Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening, such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access online media in much the same way as was previously possible only with a television or radio receiver. The range of available content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where audio material is usually downloaded and played back on a computer, or shifted to a portable media player to be listened to on the move. These techniques, using simple equipment, allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.
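The file-integrity checks mentioned above (digital signatures and message digests such as MD5) can be illustrated with a short, self-contained Python sketch. The helper name, default algorithm and chunk size here are illustrative choices, not something specified in the article:

```python
import hashlib

def file_digest(path, algorithm="md5", chunk_size=65536):
    """Compute a message digest of a file, reading it in chunks
    so that arbitrarily large downloads can be verified."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# A recipient compares the digest of the received file against the
# value published by the sender; any corruption or tampering in
# transit changes the digest.
```

In practice MD5 is no longer considered safe against deliberate tampering, so SHA-256 (`algorithm="sha256"`) is the usual modern choice; a digital signature additionally binds the digest to the sender's identity.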

Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.[38]

Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or their own premises, live and in real time. Video chat rooms and video conferencing are also popular, with many uses being found for personal webcams, with and without two-way sound. YouTube, founded on 15 February 2005, is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profiles. YouTube claims that its users watch hundreds of millions of videos daily, and upload hundreds of thousands.[39]
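The bitrates cited above translate directly into data volumes. As a rough worked example (the function name is ours, and a real stream adds container and protocol overhead on top of the raw video bitrate):

```python
def stream_gigabytes(mbit_per_s, hours):
    """Approximate data transferred by a constant-bitrate stream:
    Mbit/s -> total bits -> bytes -> decimal gigabytes."""
    bits = mbit_per_s * 1_000_000 * hours * 3600
    return bits / 8 / 1_000_000_000

# A two-hour film at the 4.5 Mbit/s HDX (1080p) rate works out to
# about 4 GB of data transferred.
print(stream_gigabytes(4.5, 2))
```

The same arithmetic shows why the jump from SD to HD matters for capped or slow connections: 720p at 2.5 Mbit/s consumes 2.5 times the data of 480p at 1 Mbit/s for the same viewing time.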

Access
Common methods of Internet access in homes include dial-up, landline broadband (over coaxial cable, fiber-optic or copper wires), Wi-Fi, satellite, and 3G/4G cell-phone technology. Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, though these are usually fee-based; such terminals are widely used for purposes such as ticket booking, bank deposits and online payments. Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, and the Internet can then be accessed from such places as a park bench.[40] Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software.
More mobile phones have Internet access than PCs, though this is not as widely used.[41] An Internet access provider and protocol matrix differentiates the methods used to get online. An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia.[42] Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93%[43] of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.[44]


Users
Overall Internet usage has seen tremendous growth. From 2000 to 2009, the number of Internet users globally rose from 394 million to 1.858 billion.[48] By 2010, 22 percent of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube.[49] The prevalent language for communication on the Internet has been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet.

[Figure: Internet users per 100 inhabitants, 2001–2011. Source: "Internet users per 100 inhabitants 2001–2011", International Telecommunication Union, Geneva. Retrieved 4 April 2012.]

After English (27%), the most requested languages on the World Wide Web are Chinese (23%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[50] By region, 42% of the world's Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in Africa, 3% in the Middle East and 1% in Australia/Oceania.[51] The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.

[Figure: Internet users by language. Source: "Number of Internet Users by Language", Internet World Stats, Miniwatts Marketing Group, 31 May 2011. Retrieved 22 April 2012.]
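Mojibake arises when bytes written in one encoding are decoded with another. A minimal Python illustration (the sample word is arbitrary):

```python
text = "résumé"
utf8_bytes = text.encode("utf-8")       # each é becomes the two bytes 0xC3 0xA9
garbled = utf8_bytes.decode("latin-1")  # each byte misread as a single character
print(garbled)  # rÃ©sumÃ©

# The damage is reversible if the misinterpretation is known:
recovered = garbled.encode("latin-1").decode("utf-8")
print(recovered)  # résumé
```

Unicode solves the problem only when producer and consumer agree on the byte encoding (today usually UTF-8), which is why the glitch persists on pages that mislabel their character set.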


In an American study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking.[52] More recent studies indicate that in 2008, women significantly outnumbered men on most social networking sites, such as Facebook and Myspace, although the ratios varied with age.[53] In addition, women watched more streaming content, whereas men downloaded more.[54] In terms of blogs, men were more likely to blog in the first place; among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog.[55]

[Figure: Website content languages. Source: "Usage of content languages for websites", W3Techs.com. Retrieved 30 December 2011.]

Social impact
The Internet has enabled entirely new forms of social interaction, activities, and organizing, thanks to its basic features such as widespread usability and access. In the first decade of the 21st century, the first generation was raised with widespread availability of Internet connectivity, bringing consequences and concerns in areas such as personal privacy and identity, and the distribution of copyrighted materials. These "digital natives" face a variety of challenges that were not present for prior generations.

Social networking and entertainment


Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations, and to find out more about their interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. The Internet has seen a growing number of Web desktops, where users can access their files and settings via the Internet.

Social networking websites such as Facebook, Twitter, and MySpace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs.

The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and

for the sharing of ideas. The Internet pornography and online gambling industries have taken advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites.[56] Although many governments have attempted to restrict both industries' use of the Internet, in general this has failed to stop their widespread popularity.[57]

Another area of leisure activity on the Internet is multiplayer gaming.[58] This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer.[59] Non-subscribers were limited to certain types of game play or certain games.

Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.

Internet usage has been correlated to users' loneliness.[60] Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.

Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, on-line chat rooms and web-based message boards."[61]

Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[62] Internet addiction disorder is excessive computer use that interferes with daily life. Writer Nicholas Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.[63]


Politics and political revolutions


The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing in order to carry out their mission, giving rise to Internet activism, most notably practiced by rebels in the Arab Spring.[64][65] The New York Times suggested that social media websites such as Facebook and Twitter helped people organize the political revolutions in Egypt, where they helped certain classes of protesters organize protests, communicate grievances, and disseminate information.[66] The potential of the Internet as a civic tool of communicative power was thoroughly explored by Simon R. B. Berdal in his 2004 thesis: As the globally evolving Internet provides ever new access points to virtual discourse forums, it also promotes new civic relations and associations within which communicative power may flow and accumulate. Thus, traditionally ... national-embedded peripheries get entangled into greater, international peripheries, with stronger combined powers... The Internet, as a consequence, changes the topology of the "centre-periphery" model, by stimulating conventional peripheries to interlink into "super-periphery" structures, which enclose and "besiege" several centres at once.[67] Berdal, therefore, extends the Habermasian notion of the public sphere to the Internet, and underlines the inherent global and civic nature that interwoven Internet technologies provide. To limit the growing civic potential of the

Internet, Berdal also notes how "self-protective measures" are put in place by those threatened by it: If we consider China's attempts to filter "unsuitable material" from the Internet, most of us would agree that this resembles a self-protective measure by the system against the growing civic potentials of the Internet. Nevertheless, both types represent limitations to "peripheral capacities". Thus, the Chinese government tries to prevent communicative power from building up and being unleashed (as the 1989 Tiananmen Square uprising suggests, the government may find it wise to install "upstream measures"). Even though limited, the Internet is proving to be an empowering tool also to the Chinese periphery: Analysts believe that Internet petitions have influenced policy implementation in favour of the public's online-articulated will ...[67]


Philanthropy
The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice.

A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations, which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves.[68][69]

However, the recent spread of low-cost Internet access in developing countries has made genuine international person-to-person philanthropy increasingly feasible. In 2009 the US-based nonprofit Zidisha tapped into this trend to offer the first person-to-person microfinance platform to link lenders and borrowers across international borders without intermediaries. Members can fund loans for as little as a dollar, which the borrowers then use to develop business activities that improve their families' incomes while repaying loans to the members with interest. Borrowers access the Internet via public cybercafes, donated laptops in village schools, and even smartphones, then create their own profile pages through which they share photos and information about themselves and their businesses. As they repay their loans, borrowers continue to share updates and dialogue with lenders via their profile pages.
This direct web-based connection allows members themselves to take on many of the communication and recording tasks traditionally performed by local organizations, bypassing geographic barriers and dramatically reducing the cost of microfinance services to the entrepreneurs.[70]

Censorship
Some governments, such as those of Burma, Iran, North Korea, mainland China, Saudi Arabia, and the United Arab Emirates, restrict what people in their countries can access on the Internet, especially political and religious content. This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.[71]

In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed, possibly to avoid such an arrangement being turned into law, to restrict access to sites listed by authorities. While this list of forbidden URLs is supposed to contain addresses of only known child pornography sites, the content of the list is secret.[72] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filtering software. There are many free and commercially available software programs, called content-control software, with which a user can choose to block offensive websites on individual computers or networks, in order to limit a child's access to pornographic material or depictions of violence.


References
[1] "World Stats" (http://www.internetworldstats.com/stats.htm). Internet World Stats. Miniwatts Marketing Group. 30 June 2012.
[2] "Internet, n." (http://dictionary.oed.com/cgi/entry/00304286). Oxford English Dictionary (Draft ed.). March 2009. Retrieved 26 October 2010. "Shortened < INTERNETWORK n., perhaps influenced by similar words in -net"
[3] Oxford English Dictionary, 2nd ed., gives nineteenth-century use and pre-Internet verb use
[4] "7.76 Terms like 'web' and 'Internet'" (http://www.chicagomanualofstyle.org/16/ch07/ch07_sec076.html?para=), Chicago Manual of Style, University of Chicago, 16th edition
[5] "Links" (http://www.w3.org/TR/html401/struct/links.html#h-12.1). HTML 4.01 Specification. World Wide Web Consortium. Retrieved 13 August 2008. "[T]he link (or hyperlink, or Web link) [is] the basic hypertext construct. A link is a connection from one Web resource to another. Although a simple concept, the link has been one of the primary forces driving the success of the Web."
[6] Celebrating 40 years of the net (http://news.bbc.co.uk/1/hi/technology/8331253.stm), by Mark Ward, Technology correspondent, BBC News, 29 October 2009
[7] "A Technical History of CYCLADES" (http://www.cs.utexas.edu/users/chris/think/Cyclades/index.shtml), Technical Histories of the Internet & other Network Protocols, Computer Science Department, University of Texas Austin, 11 June 2002
[8] "The Cyclades Experience: Results and Impacts" (http://www.informatik.uni-trier.de/~ley/db/conf/ifip/ifip1977.html#Zimmermann77), Zimmermann, H., Proc. IFIP'77 Congress, Toronto, August 1977, pp. 465–469
[9] A Chronicle of Merit's Early History (http://www.merit.edu/about/history/article.php), John Mulcahy, 1989, Merit Network, Ann Arbor, Michigan
[10] "A Technical History of National Physical Laboratories (NPL) Network Architecture" (http://www.cs.utexas.edu/users/chris/think/NPL/index.shtml), Technical Histories of the Internet & other Network Protocols, Computer Science Department, University of Texas Austin, 11 June 2002
[11] "Roads and Crossroads of Internet History" (http://www.netvalley.com/intval.html) by Gregory Gromov, 1995
[12] Hafner, Katie (1998). Where Wizards Stay Up Late: The Origins Of The Internet. Simon & Schuster. ISBN 0-684-83267-4.
[13] Ronda Hauben (2001). From the ARPANET to the Internet (http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt). Retrieved 28 May 2009.
[14] "Events in British Telecomms History" (http://web.archive.org/web/20030405153523/http://www.sigtel.com/tel_hist_brief.html). Archived from the original (http://www.sigtel.com/tel_hist_brief.html) on 5 April 2003. Retrieved 25 November 2005.
[15] "NORSAR and the Internet" (http://web.archive.org/web/20110724063000/http://www.norsar.no/pc-5-30-NORSAR-and-the-Internet.aspx). NORSAR. Archived from the original (http://www.norsar.no/pc-5-30-NORSAR-and-the-Internet.aspx) on 24 July 2011.
[16] Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff (2003). A Brief History of Internet (http://www.isoc.org/internet/history/brief.shtml). Retrieved 28 May 2009.
[17] NSFNET: A Partnership for High-Speed Networking, Final Report 1987–1995 (http://www.merit.edu/about/history/pdf/NSFNET_final.pdf), Karen D. Frazer, Merit Network, Inc., 1995
[18] "Retiring the NSFNET Backbone Service: Chronicling the End of an Era" (http://www.merit.edu/networkresearch/projecthistory/nsfnet/nsfnet_article.php), Susan R. Harris and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996
[19] Ben Segal (1995). A Short History of Internet Protocols at CERN (http://www.cern.ch/ben/TCPHIST.html).
[20] Réseaux IP Européens (RIPE)
[21] "Internet History in Asia" (http://www.apan.net/meetings/busan03/cs-history.htm). 16th APAN Meetings/Advanced Network Conference in Busan. Retrieved 25 December 2005.
[22] How the web went world wide (http://news.bbc.co.uk/2/hi/science/nature/5242252.stm), Mark Ward, Technology Correspondent, BBC News. Retrieved 24 January 2011
[23] "Brazil, Russia, India and China to Lead Internet Growth Through 2011" (http://clickz.com/showPage.html?page=3626274). Clickz.com. Retrieved 28 May 2009.
[24] Coffman, K. G; Odlyzko, A. M. (2 October 1998) (PDF). The size and growth rate of the Internet (http://www.dtc.umn.edu/~odlyzko/doc/internet.size.pdf). AT&T Labs. Retrieved 21 May 2007.
[25] Comer, Douglas (2006). The Internet book. Prentice Hall. p. 64. ISBN 0-13-233553-0.
[26] "World Internet Users and Population Stats" (http://www.internetworldstats.com/stats.htm). Internet World Stats. Miniwatts Marketing Group. 22 June 2011. Retrieved 23 June 2011.
[27] "The World's Technological Capacity to Store, Communicate, and Compute Information" (http://www.sciencemag.org/content/suppl/2011/02/08/science.1200970.DC1/Hilbert-SOM.pdf), Martin Hilbert and Priscila López (April 2011), Science, 332(6025), 60–65.
[28] "IETF Home Page" (http://www.ietf.org/). Ietf.org. Retrieved 20 June 2009.
[29] Huston, Geoff. "IPv4 Address Report, daily generated" (http://www.potaroo.net/tools/ipv4/index.html). Retrieved 20 May 2009.
[30] "Notice of Internet Protocol version 4 (IPv4) Address Depletion" (https://www.arin.net/knowledge/about_resources/ceo_letter.pdf) (PDF). Retrieved 7 August 2009.
[31] A. L. Barabási, R. Albert (2002). "Statistical mechanics of complex networks" (http://rmp.aps.org/abstract/RMP/v74/i1/p47_1). Rev. Mod. Phys. 74: 47–94. doi:10.1103/RevModPhys.74.47.
[32] Walter Willinger, Ramesh Govindan, Sugih Jamin, Vern Paxson, and Scott Shenker (2002). Scaling phenomena in the Internet (http://www.pnas.org/cgi/content/full/99/suppl_1/2573), in Proceedings of the National Academy of Sciences, 99, suppl. 1, 2573–2580
[33] Jesdanun, Anick (16 April 2007). "Internet Makeover? Some argue it's time" (http://seattletimes.nwsource.com/html/businesstechnology/2003667811_btrebuildnet16.html). Seattletimes.nwsource.com. Retrieved 8 August 2011.
[34] R. Cohen, K. Erez, D. ben-Avraham, S. Havlin (2000). "Resilience of the Internet to random breakdowns" (http://havlin.biu.ac.il/Publications.php?keyword=Resilience+of+the+Internet+to+random+breakdowns&year=*&match=all). Phys. Rev. Lett. 85: 4625.
[35] R. Cohen, K. Erez, D. ben-Avraham, S. Havlin (2001). "Breakdown of the Internet under intentional attack" (http://havlin.biu.ac.il/Publications.php?keyword=Breakdown+of+the+Internet+under+intentional+attack&year=*&match=all). Phys. Rev. Lett. 86 (16): 3682–5. doi:10.1103/PhysRevLett.86.3682. PMID 11328053.
[36] "Bush administration annexes internet" (http://www.theregister.co.uk/2005/07/01/bush_net_policy/), Kieren McCarthy, The Register, 1 July 2005
[37] "The Virtual Private Nightmare: VPN" (http://librenix.com/?inode=5013). Librenix. 4 August 2004. Retrieved 21 July 2010.
[38] Morrison, Geoff (18 November 2010). "What to know before buying a 'connected' TV" (http://www.msnbc.msn.com/id/40241749/ns/technology_and_science-tech_and_gadgets). MSNBC. Retrieved 8 August 2011.
[39] "YouTube Fact Sheet" (http://www.webcitation.org/5qyMMarNd). YouTube, LLC. Archived from the original (http://www.youtube.com/t/fact_sheet) on 4 July 2010. Retrieved 20 January 2009.
[40] Pasternak, Sean B. (7 March 2006). "Toronto Hydro to Install Wireless Network in Downtown Toronto" (http://www.bloomberg.com/apps/news?pid=10000082&sid=aQ0ZfhMa4XGQ&refer=canada). Bloomberg. Retrieved 8 August 2011.
[41] "By 2013, mobile phones will overtake PCs as the most common Web access device worldwide", according to a forecast in "Gartner Highlights Key Predictions for IT Organizations and Users in 2010 and Beyond" (http://www.gartner.com/it/page.jsp?id=1278413), Gartner, Inc., 13 January 2010
[42] "Georgian woman cuts off web access to whole of Armenia" (http://www.guardian.co.uk/world/2011/apr/06/georgian-woman-cuts-web-access). The Guardian. 6 April 2011. Retrieved 11 April 2012.
[43] Cowie, James. "Egypt Leaves the Internet" (http://www.renesys.com/blog/2011/01/egypt-leaves-the-internet.shtml). Renesys. Archived (http://www.webcitation.org/5w51j0pga) from the original on 28 January 2011. Retrieved 28 January 2011.
[44] "Egypt severs internet connection amid growing unrest" (http://www.bbc.co.uk/news/technology-12306041). BBC News. 28 January 2011.
[45] "Internet users per 100 inhabitants 2001–2011" (http://www.itu.int/ITU-D/ict/statistics/material/excel/2011/Internet_users_01-11.xls), International Telecommunications Union, Geneva. Retrieved 4 April 2012
[46] "Number of Internet Users by Language" (http://www.internetworldstats.com/stats7.htm), Internet World Stats, Miniwatts Marketing Group, 31 May 2011. Retrieved 22 April 2012
[47] "Usage of content languages for websites" (http://w3techs.com/technologies/overview/content_language/all). W3Techs.com. Retrieved 30 December 2011.
[48] Internet users graphs (http://www.itu.int/ITU-D/ict/statistics/), Market Information and Statistics, International Telecommunications Union
[49] "Google Earth demonstrates how technology benefits RI's civil society, govt" (http://www.antaranews.com/en/news/71940/google-earth-demonstrates-how-technology-benefits-ris-civil-society-govt). Antara News. 26 May 2011. Retrieved 19 November 2012.
[50] Internet World Stats (http://www.internetworldstats.com/stats7.htm), updated for 30 June 2010. Retrieved 20 February 2011.
[51] World Internet Usage Statistics News and Population Stats (http://www.internetworldstats.com/stats.htm), updated for 30 June 2010. Retrieved 20 February 2011.
[52] How men and women use the Internet, Pew Research Center, 28 December 2005
[53] "Rapleaf Study on Social Network Users" (http://business.rapleaf.com/company_press_2008_07_29.html).
[54] "Women Ahead Of Men In Online Tv, Dvr, Games, And Social Media" (http://www.entrepreneur.com/tradejournals/article/178175272.html). Entrepreneur.com. 1 May 2008. Retrieved 8 August 2011.
[55] "Technorati's State of the Blogosphere" (http://technorati.com/blogging/state-of-the-blogosphere/). Technorati. Retrieved 8 August 2011.
[56] "Internet Pornography Statistics" (http://internet-filter-review.toptenreviews.com/internet-pornography-statistics.html), Jerry Ropelato, Top Ten Reviews, 2006
[57] "Do It Yourself! Amateur Porn Stars Make Bank" (http://abcnews.go.com/Business/SmallBiz/story?id=4151592), Russell Goldman, ABC News, 22 January 2008
[58] "Top Online Game Trends of the Decade" (http://internetgames.about.com/od/gamingnews/a/trendsdecade.htm), Dave Spohn, About.com, 15 December 2009
[59] "Internet Game Timeline: 1963–2004" (http://internetgames.about.com/od/gamingnews/a/timeline.htm), Dave Spohn, About.com, 2 June 2011
[60] Carole Hughes, Boston College. "The Relationship Between Internet Use and Loneliness Among College Students" (https://www2.bc.edu/~hughesc/abstract.html). Boston College. Retrieved 11 August 2011.

[61] Patricia M. Thornton, "The New Cybersects: Resistance and Repression in the Reform Era", in Elizabeth Perry and Mark Selden, eds., Chinese Society: Change, Conflict and Resistance (second edition) (London and New York: Routledge, 2003), pp. 149–50.
[62] "Net abuse hits small city firms" (http://www.scotsman.com/news/net-abuse-hits-small-city-firms-1-892163). The Scotsman (Edinburgh). 11 September 2003. Retrieved 7 August 2009.
[63] The Shallows: What the Internet Is Doing to Our Brains (http://www.theshallowsbook.com/nicholascarr/Nicholas_Carrs_The_Shallows.html), Nicholas Carr, W. W. Norton, 7 June 2010, 276 pp., ISBN 0-393-07222-3, ISBN 978-0-393-07222-8
[64] "The Arab Uprising's Cascading Effects" (http://www.miller-mccune.com/politics/the-cascading-effects-of-the-arab-spring-28575/). Miller-mccune.com. 23 February 2011. Retrieved 27 February 2011.
[65] The Role of the Internet in Democratic Transition: Case Study of the Arab Spring (http://www.etd.ceu.hu/2011/chokoshvili_davit.pdf), Davit Chokoshvili, Master's Thesis, June 2011
[66] Kirkpatrick, David D. (9 February 2011). "Wired and Shrewd, Young Egyptians Guide Revolt" (http://www.nytimes.com/2011/02/10/world/middleeast/10youth.html). The New York Times.
[67] Berdal, S.R.B. (2004) (PDF), Public deliberation on the Web: A Habermasian inquiry into online discourse (http://www.duo.uio.no/publ/informatikk/2004/20535/SimonBerdal.pdf), Oslo: University of Oslo
[68] Kiva Is Not Quite What It Seems (http://blogs.cgdev.org/open_book/2009/10/kiva-is-not-quite-what-it-seems.php), by David Roodman, Center for Global Development, 2 October 2009, as accessed 2 & 16 January 2010
[69] Confusion on Where Money Lent via Kiva Goes (http://www.nytimes.com/2009/11/09/business/global/09kiva.html?_r=1&scp=1&sq=Kiva&st=cse), by Stephanie Strom, The New York Times, 8 November 2009, as accessed 2 & 16 January 2010
[70] "Zidisha Set to "Expand" in Peer-to-Peer Microfinance", Microfinance Focus, February 2010 (http://www.microfinancefocus.com/news/2010/02/07/zidisha-set-to-expand-in-peer-to-peer-microfinance-julia-kurnia/)
[71] Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12187), Ronald J. Deibert, John G. Palfrey, Rafal Rohozinski, and Jonathan Zittrain (eds), MIT Press, April 2010, ISBN 0-262-51435-4, ISBN 978-0-262-51435-4
[72] "Finland censors anti-censorship site" (http://www.theregister.co.uk/2008/02/18/finnish_policy_censor_activist/). The Register. 18 February 2008. Retrieved 19 February 2008.

16

External links
Organizations
The Internet Society (http://www.isoc.org/) Berkman Center for Internet and Society (http://cyber.law.harvard.edu/) European Commission Information Society (http://ec.europa.eu/information_society/index_en.htm) Living Internet (http://www.livinginternet.com/), Internet history and related information, including information from many creators of the Internet

Articles, books, and journals


First Monday (http://www.firstmonday.org/), a peer-reviewed journal on the Internet established in 1996 as a Great Cities Initiative of the University Library of the University of Illinois at Chicago, ISSN: 1396-0466 Rise of the Network Society (http://www.wiley.com/WileyCDA/WileyTitle/productCd-1405196866.html), Manual Castells, Wiley-Blackwell, 1996 (1st ed) and 2009 (2nd ed), ISBN 978-1-4051-9686-4 "The Internet: Changing the Way We Communicate" (http://www.nsf.gov/about/history/nsf0050/internet/ internet.htm) in America's Investment in the Future (http://www.nsf.gov/about/history/nsf0050/index.jsp), National Science Foundation, Arlington, Va. USA, 2000 Lessons from the History of the Internet (http://www.oup.com/us/catalog/general/subject/Sociology/ EnvironmentTechnology/?view=usa&ci=9780199255771), Manuel Castells, in The Internet Galaxy, Ch. 1, pp 935, Oxford University Press, 2001, ISBN 978-0-19-925577-1 ISBN10: 0-19-925577-6 "Media Freedom Internet Cookbook" (http://www.osce.org/fom/13836) by the OSCE Representative on Freedom of the Media Vienna, 2004 The Internet Explained (http://www.southbourne.com/articles/internet-explained), Vincent Zegna & Mike Pepper, Sonet Digital, November 2005, Pages 1 7. "How Much Does The Internet Weigh? (http://discovermagazine.com/2007/jun/ how-much-does-the-internet-weigh)", by Stephen Cass, Discover, 2007

Internet "The Internet spreads its tentacles" (http://www.sciencenews.org/view/generic/id/8651/description/ Mapping_a_Medusa_The_Internet_spreads_its_tentacles), Julie Rehmeyer, Science News, Vol. 171, No. 25, pp.387388, 23 June 2007 Internet (http://www.routledge.com/books/details/9780415352277/), Lorenzo Cantoni & Stefano Tardini, Routledge, 2006, ISBN 978-0-203-69888-4

17

History of the Internet


The history of the Internet began with the development of electronic computers in the 1950s. The public was first introduced to the Internet when a message was sent from computer science Professor Leonard KleinRock's laboratory at University of California, Los Angeles (UCLA), after the second piece of network equipment was installed at Stanford Research Institute (SRI). This connection not only enabled the first transmission to be made, but is also considered to be the first Internet backbone. This began the point-to-point communication between mainframe computers and terminals, expanded to point-to-point connections between computers and then early research into packet switching. Packet switched networks such as ARPANET, Mark I at NPL in the UK, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, where multiple separate networks could be joined together into a network of networks. In 1982 the Internet protocol suite (TCP/IP) was standardized and the concept of a world-wide network of fully interconnected TCP/IP networks called the Internet was introduced. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET) and again in 1986 when NSFNET provided access to supercomputer sites in the United States from research and education organizations. Commercial Internet 1974 ABC interview with Arthur C. Clarke in service providers (ISPs) began to emerge in the late 1980s and 1990s. which he describes a future of ubiquitous The ARPANET was decommissioned in 1990. The Internet was networked personal computers. commercialized in 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic. 
Since the mid-1990s the Internet has had a drastic impact on culture and commerce, including the rise of near-instant communication by electronic mail, instant messaging, Voice over Internet Protocol (VoIP) "phone calls", two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as NSF's very high speed Backbone Network Service (vBNS), Internet2, and National LambdaRail. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking.

History of the Internet

18

Internet history timeline


Early research and development:

1961 First packet-switching papers 1966 Merit Network founded 1966 ARPANET planning starts 1969 ARPANET carries its first packets 1970 Mark I network at NPL (UK) 1970 Network Information Center (NIC) 1971 Merit Network's packet-switched network operational 1971 Tymnet packet-switched network 1972 Internet Assigned Numbers Authority (IANA) established 1973 CYCLADES network demonstrated 1974 Telenet packet-switched network 1976 X.25 protocol approved 1978 Minitel introduced 1979 Internet Activities Board (IAB) 1980 USENET news using UUCP 1980 Ethernet standard introduced 1981 BITNET established Len Kleinrock and the first Interface Message [1] Processor.

Merging the networks and creating the Internet:

1981 Computer Science Network (CSNET) 1982 TCP/IP protocol suite formalized 1982 Simple Mail Transfer Protocol (SMTP) 1983 Domain Name System (DNS) 1983 MILNET split off from ARPANET 1985 First .COM domain name registered 1986 NSFNET with 56 kbit/s links 1986 Internet Engineering Task Force (IETF) 1987 UUNET founded 1988 NSFNET upgraded to 1.5 Mbit/s (T1) 1988 OSI Reference Model released 1988 Morris worm 1989 Border Gateway Protocol (BGP) 1989 PSINet founded, allows commercial traffic 1989 Federal Internet Exchanges (FIXes) 1990 GOSIP (without TCP/IP) 1990 ARPANET decommissioned 1990 Advanced Network and Services (ANS) 1990 UUNET/Alternet allows commercial traffic 1990 Archie search engine 1991 Wide area information server (WAIS) 1991 Gopher 1991 Commercial Internet eXchange (CIX) 1991 ANS CO+RE allows commercial traffic 1991 World Wide Web (WWW) 1992 NSFNET upgraded to 45 Mbit/s (T3)

History of the Internet


1992 Internet Society (ISOC) established 1993 Classless Inter-Domain Routing (CIDR) 1993 InterNIC established 1993 Mosaic web browser released 1994 Full text web search engines 1994 North American Network Operators' Group (NANOG) established

19

Commercialization, privatization, broader access leads to the modern Internet:

1995 New Internet architecture with commercial ISPs connected at NAPs 1995 NSFNET decommissioned 1995 GOSIP updated to allow TCP/IP 1995 very high-speed Backbone Network Service (vBNS) 1995 IPv6 proposed 1998 Internet Corporation for Assigned Names and Numbers (ICANN) 1999 IEEE 802.11b wireless networking 1999 Internet2/Abilene Network 1999 vBNS+ allows broader access 2000 Dot-com bubble bursts 2001 New top-level domain names activated 2001 Code Red I, Code Red II, and Nimda worms 2003 National LambdaRail founded 2006 First meeting of the Internet Governance Forum 2010 First internationalized country code top-level domains registered 2012 ICANN begins accepting applications for new generic top-level domain names Examples of popular Internet services: 1990 IMDb Internet movie database 1995 Amazon.com online retailer 1995 eBay online auction and shopping 1995 Craigslist classified advertisements 1996 Hotmail free web-based e-mail 1997 Babel Fish automatic translation 1998 Google Search 1998 Yahoo! Clubs (now Yahoo! Groups) 1998 PayPal Internet payment system 1999 Napster peer-to-peer file sharing 2001 BitTorrent peer-to-peer file sharing 2001 Wikipedia, the free encyclopedia 2003 LinkedIn business networking 2003 Myspace social networking site 2003 Skype Internet voice calls 2003 iTunes Store 2003 4Chan Anonymous image-based bulletin board 2004 Facebook social networking site 2004 Podcast media file series 2004 Flickr image hosting 2005 YouTube video sharing 2005 Google Earth virtual globe 2006 Twitter microblogging 2007 WikiLeaks anonymous news and information leaks

History of the Internet


2007 Google Street View 2008 Amazon Elastic Compute Cloud (EC2) 2008 Dropbox cloud-based file hosting 2009 Bing search engine 2011 Google+ social networking

20

Precursors
The Internet has precursors that date back to the 19th century, especially the telegraph system, more than a century before the digital Internet became widely used in the second half of the 1990s. The concept of data communication transmitting data between two different places, connected via some kind of electromagnetic medium, such as radio or an electrical wire predates the introduction of the first computers. Such communication systems were typically limited to point to point communication between two end devices. Telegraph systems and telex machines can be considered early precursors of this kind of communication. Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley, during the early 20th century. Early computers used the technology available at the time to allow communication between the central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. Using these technologies it was possible to exchange data (such as files) between remote computers. However, the point to point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also deemed as inherently unsafe for strategic and military use, because there were no alternative paths for the communication in case of an enemy attack.

Three terminals and an ARPA


A fundamental pioneer in the call for a global network, J. C. R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis. "A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions." J.C.R.Licklider,[2] In August 1962, Licklider and Welden Clark published the paper "On-Line Man Computer Communication", which was one of the first descriptions of a networked future. In October 1962, Licklider was hired by Jack Ruina as Director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). Licklider's identified need for inter-networking would be made obvious by the apparent waste of resources this caused. "For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...] I said, it's

History of the Internet obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet." Robert W.Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with The New York Times,[3] Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus that led his successors such as Lawrence Roberts and Robert Taylor to further the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.[4]

21

Packet switching
At the tip of the problem lay the issue of connecting separate physical networks to form one logical network. During the 1960s, Paul Baran (RAND Corporation), produced a study of survivable networks for the US military. Information transmitted across Baran's network would be divided into what he called 'message-blocks'. Independently, Donald Davies (National Physical Laboratory, UK), proposed and developed a similar network based on what he called packet-switching, the term that would ultimately be adopted. Leonard Kleinrock (MIT) developed mathematical theory behind this technology. Packet-switching provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.[5] Packet switching is a rapid store-and-forward networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. Early networks used message switched systems that required rigid routing structures prone to single point of failure. This led Tommy Krash and Paul Baran's U.S. military funded research to focus on using message-blocks to include network redundancy.[6] The widespread urban legend that the Internet was designed to resist nuclear attack likely arose as a result of Baran's earlier work on packet switching, which did focus on redundancy in the face of a nuclear "holocaust."[7][8]

Networks that led to the Internet


ARPANET
Promoted to the head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles (UCLA)and the Stanford Research Institute on 22:30 hours on October 29, 1969. "We set up a telephone connection between us and the guys at SRI ...", Kleinrock ... said in an interview: "We typed the L and we asked on the phone, "Do you see the L?" "Yes, we see the L," came the response. We typed the O, and we asked, "Do you see the O." "Yes, we see the O." Then we typed the G, and the system crashed ... Yet a revolution had begun" ....[9] By December 5, 1969, a 4-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.[10][11] ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still

History of the Internet used today for proposing and distributing Internet Protocols and Systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein's research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.[12]

22

NPL
In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) proposed a national data network based on packet-switching. The proposal was not taken up nationally, but by 1970 he had designed and built the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions.[13] By 1976 12 computers and 75 terminal devices were attached and more were added until the network was replaced in 1986.

Merit Network
The Merit Network[14] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[15] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host to host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit.[16] In October 1972 connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years in addition to host to host interactive connections the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network.[16][17] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

CYCLADES
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the initial ARPANET design and to support network research generally. It was the first network to make the hosts responsible for the reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms.[18][19]

X.25 and public data networks


Based on ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.[20] The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.[21]

History of the Internet Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET. The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

23

UUCP and Usenet


In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following public release of the software, the mesh of UUCP hosts forwarding on the Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies (commercial organizations who might provide bug fixes) compared to later networks like CSnet and Bitnet. All connects were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and news groups messages throughout its Italian nodes (about 100 at the time) owned both by private individuals and small companies. Sublink Network represented possibly one of the first examples of the internet technology becoming progress through popular diffusion.[22]

Merging the networks and creating the Internet (197390)


TCP/IP
With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.[23] The specification of the resulting protocol, RFC 675 Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal and Carl Sunshine, Network Working Group, December 1974, contains the first attested use of the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today.

Map of the TCP/IP test network in February 1982

History of the Internet

24 With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977 a three network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network.[24][25]

Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, known as flag day, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.[26]

A Stanford Research Institute packet radio van, site of the first three-way internetworked transmission.

ARPANET to the federal wide area networks: MILNET, NSI, ESNet, CSNET, and NSFNET
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the BBN Technologies TCP/IP internet map early 1986 SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet. The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Several other branches of the U.S. government, the National Aeronautics and Space Agency (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s, all three of these branches developed the first

History of the Internet Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet. NASA developed the TCP/IP based NASA Science Network (NSN) in the mid 1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally T3 NSFNET Backbone, c. 1992 integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents. In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. Its experience with CSNET lead NSF to use TCP/IP when it created NSFNET, a 56kbit/s backbone established in 1986, that connected the NSF supported supercomputing centers and regional research and education networks in the United States.[27] However, use of NSFNET was not limited to supercomputer users and the 56kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5Mbit/s in 1988. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.

25

Transition towards the Internet


The term "internet" was adopted in the first RFC published on the TCP protocol (RFC 675:[28] Internet Transmission Control Program, December 1974) as an abbreviation of the term internetworking and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. It was around the time when ARPANET was interlinked with NSFNET in the late 1980s, that the term was used as the name of the network, Internet,[29] being a large and global TCP/IP network. As interest in widespread networking grew and new applications for it were developed, the Internet's technologies spread throughout the rest of the world. The network-agnostic approach in TCP/IP meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.[30] Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple email peering, such as allowing access to FTP sites via UUCP or e-mail.[31] Finally, the Internet's remaining centralized routing aspects were removed. The EGP routing protocol was replaced by a new protocol, the Border Gateway Protocol (BGP). This turned the Internet into a meshed topology and moved

History of the Internet away from the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing was introduced to support better conservation of address space which allowed use of route aggregation to decrease the size of routing tables.[32]

26

TCP/IP goes global (19892010)


CERN, the European Internet, the link to the Pacific and beyond
Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system CERNET internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP and the CERN TCP/IP intranets remained isolated from the Internet until 1989. In 1988 Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections.[33] This coincided with the creation of Rseaux IP Europens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam. At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and in-between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP based network for Australia. 
The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.[34]

History of the Internet


Global digital divide


While developed countries with technological infrastructure were joining the Internet, developing countries began to experience a digital divide separating them from it. On an essentially continental basis, these countries are building organizations for Internet resource administration and for sharing operational experience as more and more transmission facilities go into place.

Africa

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.

Internet users in 2010 as a percentage of a country's population. Source: International Telecommunication Union, "Percentage of Individuals using the Internet 2000-2010", accessed 16 April 2012.

In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems. In 1996 a USAID-funded project, the Leland initiative [36], started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.

Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.[37] There is a wide range of programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.[38]

Asia and Oceania

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region.
APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).[39] In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.[40]

Latin America

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.


Opening the network to commerce


Interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.

During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and to provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, opened in 1989.[42]

Number of Internet hosts worldwide, 1981-2012. Source: Internet Systems Consortium, "Internet host count history", retrieved May 16, 2012.

In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. 1862(g) [43], which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks.[44][45] This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.[46]

By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers.
NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service and the service ended.[47][48] NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and for research and education in the United States.[49]



Futurology: Beyond Earth and TCP/IP (2010 and beyond)


The first live Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku-band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.[50]

Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space "weather" disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP Internet protocols do. NASA conducted the first field test of what it calls the "deep space internet" in November 2008.[51] Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009 and is scheduled to continue until March 2014.[52] This network technology is ultimately intended to enable missions that involve multiple spacecraft, where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks.
According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.[53]
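The store-and-forward behaviour described above can be illustrated with a minimal sketch. This is not NASA's implementation: the function and its parameters are invented for illustration, and a real DTN node would also handle custody transfer, expiry, and routing. The point is only the contrast with a conventional router, which would drop traffic while the link is down.

```python
from collections import deque

def deliver(bundles, link_up):
    """DTN-style store-and-forward: bundles that cannot be sent while
    the link is down wait in storage and are retried later, rather than
    being dropped. `bundles` is a list of payloads; `link_up` gives link
    availability at each time step (a stand-in for contact windows)."""
    queue = deque(bundles)   # custody store: nothing is discarded
    delivered = []
    for up in link_up:
        if up and queue:
            delivered.append(queue.popleft())  # forward one bundle per contact
        # if the link is down, bundles simply remain queued
    return delivered, list(queue)

# Three bundles, but the link is only up on steps 0, 3 and 4:
sent, pending = deliver(["a", "b", "c"], [True, False, False, True, True])
# sent == ["a", "b", "c"]; nothing was lost during the outage
```

Even with a two-step outage in the middle, all three bundles eventually arrive, which is the property that matters when a spacecraft passes behind a planet.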

Internet governance
As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. It has no centralized governance for either technology or policies, and each constituent network chooses what technologies and protocols it will deploy from the voluntary technical standards that are developed by the Internet Engineering Task Force (IETF).[54] However, throughout its entire history, the Internet system has had an "Internet Assigned Numbers Authority" (IANA) for the allocation and assignment of various technical identifiers needed for the operation of the Internet.[55] The Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.

NIC, InterNIC, IANA and ICANN


The IANA function was originally performed by the USC Information Sciences Institute, which delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at the Stanford Research Institute (SRI International) in Menlo Park, California. In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract.[55] In 1991, the Defense Information Systems Agency (DISA)

awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.[56][57]

The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366,[58] which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.[59]

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocation of addresses and the management of the address databases, and awarded the contract to three organizations.
Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.[60]

Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers.[59] Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation, and became the third Regional Internet Registry.[61]

In 1998 both the IANA and the remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority.[62] The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure.[63] ICANN provides central coordination for the DNS system, including policy coordination for the split registry/registrar system, with competition among registry service providers to serve each top-level domain and multiple competing registrars offering DNS services to end-users.
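The flat HOSTS.TXT model whose scaling problems motivated the Domain Name System can be sketched in a few lines. The parser and the entries below are illustrative, not the real RFC 952 file format: the point is that every host needed a complete, up-to-date copy of one global table, which is exactly what DNS's hierarchical delegation removed.

```python
def parse_hosts(text):
    """Parse a flat, HOSTS.TXT-style table: one 'address name' pair per
    line, with '#' starting a comment. In the ARPANET era a fresh copy
    of the whole file had to reach every host after any change -- the
    scaling bottleneck the Domain Name System was created to solve."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr, name = line.split()[:2]
        table[name.lower()] = addr            # names were case-insensitive
    return table

# Hypothetical entries in a simplified format:
hosts = parse_hosts("""
10.0.0.51  SRI-NIC    # the distribution point itself
10.1.0.2   UCLA-TEST
""")
# hosts["sri-nic"] == "10.0.0.51"
```

A lookup is just a dictionary access; the cost was not the lookup but distributing and regenerating the file as the network grew.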


Internet Engineering Task Force


The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the IETF's work is done in Working Groups. It does not "run the Internet", despite what some people might mistakenly say. The IETF does make voluntary standards that are often adopted by Internet users, but it does not control, or even patrol, the Internet.[64][65] The IETF started in January 1986 as a quarterly meeting of U.S. government funded researchers. Non-government representatives were invited starting with the fourth IETF meeting in October 1986. The concept of Working Groups

was introduced at the fifth IETF meeting in February 1987. The seventh IETF meeting in July 1987 was the first meeting with more than 100 attendees. In 1992, the Internet Society, a professional membership society, was formed and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, The Netherlands, in July 1993. Today the IETF meets three times a year and attendance is often about 1,300 people, but has been as high as 2,000 on occasion. Typically one in three IETF meetings are held in Europe or Asia. The number of non-US attendees is roughly 50%, even at meetings held in the United States.[64]

The IETF is unusual in that it exists as a collection of happenings, but is not a corporation and has no board of directors, no members, and no dues. The closest thing there is to being an IETF member is being on the IETF or a Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG)[66] and the Internet Architecture Board (IAB).[67] The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues.[64][68]

Request for Comments

Requests for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969, well before the IETF was created.
Originally they were technical memos documenting aspects of ARPANET development, edited by the late Jon Postel, the first RFC Editor.[64][69] RFCs cover a wide range of information, from proposed standards, draft standards, full standards and best practices to experimental protocols, history, and other informational topics.[70] RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information is re-published as a new RFC that "obsoletes" the original.[64][69]


The Internet Society


The Internet Society, or ISOC, is an international, nonprofit organization founded in 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, USA, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.[71] ISOC provides financial and organizational support to, and promotes the work of, the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision making.[72]



Globalization and Internet governance in the 21st century


Since the 1990s, the Internet's governance and organization has been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to remove itself first from its relationship with the University of Southern California in 2000,[73] and finally, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued.[74][75][76] The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments.

In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-on meetings annually thereafter.[77] Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.[78][79]

Use and culture


E-mail and Usenet
E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.[80]

The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report[81] indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.[82] A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.

In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers [83] mailing list).

During the early years of the Internet, e-mail and similar mechanisms were also fundamental in allowing people to access resources that were otherwise unavailable due to the absence of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups.
Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using FTP commands written inside e-mail messages. The file was encoded, broken into pieces and sent by e-mail; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to

download items such as early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
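The encode, split, and reassemble pipeline described above can be sketched briefly. This is an illustration, not any particular gateway's protocol: base64 stands in for the uuencoding typically used at the time, the part-numbering scheme is invented, and the chunk size is made tiny so the split is visible.

```python
import base64

def split_for_mail(data, chunk_size=6):
    """Encode a binary file as printable text and break it into
    mail-sized pieces, each prefixed 'part/total' so the receiver
    can put them back in order."""
    text = base64.b64encode(data).decode("ascii")
    pieces = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [f"{n + 1}/{len(pieces)} {p}" for n, p in enumerate(pieces)]

def reassemble(messages):
    """Sort pieces by part number (mail could arrive out of order),
    concatenate the payloads, and decode back to the original bytes."""
    ordered = sorted(messages, key=lambda m: int(m.split("/")[0]))
    text = "".join(m.split(" ", 1)[1] for m in ordered)
    return base64.b64decode(text)

parts = split_for_mail(b"hello, net")
# Even if the messages arrive in reverse order, the file survives:
restored = reassemble(list(reversed(parts)))
# restored == b"hello, net"
```

Real gateways also had to cope with lost and corrupted parts, which is one reason the scheme was abandoned once direct HTTP access became available.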


From Gopher to the WWW


As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Archie, Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. In the early 1990s, Gopher, invented by Mark P. McCahill, offered a viable alternative to the World Wide Web. However, by the mid-1990s it became clear that Gopher and the other projects fell short in being able to accommodate all the existing data types and to grow without bottlenecks.

One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex"[84] and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS.[85] Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard (1987). Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.

In 1989, while working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread.[86] For his work in developing the World Wide Web, Berners-Lee received the Millennium Technology Prize in 2004.[87] One early popular web browser, modeled after HyperCard, was ViolaWWW.

This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.

A turning point for the World Wide Web began with the introduction[88] of the Mosaic web browser[89] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen.
Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, also known as the Gore Bill.[90] Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign. See the full article Al Gore and information technology.) Mosaic was eventually superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it.

Another important event, held on January 11, 1994, was The Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications."[91] 24 Hours in Cyberspace, "the largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website cyber24.com.[92][93] It was headed by photographer Rick Smolan.[94] A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on January 23, 1997, featuring 70 photos from the project.[95]



Search engines
Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web, but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.

As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular: Yahoo! (founded 1994) and AltaVista (founded 1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.

Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results.
As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as the use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.[96] On June 3, 2009, Microsoft launched its new search engine, Bing.[97] The following month Microsoft and Yahoo! announced a deal in which Bing would power Yahoo! Search.[98]
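The core idea behind PageRank-style relevancy ranking mentioned above can be shown with a small power-iteration sketch. This is the textbook form of the algorithm, not Google's production system, and the three-page "web" is invented for illustration; real implementations must also handle pages with no outgoing links and far larger graphs.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively estimate page importance: a page ranks highly if
    highly ranked pages link to it. `links` maps each page to the
    list of pages it links to; every page here has at least one link."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}   # start uniform
    for _ in range(iterations):
        # every page keeps a small baseline score...
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        # ...and shares the rest of its rank equally among its links
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A tiny three-page web: A links to B, B to C, C to both A and B.
r = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
# B receives links from both A and C, so it ends up ranked highest.
```

This is why ranking displaced raw database size as the selling point: ordering by link structure surfaces the best result first instead of merely listing every match.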

File sharing
Resource or file sharing has been an important activity on computer networks from well before the Internet was established, and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today.[99] A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.

In 1999 Napster became the first peer-to-peer file sharing system.[100] Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in 2003.[101] All of these tools are general-purpose and can be used to share a wide variety of content, but sharing of music files, software, and later movies and videos have been major uses.[102]

While some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts.[103][104] The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders.[105] File sharing remains contentious and controversial, with charges of theft of intellectual property on the one hand and charges of censorship on the other.[106][107]
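The split Napster pioneered, a central index with decentralized transfer, can be sketched as follows. This is an architectural illustration only: the class, method names, and peer addresses are invented, and the real Napster protocol carried much more state. The key design point is that the server stores only who has what, never the files themselves.

```python
class CentralIndex:
    """Napster-style discovery service: the server maps file names to
    the peers that hold them. The file bytes move directly between
    peers and never pass through the server."""

    def __init__(self):
        self.index = {}  # file name -> set of peer addresses

    def register(self, peer, files):
        """A peer announces the files it is willing to share."""
        for name in files:
            self.index.setdefault(name, set()).add(peer)

    def search(self, name):
        """Return the peers holding a file; the client then connects
        to one of them directly to download."""
        return sorted(self.index.get(name, set()))

idx = CentralIndex()
idx.register("peer-a:6699", ["song.mp3"])
idx.register("peer-b:6699", ["song.mp3", "other.mp3"])
idx.search("song.mp3")  # ['peer-a:6699', 'peer-b:6699']
```

This central index was also Napster's legal weak point: shutting down one server disabled discovery for the whole network, which is why later systems such as Gnutella and BitTorrent decentralized indexing as well.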



Dot-com bubble
Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app: it could bring together unrelated buyers and sellers in seamless and low-cost ways. Entrepreneurs around the world developed new business models and ran to their nearest venture capitalist. While some of the new entrepreneurs had experience in business and economics, the majority were simply people with ideas, and did not manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.

The dot-com bubble burst in March 2000, with the technology-heavy NASDAQ Composite index peaking at 5,048.62 on March 10[108] (5,132.52 intraday), more than double its value just a year before. By 2001, the bubble's deflation was running full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit. Despite this, the Internet has continued to grow, driven by commerce, ever greater amounts of online information and knowledge, and social networking.

Mobile phones and the Internet


The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. Access to Internet services from mobile phones remained of limited viability until prices came down from that model's and network providers started to develop systems and services conveniently accessible from phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, Research In Motion launched its mobile phone email system for its BlackBerry product in America. To make efficient use of the small screens, tiny keypads, and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices: the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone Internet services was initially a primarily Asian phenomenon, with Japan, South Korea, and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access there was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.[109]


Online population forecast


A study conducted by JupiterResearch anticipated that a 38 percent increase in the number of people with online access would mean that, by 2011, 22 percent of the Earth's population would surf the Internet regularly. The report said that 1.1 billion people had regular Web access. For the study, JupiterResearch defined online users as people who regularly access the Internet from dedicated Internet-access devices, which exclude cellular telephones.[111]
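The report's figures are roughly self-consistent. Assuming a world population of about 6.9 billion in 2011 (a figure not stated in the report), a 38 percent increase over 1.1 billion regular users does come to about 22 percent of the population:

```python
# Rough consistency check of the JupiterResearch figures.
# The 6.9 billion world-population estimate for 2011 is an outside assumption.
users_baseline = 1.1e9      # people with regular Web access, per the report
growth = 1.38               # a 38 percent increase
world_pop_2011 = 6.9e9      # approximate world population in 2011

users_2011 = users_baseline * growth
share = users_2011 / world_pop_2011
print(f"{users_2011 / 1e9:.2f} billion users, {share:.0%} of population")
# -> 1.52 billion users, 22% of population
```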
Internet users per 100 inhabitants. Source: International Telecommunication Union, "Internet users per 100 inhabitants 2001–2011", Geneva; accessed 4 April 2012.

Historiography

Some concerns have been raised over the historiography of the Internet's development: it is hard to find documentation for much of that development, in part because there was no centralized record-keeping for many of the early efforts that led to the Internet. "The Arpanet period is somewhat well documented because the corporation in charge, BBN, left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. [...] So much of what happened was done verbally and on the basis of individual trust." Doug Gale (2007)[112]

References
[1] Leonard Kleinrock (2005). The history of the Internet (http://www.lk.cs.ucla.edu/personal_history.html). Retrieved May 28, 2009.
[2] J. C. R. Licklider (1960). Man-Computer Symbiosis.
[3] "An Internet Pioneer Ponders the Next Revolution" (http://partners.nytimes.com/library/tech/99/12/biztech/articles/122099outlook-bobb.html?Partner=Snap). December 20, 1999. Retrieved November 25, 2005.
[4] Licklider and the Universal Network (http://www.livinginternet.com/i/ii_licklider.htm)
[5] Ruthfield, Scott (September 1995). "The Internet's History and Development From Wartime Tool to the Fish-Cam" (http://dl.acm.org/citation.cfm?id=332198.332202&coll=portal&dl=ACM). Crossroads 2 (1): pp. 2–4. doi:10.1145/332198.332202. Archived (http://web.archive.org/web/20071018045734/http://www.acm.org/crossroads/xrds2-1/inet-history.html) from the original on October 18, 2007. Retrieved July 25, 2012.
[6] "About Rand" (http://www.rand.org/about/history/baran.html). Paul Baran and the Origins of the Internet. Retrieved July 25, 2012.
[7] Baran, Paul (May 27, 1960) (PDF). Reliable Digital Communications Using Unreliable Network Repeater Nodes (http://www.rand.org/content/dam/rand/pubs/papers/2008/P1995.pdf). The RAND Corporation. p. 1. Retrieved July 25, 2012.
[8] Johna Till Johnson (June 7, 2004). "'Net was born of economic necessity, not fear" (http://www.networkworld.com/columnists/2004/0607johnson.html). Retrieved July 25, 2012.
[9] Gromov, Gregory (1995). "Roads and Crossroads of Internet History" (http://www.netvalley.com/intval.html).
[10] Hafner, Katie (1998). Where Wizards Stay Up Late: The Origins Of The Internet. Simon & Schuster. ISBN 0-684-83267-4.
[11] Ronda Hauben (2001). From the ARPANET to the Internet (http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt). Retrieved May 28, 2009.
[12] "NORSAR and the Internet" (http://www.norsar.no/pc-5-30-NORSAR-and-the-Internet.aspx). NORSAR. Retrieved June 5, 2009.
[13] Ward, Mark (October 29, 2009). "Celebrating 40 years of the net" (http://news.bbc.co.uk/1/hi/technology/8331253.stm). BBC News.



[14] The Merit Network, Inc. is an independent non-profit 501(c)(3) corporation governed by Michigan's public universities. Merit receives administrative services under an agreement with the University of Michigan.
[15] A Chronicle of Merit's Early History (http://www.merit.edu/about/history/article.php), John Mulcahy, 1989, Merit Network, Ann Arbor, Michigan
[16] Merit Network Timeline: 1970–1979 (http://www.merit.edu/about/history/timeline_1970.php), Merit Network, Ann Arbor, Michigan
[17] Merit Network Timeline: 1980–1989 (http://www.merit.edu/about/history/timeline_1980.php), Merit Network, Ann Arbor, Michigan
[18] "A Technical History of CYCLADES" (http://www.cs.utexas.edu/users/chris/think/Cyclades/index.shtml). Technical Histories of the Internet & other Network Protocols. Computer Science Department, University of Texas Austin.
[19] "The Cyclades Experience: Results and Impacts" (http://www.informatik.uni-trier.de/~ley/db/conf/ifip/ifip1977.html#Zimmermann77), Zimmermann, H., Proc. IFIP'77 Congress, Toronto, August 1977, pp. 465–469
[20] tsbedh. "History of X.25, CCITT Plenary Assemblies and Book Colors" (http://www.itu.int/ITU-T/studygroups/com17/history.html). Itu.int. Retrieved June 5, 2009.
[21] "Events in British Telecomms History" (http://web.archive.org/web/20030405153523/http://www.sigtel.com/tel_hist_brief.html). Archived from the original (http://www.sigtel.com/tel_hist_brief.html) on April 5, 2003. Retrieved November 25, 2005.
[22] UUCP Internals Frequently Asked Questions (http://www.faqs.org/faqs/uucp-internals/)
[23] Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff (2003). A Brief History of Internet (http://www.isoc.org/internet/history/brief.shtml). Retrieved May 28, 2009.
[24] "Computer History Museum and Web History Center Celebrate 30th Anniversary of Internet Milestone" (http://www.computerhistory.org/about/press_relations/releases/20071101/). Retrieved November 22, 2007.
[25] Ogg, Erica (2007-11-08). "'Internet van' helped drive evolution of the Web" (http://news.cnet.com/Internet-van-helped-drive-evolution-of-the-Web/2100-1033_3-6217511.html). CNET. Retrieved 2011-11-12.
[26] Jon Postel, NCP/TCP Transition Plan, RFC 801
[27] David Roessner, Barry Bozeman, Irwin Feller, Christopher Hill, Nils Newman (1997). The Role of NSF's Support of Engineering in Enabling Technological Innovation (http://www.sri.com/policy/csted/reports/techin/inter2.html). Retrieved May 28, 2009.
[28] "RFC 675 Specification of internet transmission control program" (http://tools.ietf.org/html/rfc675). Tools.ietf.org. Retrieved May 28, 2009.
[29] Tanenbaum, Andrew S. (1996). Computer Networks. Prentice Hall. ISBN 0-13-394248-1.
[30] Hauben, Ronda (2004). "The Internet: On its International Origins and Collaborative Vision" (http://www.ais.org/~jrh/acn/ACn12-2.a03.txt). Amateur Computerist 12 (2). Retrieved May 29, 2009.
[31] "Internet Access Provider Lists" (http://ftp.cac.psu.edu/pub/internexus/ACCESS.PROVIDRS). Retrieved May 10, 2012.
[32] "RFC 1871 CIDR and Classful Routing" (http://tools.ietf.org/html/rfc1871). Tools.ietf.org. Retrieved May 28, 2009.
[33] Ben Segal (1995). A Short History of Internet Protocols at CERN (http://www.cern.ch/ben/TCPHIST.html).
[34] "Internet History in Asia" (http://www.apan.net/meetings/busan03/cs-history.htm). 16th APAN Meetings/Advanced Network Conference in Busan. Retrieved December 25, 2005.
[35] "Percentage of Individuals using the Internet 2000-2010" (http://www.itu.int/ITU-D/ict/statistics/material/excel/2010/IndividualsUsingInternet_00-10.xls), International Telecommunications Union, accessed 16 April 2012
[36] http://www.usaid.gov/regions/afr/leland/chrono.htm
[37] "ICONS webpage" (http://icons.afrinic.net/). Icons.afrinic.net. Retrieved May 28, 2009.
[38] Nepad, Eassy partnership ends in divorce (http://www.fmtech.co.za/?p=209), (South African) Financial Times FMTech, 2007
[39] "APRICOT webpage" (http://www.apricot.net/). Apricot.net. May 4, 2009. Retrieved May 28, 2009.
[40] "A brief history of the Internet in China" (http://www.pcworld.idg.com.au/index.php/id;854351844;pp;2;fp;2;fpid;1). China celebrates 10 years of being connected to the Internet. Retrieved December 25, 2005.
[41] "Internet host count history" (https://www.isc.org/solutions/survey/history). Internet Systems Consortium. Retrieved May 16, 2012.
[42] "The World internet provider" (http://www.std.com/). Retrieved May 28, 2009.
[43] http://www.law.cornell.edu/uscode/42/1862(g).html
[44] OGC-00-33R Department of Commerce: Relationship with the Internet Corporation for Assigned Names and Numbers (http://www.gao.gov/new.items/og00033r.pdf). Government Accountability Office. July 7, 2000. p. 6.
[45] Even after the appropriations act was amended in 1992 to give NSF more flexibility with regard to commercial traffic, NSF never felt that it could entirely do away with the AUP and its restrictions on commercial traffic; see the response to Recommendation 5 in NSF's response to the Inspector General's review (an April 19, 1993 memo from Frederick Bernthal, Acting Director, to Linda Sundro, Inspector General, that is included at the end of Review of NSFNET (http://www.nsf.gov/pubs/stis1993/oig9301/oig9301.txt), Office of the Inspector General, National Science Foundation, March 23, 1993)
[46] Management of NSFNET (http://www.eric.ed.gov/ERICWebPortal/search/recordDetails.jsp?ERICExtSearch_SearchValue_0=ED350986&searchtype=keyword&ERICExtSearch_SearchType_0=no&_pageLabel=RecordDetails&accno=ED350986&_nfls=false), a transcript of the March 12, 1992 hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, U.S. House of Representatives, One Hundred Second Congress, Second Session, Hon. Rick Boucher, subcommittee chairman, presiding



[47] "Retiring the NSFNET Backbone Service: Chronicling the End of an Era" (http://www.merit.edu/networkresearch/projecthistory/nsfnet/nsfnet_article.php), Susan R. Harris, Ph.D., and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996
[48] "A Brief History of the Internet" (http://www.walthowe.com/navnet/history.html).
[49] NSF Solicitation 93-52 (http://w2.eff.org/Infrastructure/Govt_docs/nsf_nren.rfp) Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and the NREN(SM) Program, May 6, 1993
[50] NASA Extends the World Wide Web Out Into Space (http://www.nasa.gov/home/hqnews/2010/jan/HQ_M10-011_Hawaii221169.html). NASA media advisory M10-012, January 22, 2010. Archived (http://www.webcitation.org/5uaKVooin)
[51] NASA Successfully Tests First Deep Space Internet (http://www.nasa.gov/home/hqnews/2008/nov/HQ_08-298_Deep_space_internet.html). NASA media release 08-298, November 18, 2008. Archived (http://www.webcitation.org/5uaKpKCGz)
[52] Disruption Tolerant Networking for Space Operations (DTN). July 31, 2012 (http://www.nasa.gov/mission_pages/station/research/experiments/DTN.html)
[53] "Cerf: 2011 will be proving point for 'InterPlanetary Internet'" (http://www.webcitation.org/678nhEdYj). Network World interview with Vint Cerf. February 18, 2011. Archived from the original (http://www.networkworld.com/news/2011/021811-cerf-interplanetary-internet.html) on December 9, 2012.
[54] "Internet Architecture" (http://www.rfc-editor.org/rfc/rfc1958.txt). IAB Architectural Principles of the Internet. Retrieved April 10, 2012.
[55] "DDN NIC" (http://www.rfc-editor.org/rfc/rfc1174.txt). IAB Recommended Policy on Distributing Internet Identifier Assignment. Retrieved December 26, 2005.
[56] "GSI-Network Solutions" (http://www.rfc-editor.org/rfc/rfc1261.txt). TRANSITION OF NIC SERVICES. Retrieved December 26, 2005.
[57] "Thomas v. NSI, Civ. No. 97-2412 (TFH), Sec. I.A. (DCDC April 6, 1998)" (http://lw.bna.com/lw/19980428/972412.htm). Lw.bna.com. Retrieved May 28, 2009.
[58] "RFC 1366" (http://www.rfc-editor.org/rfc/rfc1366.txt). Guidelines for Management of IP Address Space. Retrieved April 10, 2012.
[59] "Development of the Regional Internet Registry System" (http://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_4-4/regional_internet_registries.html). Cisco. Retrieved April 10, 2012.
[60] "NIS Manager Award Announced" (http://www.ripe.net/ripe/maillists/archives/lir-wg/1992/msg00028.html). NSF Network information services awards. Retrieved December 25, 2005.
[61] "Internet Moves Toward Privatization" (http://www.nsf.gov/news/news_summ.jsp?cntn_id=102819). nsf.gov. 24 June 1997.
[62] "RFC 2860" (http://www.rfc-editor.org/rfc/rfc2860.txt). Memorandum of Understanding Concerning the Technical Work of the Internet Assigned Numbers Authority. Retrieved December 26, 2005.
[63] "ICANN Bylaws" (http://www.icann.org/en/about/governance/bylaws). Retrieved April 10, 2012.
[64] "The Tao of IETF: A Novice's Guide to the Internet Engineering Task Force", FYI 17 and RFC 4677, P. Hoffman and S. Harris, Internet Society, September 2006
[65] "A Mission Statement for the IETF", H. Alvestrand, Internet Society, BCP 95 and RFC 3935, October 2004
[66] "An IESG charter", H. Alvestrand, RFC 3710, Internet Society, February 2004
[67] "Charter of the Internet Architecture Board (IAB)", B. Carpenter, BCP 39 and RFC 2850, Internet Society, May 2000
[68] "IAB Thoughts on the Role of the Internet Research Task Force (IRTF)", S. Floyd, V. Paxson, A. Falk (eds), RFC 4440, Internet Society, March 2006
[69] "The RFC Series and RFC Editor", L. Daigle, RFC 4844, Internet Society, July 2007
[70] "Not All RFCs are Standards", C. Huitema, J. Postel, S. Crocker, RFC 1796, Internet Society, April 1995
[71] Internet Society (ISOC) - Introduction to ISOC (http://www.isoc.org/isoc/)
[72] Internet Society (ISOC) - ISOC's Standards Activities (http://www.isoc.org/standards/)
[73] USC/ICANN Transition Agreement (http://www.icann.org/en/general/usc-icann-transition-agreement.htm)
[74] ICANN cuts cord to US government, gets broader oversight: ICANN, which oversees the Internet's domain name system, is a private nonprofit that reports to the US Department of Commerce. Under a new agreement, that relationship will change, and ICANN's accountability goes global (http://arstechnica.com/tech-policy/news/2009/09/icann-cuts-cord-to-us-government-gets-broader-oversight.ars), Nate Anderson, September 30, 2009
[75] Rhoads, Christopher (October 2, 2009). "U.S. Eases Grip Over Web Body: Move Addresses Criticisms as Internet Usage Becomes More Global" (http://online.wsj.com/article/SB125432179022552705.html).
[76] Rabkin, Jeremy; Eisenach, Jeffrey (October 2, 2009). "The U.S. Abandons the Internet: Multilateral governance of the domain name system risks censorship and repression" (http://online.wsj.com/article/SB10001424052748704471504574446942665685208.html).
[77] Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. p. 67. ISBN 978-0-262-01459-5.
[78] Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. pp. 79–80. ISBN 978-0-262-01459-5.
[79] DeNardis, Laura, The Emerging Field of Internet Governance (http://ssrn.com/abstract=1678343) (September 17, 2010). Yale Information Society Project Working Paper Series.
[80] "The Risks Digest" (http://catless.ncl.ac.uk/Risks/20.25.html#subj3). Great moments in e-mail history. Retrieved April 27, 2006.



[81] "The History of Electronic Mail" (http://www.multicians.org/thvv/mail-history.html). Retrieved December 23, 2005.
[82] "The First Network Email" (http://openmap.bbn.com/~tomlinso/ray/firstemailframe.html). Retrieved December 23, 2005.
[83] http://www.sflovers.org/
[84] Bush, Vannevar (1945). As We May Think (http://www.theatlantic.com/doc/194507/bush). Retrieved May 28, 2009.
[85] Douglas Engelbart (1962). Augmenting Human Intellect: A Conceptual Framework (http://www.bootstrap.org/augdocs/friedewald030402/augmentinghumanintellect/ahi62index.html).
[86] "The Early World Wide Web at SLAC" (http://www.slac.stanford.edu/history/earlyweb/history.shtml). Documentation of the Early Web at SLAC. Retrieved November 25, 2005.
[87] "Millennium Technology Prize 2004 awarded to inventor of World Wide Web" (http://web.archive.org/web/20070830111145/http://www.technologyawards.org/index.php?m=2&s=1&id=16&sm=4). Millennium Technology Prize. Archived from the original (http://www.technologyawards.org/index.php?m=2&s=1&id=16&sm=4) on August 30, 2007. Retrieved May 25, 2008.
[88] "Mosaic Web Browser History NCSA, Marc Andreessen, Eric Bina" (http://www.livinginternet.com/w/wi_mosaic.htm). Livinginternet.com. Retrieved May 28, 2009.
[89] "NCSA Mosaic September 10, 1993 Demo" (http://www.totic.org/nscp/demodoc/demo.html). Totic.org. Retrieved May 28, 2009.
[90] "Vice President Al Gore's ENIAC Anniversary Speech" (http://www.cs.washington.edu/homes/lazowska/faculty.lecture/innovation/gore.html). Cs.washington.edu. February 14, 1996. Retrieved May 28, 2009.
[91] "UCLA Center for Communication Policy" (http://www.digitalcenter.org/webreport94/apph.htm). Digitalcenter.org. Retrieved May 28, 2009.
[92] Mirror of Official site map (http://undertow.arch.gatech.edu/homepages/virtualopera/cyber24/SITE/htm3/site.htm)
[93] Mirror of Official Site (http://undertow.arch.gatech.edu/homepages/virtualopera/cyber24/SITE/htm3/toc.htm?new)
[94] "24 Hours in Cyberspace (and more)" (http://www.baychi.org/calendar/19970909/). Baychi.org. Retrieved May 28, 2009.
[95] "The human face of cyberspace, painted in random images" (http://archive.southcoasttoday.com/daily/02-97/02-22-97/b02li072.htm). Archive.southcoasttoday.com. Retrieved May 28, 2009.
[96] Randall Stross (22 September 2009). Planet Google: One Company's Audacious Plan to Organize Everything We Know (http://books.google.com/books?id=xOk3EIUW9VgC). Simon and Schuster. ISBN 978-1-4165-4696-2. Retrieved 9 December 2012.
[97] "Microsoft's New Search at Bing.com Helps People Make Better Decisions: Decision Engine goes beyond search to help customers deal with information overload (Press Release)" (http://www.microsoft.com/presspass/press/2009/may09/05-28NewSearchPR.mspx?rss_fdn=Press Releases). Microsoft News Center. May 28, 2009. Retrieved May 29, 2009.
[98] "Microsoft and Yahoo seal web deal" (http://news.bbc.co.uk/1/hi/business/8174763.stm), BBC Mobile News, July 29, 2009.
[99] RFC 959: File Transfer Protocol (FTP) (http://www.ietf.org/rfc/rfc0959.txt), J. Postel and J. Reynolds, ISI, October 1985
[100] Reliable distributed systems: technologies, Web services, and applications - Kenneth P. Birman - Google Books (http://books.google.ca/books?id=KeIENcC2BPwC&pg=PA532&lpg=PA532&dq=napster+first#PPA532,M1). Books.google.ca. 2005-03-25. ISBN 9780387215099. Retrieved 2012-01-20.
[101] Menta, Richard (July 20, 2001). "Napster Clones Crush Napster. Take 6 out of the Top 10 Downloads on CNet" (http://www.mp3newswire.net/stories/2001/topclones.html). MP3 Newswire.
[102] Movie File-Sharing Booming: Study (http://www.srgnet.com/pdf/Movie File-Sharing Booming Release Jan 24 07 Final.pdf), Solutions Research Group, Toronto, 24 January 2006
[103] Menta, Richard (December 9, 1999). "RIAA Sues Music Startup Napster for $20 Billion" (http://www.mp3newswire.net/stories/napster.html). MP3 Newswire.
[104] "EFF: What Peer-to-Peer Developers Need to Know about Copyright Law" (http://w2.eff.org/IP/P2P/p2p_copyright_wp.php). W2.eff.org. Retrieved 2012-01-20.
[105] Kobie, Nicole (November 26, 2010). "Pirate Bay trio lose appeal against jail sentences" (http://www.pcpro.co.uk/news/363178/pirate-bay-trio-lose-appeal-against-jail-sentences). pcpro.co.uk (PCPRO). Retrieved November 26, 2010.
[106] "Poll: Young Say File Sharing OK" (http://www.cbsnews.com/stories/2003/09/18/opinion/polls/main573990.shtml), Bootie Cosgrove-Mather, CBS News, 11 February 2009
[107] Green, Stuart P. (29 March 2012). "OP-ED CONTRIBUTOR; When Stealing Isn't Stealing" (http://www.nytimes.com/2012/03/29/opinion/theft-law-in-the-21st-century.html). The New York Times: p. 27.
[108] Nasdaq peak of 5,048.62 (http://bigcharts.marketwatch.com/historical/default.asp?detect=1&symbol=NASDAQ&close_date=3/10/00&x=34&y=12)
[109] Hillebrand, Friedhelm, ed. (2002). GSM and UMTS, The Creation of Global Mobile Communications. John Wiley & Sons. ISBN 0-470-84322-5.
[110] "Internet users per 100 inhabitants 2001-2011" (http://www.itu.int/ITU-D/ict/statistics/material/excel/2011/Internet_users_01-11.xls), International Telecommunications Union, Geneva, accessed 4 April 2012
[111] "Brazil, Russia, India and China to Lead Internet Growth Through 2011" (http://clickz.com/showPage.html?page=3626274). Clickz.com. Retrieved May 28, 2009.
[112] "An Internet Pioneer Ponders the Next Revolution" (http://news.bbc.co.uk/1/hi/technology/6959034.stm). Illuminating the net's Dark Ages. August 23, 2007. Retrieved February 26, 2008.


Further reading
Abbate, Janet. Inventing the Internet (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=4633), Cambridge: MIT Press, 1999.
Bemer, Bob, "A History of Source Concepts for the Internet/Web" (http://web.archive.org/web/20041216124504/www.bobbemer.com/CONCEPTS.HTM)
Campbell-Kelly, Martin; Aspray, William. Computer: A History of the Information Machine. New York: BasicBooks, 1996.
Clark, D. (1988). "The Design Philosophy of the DARPA Internet Protocols" (http://www.cs.princeton.edu/~jrex/teaching/spring2005/reading/clark88.pdf). SIGCOMM '88 Symposium proceedings on Communications architectures and protocols (ACM): 106–114. doi:10.1145/52324.52336. ISBN 0897912799. Retrieved 2011-10-16.
Graham, Ian S. The HTML Sourcebook: The Complete Guide to HTML. New York: John Wiley and Sons, 1995.
Krol, Ed. Hitchhiker's Guide to the Internet, 1987.
Krol, Ed. Whole Internet User's Guide and Catalog. O'Reilly & Associates, 1992.
Scientific American Special Issue on Communications, Computers, and Networks, September 1991.

External links
Thomas Greene, Larry James Landweber, George Strawn (2003). A Brief History of NSF and the Internet (http://www.nsf.gov/od/lpa/news/03/fsnsf_internet.htm). National Science Foundation. Retrieved May 28, 2009.
Robert H Zakon. "Hobbes' Internet Timeline v10.1" (http://www.zakon.org/robert/internet/timeline/). Retrieved July 23, 2010.
"Principal Figures in the Development of the Internet and the World Wide Web" (http://www.unc.edu/depts/jomc/academics/dri/pioneers2d.html). University of North Carolina. Retrieved July 3, 2006.
"Internet History Timeline" (http://www.computerhistory.org/exhibits/internet_history/). Computer History Museum. Retrieved November 25, 2005.
Marcus Kazmierczak (September 24, 1997). "Internet History" (http://web.archive.org/web/20051031200142/http://www.mkaz.com/ebeab/history/). Archived from the original (http://www.mkaz.com/ebeab/history/) on October 31, 2005. Retrieved November 25, 2005.
Harri K. Salminen. "History of the Internet" (http://www.nic.funet.fi/index/FUNET/history/internet/en/etusivu-en.html). Heureka Science Center, Finland. Retrieved June 11, 2008.
"Histories of the Internet" (http://www.isoc.org/internet/history/). Internet Society. Retrieved December 1, 2007.
"Living Internet" (http://www.livinginternet.com/i/ii.htm). Retrieved January 1, 2009. Internet History with input from many of the people who helped invent the Internet (http://www.livinginternet.com/tcomments.htm)
"Voice of America: Overhearing the Internet" (http://www.eff.org/Net_culture/overhearing_the_internet.article.txt), Robert Wright, The New Republic, September 13, 1993
"How the Internet Came to Be" (http://www.netvalley.com/archives/mirrors/cerf-how-inet.html), by Vinton Cerf, 1993
"Cybertelecom :: Internet History" (http://www.cybertelecom.org/notes/internet_history.htm), focusing on the governmental, legal, and policy history of the Internet
"History of the Internet" (http://vimeo.com/2696386?pg=embed&sec=2696386), an animated documentary from 2009 explaining the inventions from time-sharing to filesharing, from Arpanet to Internet
"The Roads and Crossroads of Internet History" (http://www.netvalley.com/intval1.html), by Gregory R. Gromov
The History of the Internet According to Itself: A Synthesis of Online Internet Histories Available at the Turn of the Century (http://members.cox.net/opfer/Internet.htm), Steven E. Opfer, 1999

"Fool Us Once Shame on You—Fool Us Twice Shame on Us: What We Can Learn from the Privatizations of the Internet Backbone Network and the Domain Name System" (http://digitalcommons.law.wustl.edu/lawreview/vol79/iss1/2), Jay P. Kesan and Rajiv C. Shah, Washington University Law Review, Volume 79, Issue 1 (2001)
"How It All Started" (http://www.w3.org/2004/Talks/w3c10-HowItAllStarted/) (slides), Tim Berners-Lee, W3C, December 2004
"A Little History of the World Wide Web: from 1945 to 1995" (http://www.w3.org/History.html), Dan Connolly, W3C, 2000
"The World Wide Web: Past, Present and Future" (http://www.w3.org/People/Berners-Lee/1996/ppf.html), Tim Berners-Lee, August 1996



World Wide Web


The Web's logo, designed by Robert Cailliau

Inventor: Tim Berners-Lee
Company: CERN
Availability: Worldwide[1]
The World Wide Web (abbreviated as WWW or W3,[2] and commonly known as the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks. Using concepts from his earlier hypertext systems like ENQUIRE, the British engineer and computer scientist Sir Tim Berners-Lee, at that time an employee of CERN and now Director of the World Wide Web Consortium (W3C), wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] At CERN, a European research organisation near Geneva situated on Swiss and French soil,[3] Berners-Lee and the Belgian computer scientist Robert Cailliau proposed in 1990 to use hypertext "to link and access information of various kinds as a web of nodes in which the user can browse at will",[4] and they publicly introduced the project in December of the same year.[5]

History
In the May 1970 issue of Popular Science magazine, Arthur C. Clarke predicted that satellites would someday "bring the accumulated knowledge of the world to your fingertips" using a console that would combine the functionality of the Xerox, telephone, television and a small computer, allowing data transfer and video conferencing around the globe.[6] In March 1989, Tim Berners-Lee wrote a proposal that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system.[7] With help from Robert Cailliau, he published a more formal proposal (on 12 November 1990) to build a "Hypertext project" called

The NeXT Computer used by Berners-Lee. The handwritten label declares, "This machine is a server. DO NOT POWER IT DOWN!!"

"WorldWideWeb" (one word, also "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client-server architecture.[4] This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available." While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, blogs, Web 2.0 and RSS/Atom.[8] The proposal was modeled after the Dynatext SGML reader by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was technically advanced and was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web:[9] the first web browser (which was a web editor as well); the first web server; and the first web pages,[10] which described the project itself. On 6 August 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup.[11] This date also marked the debut of the Web as a publicly available service on the Internet.

The CERN datacenter in 2010, housing some WWW servers

Many news media have reported that the first photo on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro; Gennaro has disclaimed this story, writing that media were "totally distorting our words for the sake of cheap sensationalism."[12] The first server outside Europe was set up at the Stanford Linear Accelerator Center (SLAC) in Palo Alto, California, to host the SPIRES-HEP database. Accounts differ substantially as to the date of this event. The World Wide Web Consortium says December 1992,[13] whereas SLAC itself claims 1991.[14][15] This is supported by a W3C document titled A Little History of the World Wide Web.[16] The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex", which was described in the 1945 essay "As We May Think".[17]


Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested to members of both technical communities that a marriage between the two technologies was possible, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed three essential technologies:
1. a system of globally unique identifiers for resources on the Web and elsewhere, the Universal Document Identifier (UDI), later known as Uniform Resource Locator (URL) and Uniform Resource Identifier (URI);
2. the publishing language HyperText Markup Language (HTML);
3. the Hypertext Transfer Protocol (HTTP).[18]
The World Wide Web had a number of differences from other hypertext systems then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop web servers and clients independently and to add extensions without licensing restrictions.
On 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due.[19] Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular web browser was ViolaWWW for Unix and the X Window System.
Scholars generally agree that a turning point for the World Wide Web began with the introduction[20] of the Mosaic web browser[21] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative and the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore.[22] Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web's popularity was less than that of older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.
[Image: Robert Cailliau, Jean-François Abramatic of IBM, and Tim Berners-Lee at the 10th anniversary of the World Wide Web Consortium]
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994.
It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet; a year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo; and in 1996, a third continental site was created in Japan at Keio University. By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of which are the precursors or inspiration for today's most popular services. Connected by the existing Internet, other websites were created around the world, adding international standards for domain names and HTML. Since then, Berners-Lee has played an active role in guiding the development of web standards (such as the markup languages in which web pages are composed), and in recent years has advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet.[23] Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet.[24] The Web is a collection of documents and both client and server software using Internet protocols such as TCP/IP and HTTP. Tim Berners-Lee was knighted in 2004 by Queen Elizabeth II for his contribution to the World Wide Web.


Function
The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global system of interconnected computer networks. In contrast, the Web is one of the services that runs on the Internet. It is a collection of text documents and other resources, linked by hyperlinks and URLs, usually accessed by web browsers from web servers. In short, the Web can be thought of as an application "running" on the Internet.[25] Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it. As an example, consider accessing a page with the URL http://example.org/wiki/World_Wide_Web.

First, the browser resolves the server-name portion of the URL (example.org) into an Internet Protocol address using the globally distributed database known as the Domain Name System (DNS); this lookup returns an IP address such as 208.80.152.2. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It makes the request to a particular application port in the underlying Internet Protocol Suite so that the computer receiving the request can distinguish an HTTP request from other network protocols it may be servicing, such as e-mail delivery; the HTTP protocol normally uses port 80. The content of the HTTP request can be as simple as two lines of text:

GET /wiki/World_Wide_Web HTTP/1.1
Host: example.org

The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the web server can fulfill the request, it sends an HTTP response back to the browser indicating success, which can be as simple as

HTTP/1.0 200 OK
Content-Type: text/html; charset=UTF-8

followed by the content of the requested page. The Hypertext Markup Language for a basic web page looks like

<html>
  <head>
    <title>Example.org The World Wide Web</title>
  </head>
  <body>
    <p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
  </body>
</html>

The web browser parses the HTML, interpreting the markup (<title>, <p> for paragraph, and such) that surrounds the words in order to draw the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behavior, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
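The request line and Host header shown above can be constructed mechanically from a URL. A minimal Python sketch, using the article's illustrative example.org URL (this is not a complete HTTP client; it only builds the text a browser would send):

```python
from urllib.parse import urlsplit

def build_get_request(url: str) -> str:
    """Build the plain-text HTTP/1.1 GET request a browser would send."""
    parts = urlsplit(url)
    path = parts.path or "/"          # an empty path means the site root
    if parts.query:
        path += "?" + parts.query
    # HTTP header lines end with CRLF; a blank line terminates the headers.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {parts.hostname}\r\n"
            f"\r\n")

print(build_get_request("http://example.org/wiki/World_Wide_Web"))
```

Sending these bytes over a TCP connection to port 80 of the resolved IP address is exactly the exchange described in the text.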


Linking
Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like

<a href="http://example.org/wiki/Main_Page">Example.org, a free encyclopedia</a>

Such a collection of useful, related resources, interconnected via hypertext links, is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.[4] The hyperlink structure of the WWW is described by the webgraph: the nodes of the webgraph correspond to the web pages (or URLs), and the directed edges between them to the hyperlinks.
[Image: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks]
Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive web sites. The Internet Archive, active since 1996, is the best known of such efforts.
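The webgraph described above can be modelled as a simple adjacency mapping. The page names below are hypothetical, and the dead-link check is only a crude stand-in for real link-rot detection:

```python
# Nodes are pages (URLs); directed edges are the hyperlinks between them.
webgraph = {
    "a.example/index": ["a.example/about", "b.example/"],
    "a.example/about": ["a.example/index"],
    "b.example/":      ["c.example/moved"],   # target no longer exists
}

def dead_links(graph):
    """Return (source, target) pairs whose target is not a known page."""
    known = set(graph)
    return [(src, dst)
            for src, targets in graph.items()
            for dst in targets
            if dst not in known]

print(dead_links(webgraph))  # [('b.example/', 'c.example/moved')]
```

An archiving service such as the Internet Archive effectively preserves snapshots of the target pages so that such dangling edges can still be followed.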

Dynamic updates of web pages


JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages.[26] The standardised version is ECMAScript.[26] To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page and can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is being retrieved. Web pages may also regularly poll the server to check whether new information is available.[27]

WWW prefix
Many domain names used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. The hostname for a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) subdomain names, as in www.example.com. The use of 'www' as a subdomain name is not required by any technical or policy standard and many web sites do not use it; indeed, the first ever web server was called nxoc01.cern.ch.[28] According to Paolo Palazzi,[29] who worked at CERN along with Tim Berners-Lee, the popular use of the 'www' subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however, the DNS records were never switched, and the practice of prepending 'www' to an institution's website domain name was subsequently copied. Many established websites still use 'www', or they invent other subdomain names such as 'www2', 'secure', etc. Many such web servers are set up so that both the domain root (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root.
When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering 'microsoft' may be transformed to http://www.microsoft.com/ and 'openoffice' to http://www.openoffice.org. This feature started appearing in early versions of Mozilla Firefox, when it still had the working title 'Firebird' in early 2003, following an earlier practice in browsers such as Lynx.[30] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.[31]
In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his "Podgrammes" series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for". In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "myriad-dimensional net",[32] a translation that reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens.[33]
Use of the www prefix is declining as Web 2.0 web applications seek to brand their domain names and make them easily pronounceable.[34] As the mobile web grows in popularity, services like Gmail.com, MySpace.com, Facebook.com, Bebo.com and Twitter.com are most often discussed without adding www to the domain (or, indeed, the .com).
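The address-bar completion described above can be sketched as a small function. This is a deliberately simplified assumption (real browsers try several suffixes and fall back to a web search), using the 'microsoft' example from the text:

```python
def complete_address(typed: str) -> str:
    """Guess a full URL from an incomplete address-bar entry."""
    if "." not in typed:
        typed = f"www.{typed}.com"   # simplest guess: www prefix + .com suffix
    if "://" not in typed:
        typed = f"http://{typed}"    # browsers also prepend the scheme
    return typed

print(complete_address("microsoft"))    # http://www.microsoft.com
print(complete_address("example.org"))  # http://example.org
```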


Scheme specifiers: http and https


The scheme specifier http:// or https:// at the start of a Web URI refers to Hypertext Transfer Protocol or HTTP Secure, respectively. Unlike www, which has no specific purpose, these specify the communication protocol to be used for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when confidential information such as passwords or banking details is exchanged over the public Internet. Web browsers usually prepend http:// to an address, too, if it is omitted.

Web servers
The primary function of a web server is to deliver web pages on request to clients. This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.

Privacy
Every time a web page is requested from a web server, the server can identify, and usually logs, the IP address from which the request arrived. Equally, unless set not to do so, most web browsers record the web pages that have been requested and viewed in a history feature, and usually cache much of the content locally. Unless HTTPS encryption is used, web requests and responses travel in plain text across the Internet and can be viewed, recorded and cached by intermediate systems. When a web page asks for, and the user supplies, personally identifiable information such as their real name, address, e-mail address, etc., a connection can be made between the current web traffic and that individual. If the website uses HTTP cookies, username and password authentication, or other tracking techniques, then it will be able to relate other web visits, before and after, to the identifiable information provided. In this way it is possible for a web-based organisation to develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping

interests, their profession, and other aspects of their demographic profile. These profiles are of obvious potential interest to marketers, advertisers and others. Depending on the website's terms and conditions and the local laws that apply, information from these profiles may be sold, shared, or passed to other organisations without the user being informed. For many ordinary people, this means little more than some unexpected e-mails in their in-box, or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counter-terrorism and espionage agencies can also identify, target and track individuals based on what appear to be their interests or proclivities on the web.
Social networking sites make a point of trying to get users to truthfully expose their real names, interests and locations. This makes the social networking experience more realistic and therefore engaging for all their users. On the other hand, photographs uploaded and unguarded statements made will be identified to the individual, who may regret some decisions to publish these data. Employers, schools, parents and other relatives may be influenced by aspects of social networking profiles that the posting individual did not intend for these audiences. On-line bullies may make use of personal information to harass or stalk users. Modern social networking websites allow fine-grained control of the privacy settings for each individual posting, but these can be complex and not easy to find or use, especially for beginners.[35] Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an on-line profile.
With modern and potential facial recognition technology, it may then be possible to relate that face with other, previously anonymous, images, events and scenarios that have been imaged elsewhere. Because of image caching, mirroring and straightforward copying, it is difficult to imagine that an image, once published onto the World Wide Web, can ever actually or totally be removed.


Intellectual property
The intellectual property rights for any creative work initially rest with its creator. Web users who want to publish their work on the World Wide Web, however, need to be aware of the details of how they do it. If artwork, photographs, writings, poems, or technical innovations are published by their creator on a privately owned web server, then the creator may choose the copyright and other conditions freely. This is unusual, though; more commonly work is uploaded to web sites and servers that are owned by other organizations. The terms and conditions of the site or service provider determine to what extent the original owner automatically signs over rights to their work by the choice of destination and by the act of uploading. Many users of the web erroneously assume that everything they may find online is freely available to them as if it were in the public domain. This is almost never the case, unless the web site publishing the work clearly states that it is. On the other hand, content owners are aware of this widespread belief, and expect that sooner or later almost everything that is published will probably be used in some capacity somewhere without their permission. Many publishers therefore embed visible or invisible digital watermarks in their media files, sometimes charging users to receive unmarked copies for legitimate use. Digital rights management includes forms of access control technology that further limit the use of digital content even after it has been bought or downloaded.

Security
The Web has become criminals' preferred pathway for spreading malware. Cybercrime carried out on the Web can include identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns,[36][37] and, as measured by Google, about one in ten web pages may contain malicious code.[38] Most Web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia.[39] The most common of all malware threats is SQL injection attacks against websites.[40] Through HTML and URIs the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript[41] and were exacerbated to some degree by Web 2.0 and Ajax web design

that favors the use of scripts.[42] Today, by one estimate, 70% of all websites are open to XSS attacks on their users.[43] Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations,[44] and some, like Finjan, have recommended active real-time inspection of code and all content regardless of its source.[45] Some have argued that for enterprises to see security as a business opportunity rather than a cost center,[46] "ubiquitous, always-on digital rights management" enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks.[47] Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.[48]
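A common defence against the cross-site scripting attacks mentioned above is to escape untrusted input before embedding it in HTML. A minimal Python sketch (render_comment is a hypothetical helper, not part of any real framework):

```python
import html

def render_comment(user_input: str) -> str:
    """Embed untrusted text in a page; escaping neutralises injected markup."""
    return f"<p>{html.escape(user_input)}</p>"

# An injected script tag is rendered as inert text, not executed:
print(render_comment('<script>alert("XSS")</script>'))
```

Real applications layer further defences on top of output escaping, such as input validation and Content Security Policy headers.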


Standards
Many formal standards and other technical specifications and software define the operation of different aspects of the World Wide Web, the Internet, and computer information exchange. Many of the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and other organizations.
Usually, when web standards are discussed, the following publications are seen as foundational:
- Recommendations for markup languages, especially HTML and XHTML, from the W3C. These define the structure and interpretation of hypertext documents.
- Recommendations for stylesheets, especially CSS, from the W3C.
- Standards for ECMAScript (usually in the form of JavaScript), from Ecma International.
- Recommendations for the Document Object Model, from the W3C.
Additional publications provide definitions of other essential technologies for the World Wide Web, including, but not limited to, the following:
- Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Internet, such as hypertext documents and images. URIs, often called URLs, are defined by the IETF's RFC 3986 / STD 66: Uniform Resource Identifier (URI): Generic Syntax, as well as its predecessors and numerous URI scheme-defining RFCs;
- HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1 and RFC 2617: HTTP Authentication, which specify how the browser and server authenticate each other.
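The generic URI syntax defined by RFC 3986 splits an identifier into scheme, authority, path, query and fragment components. Python's standard urllib.parse follows that RFC (the URL below is purely illustrative):

```python
from urllib.parse import urlsplit

# scheme://authority/path?query#fragment
parts = urlsplit("http://www.example.com/wiki/Main_Page?action=view#History")
print(parts.scheme)    # http
print(parts.netloc)    # www.example.com
print(parts.path)      # /wiki/Main_Page
print(parts.query)     # action=view
print(parts.fragment)  # History
```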

Accessibility
There are methods available for accessing the web in alternative media and formats to enable use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, neurological, or some combination thereof. Accessibility features also help people with temporary disabilities, such as a broken arm, and aging users as their abilities change.[49] The Web is used for receiving information as well as providing information and interacting with society. The World Wide Web Consortium claims it essential that the Web be accessible in order to provide equal access and equal opportunity to people with disabilities.[50] Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."[49] Many countries regulate web accessibility as a requirement for websites.[51] International cooperation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.[49][52]


Internationalization
The W3C Internationalization Activity works to ensure that web technology functions in all languages, scripts, and cultures.[53] Beginning in 2004 or 2005, Unicode gained ground and eventually, in December 2007, surpassed both ASCII and Western European as the Web's most frequently used character encoding.[54] Originally RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters: any character in the Universal Character Set. Now a resource can be identified by IRI in any language.[55]
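The mapping from an IRI to a plain-ASCII URI works by UTF-8-encoding each non-ASCII character and percent-escaping the resulting bytes. A sketch with Python's standard library (the path is a made-up example):

```python
from urllib.parse import quote, unquote

iri_path = "/wiki/Ĉapelo"        # contains a non-ASCII character, U+0108
uri_path = quote(iri_path)       # UTF-8 encode, then percent-escape
print(uri_path)                  # /wiki/%C4%88apelo
print(unquote(uri_path))         # round-trips back to the IRI form
```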

Statistics
Between 2005 and 2010, the number of Web users doubled, and was expected to surpass two billion in 2010.[56] Early studies in 1998 and 1999 estimating the size of the web using capture/recapture methods showed that much of the web was not indexed by search engines and that the web was much larger than expected.[57][58] According to a 2001 study, there were over 550 billion documents on the Web, mostly in the invisible Web, or Deep Web.[59] A 2002 survey of 2,024 million Web pages[60] determined that by far the most Web content was in English: 56.4%; next were pages in German (7.7%), French (5.6%), and Japanese (4.9%). A more recent study, which used Web searches in 75 different languages to sample the Web, determined that there were over 11.5 billion Web pages in the publicly indexable Web as of the end of January 2005.[61] As of March 2009, the indexable web contained at least 25.21 billion pages.[62] On 25 July 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered one trillion unique URLs.[63] As of May 2009, over 109.5 million domains were in operation,[64] of which 74% were commercial or other sites operating in the .com generic top-level domain.[64] Statistics measuring a website's popularity are usually based either on the number of page views or on associated server 'hits' (file requests) that it receives.

Speed issues
Frustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to a pejorative name for the World Wide Web: the World Wide Wait.[65] Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other solutions to reduce congestion can be found at W3C.[66] Guidelines for Web response times are:[67]
- 0.1 second (one tenth of a second): ideal response time. The user does not sense any interruption.
- 1 second: highest acceptable response time. Download times above 1 second interrupt the user experience.
- 10 seconds: unacceptable response time. The user experience is interrupted and the user is likely to leave the site or system.

Caching
If a user revisits a Web page after only a short interval, the page data may not need to be re-obtained from the source Web server. Almost all web browsers cache recently obtained data, usually on the local hard drive. HTTP requests sent by a browser will usually ask only for data that has changed since the last download. If the locally cached data are still current, they will be reused. Caching helps reduce the amount of Web traffic on the Internet. The decision about expiration is made independently for each downloaded file, whether image, stylesheet, JavaScript, HTML, or other resource. Thus even on sites with highly dynamic content, many of the basic resources need to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. This helps reduce page download times and lowers demands on the Web server. There are other components of the Internet that can cache Web content. Corporate and academic firewalls often cache Web resources requested by one user for the benefit of all. (See also caching proxy server.) Some search

engines also store cached content from websites. Apart from the facilities built into Web servers that can determine when files have been updated and so need to be re-sent, designers of dynamically generated Web pages can control the HTTP headers sent back to requesting users, so that transient or sensitive pages are not cached. Internet banking and news sites frequently use this facility. Data requested with an HTTP 'GET' is likely to be cached if other conditions are met; data obtained in response to a 'POST' is assumed to depend on the data that was POSTed and so is not cached.
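The per-file expiration decision described above can be illustrated with a toy freshness check driven by the Cache-Control: max-age header. This is a sketch only; real HTTP caching, as specified in RFC 2616, involves many more headers and rules:

```python
def is_fresh(headers: dict, stored_at: float, now: float) -> bool:
    """True if a cached response may be reused without contacting the server."""
    cache_control = headers.get("Cache-Control", "")
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return (now - stored_at) < max_age
    return False  # no explicit lifetime: revalidate (e.g. If-Modified-Since)

print(is_fresh({"Cache-Control": "max-age=3600"}, stored_at=0.0, now=60.0))  # True
print(is_fresh({"Cache-Control": "max-age=30"},   stored_at=0.0, now=60.0))  # False
```

A "not fresh" result does not force a full re-download: the browser can send a conditional request and reuse the cached copy if the server answers 304 Not Modified.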


References
[1] Quittner, Joshua (29 March 1999). "Tim Berners Lee Time 100 People of the Century" (http:/ / www. time. com/ time/ magazine/ article/ 0,9171,990627,00. html). Time Magazine. . Retrieved 17 May 2010. "He wove the World Wide Web and created a mass medium for the 21st century. The World Wide Web is Berners-Lee's alone. He designed it. He loosed it on the world. And he more than anyone else has fought to keep it open, nonproprietary and free. ." [2] "World Wide Web Consortium" (http:/ / www. w3. org/ ). . "The World Wide Web Consortium (W3C)..." [3] Le Web a t invent... en France ! Le Point (http:/ / www. lepoint. fr/ technologie/ le-web-a-ete-invente-en-france-31-01-2012-1425943_58. php) [4] "Berners-Lee, Tim; Cailliau, Robert (12 November 1990). "WorldWideWeb: Proposal for a hypertexts Project" (http:/ / w3. org/ Proposal. html). . Retrieved 27 July 2009. [5] Berners-Lee, Tim. "Pre-W3C Web and Internet Background" (http:/ / w3. org/ 2004/ Talks/ w3c10-HowItAllStarted/ ?n=15). World Wide Web Consortium. . Retrieved 21 April 2009. [6] von Braun, Wernher (May 1970). "TV Broadcast Satellite" (http:/ / www. popsci. com/ archive-viewer?id=8QAAAAAAMBAJ& pg=66& query=a+ c+ clarke). Popular Science: 6566. . Retrieved 12 January 2011. [7] Berners-Lee, Tim (March 1989). "Information Management: A Proposal" (http:/ / w3. org/ History/ 1989/ proposal. html). W3C. . Retrieved 27 July 2009. [8] "Tim Berners-Lee's original World Wide Web browser" (http:/ / info. cern. ch/ NextBrowser. html). . "With recent phenomena like blogs and wikis, the web is beginning to develop the kind of collaborative nature that its inventor envisaged from the start." [9] "Tim Berners-Lee: client" (http:/ / w3. org/ People/ Berners-Lee/ WorldWideWeb). W3.org. . Retrieved 27 July 2009. [10] "First Web pages" (http:/ / w3. org/ History/ 19921103-hypertext/ hypertext/ WWW/ TheProject. html). W3.org. . Retrieved 27 July 2009. [11] "Short summary of the World Wide Web project" (http:/ / groups. google. 

Further reading
Niels Brügger, ed. Web History (2010), 362 pages; historical perspective on the World Wide Web, including issues of culture, content, and preservation.
Fielding, R.; Gettys, J.; Mogul, J.; Frystyk, H.; Masinter, L.; Leach, P.; Berners-Lee, T. (June 1999). Hypertext Transfer Protocol -- HTTP/1.1 (ftp://ftp.isi.edu/in-notes/rfc2616.txt). Request For Comments 2616. Information Sciences Institute.
Berners-Lee, Tim; Bray, Tim; Connolly, Dan; Cotton, Paul; Fielding, Roy; Jeckle, Mario; Lilley, Chris; Mendelsohn, Noah; Orchard, David; Walsh, Norman; Williams, Stuart (15 December 2004). Architecture of the World Wide Web, Volume One (http://www.w3.org/TR/webarch/). Version 20041215. W3C.
Polo, Luciano (2003). "World Wide Web Technology Architecture: A Conceptual Analysis" (http://newdevices.com/publicaciones/www/). New Devices. Retrieved 31 July 2005.
Skau, H.O. (March 1990). "The World Wide Web and Health Information" (http://newdevices.com/publicaciones/www/). New Devices.

External links
Early archive of the first Web site (http://www.w3.org/History/19921103-hypertext/hypertext/WWW/)
Internet Statistics: Growth and Usage of the Web and the Internet (http://www.mit.edu/people/mkgray/net/)
Living Internet (http://www.livinginternet.com/w/w.htm), a comprehensive history of the Internet, including the World Wide Web.
Web Design and Development (http://www.dmoz.org/Computers/Internet/Web_Design_and_Development/) at the Open Directory Project
World Wide Web Consortium (W3C) (http://www.w3.org/)
W3C Recommendations Reduce "World Wide Wait" (http://www.w3.org/Protocols/NL-PerfNote.html)
World Wide Web Size (http://www.worldwidewebsize.com/), daily estimated size of the World Wide Web.
Antonio A. Casilli, Some Elements for a Sociology of Online Interactions (http://cle.ens-lyon.fr/40528325/0/fiche___pagelibre/)
The Erdős Webgraph Server (http://web-graph.org/) offers a weekly updated graph representation of a constantly increasing fraction of the WWW.


History of the World Wide Web


The World Wide Web ("WWW" or simply the "Web") is a global information medium which users can read and write via computers connected to the Internet. The term is often mistakenly used as a synonym for the Internet itself, but the Web is a service that operates over the Internet, just as e-mail also does. The history of the Internet dates back significantly further than that of the World Wide Web. The hypertext portion of the Web in particular has an intricate intellectual history; notable influences and precursors include Vannevar Bush's Memex,[1] IBM's Generalized Markup Language,[2] and Ted Nelson's Project Xanadu.[1]

Today, the Web and the Internet allow connectivity from virtually everywhere on Earth, even from ships at sea and in outer space.

The concept of a home-based global information system goes at least as far back as "A Logic Named Joe", a 1946 short story by Murray Leinster, in which computer terminals, called "logics," were in every home. Although the computer system in the story is centralized, the story captures some of the feeling of the ubiquitous information explosion driven by the Web.

1979–1991: Development of the World Wide Web


"In August, 1984 I wrote a proposal to the SW Group Leader, Les Robertson, for the establishment of a pilot project to install and evaluate TCP/IP protocols on some key non-Unix machines at CERN ... By 1990 CERN had become the largest Internet site in Europe and this fact... positively in Europe and elsewhere... A key result of all these happenings was that by 1989 CERN's Internet facility was ready to become the medium within which Tim Berners-Lee would create the World Wide Web with a truly visionary idea..." Ben Segal. Short History of Internet Protocols at CERN, April 1995 [3] In 1980, Tim Berners-Lee, an independent contractor at the European Organization for Nuclear Research (CERN), Switzerland, built ENQUIRE, as a personal database of people and software models, but also as a way to play with hypertext; each new page of information in ENQUIRE had to be linked to an existing page.[1] In 1984 Berners-Lee returned to CERN, and considered its problems of information presentation: physicists from around the world needed to share data, and with no common machines and no common presentation software. He wrote a proposal in The NeXTcube used by Tim Berners-Lee at CERN March 1989 for "a large hypertext database with typed became the first Web server. links", but it generated little interest. His boss, Mike Sendall, encouraged Berners-Lee to begin implementing his system on a newly acquired NeXT workstation.[4] He considered several names, including Information

History of the World Wide Web Mesh,[5] The Information Mine (turned down as it abbreviates to TIM, the WWW's creator's name) or Mine of Information (turned down because it abbreviates to MOI which is "Me" in French), but settled on World Wide Web.[6] He found an enthusiastic collaborator in Robert Cailliau, who rewrote the proposal (published on November 12, 1990) and sought resources within CERN. Berners-Lee and Cailliau pitched their ideas to the European Conference on Hypertext Technology in September 1990, but found no vendors who could appreciate their vision of marrying hypertext with the Internet. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9 [7], the HyperText Markup Language (HTML), the first Robert Cailliau, Jean-Franois Abramatic and Tim Berners-Lee at Web browser (named WorldWideWeb, which the 10th anniversary of the WWW Consortium. was also a Web editor), the first HTTP server software (later known as CERN httpd), the first web server (http:/ / info. cern. ch), and the first Web pages that described the project itself. The browser could access Usenet newsgroups and FTP files as well. However, it could run only on the NeXT; Nicola Pellow therefore created a simple text browser that could run on almost any computer called the Line Mode Browser.[8] To encourage use within CERN, Bernd Pollermann put the CERN telephone directory on the web previously users had to log onto the mainframe in order to look up phone numbers.[8] According to Tim Berners-Lee, the Web was mainly invented in the Building 31 at CERN ( 461357N 60242E ) but also at home, in the two houses he lived in during that time (one in France, one in Switzerland).[9] On August 6, 1991,[10] Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup.[11] This date also marked the debut of the Web as a publicly available service on the Internet. 
"The WorldWideWeb (WWW) project aims to allow all links to be made to any information anywhere. [...] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!" (from Tim Berners-Lee's first message)

Paul Kunz from the Stanford Linear Accelerator Center visited CERN in September 1991, and was captivated by the Web. He brought the NeXT software back to SLAC, where librarian Louise Addis adapted it for the VM/CMS operating system on the IBM mainframe as a way to display SLAC's catalog of online documents;[8] this was the first web server outside of Europe and the first in North America.[12] An early CERN-related contribution to the Web was the parody band Les Horribles Cernettes, whose promotional image is believed to be among the Web's first five pictures.[13]
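The HTTP 0.9 protocol behind these first servers was strikingly minimal: the client sent a single "GET <path>" line, and the server replied with the raw HTML document itself (no status line, no headers) and then closed the connection. The sketch below illustrates only that request/response shape, using a throwaway localhost server; it is not CERN httpd's actual code, and the page content is invented for illustration.

```python
import socket
import threading

# A toy HTTP/0.9-style exchange on localhost. In HTTP/0.9 the request is a
# single "GET <path>" line and the response is the raw HTML document with no
# status line or headers; the server then closes the connection.

def serve_once(server_sock):
    conn, _ = server_sock.accept()
    request = conn.recv(1024).decode("ascii")
    # An HTTP/0.9 request is just the method and path, e.g. "GET /index.html"
    if request.startswith("GET "):
        conn.sendall(b"<html><body>Hello from a toy HTTP/0.9 server</body></html>")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html\r\n")
response = client.recv(4096).decode("ascii")
client.close()
t.join()
server.close()

print(response)  # the raw HTML document, nothing else
```

Everything that later versions of HTTP added (status codes, headers, content types, persistent connections) is absent by design.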


1992–1995: Growth of the WWW


In keeping with its birth at CERN, early adopters of the World Wide Web were primarily university-based scientific departments or physics laboratories such as Fermilab and SLAC. Early websites intermingled links for both the HTTP web protocol and the then-popular Gopher protocol, which provided access to content through hypertext menus presented as a file system rather than through HTML files. Early Web users would navigate either by bookmarking popular directory pages, such as Berners-Lee's first site at http://info.cern.ch/, or by consulting updated lists such as the NCSA "What's New" page. Some sites were also indexed by WAIS, enabling users to submit full-text searches similar to the capability later provided by search engines. There was still no graphical browser available for computers besides the NeXT. This gap was filled in April 1992 with the release of Erwise, an application developed at the Helsinki University of Technology, and in May by ViolaWWW, created by Pei-Yuan Wei, which included advanced features such as embedded graphics, scripting, and animation.[8] ViolaWWW was originally an application for HyperCard. Both programs ran on the X Window System for Unix.[8] Students at the University of Kansas adapted an existing text-only hypertext browser, Lynx, to access the web. Lynx was available on Unix and DOS, and some web designers, unimpressed with glossy graphical websites, held that a website not accessible through Lynx wasn't worth visiting.

Early browsers
The turning point for the World Wide Web was the introduction[14] of the Mosaic web browser[15] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by then-Senator Al Gore's High Performance Computing and Communication Act of 1991, also known as the Gore Bill.[16] Remarkably, the first Mosaic browser lacked a "back" button; one was proposed in 1992–93 by the same individual who invented the concept of clickable text documents, in a request emailed from the University of Texas computing facility. The browser was intended to be an editor and not simply a viewer, but was to work with computer-generated hypertext lists called "search engines". The origins of Mosaic lay in 1992. In November 1992, the NCSA at the University of Illinois (UIUC) established a website. In December 1992, Andreessen and Eric Bina, students attending UIUC and working at the NCSA, began work on Mosaic. They released an X Window browser in February 1993. It gained popularity due to its strong support of integrated multimedia, and the authors' rapid response to user bug reports and recommendations for new features. The first Microsoft Windows browser was Cello, written by Thomas R. Bruce for the Legal Information Institute at Cornell Law School to provide legal information, since more lawyers had access to Windows than to Unix. Cello was released in June 1993.[8] After graduating from UIUC, Andreessen met James H. Clark, former CEO of Silicon Graphics, and the two formed Mosaic Communications Corporation to develop the Mosaic browser commercially. The company changed its name to Netscape in April 1994, and the browser was developed further as Netscape Navigator.


Web organization
In May 1994 the first International WWW Conference, organized by Robert Cailliau,[17][18] was held at CERN;[19] the conference has been held every year since. In April 1993 CERN had agreed that anyone could use the Web protocol and code royalty-free; this was in part a reaction to the perturbation caused by the University of Minnesota announcing that it would begin charging license fees for its implementation of the Gopher protocol. In September 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology with support from the Defense Advanced Research Projects Agency (DARPA) and the European Commission. It comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made the Web available freely, with no patent and no royalties due. The W3C decided that their standards must be based on royalty-free technology, so they can be easily adopted by anyone. By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of which are the precursors or inspiring examples of today's most popular services.

1996–1998: Commercialization of the WWW


By 1996 it became obvious to most publicly traded companies that a public Web presence was no longer optional. Though at first people saw mainly the possibilities of free publishing and instant worldwide information, increasing familiarity with two-way communication over the Web led to the possibility of direct Web-based commerce (e-commerce) and instantaneous group communications worldwide. More dotcoms, displaying products on hypertext webpages, were added to the Web.

1999–2001: "Dot-com" boom and bust


Low interest rates in 1998–99 facilitated an increase in start-up companies. Although a number of these new entrepreneurs had realistic plans and administrative ability, most of them lacked these characteristics but were able to sell their ideas to investors because of the novelty of the dot-com concept. Historically, the dot-com boom can be seen as similar to a number of other technology-inspired booms of the past, including railroads in the 1840s, automobiles in the early 20th century, radio in the 1920s, television in the 1940s, transistor electronics in the 1950s, computer time-sharing in the 1960s, and home computers and biotechnology in the early 1980s. In 2001 the bubble burst, and many dot-com startups went out of business after burning through their venture capital and failing to become profitable. Many others, however, did survive and thrive in the early 21st century. Many companies which began as online retailers blossomed and became highly profitable. More conventional retailers found online merchandising to be a profitable additional source of revenue. While some online entertainment and news outlets failed when their seed capital ran out, others persisted and eventually became economically self-sufficient. Traditional media outlets (newspaper publishers, broadcasters and cablecasters in particular) also found the Web to be a useful and profitable additional channel for content distribution, and an additional vehicle to generate advertising revenue. The sites that survived and eventually prospered after the bubble burst had two things in common: a sound business plan, and a niche in the marketplace that was, if not unique, particularly well-defined and well-served.


2002–present: The Web becomes ubiquitous


In the aftermath of the dot-com bubble, telecommunications companies had a great deal of overcapacity as many Internet business clients went bust. That, plus ongoing investment in local cell infrastructure, kept connectivity charges low and helped make high-speed Internet connectivity more affordable. During this time, a handful of companies found success developing business models that helped make the World Wide Web a more compelling experience. These include airline booking sites, Google's search engine and its profitable approach to simplified, keyword-based advertising, as well as eBay's do-it-yourself auction site and Amazon.com's online department store. This new era also begot social networking websites, such as MySpace and Facebook, which, though unpopular at first, very rapidly gained acceptance and became a major part of youth culture.

Web 2.0
Beginning in 2002, new ideas for sharing and exchanging content ad hoc, such as Weblogs and RSS, rapidly gained acceptance on the Web. This new model for information exchange, primarily featuring DIY user-edited and user-generated websites, was dubbed Web 2.0. The Web 2.0 boom saw many new service-oriented startups catering to a new, democratized Web. Some believe it will be followed by the full realization of a Semantic Web. Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:[20] "I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web: the content, links, and transactions between people and computers. A Semantic Web, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The intelligent agents people have touted for ages will finally materialize." (Tim Berners-Lee, 1999) Predictably, as the World Wide Web became easier to query, attained a higher degree of usability, and shed its esoteric reputation, it gained a sense of organization and accessibility that opened the floodgates and ushered in a rapid period of popularization. New sites such as Wikipedia and its sister projects proved revolutionary in executing the user-edited content concept. In 2005, three former PayPal employees formed a video-viewing website called YouTube. Only a year later, YouTube had become the most quickly popularized website in history, and it even introduced user-submitted content to major events, as in the CNN-YouTube presidential debates. The popularity of YouTube, Facebook, etc., combined with the increasing availability and affordability of high-speed connections, has made video content far more common on all kinds of websites. Many video-content hosting and creation sites provide an easy means for their videos to be embedded on third-party websites without payment or permission.
This combination of more user-created or edited content, and easy means of sharing content, such as via RSS widgets and video embedding, has led to many sites with a typical "Web 2.0" feel. They have articles with embedded video, user-submitted comments below the article, and RSS boxes to the side, listing some of the latest articles from other sites. Continued extension of the World Wide Web has focused on connecting devices to the Internet, a trend termed Intelligent Device Management. As Internet connectivity becomes ubiquitous, manufacturers have started to leverage the expanded computing power of their devices to enhance their usability and capability. Through Internet connectivity, manufacturers are now able to interact with the devices they have sold and shipped to their customers, and customers are able to interact with the manufacturer (and other providers) to access new content. Lending credence to the idea of the ubiquity of the web, Web 2.0 has found a place in the global English lexicon: on June 10, 2009, the Global Language Monitor declared it to be the one-millionth English word.[21]


References
Robert Cailliau, James Gillies, How the Web Was Born: The Story of the World Wide Web, ISBN 978-0-19-286207-5, Oxford University Press (January 1, 2000)
Tim Berners-Lee with Mark Fischetti, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor, ISBN 978-0-06-251586-5, HarperSanFrancisco, 1999; ISBN 0-06-251587-X (pbk.), HarperSanFrancisco, 2000
Andrew Herman, The World Wide Web and Contemporary Cultural Theory: Magic, Metaphor, Power, ISBN 978-0-415-92502-0, Routledge, 1st Edition (June 2000)
History of computer by Ahmad Abubakar Umar 2010

Footnotes
[1] Berners-Lee, Tim. "Frequently asked questions - Start of the web: Influences" (http://www.w3.org/People/Berners-Lee/FAQ.html#Influences). World Wide Web Consortium. Retrieved 22 July 2010.
[2] Berners-Lee, Tim. "Frequently asked questions - Why the //, #, etc?" (http://www.w3.org/People/Berners-Lee/FAQ.html#etc). World Wide Web Consortium. Retrieved 22 July 2010.
[3] A Short History of Internet Protocols at CERN (http://ben.home.cern.ch/ben/TCPHIST.html) by Ben Segal, 1995.
[4] The Next Crossroad of Web History (http://www.netvalley.com/intvalnext.html) by Gregory Gromov.
[5] Berners-Lee, Tim (May 1990). "Information Management: A Proposal" (http://www.w3.org/History/1989/proposal.html). World Wide Web Consortium. Retrieved 24 August 2010.
[6] Tim Berners-Lee, Weaving the Web, HarperCollins, 2000, p. 23.
[7] http://www.w3.org/Protocols/HTTP/AsImplemented.html
[8] Berners-Lee, Tim (ca. 1993/1994). "A Brief History of the Web" (http://www.w3.org/DesignIssues/TimBook-old/History.html). World Wide Web Consortium. Retrieved 17 August 2010.
[9] Tim Berners-Lee's account of the exact locations at CERN where the Web was invented (http://davidgalbraith.org/uncategorized/the-exact-location-where-the-web-was-invented/2343/).
[10] How the web went world wide (http://news.bbc.co.uk/2/hi/technology/5242252.stm), Mark Ward, Technology Correspondent, BBC News. Retrieved 24 January 2011.
[11] Berners-Lee, Tim. "Qualifiers on Hypertext links... - alt.hypertext" (http://groups.google.com/group/alt.hypertext/tree/browse_frm/thread/7824e490ea164c06/f61c1ef93d2a8398?rnum=1&hl=en&q=group:alt.hypertext+author:Tim+author:Berners-Lee&_done=/group/alt.hypertext/browse_frm/thread/7824e490ea164c06/f61c1ef93d2a8398?tvc=1&q=group:alt.hypertext+author:Tim+author:Berners-Lee&hl=en&#doc_06dad279804cb3ba). Retrieved 11 July 2012.
[12] Tim Berners-Lee, Weaving the Web, HarperCollins, 2000, p. 46.
[13] Heather McCabe (1999-02-09). "Grrl Geeks Rock Out" (http://www.wired.com/news/culture/0,1294,17821,00.html). Wired magazine.
[14] Mosaic Web Browser History - NCSA, Marc Andreessen, Eric Bina (http://www.livinginternet.com/w/wi_mosaic.htm).
[15] NCSA Mosaic - September 10, 1993 Demo (http://www.totic.org/nscp/demodoc/demo.html).
[16] Vice President Al Gore's ENIAC Anniversary Speech (http://www.cs.washington.edu/homes/lazowska/faculty.lecture/innovation/gore.html).
[17] Robert Cailliau (21 July 2010). "A Short History of the Web" (http://www.netvalley.com/archives/mirrors/robert_cailliau_speech.htm). NetValley. Retrieved 21 July 2010.
[18] Tim Berners-Lee. "Frequently asked questions - Robert Cailliau's role" (http://www.w3.org/People/Berners-Lee/FAQ.html#Cailliau). World Wide Web Consortium. Retrieved 22 July 2010.
[19] "IW3C2 - Past and Future Conferences" (http://www.iw3c2.org/conferences). International World Wide Web Conferences Steering Committee. 2010-05-02. Retrieved 16 May 2010.
[20] Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web. HarperSanFrancisco. chapter 12. ISBN 978-0-06-251587-2.
[21] "'Millionth English Word' declared" (http://news.bbc.co.uk/1/hi/world/americas/8092549.stm). BBC News.


External links
First World Wide Web site (http://info.cern.ch/)
The World Wide Web History Project (http://www.webhistory.org/home.html)
Important Events in the History of the World Wide Web (http://internet-browser-review.toptenreviews.com/important-events-in-the-history-of-the-world-wide-web.html)
Internet History (http://www.computerhistory.org/internet_history/), Computer History Museum


Precursors and early development


Intergalactic Computer Network
The Intergalactic Computer Network can be said to be the first conception of what would eventually become the Internet; the Internet Society has used the shorter form "Galactic Network" for the same idea.[1] J.C.R. Licklider used the term at ARPA in 1963, addressing his colleagues as "Members and Affiliates of the Intergalactic Computer Network".[2]

References
[1] Leiner, Barry M. et al. (2003-12-10). ""Origins of the Internet" in A Brief History of the Internet version 3.32" (http://www.isoc.org/internet/history/brief.shtml#Origins). The Internet Society. Retrieved 2007-11-03.
[2] Licklider, J. C. R. (23 April 1963). "Topics for Discussion at the Forthcoming Meeting, Memorandum For: Members and Affiliates of the Intergalactic Computer Network" (http://www.kurzweilai.net/articles/art0366.html?printable=1). Washington, D.C.: Advanced Research Projects Agency, via KurzweilAI.net. Retrieved 2007-11-03.

Further reading
Jones, Steve (2003). Encyclopedia of New Media (http://books.google.com/books?id=26NyHREJwP8C&pg=PT253). Sage Publications, via Google Books limited preview. p. 287. ISBN 0-7619-2382-9. Retrieved 2007-11-03.
Page, Dan and Cynthia Lee (1999). "Looking Back at Start of a Revolution" (http://web.archive.org/web/20071224090235/http://www.today.ucla.edu/1999/990928looking.html). UCLA Today (The Regents of the University of California (UC Regents)). Archived from the original (http://www.today.ucla.edu/1999/990928looking.html) on 2007-12-24. Retrieved 2007-11-03.
Hauben, Ronda (19 March 2001). "Draft for Comment 1.001, "The Information Processing Techniques Office and the Birth of the Internet"" (http://www.columbia.edu/~rh120/other/misc/lick101.doc) (Microsoft Word). Retrieved 2007-11-03.

ARPANET


ARPANET

ARPANET logical map, March 1977

Type of network: data
Location: USA
Protocols: NCP, TCP/IP
Established: 1969
Funding: DARPA
Commercial?: No
Current status: defunct, superseded by NSFNET in 1990

The Advanced Research Projects Agency Network (ARPANET) was the world's first operational packet switching network and the progenitor of what was to become the global Internet. The network was initially funded by the Advanced Research Projects Agency (ARPA, later DARPA) within the U.S. Department of Defense for use by its projects at universities and research laboratories in the US. The packet switching of the ARPANET was based on designs by British scientist Donald Davies[1][2] and Lawrence Roberts of the Lincoln Laboratory.[3]

History
Packet switching, today the dominant basis for data communications worldwide, was a new concept at the time of the conception of the ARPANET. Prior to the advent of packet switching, both voice and data communications had been based on the idea of circuit switching, as in the traditional telephone circuit, wherein each telephone call is allocated a dedicated, end-to-end, electronic connection between the two communicating stations. Such stations might be telephones or computers. The (temporarily) dedicated line is typically composed of many intermediary lines which are assembled into a chain that stretches all the way from the originating station to the destination station.

With packet switching, a data system could use a single communications link to communicate with more than one machine by collecting data into datagrams and transmitting these as packets onto the attached network link as soon as the link becomes idle. Thus, not only can the link be shared, much as a single post box can be used to post letters to different destinations, but each packet can be routed independently of other packets.[4]

The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider, of Bolt, Beranek and Newman (BBN), in August 1962, in memoranda discussing his concept for an "Intergalactic Computer Network". Those ideas contained almost everything that composes the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency (ARPA). He then convinced Ivan Sutherland and Bob Taylor that this computer network concept was very important and merited development, although Licklider left ARPA before any contracts were let that worked on this concept.[5]

Ivan Sutherland and Bob Taylor continued their interest in creating such a computer communications network, in part to allow ARPA-sponsored researchers at various corporate and academic locales to put to use the computers ARPA was providing them, and in part to make new software and other computer science results quickly and widely available.[6]

In his office, Taylor had three computer terminals, each connected to a separate computer that ARPA was funding: the first for the System Development Corporation (SDC) Q-32 in Santa Monica; the second for Project Genie at the University of California, Berkeley; and the third for Multics at MIT. Taylor recalls the circumstance: "For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C., and I wanted to talk to someone I knew at Berkeley, or M.I.T., about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. I said, 'Oh Man!', it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET."[7]

Somewhat contemporaneously, several other people had (mostly independently) worked out the aspects of "packet switching", with the first public demonstration presented by the National Physical Laboratory (NPL) on 5 August 1968 in the United Kingdom.[8]
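The link-sharing idea described above can be made concrete with a toy simulation. The sketch below (an illustrative model, not historical code; all names and the packet size are invented for the example) interleaves packets from several messages onto one shared link, round-robin, and reassembles each message at the far end by destination address:

```python
from collections import defaultdict

def packetize(dest: str, message: str, size: int = 4):
    """Split a message into small packets, each carrying its destination address."""
    return [(dest, message[i:i + size]) for i in range(0, len(message), size)]

def send_over_shared_link(streams):
    """Interleave packets from several streams onto one link (round-robin),
    then reassemble each message at the far end by destination."""
    queues = [packetize(dest, msg) for dest, msg in streams]
    received = defaultdict(str)
    while any(queues):
        for q in queues:
            if q:
                dest, chunk = q.pop(0)  # one packet gets the link, then yields it
                received[dest] += chunk
    return dict(received)

out = send_over_shared_link([("ucla", "login attempt"), ("sri", "hello sri")])
assert out == {"ucla": "login attempt", "sri": "hello sri"}
```

Because round-robin service preserves the order of packets within each stream, simple concatenation reassembles the messages; on the real network, independent routing meant packets could also arrive out of order.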

Creation
By mid-1968, Taylor had prepared a complete plan for a computer network, and, after ARPA's approval, a Request for Quotation (RFQ) was sent to 140 potential bidders. Most computer science companies regarded the ARPA-Taylor proposal as outlandish, and only twelve submitted bids to build the network; of the twelve, ARPA regarded only four as top-rank contractors. At year's end, ARPA considered only two contractors, and awarded the contract to build the network to BBN Technologies on 7 April 1969. The initial seven-man BBN team, led by Frank Heart, was much aided by the technical specificity of its response to the ARPA RFQ and thus quickly produced the first working system.

The BBN-proposed network closely followed Taylor's ARPA plan: a network composed of small computers called Interface Message Processors (IMPs) that functioned as gateways (today called routers) interconnecting local resources. At each site, the IMPs performed store-and-forward packet switching functions, and were interconnected with modems that were connected to leased lines, initially running at 50 kbit/s. The host computers were connected to the IMPs via custom serial communication interfaces. The system, including the hardware and the packet switching software, was designed and installed in nine months.[10]

Len Kleinrock and the first Interface Message Processor.[9]

The first-generation IMPs were initially built by BBN Technologies using a ruggedized version of the Honeywell DDP-516 computer, configured with 24 kB of expandable core memory and a 16-channel Direct Multiplex Control (DMC) direct memory access unit.[11] The DMC established custom interfaces with each of the host computers and modems. In addition to the front-panel lamps, the DDP-516 computer also featured a special set of 24 indicator lamps showing the status of the IMP communication channels.
Each IMP could support up to four local hosts, and could communicate with up to six remote IMPs via leased lines.


Misconceptions of design goals


Common ARPANET lore posits that the computer network was designed to survive a nuclear attack. In A Brief History of the Internet, the Internet Society describes the coalescing of the technical ideas that produced the ARPANET:

"It was from the RAND study that the false rumor started, claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks."[12]

Although the ARPANET was designed to survive subordinate-network losses, the principal reason was that the switching nodes and network links were unreliable, even without any nuclear attacks. About the resource scarcity that spurred the creation of the ARPANET, Charles Herzfeld, ARPA Director (1965-1967), said:

"The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them."[13]

Packet switching pioneer Paul Baran affirms this, explaining: "Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the ARPANET. The method used to connect things together was an open issue for a time."[14]

ARPANET deployed
The initial ARPANET consisted of four IMPs:[15]

University of California, Los Angeles (UCLA), where Leonard Kleinrock had established a Network Measurement Center, with an SDS Sigma 7 being the first computer attached to it;

The Stanford Research Institute's Augmentation Research Center, where Douglas Engelbart had created the ground-breaking NLS system, a very important early hypertext system (with the SDS 940 that ran NLS, named "Genie", being the first host attached);

Historical document: First ARPANET IMP log: the first message ever sent via the ARPANET, 10:30pm, 29 October 1969. This IMP Log excerpt, kept at UCLA, describes setting up a message transmission from the UCLA SDS Sigma 7 Host computer to the SRI SDS 940 Host computer

University of California, Santa Barbara (UCSB), with the Culler-Fried Interactive Mathematics Center's IBM 360/75, running OS/MVT, being the machine attached;

The University of Utah's Computer Science Department, where Ivan Sutherland had moved, running a DEC PDP-10 operating on TENEX.

The first message on the ARPANET was sent by UCLA student programmer Charley Kline, at 10:30 pm on 29 October 1969, from Boelter Hall 3420.[16] Kline transmitted from the university's SDS Sigma 7 Host computer to the Stanford Research Institute's SDS 940 Host computer. The message text was the word "login"; the letters l and o were transmitted, but the system then crashed. Hence, the literal first message over the ARPANET was "lo". About an hour later, having recovered from the crash, the SDS Sigma 7 computer effected a full login. The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the entire four-node network was established.[17]


Growth and evolution


In March 1970, the ARPANET reached the East Coast of the United States, when an IMP at BBN in Cambridge, Massachusetts was connected to the network. Thereafter, the ARPANET grew: 9 IMPs by June 1970 and 13 IMPs by December 1970, then 18 by September 1971 (when the network included 23 university and government hosts); 29 IMPs by August 1972, and 40 by September 1973. By June 1974, there were 46 IMPs, and in July 1975, the network numbered 57 IMPs. By 1981, the number was 213 host computers, with another host connecting approximately every twenty days.[15]

In 1973 a transatlantic satellite link connected the Norwegian Seismic Array (NORSAR) to the ARPANET, making Norway the first country outside the US to be connected to the network. At about the same time a terrestrial circuit added a London IMP.[18]

In 1975, the ARPANET was declared "operational". The Defense Communications Agency took control, since ARPA was intended to fund advanced research.[15]

In 1983, the ARPANET was split, with U.S. military sites moved to their own Military Network (MILNET) for unclassified defense department communications. The combination was called the Defense Data Network (DDN).[19] Separating the civil and military networks reduced the 113-node ARPANET by 68 nodes. Gateways relayed electronic mail between the two networks. MILNET later became the NIPRNet.

Rules and etiquette


Because of its government ties, certain forms of traffic were discouraged or prohibited. A 1982 handbook on computing at MIT's AI Lab stated regarding network etiquette:[20]

"It is considered illegal to use the ARPANet for anything which is not in direct support of Government business ... personal messages to other ARPANet subscribers (for example, to arrange a get-together or check and say a friendly hello) are generally not considered harmful ... Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANet."[20]

Technology
Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not actively used.

1971 saw the start of the use of the non-ruggedized (and therefore significantly lighter) Honeywell 316 as an IMP. It could also be configured as a Terminal Interface Processor (TIP), which provided terminal server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts.[21] The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 kB of core memory for a TIP. The size of core memory was later increased, to 32 kB for the IMPs and 56 kB for TIPs, in 1973.

In 1975, BBN introduced IMP software running on the Pluribus multi-processor. These appeared in a small number of sites. In 1981, BBN introduced IMP software running on its own C/30 processor product. In 1983, TCP/IP protocols replaced NCP as the ARPANET's principal protocol, and the ARPANET then became one subnet of the early Internet.[22][23]


Shutdown and legacy


The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNet, but some IMPs remained in service as late as 1989.[24]

The ARPANET Completion Report, jointly published by BBN and ARPA, concludes that:

"...it is somewhat fitting to end on the note that the ARPANET program has had a strong and direct feedback into the support and strength of computer science, from which the network, itself, sprang."[25]

In the wake of the ARPANET being formally decommissioned on 28 February 1990, Vinton Cerf wrote the following lamentation, entitled "Requiem of the ARPANET":[26]

It was the first, and being first, was best,
but now we lay it down to ever rest.
Now pause with me a moment, shed some tears.
For auld lang syne, for love, for years and years
of faithful service, duty done, I weep.
Lay down thy packet, now, O friend, and sleep.

-Vinton Cerf

Senator Albert Gore, Jr. began to craft the High Performance Computing and Communication Act of 1991 (commonly referred to as "The Gore Bill") after hearing the 1988 report Toward a National Research Network, submitted to Congress by a group chaired by Leonard Kleinrock, professor of computer science at UCLA. The bill was passed on 9 December 1991 and led to the National Information Infrastructure (NII), which Al Gore called the "information superhighway".

ARPANET was the subject of two IEEE Milestones, both dedicated in 2009.[27][28]

Software and protocols


The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP.[29] The message format was designed to work unambiguously with a broad range of computer architectures. An 1822 message essentially consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the transmitting host formatted a data message containing the destination host's address and the data being sent, and then transmitted the message through the 1822 hardware interface. The IMP then delivered the message to its destination address, either by delivering it to a locally connected host, or by delivering it to another IMP. When the message was ultimately delivered to the destination host, the receiving IMP would transmit a Ready for Next Message (RFNM) acknowledgement to the sending host's IMP.

Unlike modern Internet datagrams, the ARPANET was designed to reliably transmit 1822 messages and to inform the host computer when it lost a message; the contemporary IP is unreliable, whereas TCP is reliable. Nonetheless, the 1822 protocol proved inadequate for handling multiple connections among different applications residing in a host computer. This problem was addressed with the Network Control Program (NCP), which provided a standard method to establish reliable, flow-controlled, bidirectional communications links among different processes in different host computers. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept later incorporated into the OSI model.[22]

In 1983, TCP/IP protocols replaced NCP as the ARPANET's principal protocol, and the ARPANET then became one component of the early Internet.[23]
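The three-part layout of an 1822 message (a message type, a numeric host address, and a data field) can be sketched in a few lines. The field widths below (a 1-byte type and a 2-byte host address) are simplified assumptions for illustration, not the actual 1822 leader format:

```python
import struct

def pack_message(msg_type: int, host_addr: int, data: bytes) -> bytes:
    """Pack a simplified 1822-style message: type, destination host, payload.

    Field widths here are illustrative assumptions, not the real 1822 leader.
    """
    return struct.pack(">BH", msg_type, host_addr) + data

def unpack_message(raw: bytes):
    """Split a packed message back into (type, host address, data)."""
    msg_type, host_addr = struct.unpack(">BH", raw[:3])
    return msg_type, host_addr, raw[3:]

# A host hands its IMP a data message (type 0, hypothetical) addressed to host 22:
wire = pack_message(0, 22, b"login")
assert unpack_message(wire) == (0, 22, b"login")
```

Because the host address travels with every message, the IMP can decide on its own whether to deliver locally or forward to another IMP, which is exactly the store-and-forward behavior described above.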


Network applications
NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated, more or less, independently of the underlying network service. When the ARPANET migrated to the Internet protocols in 1983, the major application protocols migrated with it.

E-mail: In 1971, Ray Tomlinson of BBN sent the first network e-mail (RFC 524, RFC 561).[30] By 1973, e-mail constituted 75 percent of ARPANET traffic.

File transfer: By 1973, the File Transfer Protocol (FTP) specification had been defined (RFC 354) and implemented, enabling file transfers over the ARPANET.

Voice traffic: The Network Voice Protocol (NVP) specifications were defined in 1977 (RFC 741), then implemented, but, because of technical shortcomings, conference calls over the ARPANET never worked well; the contemporary Voice over Internet Protocol (packet voice) was decades away.

ARPANET in film and other media


Contemporary
Steven King (Producer), Peter Chvany (Director/Editor) (1972). Computer Networks: The Heralds of Resource Sharing [31]. Retrieved 20 December 2011. A 30-minute documentary film featuring Fernando J. Corbato, J.C.R. Licklider, Lawrence G. Roberts, Robert Kahn, Frank Heart, William R. Sutherland, Richard W. Watson, John R. Pasta, Donald W. Davies, and economist George W. Mitchell.

Scenario, a February 1985 episode of the U.S. television sitcom Benson (season 6, episode 20), includes a scene in which ARPANET is accessed. This is believed to be the first incidence of a popular TV show referencing the Internet or its progenitors.[32]

Post-ARPANET
In Let the Great World Spin: A Novel, published in 2009 but set in 1974 and written by Colum McCann, a character named The Kid and others use ARPANET from a Palo Alto computer to dial phone booths in New York City to hear descriptions of Philippe Petit's tightrope walk between the World Trade Center Towers.

In Metal Gear Solid 3: Snake Eater, a character named Sigint takes part in the development of ARPANET after the events depicted in the game.

The Doctor Who Past Doctor Adventures novel Blue Box, written in 2003 but set in 1981, includes a character predicting that by the year 2000 there will be four hundred machines connected to ARPANET.

There is an electronic music artist known as Arpanet, Gerald Donald, one of the members of Drexciya. The artist's 2002 album Wireless Internet features commentary on the expansion of the internet via wireless communication, with songs such as NTT DoCoMo, dedicated to the mobile communications giant based in Japan.

In numerous The X-Files episodes ARPANET is referenced and usually hacked into by The Lone Gunmen. This is most noticeable in the episode "Unusual Suspects".

Thomas Pynchon's 2009 novel Inherent Vice, set in southern California circa 1970, contains a character who accesses the "ARPAnet" throughout the course of the book.

The viral marketing campaign for the video game Resistance 2 features a website similar in design and purpose to ARPANET, called SRPANET.


References
[1] http://www.thocp.net/biographies/davies_donald.htm
[2] http://www.internethalloffame.org/inductees/donald-davies
[3] "Lawrence Roberts Manages The ARPANET Program" (http://www.livinginternet.com/i/ii_roberts.htm). Living Internet.com. Retrieved 6 November 2008.
[4] "Packet Switching History" (http://www.livinginternet.com/i/iw_packet_inv.htm), Living Internet. Retrieved 26 August 2012.
[5] "J.C.R. Licklider And The Universal Network" (http://www.livinginternet.com/i/ii_licklider.htm), Living Internet.
[6] "IPTO Information Processing Techniques Office" (http://www.livinginternet.com/i/ii_ipto.htm), Living Internet.
[7] John Markoff (20 December 1999). "An Internet Pioneer Ponders the Next Revolution" (http://partners.nytimes.com/library/tech/99/12/biztech/articles/122099outlook-bobb.html). The New York Times. Archived (http://web.archive.org/web/20080922095019/http://partners.nytimes.com/library/tech/99/12/biztech/articles/122099outlook-bobb.html) from the original on 22 September 2008. Retrieved 20 September 2008.
[8] "The accelerator of the modern age" (http://news.bbc.co.uk/1/hi/technology/7541123.stm). BBC News. 5 August 2008. Archived (http://web.archive.org/web/20090610082212/http://news.bbc.co.uk/1/hi/technology/7541123.stm) from the original on 10 June 2009. Retrieved 19 May 2009.
[9] Leonard Kleinrock (2005). The history of the Internet (http://www.lk.cs.ucla.edu/personal_history.html). Retrieved 28 May 2009.
[10] "IMP Interface Message Processor" (http://www.livinginternet.com/i/ii_imp.htm), Living Internet.
[11] Wise, Adrian. "Honeywell DDP-516" (http://www.old-computers.com/museum/computer.asp?c=551). Old-Computers.com. Retrieved 21 September 2008.
[12] "A Brief History of the Internet" (http://www.isoc.org/internet/history/brief.shtml). Internet Society. Archived (http://web.archive.org/web/20080918213304/http://www.isoc.org/internet/history/brief.shtml) from the original on 18 September 2008. Retrieved 20 September 2008.
[13] "Charles Herzfeld on ARPANET and Computers" (http://inventors.about.com/library/inventors/bl_Charles_Herzfeld.htm). About.com. Retrieved 21 December 2008.
[14] Brand, Stewart (March 2001). "Founding Father" (http://www.wired.com/wired/archive/9.03/baran.html). Wired (9.03). Retrieved 31 December 2011.
[15] "ARPANET The First Internet" (http://www.livinginternet.com/i/ii_arpanet.htm), Living Internet.
[16] Jessica Savio (1 April 2011). "Browsing history: A heritage site is being set up in Boelter Hall 3420, the room the first Internet message originated in" (http://www.dailybruin.com/index.php/article/2011/04/browsing_history). Daily Bruin (UCLA).
[17] Chris Sutton (2 September 2004). "Internet Began 35 Years Ago at UCLA with First Message Ever Sent Between Two Computers" (http://web.archive.org/web/20080308120314/http://www.engineer.ucla.edu/stories/2004/Internet35.htm). UCLA. Archived from the original (http://www.engineer.ucla.edu/stories/2004/Internet35.htm) on 8 March 2008.
[18] "NORSAR and the Internet" (http://www.norsar.no/pc-5-30-NORSAR-and-the-Internet.aspx). NORSAR (Norway Seismic Array Research). Retrieved 25 August 2012.
[19] Fritz E. Froehlich; Allen Kent (1990). "ARPANET, the Defense Data Network, and Internet" (http://books.google.com/books?id=gaRBTHdUKmgC&pg=PA341). The Froehlich/Kent Encyclopedia of Telecommunications. 1. CRC Press. pp. 341-375. ISBN 978-0-8247-2900-4.
[20] Stacy, Christopher C. (7 September 1982). Getting Started Computing at the AI Lab (http://independent.academia.edu/ChristopherStacy/Papers/1464820/Getting_Started_Computing_at_the_AI_Lab). AI Lab, Massachusetts Institute of Technology. p. 9.
[21] Kirstein, Peter T. (July-September 2009). "The Early Days of the Arpanet" (http://muse.jhu.edu/journals/ahc/summary/v031/31.3.kirstein.html). IEEE Annals of the History of Computing 31 (3): 67. ISSN 1058-6180.
[22] "NCP Network Control Program" (http://www.livinginternet.com/i/ii_ncp.htm), Living Internet.
[23] "TCP/IP Internet Protocol" (http://www.livinginternet.com/i/ii_tcpip.htm), Living Internet.
[24] "NSFNET National Science Foundation Network" (http://www.livinginternet.com/i/ii_nsfnet.htm), Living Internet.
[25] A History of the ARPANET: The First Decade (http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA115440) (Report). Arlington, VA: Bolt, Beranek & Newman Inc. 1 April 1981. p. 132, section 2.3.4.
[26] Abbate, Janet (11 June 1999). Inventing the Internet. Cambridge, MA: MIT Press. ASIN B003VPWY6E. ISBN 0262011727.
[27] "Milestones: Birthplace of the Internet, 1969" (http://www.ieeeghn.org/wiki/index.php/Milestones:Birthplace_of_the_Internet,_1969). IEEE Global History Network. IEEE. Retrieved 4 August 2011.
[28] "Milestones: Inception of the ARPANET, 1969" (http://www.ieeeghn.org/wiki/index.php/Milestones:Inception_of_the_ARPANET,_1969). IEEE Global History Network. IEEE. Retrieved 4 August 2011.
[29] Interface Message Processor: Specifications for the Interconnection of a Host and an IMP (http://www.bitsavers.org/pdf/bbn/imp/BBN1822_Jan1976.pdf), Report No. 1822, Bolt Beranek and Newman, Inc. (BBN).
[30] Tomlinson, Ray. "The First Network Email" (http://openmap.bbn.com/~tomlinso/ray/firstemailframe.html). BBN. Retrieved 6 March 2012.
[31] http://documentary.operationreality.org/2011/08/27/computer-networks-the-heralds-of-resource-sharing
[32] "Scenario" (http://www.imdb.com/title/tt0789851/), Benson, Season 6, Episode 132 of 158, American Broadcasting Company (ABC), Witt/Thomas/Harris Productions, 22 February 1985.


Further reading
Norberg, Arthur L.; O'Neill, Judy E. (1996). Transforming Computer Technology: Information Processing for the Pentagon, 1962-1982. Johns Hopkins University. pp. 153-196. ISBN 978-0801863691.
A History of the ARPANET: The First Decade (http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA115440) (Report). Arlington, VA: Bolt, Beranek & Newman Inc. 1 April 1981.
Hafner, Katie; Lyon, Matthew (1996). Where Wizards Stay Up Late: The Origins of the Internet. Simon and Schuster. ISBN 0-7434-6837-6.
Abbate, Janet (11 June 1999). Inventing the Internet. Cambridge, MA: MIT Press. pp. 36-111. ASIN B003VPWY6E. ISBN 0262011727.
Banks, Michael A. (2008). On the Way to the Web: The Secret History of the Internet and Its Founders. APress/Springer Verlag. ISBN 1-4302-0869-4.
Salus, Peter H. (1 May 1995). Casting the Net: From ARPANET to Internet and Beyond. Addison-Wesley. ISBN 978-0201876741.
Waldrop, M. Mitchell (23 August 2001). The Dream Machine: J. C. R. Licklider and the Revolution That Made Computing Personal. New York: Viking. ASIN B00008MNVW. ISBN 0670899763.
"The Computer History Museum, SRI International, and BBN Celebrate the 40th Anniversary of First ARPANET Transmission" (http://www.computerhistory.org/press/museum-celebrates-arpanet-anniversary.html). Computer History Museum. 27 October 2009.

Oral histories
"Oral history interview with Robert E. Kahn" (http://purl.umn.edu/107387). University of Minnesota, Minneapolis: Charles Babbage Institute. 24 April 1990. Retrieved 15 May 2008. Focuses on Kahn's role in the development of computer networking from 1967 through the early 1980s. Beginning with his work at Bolt Beranek and Newman (BBN), Kahn discusses his involvement as the ARPANET proposal was being written and then implemented, and his role in the public demonstration of the ARPANET. The interview continues into Kahn's involvement with networking when he moves to IPTO in 1972, where he was responsible for the administrative and technical evolution of the ARPANET, including programs in packet radio, the development of a new network protocol (TCP/IP), and the switch to TCP/IP to connect multiple networks.

"Oral history interview with Vinton Cerf" (http://purl.umn.edu/107214). University of Minnesota, Minneapolis: Charles Babbage Institute. 24 April 1990. Retrieved 1 July 2008. Cerf describes his involvement with the ARPA network, and his relationships with Bolt Beranek and Newman, Robert Kahn, Lawrence Roberts, and the Network Working Group.

"Oral history interview with Paul Baran" (http://purl.umn.edu/107101). University of Minnesota, Minneapolis: Charles Babbage Institute. 5 March 1990. Retrieved 1 July 2008. Baran describes his work at RAND, and discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET.

"Oral history interview with Leonard Kleinrock" (http://purl.umn.edu/107411). University of Minnesota, Minneapolis: Charles Babbage Institute. 3 April 1990. Retrieved 1 July 2008. Kleinrock discusses his work on the ARPANET.

"Oral history interview with Larry Roberts" (http://purl.umn.edu/107608). University of Minnesota, Minneapolis: Charles Babbage Institute. 4 April 1989. Retrieved 1 July 2008.

"Oral history interview with Stephen Lukasik" (http://purl.umn.edu/107446). University of Minnesota, Minneapolis: Charles Babbage Institute. 17 October 1991. Retrieved 1 July 2008. Lukasik discusses his tenure at the Advanced Research Projects Agency (ARPA), the development of computer networks and the ARPANET.


Detailed technical reference works


Roberts, Larry; Marrill, Tom (October 1966). "Toward a Cooperative Network of Time-Shared Computers" (http://www.packet.cc/files/toward-coop-net.html). Fall AFIPS Conference.

Roberts, Larry (October 1967). "Multiple computer networks and intercomputer communication" (http://www.packet.cc/files/multi-net-inter-comm.html). ACM Symposium on Operating System Principles.

Davies, D. W.; Bartlett, K. A.; Scantlebury, R. A.; Wilkinson, P. T. (October 1967). "A digital communications network for computers giving rapid response at remote terminals". ACM Symposium on Operating Systems Principles.

Roberts, Larry; Wessler, Barry (May 1970). "Computer Network Development to Achieve Resource Sharing" (http://www.packet.cc/files/arpa/comp-net-dev.html). Proceedings of the Spring Joint Computer Conference, Atlantic City, New Jersey.

Heart, Frank; Kahn, Robert; Ornstein, Severo; Crowther, William; Walden, David (1970). "The Interface Message Processor for the ARPA Computer Network" (http://www.walden-family.com/public/1970-imp-afips.pdf). 36. 1970 Spring Joint Computer Conference. pp. 551-567.

Carr, Stephen; Crocker, Stephen; Cerf, Vinton (1970). "Host-Host Communication Protocol in the ARPA Network" (http://tools.ietf.org/pdf/rfc33). 36. 1970 Spring Joint Computer Conference. pp. 589-598. RFC 33.

Ornstein, Severo; Heart, Frank; Crowther, William; Russell, S. B.; Rising, H. K.; Michel, A. (1972). "The Terminal IMP for the ARPA Computer Network" (http://dx.doi.org/10.1145/1478873.1478906). 40. 1972 Spring Joint Computer Conference. pp. 243-254.

McQuillan, John; Crowther, William; Cosell, Bernard; Walden, David; Heart, Frank (1972). "Improvements in the Design and Performance of the ARPA Network" (http://dx.doi.org/10.1145/1480083.1480096). 41. 1972 Fall Joint Computer Conference. pp. 741-754.

Feinler, Elizabeth J.; Postel, Jonathan B. (January 1978). ARPANET Protocol Handbook, NIC 7104. Menlo Park: Network Information Center (NIC), SRI International. ASIN B000EN742K.

Roberts, Larry (November 1978). "The Evolution of Packet Switching" (http://www.packet.cc/files/ev-packet-sw.html). Proceedings of the IEEE.

Roberts, Larry (September 1986). The ARPANET & Computer Networks (http://www.packet.cc/files/arpanet-computernet.html). ACM.

External links
- "ARPANET Maps 1969 to 1977" (http://som.csudh.edu/cis/lpress/history/arpamaps/). California State University, Dominguez Hills (CSUDH). 4 January 1978. Retrieved 17 May 2012.
- Walden, David C. (February 2003). "Looking back at the ARPANET effort, 34 years later" (http://www.livinginternet.com/i/ii_imp_walden.htm). Living Internet. East Sandwich, Massachusetts: livinginternet.com. Retrieved 17 August 2005.
- "Images of ARPANET from 1964 onwards" (http://www.computerhistory.org/exhibits/internet_history/). The Computer History Museum. Retrieved 29 August 2004. Timeline.
- "Paul Baran and the Origins of the Internet" (http://www.rand.org/about/history/baran.html). RAND Corporation. Retrieved 3 September 2005.
- Kleinrock, Leonard. "The Day the Infant Internet Uttered its First Words" (http://www.lk.cs.ucla.edu/internet_first_words.html). UCLA. Retrieved 11 November 2004. Personal anecdote of the first message ever sent over the ARPANET.
- "Doug Engelbart's Role in ARPANET History" (http://www.dougengelbart.org/firsts/internet.html). 2008. Retrieved 3 September 2009.
- "Internet Milestones: Timeline of Notable Internet Pioneers and Contributions" (http://www.juliantrubin.com/schooldirectory/internet_milestones_pioneers.html). Retrieved 6 January 2012. Timeline.

- Waldrop, Mitch (April 2008). "DARPA and the Internet Revolution" (http://www.darpa.mil/WorkArea/DownloadAsset.aspx?id=2554). 50 years of Bridging the Gap. DARPA. pp. 78–85. Retrieved 26 August 2012.


CSNET
The Computer Science Network (CSNET) was a computer network that began operation in 1981 in the United States.[1] Its purpose was to extend networking benefits to computer science departments at academic and research institutions that could not connect directly to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking, and was a major milestone on the path to the development of the global Internet. CSNET was funded by the National Science Foundation for an initial three-year period from 1981 to 1984.

History
Lawrence Landweber at the University of Wisconsin-Madison prepared the original CSNET proposal on behalf of a consortium of universities (Georgia Tech, University of Minnesota, University of New Mexico, Oklahoma University, Purdue University, University of California-Berkeley, University of Utah, University of Virginia, University of Washington, University of Wisconsin, and Yale University). The US National Science Foundation (NSF) requested a review from David J. Farber at the University of Delaware. Farber assigned the task to his graduate student Dave Crocker, who was already active in the development of electronic mail.[2] The project was deemed interesting but in need of significant refinement. The proposal eventually gained the support of Vinton Cerf and DARPA. In 1980, the NSF awarded $5 million to launch the network, an unusually large project for the NSF at the time.[3] A stipulation for the award of the contract was that the network needed to become self-sufficient by 1986.[1] The first management team consisted of Landweber (University of Wisconsin), Farber (University of Delaware), Peter J. Denning (Purdue University), Anthony Hearn (RAND Corporation), and Bill Kern from the NSF.[4] Once CSNET was fully operational, the systems and ongoing network operations were transferred to Bolt Beranek and Newman (BBN) of Cambridge, Massachusetts by 1984.[5] By 1981, three sites were connected: University of Delaware, Princeton University, and Purdue University. By 1982, 24 sites were connected, expanding to 84 sites by 1984, including one in Israel. Soon thereafter, connections were established to computer science departments in Australia, Canada, France, Germany, Korea, and Japan.
CSNET eventually connected more than 180 institutions.[6] One of the earliest experiments in free software distribution on a network, netlib, was available on CSNET.[7] CSNET was a forerunner of the National Science Foundation Network (NSFNet) which eventually became a backbone of the Internet. CSNET operated autonomously until 1989, when it merged with Bitnet to form the Corporation for Research and Educational Networking (CREN). By 1991, the success of the NSFNET and NSF-sponsored regional networks had rendered the CSNET services redundant, and the CSNET network was shut down in October 1991.[8]


Components
The CSNET project had three primary components: an email relaying service (Delaware and RAND), a name service (Wisconsin), and TCP/IP-over-X.25 tunneling technology (Purdue). Initial access was via email relaying, through gateways at Delaware and RAND, over dial-up telephone or X.29/X.25 terminal emulation. Eventually CSNET access added TCP/IP, including running over X.25.[9] The email relaying service was called Phonenet, after the telephone-specific channel of the MMDF software developed by Crocker. The CSNET name service allowed manual and automated email address lookup based on various user attributes, such as name, title, or institution.[10] The X.25 tunneling allowed an institution to connect directly to the ARPANET via a commercial X.25 service (Telenet), by which the institution's TCP/IP traffic would be tunneled to a CSNET computer that acted as a relay between the ARPANET and the commercial X.25 networks. CSNET also developed dialup-on-demand (Dialup IP) software to automatically initiate or disconnect SLIP sessions to remote locations as needed.[11] CSNET was developed on Digital Equipment Corporation (DEC) VAX-11 systems using BSD Unix, but it grew to support a variety of hardware and operating system platforms.
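The dialup-on-demand idea described above can be sketched in a few lines. This is an illustrative model only, not CSNET's actual Dialup IP software; the class name, methods, and idle-timeout value are all invented for the example:

```python
# A toy model of dial-on-demand link management: the link is dialed lazily
# when the first packet needs to go out, and hung up after an idle timeout.
# Times are passed in explicitly (seconds) to keep the sketch deterministic.

class DemandDialLink:
    """Models a dial-up link that connects on demand and hangs up when idle."""

    def __init__(self, idle_timeout=300):
        self.idle_timeout = idle_timeout  # seconds of silence before hang-up
        self.connected = False
        self.last_activity = None

    def send(self, packet, now):
        # Any outbound packet while the line is down triggers a dial-up.
        if not self.connected:
            self._dial()
        self.last_activity = now
        return len(packet)  # pretend the packet was written to the line

    def tick(self, now):
        # Called periodically; hang up if the line has been idle too long.
        if self.connected and now - self.last_activity >= self.idle_timeout:
            self._hangup()

    def _dial(self):
        self.connected = True   # real software would run a modem chat script

    def _hangup(self):
        self.connected = False
```

A session thus exists only while traffic flows, which mattered when the underlying resource was a metered telephone call.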

Recognition
At the July 2009 Internet Engineering Task Force meeting in Stockholm, Sweden, the Internet Society recognized the pioneering contribution of CSNET by honoring it with the Jonathan B. Postel Service Award. Crocker accepted the award on behalf of Landweber and the other principal investigators.[12] A recording of the award presentation and acceptance is available.[13]

References
[1] "The Internet—From Modest Beginnings" (http://www.nsf.gov/about/history/nsf0050/internet/modest.htm). NSF website. Retrieved September 30, 2011.
[2] Dave Crocker (August 18, 2008). "Impact of Email Work at The Rand Corporation in the mid-1970s" (http://bbiw.net/articles/rand-email.pdf). Retrieved September 30, 2011.
[3] Douglas Comer (October 1983). "History and overview of CSNET". Communications (Association for Computing Machinery) 26 (10). doi:10.1145/358413.358423.
[4] Peter J. Denning; Anthony Hearn; C. William Kern (April 1983). "History and overview of CSNET" (http://www.isoc.org/internet/history/documents/Comm83.pdf). Proceedings of the symposium on Communications Architectures & Protocols (SIGCOMM, Association for Computing Machinery) 13 (2). doi:10.1145/1035237.1035267. ISBN 0-89791-089-3.
[5] Rick Adrion (October 5, 1983). "CSNET Transition Plan Bulletin #1" (http://www.rfc-editor.org/in-notes/museum/csnet-transition-bulletin.n1.1). Email message. National Science Foundation. Retrieved September 30, 2011.
[6] CSNET History (http://www.livinginternet.com/i/ii_csnet.htm)
[7] Jack J. Dongarra; Eric Grosse (May 1987). "Distribution of mathematical software via electronic mail". Communications (Association for Computing Machinery) 30 (5). doi:10.1145/22899.22904.
[8] CSNET-CIC Shutdown Notice (ftp://athos.rutgers.edu/resource-guide/chapter6/section6-6.txt)
[9] Craig Partridge; Leo Lanzillo (February 1989). "Implementation of Dial-up IP for UNIX Systems". Proceedings of the 1989 Winter USENIX Technical Conference (USENIX Association).
[10] Larry Landweber; Michael Litzkow; D. Neuhengen; Marvin Solomon (April 1983). "Architecture of the CSNET name server". Proceedings of the symposium on Communications Architectures & Protocols (SIGCOMM, Association for Computing Machinery) 13 (2). doi:10.1145/1035237.1035268. ISBN 0-89791-089-3.
[11] Dialup IP 2.0 README (ftp://ftp.isy.liu.se/pub/misc/dialup2.0.README)
[12] "Trailblazing CSNET Network Receives 2009 Jonathan B. Postel Service Award" (http://isoc.org/wp/newsletter/?p=1098). News release (Internet Society). July 29, 2009. Retrieved September 30, 2011.
[13] Lynn St. Amour, Dave Crocker (July 29, 2009). "Postel Award to CSNET" (http://bbiw.net/misc/IETF75-ISOC-Postel-CSNet.mp3). Audio recording. Retrieved September 30, 2011.


External links
- Living Internet: CSNet (http://livinginternet.com/i/ii_csnet.htm)
- Exploring the Internet: Round Three, Madison (http://museum.media.org/eti/RoundThree08.html)

ENQUIRE
Inventor: Tim Berners-Lee
Launch year: 1980[1]
Company: CERN

ENQUIRE was a software project written in 1980 by Tim Berners-Lee at CERN,[2] which was the predecessor to the World Wide Web.[2][3][4] It was a simple hypertext program[4] that had some of the same ideas as the Web and the Semantic Web but was different in several important ways. According to Berners-Lee, the name was inspired by a book entitled Enquire Within Upon Everything.[2][3][5]

The conditions
At that time approximately 10,000 people were working at CERN with different hardware, software, and individual requirements. Much work was done by email and file interchange.[4] The scientists needed to keep track of different things,[3] and different projects became involved with each other.[2] Berners-Lee began a six-month contract at CERN on 23 June 1980, during which he developed ENQUIRE.[6] The requirements for a new system were compatibility with different networks, disk formats, data formats, and character encoding schemes, which made any attempt to transfer information between dissimilar systems a daunting and generally impractical task.[7] Earlier hypertext systems, such as Memex and NLS, did not meet these requirements.[7]

Differences from HyperCard
ENQUIRE was similar to Apple's HyperCard, which also lacked clickable text and was not "hypertext", but ENQUIRE lacked an image system.[1] Its advantage was that it was portable and ran on different systems.[1]

Differences from the World Wide Web


ENQUIRE was not intended for release to the general public. It had pages called cards, with hyperlinks within the cards. The links had different meanings: about a dozen relationship types described how a card related to its creator and to the things, documents, and groups it covered. These relationships could be seen by everybody, explaining why a link was needed or what would happen if a card were removed.[4] Everybody was allowed to add new cards, but a new card always had to link to an existing card.[6]


Relationship   Inverse relationship
made           was made by
includes       is part of
uses           is used by
describes      described by

ENQUIRE was closer to a modern wiki than to a web site:
- a database, though a closed system (all of the data could be taken as a workable whole);[2]
- bidirectional hyperlinks (in Wikipedia and MediaWiki this is approximated by the "What links here" feature). This bidirectionality allows ideas, notes, etc. to link to each other without the author being aware of it; in a way, they (or at least their relationships) get a life of their own;[4][8]
- direct editing of the server (like wikis and CMS/blogs);[2]
- ease of compositing, particularly when it comes to hyperlinking.[2]

The World Wide Web was created to unify the different systems existing at CERN, such as ENQUIRE, CERNDOC, VMS/Notes, and USENET.[1]
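The card-and-typed-link model with automatic bidirectional relationships can be sketched as follows. This is a hypothetical reconstruction for illustration, not ENQUIRE's actual code: the class and function names are invented, and only the four relationship pairs from the table above are modeled:

```python
# A sketch of ENQUIRE-style cards with typed, bidirectional links: storing a
# link on one card automatically records the inverse relationship on the
# other card, so neither author needs to be aware of the back-reference.

INVERSE = {
    "made": "was made by",
    "includes": "is part of",
    "uses": "is used by",
    "describes": "described by",
}

class Card:
    def __init__(self, title):
        self.title = title
        self.links = []  # list of (relationship, other_card) pairs

def add_link(source, relationship, target):
    """Record a typed link on source and its inverse on target."""
    source.links.append((relationship, target))
    target.links.append((INVERSE[relationship], source))
```

Linking a person card to a project card with "made" leaves the project card knowing it "was made by" that person, without any extra step by the author.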

Why ENQUIRE failed


Berners-Lee came back to CERN in 1984 and used his own system intensively.[1][4] He realized that most of his time coordinating a project went into keeping information up to date.[4] He recognized that a system similar to ENQUIRE was needed, "but accessible to everybody."[4] People needed to be able to create cards independently of others and to link to other cards without updating the linked card. This idea was the big difference from ENQUIRE and became a cornerstone of the World Wide Web.[4] Berners-Lee never made ENQUIRE suitable for other people to use successfully, even though other CERN divisions were in situations similar to his own.[1] Another problem was that external links, for example to existing databases, were not allowed, and that the system was not powerful enough to handle enough connections to the database.[1][2] Further development stopped because Berners-Lee gave the ENQUIRE disc to Robert Cailliau, who had been working under Brian Carpenter, before he left CERN. Carpenter suspects that the disc was reused for other purposes, since nobody was later available to do further work on ENQUIRE.[9]

Technical
The application ran on a plain-text terminal of 24 rows by 80 columns.[4] The first version was able to hyperlink between files.[2] ENQUIRE was written in the Pascal programming language and implemented on a Norsk Data NORD-10 under SINTRAN III;[2][6][4][8][9] version 2 was later ported to MS-DOS and to VAX/VMS.[2][4]

References
[1] Berners-Lee, Tim (May 1990). "Information Management: A Proposal" (http://www.w3.org/History/1989/proposal.html). World Wide Web Consortium. Retrieved 25 August 2010.
[2] Berners-Lee, Tim. "Frequently asked questions - Start of the web: Influences" (http://www.w3.org/People/Berners-Lee/FAQ.html#Influences). World Wide Web Consortium. Retrieved 22 July 2010.
[3] Jeffery, Simon; Fenn, Chris; Smith, Bobbie; Coumbe, John (23 October 2009). "A people's history of the internet: from Arpanet in 1969 to today" (http://www.guardian.co.uk/technology/interactive/2009/oct/23/internet-arpanet) (Flash). London: The Guardian. See 1980. Retrieved 7 January 2010.
[4] Berners-Lee, Tim (ca. 1993/1994). "A Brief History of the Web" (http://www.w3.org/DesignIssues/TimBook-old/History.html). World Wide Web Consortium. Retrieved 24 August 2010.
[5] Finkelstein, Prof. Anthony (15 August 2003). "ENQUIRE WITHIN UPON EVERYTHING" (http://www.open2.net/ictportal/app/comp_life/future1.htm). ICT Portal. BBC. Retrieved 7 January 2010.

[6] "History of the Web" (http://www.w3c.rl.ac.uk/primers/history/origins.htm). Oxford Brookes University. 2002. Retrieved 20 November 2010.
[7] Berners-Lee, Tim (August 1996). "The World Wide Web: Past, Present and Future" (http://www.w3.org/People/Berners-Lee/1996/ppf.htm). World Wide Web Consortium. Retrieved 25 August 2010.
[8] Cailliau, Robert (1995). "A Little History of the World Wide Web" (http://www.w3.org/History.html). World Wide Web Consortium. Retrieved 25 July 2010.
[9] Palmer, Sean B.; Berners-Lee, Tim (February/March 2001). "Enquire Manual In HyperText" (http://infomesh.net/2001/enquire/manual/#editorial). Retrieved 30 August 2010.


Further reading
Berners-Lee, Tim (2000). Weaving the web. The original design and ultimate destiny of the World Wide Web. New York: Harper Business.

External links
- ENQUIRE Manual (http://infomesh.net/2001/enquire/manual/)
- Scanned images of the Enquire Manual from 1980 (http://www.w3.org/History/1980/Enquire/scaled/)

IPSS
The International Packet Switched Service (IPSS) was created in 1978 by a collaboration between the United Kingdom's General Post Office, Western Union International, and the United States' Tymnet. The network grew from Europe and the USA to cover Canada, Hong Kong, and Australia by 1981, and by the 1990s it provided a worldwide networking infrastructure. Companies and individual users could connect to the network via a PSS (Packet Switch Stream) modem or an X.25 PAD (Packet Assembler/Disassembler) and a dedicated PSS line, and use it to reach a variety of online databases and mainframe systems. PSS lines were offered at a choice of about three different speeds, with faster lines costing more to rent. By 1984, British Telecom had joined the PSS global network and was providing IPSS services to customers. Companies including Dynatech were providers of interconnectivity and infrastructure devices, including line drivers, modems, self-configuring modems, 4-, 8-, and 16-port PADs, and switches. These were physical boxes delivering full implementation of the X.25, X.28, X.29, and X.3 protocols, with physical connectivity conforming to the RS-232 synchronous connectivity specification. In 1988, the IPSS directory listed approximately 800 global sites available for connection via X.25.


MILNET
In computer networking, MILNET (Military Network, also known as "Military Net") was the name given to the part of the ARPANET internetwork designated for unclassified United States Department of Defense traffic. MILNET was split off from the ARPANET in 1983: the ARPANET remained in service for the academic research community, but direct connectivity between the networks was severed for security reasons. Gateways relayed electronic mail between the two networks. BBN Technologies built and managed both MILNET and the ARPANET, and the two networks used very similar technology. During the 1980s, MILNET expanded to become the Defense Data Network, a worldwide set of military networks running at different security levels. In the 1990s, MILNET became the NIPRNET.


NSFNET
Commercial?: No
Type of network: Data
Location: USA
Operator: Merit Network with IBM, MCI, the State of Michigan, and later ANS
Protocols: TCP/IP and OSI
Established: 1985
Funding: National Science Foundation
Current status: Decommissioned April 30, 1995, superseded by the commercial Internet
Website: NSFNET history[1]

The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the National Science Foundation (NSF) beginning in 1985 to promote advanced research and education networking in the United States.[2] NSFNET was also the name given to several nationwide backbone networks constructed to support NSF's networking initiatives from 1985 to 1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, it developed, through further public funding and private industry partnerships, into a major part of the Internet backbone.

History
Following the 1981 deployment of the Computer Science Network (CSNET), a network that provided Internet services to academic computer science departments, the U.S. National Science Foundation (NSF) aimed to create an academic research network giving researchers access to the supercomputing centers funded by NSF in the United States.[3] In 1985, NSF began funding the creation of five new supercomputing centers: the John von Neumann Computing Center at Princeton University, the San Diego Supercomputer Center (SDSC) on the campus of the University of California, San Diego (UCSD), the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the Cornell Theory Center at Cornell University, and the Pittsburgh Supercomputing Center (PSC), a joint effort of Carnegie Mellon University, the University of Pittsburgh, and Westinghouse. Also in 1985, under the leadership of Dennis Jennings, the NSF established the National Science Foundation Network (NSFNET). NSFNET was to be a general-purpose research network: a hub to connect the five supercomputing centers, along with the NSF-funded National Center for Atmospheric Research (NCAR), to each other and to the regional research and education networks that would in turn connect campus networks. Using this three-tier network architecture, NSFNET would provide access between the supercomputer centers and other sites over the backbone network at no cost to the centers or to the regional networks, using the open TCP/IP protocols initially deployed successfully on the ARPANET.

NSF's three tiered network architecture
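The three-tier arrangement can be modeled as a simple hierarchy in which traffic between two campuses climbs only as far as the first tier both sites share. This is an illustrative sketch with invented names, not NSFNET software:

```python
# A toy model of a three-tier network: campus networks attach to regional
# networks, which attach to the backbone. transit_path() lists the networks
# a packet traverses from one site up to the first tier shared with another.

class Network:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # next tier up, or None for the backbone

    def ancestors(self):
        node, chain = self, []
        while node:
            chain.append(node)
            node = node.parent
        return chain

def transit_path(a, b):
    """Networks traversed from site a up to the first tier shared with b."""
    shared = next(n for n in a.ancestors() if n in b.ancestors())
    path, node = [], a
    while node is not shared:
        path.append(node.name)
        node = node.parent
    return path + [shared.name]
```

Two campuses on the same regional network never burden the backbone, while cross-country traffic rises to the backbone exactly once; that locality is the point of the tiered design.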


The 56-kbit/s backbone


NSFNET initiated operations in 1986 using TCP/IP. Its six backbone sites were interconnected with leased 56-kbit/s links, built by a group including the University of Illinois National Center for Supercomputing Applications (NCSA), the Cornell University Theory Center, the University of Delaware, and the Merit Network. PDP-11/73 minicomputers with routing and management software, called Fuzzballs, served as the network routers, since they already implemented the TCP/IP standard. This original 56-kbit/s backbone was overseen by the supercomputer centers themselves, with the lead taken by Ed Krol at the University of Illinois at Urbana-Champaign. The PDP-11/73 Fuzzball routers were configured and run by Hans-Werner Braun at the Merit Network[4] and statistics were collected by Cornell University. Support for NSFNET end-users was provided by the NSF Network Service Center (NNSC), located at BBN Technologies, and included publishing the softbound "Internet Manager's Phonebook", which listed the contact information for every issued domain name and IP address in 1990.[5] Ed Krol also authored the Hitchhiker's Guide to the Internet to help users of the NSFNET understand its capabilities;[6] it became one of the first help manuals for the Internet. As the regional networks grew, the 56-kbit/s NSFNET backbone experienced rapid increases in network traffic and became seriously congested. In June 1987, NSF issued a new solicitation to upgrade and expand NSFNET.[7]

56K NSFNET Backbone, c. 1988

T1 NSFNET Backbone, c. 1991

The 1.5 Mbit/s (T1) backbone


T3 NSFNET Backbone, c. 1992

As a result of a November 1987 NSF award to the Merit Network, a networking consortium of public universities in Michigan, the original 56-kbit/s network was expanded to include 13 nodes interconnected at 1.5 Mbit/s (T1) by July 1988. The backbone nodes used routers based on a collection of nine IBM RT systems running AOS, IBM's version of Berkeley UNIX. Under its cooperative agreement with NSF, the Merit Network was the lead organization in a partnership that included IBM, MCI, and the State of Michigan. Merit provided overall project coordination, network design and engineering, a Network Operations Center (NOC), and information services to assist the regional networks. IBM provided equipment, software development, installation, maintenance, and operations support. MCI provided the T1 data circuits at reduced rates. The State of Michigan provided funding for facilities and personnel. Eric M. Aupperle, Merit's President, was the NSFNET Project Director, and Hans-Werner Braun was Co-Principal Investigator.

NSFNET Traffic 1991, NSFNET backbone nodes are shown at the top, regional networks below, traffic volume is depicted from purple (zero bytes) to white (100 billion bytes), visualization by NCSA using traffic data provided by the Merit Network.

From 1987 to 1994, Merit organized a series of "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. During this period, but separately from its support for the NSFNET backbone, NSF funded: the NSF Connections Program, which helped colleges and universities obtain or upgrade connections to regional networks; regional networks, to obtain or upgrade equipment and data communications circuits; the NNSC and its successor, the Network Information Services Manager (aka InterNIC) information help desks;[8] the International Connections Manager (ICM), a task performed by Sprint that encouraged connections between the NSFNET backbone and international research and education networks; and various ad hoc grants to organizations such as the Federation of American Research Networks (FARNET). The NSFNET became the principal Internet backbone starting in approximately 1988, when in addition to the five NSF supercomputer centers it included connectivity to the regional networks BARRNet, Merit/MichNet, MIDnet, NCAR, NorthWestNet, NYSERNet, JVNCNet, SESQUINET, SURAnet, and Westnet, which in turn connected about 170 additional networks to the NSFNET.[9] Three new nodes were added as part of the upgrade to T3: NEARNET in Cambridge, Massachusetts; Argonne National Laboratory outside of Chicago; and SURAnet in Atlanta, Georgia.[10] NSFNET connected to other federal government networks including the NASA Science Internet, the Energy Sciences Network (ESnet), and others. Connections were also established to international research and education networks, first to France and Canada, then to NordUnet (serving Denmark, Finland, Iceland, Norway, and Sweden), to Mexico, and to many others. Two Federal Internet Exchanges (FIXes) were established in June 1989[11] under the auspices of the Federal Engineering Planning Group (FEPG).
FIX East was located at the University of Maryland in College Park, and FIX West at the NASA Ames Research Center in Mountain View, California. The existence of NSFNET and the FIXes allowed the ARPANET to be phased out in mid-1990.[12] Starting in August 1990, the NSFNET backbone supported the OSI Connectionless Network Protocol (CLNP) in addition to TCP/IP,[13] although CLNP usage remained low compared to TCP/IP. Traffic on the network continued its rapid growth, doubling every seven months, and projections indicated that the T1 backbone would become overloaded sometime in 1990. A critical routing technology, the Border Gateway Protocol (BGP), originated during this period of Internet history. BGP allowed routers on the NSFNET backbone to differentiate between routes originally learned via multiple paths. Prior to BGP, interconnection between IP networks was inherently hierarchical, and careful planning was needed to avoid routing loops.[14] BGP turned the Internet into a meshed topology, moving away from the core-centric architecture that the ARPANET emphasized.
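BGP's core loop-avoidance rule, which is what makes a meshed topology safe without central planning, can be illustrated in miniature. This is a sketch of the rule only, not a BGP implementation; the AS numbers and function names are invented:

```python
# Miniature illustration of BGP's AS_PATH loop avoidance: a router rejects
# any advertised route whose AS_PATH already contains its own AS number, and
# prepends its ASN to the path when re-advertising a route to a neighbor.

MY_ASN = 64512  # an invented private AS number for this example

def accept_route(as_path):
    """Accept the route unless our ASN already appears in its AS_PATH."""
    return MY_ASN not in as_path

def advertise(as_path):
    """Prepend our ASN to the AS_PATH before passing the route onward."""
    return [MY_ASN] + as_path
```

Because every router stamps its ASN onto the path and refuses paths bearing its own stamp, a route that loops back to its origin is discarded rather than forwarded forever, so arbitrary meshes of networks can interconnect without a coordinating hierarchy.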


The 45-Mbit/s (T3) backbone


During 1991 the backbone was upgraded to 45 Mbit/s (T3) transmission speed and expanded to interconnect 16 nodes. The routers on the upgraded backbone were based on an IBM RS/6000 workstation running UNIX. Core nodes were located at MCI facilities with end nodes at the connected regional networks and supercomputing centers. Completed in November 1991, the transition from T1 to T3 did not go as smoothly as the transition from 56K to T1, took longer than planned, and as a result there was at times serious congestion on the overloaded T1 backbone. Following the transition to T3, portions of the T1 backbone were left in place to act as a backup for the new T3 backbone.

Packet Traffic on the NSFNET Backbone, January 1988 to June 1994

In anticipation of the T3 upgrade and the approaching end of the 5-year NSFNET cooperative agreement, in September 1990 Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with NSF, Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS. Both IBM and MCI made substantial new financial and other commitments to help support the new venture. Allan Weis left IBM to become ANS's first President and Managing Director. Douglas Van Houweling, former Chair of the Merit Network Board and Vice Provost for Information Technology at the University of Michigan, was Chairman of the ANS Board of Directors. The new T3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service.

Regional networks
In addition to the five NSF supercomputer centers, NSFNET provided connectivity to eleven regional networks and, through these networks, to many smaller regional and campus networks. The NSFNET regional networks were:[10][15]
- BARRNet, the Bay Area Regional Research Network in Palo Alto, California;
- CERFNET, the California Education and Research Federation Network in San Diego, California, serving California and Nevada;
- CICNet, the Committee on Institutional Cooperation Network, via the Merit Network in Ann Arbor, Michigan, and later as part of the T3 upgrade via Argonne National Laboratory outside of Chicago, serving the Big Ten universities and the University of Chicago in Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin;
- Merit/MichNet in Ann Arbor, Michigan, serving Michigan, formed in 1966, still in operation as of 2012;[16]
- MIDnet in Lincoln, Nebraska, serving Arkansas, Iowa, Kansas, Missouri, Nebraska, Oklahoma, and South Dakota;
- NEARNET, the New England Academic and Research Network in Cambridge, Massachusetts, added as part of the upgrade to T3, serving Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont, established in late 1988 and operated by BBN under contract to MIT; BBN assumed responsibility for NEARNET on 1 July 1993;[17]
- NorthWestNet in Seattle, Washington, serving Alaska, Idaho, Montana, North Dakota, Oregon, and Washington, founded in 1987;[18]
- NYSERNet, the New York State Education and Research Network in Ithaca, New York;
- JVNCNet, the John von Neumann National Supercomputer Center Network in Princeton, New Jersey, serving Delaware and New Jersey;

- SESQUINET, the Sesquicentennial Network in Houston, Texas, founded during the 150th anniversary of the State of Texas;
- SURAnet, the Southeastern Universities Research Association network in College Park, Maryland, and later as part of the T3 upgrade in Atlanta, Georgia, serving Alabama, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia, sold to BBN in 1994; and
- Westnet in Salt Lake City, Utah and Boulder, Colorado, serving Arizona, Colorado, New Mexico, Utah, and Wyoming.


Commercial traffic
The NSF's appropriations act authorized NSF to "foster and support the development and use of computer and other scientific and engineering methods and technologies, primarily for research and education in the sciences and engineering." This allowed NSF to support NSFNET and related networking initiatives, but only to the extent that that support was "primarily for research and education in the sciences and engineering."[19] This in turn was taken to mean that use of NSFNET for commercial purposes was not allowed.
The NSFNET Backbone Services Acceptable Use Policy, June 1992[20]


General Principle
NSFNET Backbone services are provided to support open research and education in and among US research and instructional institutions, plus research arms of for-profit firms when engaged in open scholarly communication and research. Use for other purposes is not acceptable.

Specifically Acceptable Uses


- Communication with foreign researchers and educators in connection with research or instruction, as long as any network that the foreign user employs for such communication provides reciprocal access to US researchers and educators.
- Communication and exchange for professional development, to maintain currency, or to debate issues in a field or subfield of knowledge.
- Use for disciplinary-society, university-association, government-advisory, or standards activities related to the user's research and instructional activities.
- Use in applying for or administering grants or contracts for research or instruction, but not for other fundraising or public relations activities.
- Any other administrative communications or activities in direct support of research and instruction.
- Announcements of new products or services for use in research or instruction, but not advertising of any kind.
- Any traffic originating from a network of another member agency of the Federal Networking Council if the traffic meets the acceptable use policy of that agency.
- Communication incidental to otherwise acceptable use, except for illegal or specifically unacceptable use.

Unacceptable Uses
10. Use for for-profit activities, unless covered by the General Principle or as a specifically acceptable use.
11. Extensive use for private or personal business.

This statement applies to use of the NSFNET Backbone only. NSF expects that connecting networks will formulate their own use policies. The NSF Division of Networking and Communications Research and Infrastructure will resolve any questions about this Policy or its interpretation.


Acceptable Use Policy (AUP)


To ensure that NSF support was used appropriately, NSF developed an NSFNET Acceptable Use Policy (AUP) that outlined in broad terms the uses of NSFNET that were and were not allowed.[20] The AUP was revised several times to make it clearer and to allow the broadest possible use of NSFNET, consistent with Congress' wishes as expressed in the appropriations act.

A notable feature of the AUP is that it defines acceptable uses of the network largely in terms of the use itself, not of who or what type of organization is making that use. Use by for-profit organizations is acceptable when it is in support of open research and education. And some uses, such as fundraising, advertising, public relations activities, extensive personal or private use, for-profit consulting, and all illegal activities, are never acceptable, even when the user is a non-profit college, university, K-12 school, or library.

While these AUP provisions seem quite reasonable, in specific cases they often proved difficult to interpret and enforce. NSF did not monitor the content of traffic that was sent over NSFNET or actively police the use of the network, and it did not require Merit or the regional networks to do so. NSF, Merit, and the regional networks did, however, investigate possible cases of inappropriate use when such use was brought to their attention.[21]

An example may help to illustrate the problem. Is it acceptable for a parent to exchange e-mail with a child enrolled at a college or university, if that exchange uses the NSFNET backbone? It would be acceptable if the subject of the e-mail was the student's instruction or a research project. Even if the subject was not instruction or research, the e-mail still might be acceptable as private or personal business, as long as the use was not extensive.[22]

The prohibition on commercial use of the NSFNET backbone[23] meant that some organizations could not connect to the Internet via regional networks that were attached to the NSFNET backbone, while other organizations (or regional networks on their behalf), including some non-profit research and educational institutions, would need to obtain two connections to be fully connected: one to an NSFNET-attached regional network and one to a non-NSFNET-attached network provider. In either case the situation was confusing and inefficient. It prevented economies of scale, increased costs, or both, and this slowed the growth of the Internet and its adoption by new classes of users, something no one was happy about.

Commercial ISPs, ANS CO+RE, and the CIX


During the period when NSFNET was being established, Internet service providers that allowed commercial traffic began to emerge, such as Alternet, PSINet, CERFNet, and others. The commercial networks in many cases were interconnected to the NSFNET and routed traffic over the NSFNET nominally according to the NSFNET acceptable use policy.[24] Additionally, these early commercial networks often directly interconnected with each other as well as, on a limited basis, with some of the regional Internet networks.

In 1991, the Commercial Internet eXchange (CIX, pronounced "kicks") was created by PSINet, UUNET and CERFnet to provide a location at which multiple networks could exchange traffic free from traffic-based settlements and restrictions imposed by an acceptable use policy.[25]

Also in 1991, a new ISP, ANS CO+RE (commercial plus research), raised concerns and unique questions regarding commercial and non-commercial interoperability policies. ANS CO+RE was the for-profit subsidiary of the non-profit Advanced Network and Services (ANS) that had been created earlier by the NSFNET partners Merit, IBM, and MCI.[26] ANS CO+RE was created specifically to allow commercial traffic on ANSNet without jeopardizing its parent's non-profit status or violating any tax laws. The NSFNET Backbone Service and ANS CO+RE both used and shared the common ANSNet infrastructure.

NSF agreed to allow ANS CO+RE to carry commercial traffic subject to several conditions:
- that the NSFNET Backbone Service was not diminished;
- that ANS CO+RE recovered at least the average cost of the commercial traffic traversing the network; and
- that any excess revenues recovered above the cost of carrying the commercial traffic would be placed into an infrastructure pool to be distributed by an allocation committee broadly representative of the networking community to enhance and extend national and regional networking infrastructure and support.

For a time ANS CO+RE refused to connect to the CIX and the CIX refused to purchase a connection to ANS CO+RE. In May 1992 Mitch Kapor and Al Weis forged an agreement under which ANS would connect to the CIX as a "trial", with the ability to disconnect at a moment's notice and without the need to join the CIX as a member.[27] This compromise resolved things for a time, but later the CIX started to block access from regional networks that had not paid the $10,000 fee to become members of the CIX.[28]


An unfortunate state of affairs


The creation of ANS CO+RE and its initial refusal to connect to the CIX was one of the factors that led to the controversy described later in this article. Other issues had to do with: differences in the cultures of the non-profit research and education community and the for-profit community, with ANS trying to be a member of both camps and not being fully accepted by either; differences of opinion about the best approach for opening the Internet to commercial use while maintaining and encouraging a fully interconnected Internet; and differences of opinion about the correct type and level of involvement of the public and private sectors in Internet networking initiatives.

For a time this unfortunate state of affairs kept the networking community as a whole from fully implementing the true vision for the Internet: a world-wide network of fully interconnected TCP/IP networks allowing any connected site to communicate with any other connected site. These problems would not be fully resolved until a new network architecture was developed and the NSFNET Backbone Service was turned off in 1995.

Privatization and a new network architecture


The NSFNET Backbone Service was primarily used by academic and educational entities, and was a transitional network bridging the era of the ARPANET and CSNET into the modern Internet of today. On April 30, 1995, the NSFNET Backbone Service was successfully transitioned to a new architecture[29] and the NSFNET backbone was decommissioned.[30] At this point there were still NSFNET programs, but there was no longer an NSFNET network or network service.

[Diagram: New network architecture, c. 1995]


After the transition, network traffic was carried on any of several commercial backbone networks, internetMCI, PSINet, SprintLink, ANSNet, and others. Traffic between networks was exchanged at four Network Access Points or NAPs. The NAPs were located in New York (actually New Jersey), Washington, D.C., Chicago, and San Jose and run by Sprint, MFS Datanet, Ameritech, and Pacific Bell.[31] The NAPs were the forerunners of modern Internet exchange points.
[Diagram: NSF's very high speed Backbone Network Service (vBNS)]

The former NSFNET regional networks could connect to any of the new backbone networks or directly to the NAPs, but in either case they would need to pay for their own connections. NSF provided some funding for the NAPs and interim funding to help the regional networks make the transition, but did not fund the new backbone networks directly.

To help ensure the stability of the Internet during and immediately after the transition from NSFNET, NSF conducted a solicitation to select a Routing Arbiter (RA) and ultimately made a joint award to the Merit Network and USC's Information Sciences Institute to act as the RA.

To continue its promotion of advanced networking technology, NSF conducted a solicitation to create a very high-speed Backbone Network Service (vBNS) that, like NSFNET before it, would focus on providing service to the research and education community. MCI won this award and created a 155 Mbit/s (OC3c), and later a 622 Mbit/s (OC12c) and 2.5 Gbit/s (OC48c), ATM network to carry TCP/IP traffic primarily between the supercomputing centers and their users. NSF support[32] was available to organizations that could demonstrate a need for very high speed networking capabilities and wished to connect to the vBNS or to the Abilene Network, the high-speed network operated by the University Corporation for Advanced Internet Development (UCAID, aka Internet2).[33]

At the February 1994 regional techs meeting in San Diego, the group revised its charter[34] to include a broader base of network service providers, and subsequently adopted North American Network Operators' Group (NANOG) as its new name. Elise Gerich and Mark Knopper were the founders of NANOG and its first coordinators, followed by Bill Norton, Craig Labovitz, and Susan Harris.[35]

Controversy
For much of the period from 1987 to 1995, following the opening up of the Internet through NSFNET and in particular after the creation of the for-profit ANS CO+RE in May 1991, some Internet stakeholders[36] were concerned over the effects of privatization and the manner in which ANS, IBM, and MCI received a perceived competitive advantage in leveraging federal research money to gain ground in fields in which other companies allegedly were more competitive. The Cook Report on the Internet,[37] which still exists, evolved as one of its largest critics. Other writers, such as Chetly Zarko, a University of Michigan alumnus and freelance investigative writer, offered their own critiques.[38]

On March 12, 1992 the Subcommittee on Science of the Committee on Science, Space, and Technology, U.S. House of Representatives, held a hearing to review the management of NSFNET.[21] Witnesses at the hearing were asked to focus on the agreement(s) that NSF put in place for the operation of the NSFNET backbone and the foundation's plan for recompetition of those agreements, and to help the subcommittee explore whether NSF's policies provided a level playing field for network service providers, ensured that the network was responsive to user needs, and provided for effective network management. The subcommittee heard from seven witnesses, asked them a number of questions, and received written statements from all seven as well as from three others.

At the end of the hearing, speaking to the two witnesses from NSF, Dr. Nico Habermann, Assistant NSF Director for the Computer and Information Science and Engineering Directorate (CISE), and Dr. Stephen Wolff, Director of NSF's Division of Networking & Communications Research & Infrastructure (DNCRI), Representative Boucher, Chairman of the subcommittee, said: "I think you should be very proud of what you have accomplished. Even those who have some constructive criticism of the way that the network is presently managed acknowledge at the outset that you have done a terrific job in accomplishing the goal of this NSFNET, and its user-ship is enormously up, its cost to the users has come down, and you certainly have our congratulations for that excellent success."

Subsequently the subcommittee drafted legislation, which became law on October 23, 1992, authorizing the National Science Foundation to foster and support access by the research and education communities to computer networks which may be used substantially for purposes in addition to research and education in the sciences and engineering, if the additional uses will tend to increase the overall capabilities of the networks to support such research and education activities (that is to say, commercial traffic).[39] This legislation allowed, but did not require, NSF to repeal or modify its existing NSFNET Acceptable Use Policy (AUP),[20] which restricted network use to activities in support of research and education.[23]

The hearing also led to a request from Rep. Boucher asking the NSF Inspector General to conduct a review of NSF's administration of NSFNET.
The NSF Office of the Inspector General released its report on March 23, 1993.[26] The report concluded by:
- stating that "[i]n general we were favorably impressed with the NSFNET program and staff";
- finding no serious problems with the administration, management, and use of the NSFNET Backbone Service;
- complimenting the NSFNET partners, saying that "the exchange of views among NSF, the NSFNET provider (Merit/ANS), and the users of NSFNET [via a bulletin board system], is truly remarkable in a program of the federal government"; and
- making 17 "recommendations to correct certain deficiencies and strengthen the upcoming re-solicitation."


References
[1] http://www.nsf.gov/about/history/nsf0050/internet/launch.htm
[2] NSFNET: The Partnership That Changed The World (http://www.nsfnet-legacy.org/), Web site for an event held to celebrate the NSFNET, November 2007
[3] The Internet - changing the way we communicate (http://www.nsf.gov/about/history/nsf0050/internet/internet.htm), the National Science Foundation's Internet history
[4] The Merit Network, Inc. is an independent non-profit 501(c)(3) corporation governed by Michigan's public universities. Merit receives administrative services under an agreement with the University of Michigan.
[5] (http://www.mail-archive.com/list@ifwp.org/msg08868.html)
[6] RFC 1118: The Hitchhikers Guide to the Internet (http://tools.ietf.org/html/rfc1118), E. Krol, September 1989
[7] NSF 87-37: Project Solicitation for Management and Operation of the NSFNET Backbone Network, June 15, 1987.
[8] InterNIC Review Paper (http://www.codeontheroad.com/papers/InterNIC.Review.pdf)
[9] NSFNET - National Science Foundation Network (http://www.livinginternet.com/i/ii_nsfnet.htm) in the history section of the Living Internet (http://www.livinginternet.com/)
[10] "Retiring the NSFNET Backbone Service: Chronicling the End of an Era" (http://www.merit.edu/networkresearch/projecthistory/nsfnet/nsfnet_article.php), Susan R. Harris and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996
[11] Profile: At Home's Milo Medin (http://www.wired.com/science/discoveries/news/1999/01/17425), Wired, January 20, 1999
[12] "The Technology Timetable" (https://babel.hathitrust.org/cgi/pt?seq=1&view=image&size=100&id=mdp.39015035356347&u=1&num=40), Link Letter, Volume 7, No. 1 (July 1994), p. 8, Merit/NSFNET Information Services, Merit Network, Ann Arbor
[13] Link Letter (http://babel.hathitrust.org/cgi/pt?id=mdp.39015035356347;page=root;view=image;size=100;seq=1;num=1), Volume 4, No. 3 (Sept/Oct 1991), p. 1, NSFNET Information Services, Merit Network, Inc., Ann Arbor
[14] "coprorations using BGP for advertising prefixes in mid-1990s" (http://seclists.org/nanog/2011/May/478), e-mail to the NANOG list from Jessica Yu, 13 May 2011
[15] "NSFNET: The Community" (http://www.nsfnet-legacy.org/archives/06--Community.pdf), panel presentation slides, Doug Gale moderator, NSFNET: The Partnership That Changed The World, 29 November 2007
[16] "Merit—Who, What, and Why, Part One: The Early Years, 1964-1983" (http://www.merit.edu/about/history/pdf/MeritHistory.pdf), Eric M. Aupperle, Merit Network, Inc., in Library Hi Tech, vol. 16, No. 1 (1998)
[17] "BBN to operate NEARnet" (http://web.mit.edu/newsoffice/1993/bbn-0714.html), MIT News, 14 July 1993
[18] "About NorthWestNet" (http://www.gutenberg.org/files/40/40-ps.ps), NorthWestNet User Services Internet Resource Guide, NorthWestNet Academic Computing Consortium, Inc., 24 March 1992, accessed 3 July 2012
[19] March 16, 1992 memo from Mariam Leder, NSF Assistant General Counsel, to Steven Wolff, Division Director, NSF DNCRI (included at page 128 of Management of NSFNET (http://www.eric.ed.gov/ERICWebPortal/search/recordDetails.jsp?ERICExtSearch_SearchValue_0=ED350986&searchtype=keyword&ERICExtSearch_SearchType_0=no&_pageLabel=RecordDetails&accno=ED350986&_nfls=false), a transcript of the March 12, 1992 hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, U.S. House of Representatives, One Hundred Second Congress, Second Session, Hon. Rick Boucher, subcommittee chairman, presiding)
[20] NSFNET Acceptable Use Policy (AUP) (http://www.cybertelecom.org/notes/nsfnet.htm#aup), c. 1992
[21] Management of NSFNET (http://www.eric.ed.gov/ERICWebPortal/search/recordDetails.jsp?ERICExtSearch_SearchValue_0=ED350986&searchtype=keyword&ERICExtSearch_SearchType_0=no&_pageLabel=RecordDetails&accno=ED350986&_nfls=false), a transcript of the March 12, 1992 hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, U.S. House of Representatives, One Hundred Second Congress, Second Session, Hon. Rick Boucher, subcommittee chairman, presiding
[22] "I would dearly love to be able to exchange electronic mail with my son in college in Minnesota, but I feel that is probably not acceptable", Steve Wolff, NSF DNCRI Director, speaking as a witness during the March 12, 1992 Management of NSFNET Congressional Hearing (page 124) (http://www.eric.ed.gov/ERICWebPortal/search/recordDetails.jsp?ERICExtSearch_SearchValue_0=ED350986&searchtype=keyword&ERICExtSearch_SearchType_0=no&_pageLabel=RecordDetails&accno=ED350986&_nfls=false)
[23] Even after the appropriations act was amended in 1992 to give NSF more flexibility with regard to commercial traffic, NSF never felt that it could entirely do away with the AUP and its restrictions on commercial traffic; see the response to Recommendation 5 in NSF's response to the Inspector General's review (an April 19, 1993 memo from Frederick Bernthal, Acting Director, to Linda Sundro, Inspector General, that is included at the end of Review of NSFNET (http://www.nsf.gov/pubs/stis1993/oig9301/oig9301.txt), Office of the Inspector General, National Science Foundation, 23 March 1993)
[24] R. Adams UUNET/NSFNET interconnection email (http://www.interesting-people.org/archives/interesting-people/200912/msg00032.html)
[25] The Commercial Internet eXchange Association Router Agreement (http://www.farooqhussain.org/projects/CIX Router Timeline_0905.pdf), c. 2000
[26] Review of NSFNET (http://www.nsf.gov/pubs/stis1993/oig9301/oig9301.txt), Office of the Inspector General, National Science Foundation, 23 March 1993
[27] "ANS CO+RE and CIX Agree to Interconnect" (http://w2.eff.org/effector/effect02.10), EFFector Online, Issue 2.10, June 9, 1992, Electronic Frontier Foundation, ISSN 1062-9424
[28] A series of e-mail messages that discuss various aspects of the CIX as seen from MichNet, the regional network operated by Merit in the State of Michigan: 1 June 1992 (http://www.merit.edu/mail.archives/mjts/1992-06/msg00019.html), 29 June 1992 (http://www.merit.edu/mail.archives/mjts/1992-06/msg00015.html), 29 Sep 1992 (http://www.merit.edu/mail.archives/mjts/1992-09/msg00021.html), 4 Jan 1994 (http://www.merit.edu/mail.archives/mjts/1994-01/msg00000.html), 6 Jan 1994 (http://www.merit.edu/mail.archives/mjts/1994-01/msg00011.html), and 10 Jan 1994 (http://www.merit.edu/mail.archives/mjts/1994-01/msg00016.html)
[29] NSF Solicitation 93-52 (http://w2.eff.org/Infrastructure/Govt_docs/nsf_nren.rfp) - Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and the NREN(SM) Program, May 6, 1993
[30] "Retiring the NSFNET Backbone Service: Chronicling the End of an Era" (http://www.merit.edu/networkresearch/projecthistory/nsfnet/nsfnet_article.php), Susan R. Harris, Ph.D., and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996
[31] E-mail regarding Network Access Points from Steve Wolff (NSF) to the com-priv list (http://www.merit.edu/mail.archives/mjts/1994-03/msg00001.html), sent 13:51 EST 2 March 1994
[32] NSF Program Solicitation 01-73: High Performance Network Connections for Science and Engineering Research (HPNC) (http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf0173), Advanced Networking Infrastructure and Research Program, Directorate for Computer and Information Science and Engineering, National Science Foundation, February 16, 2001, 16 pp.
[33] E-mail regarding the launch of Internet2's Abilene network (http://www.merit.edu/mail.archives/mjts/1999-02/msg00024.html), Merit Joint Technical Staff, 25 February 1999
[34] Original 1994 NANOG Charter (http://www.nanog.org/governance/charter/1994charter.php)
[35] NANOG FAQ (http://www.nanog.org/about/faq/)
[36] Performance Systems International (PSI), AlterNet, Commercial Internet Exchange Association (CIX), Electronic Frontier Foundation (EFF), Gordon Cook, among others; see Cyber Telecom's Web page on "Internet History :: NSFNET" (http://www.cybertelecom.org/notes/nsfnet.htm)
[37] The Cook Report on the Internet (http://www.cookreport.com)
[38] "A Critical Look at the University of Michigan's Role in the 1987 Merit Agreement" (http://www.cookreport.com/index.php?option=com_content&view=article&id=216:310&catid=53:1995&Itemid=63), Chetly Zarko in The Cook Report on the Internet, January 1995, pp. 9-17
[39] Scientific and Advanced-Technology Act of 1992 (http://thomas.loc.gov/cgi-bin/bdquery/z?d102:S.1146:), Public Law No: 102-476, 42 U.S.C. 1862(g)



External links
- The Internet - the Launch of NSFNET (http://www.nsf.gov/about/history/nsf0050/internet/launch.htm), National Science Foundation
- NSFNET: A Partnership for High-Speed Networking, Final Report 1987-1995 (http://www.merit.edu/about/history/pdf/NSFNET_final.pdf), Karen D. Frazer, Merit Network, Inc., 1995
- NSF and the Birth of the Internet (http://www.nsf.gov/news/special_reports/nsf-net/), National Science Foundation, December 2007
- NSFNET notes, summary, photos, reflections, and a video (http://hpwren.ucsd.edu/~hwb/NSFNET/), from Hans-Werner Braun, Co-Principal Investigator for the NSFNET Project at Merit Network, and later Research Scientist at the University of California San Diego and Adjunct Professor at San Diego State University
- "Fool Us Once Shame on You—Fool Us Twice Shame on Us: What We Can Learn from the Privatizations of the Internet Backbone Network and the Domain Name System" (http://digitalcommons.law.wustl.edu/lawreview/vol79/iss1/2), Jay P. Kesan and Rajiv C. Shah, Washington University Law Review, Volume 79, Issue 1 (2001)
- "The Rise of the Internet" (http://www.ibm.com/ibm100/us/en/icons/internetrise/), one of IBM's 100 Icons of Progress (http://www.ibm.com/ibm100/us/en/icons/), by Stephen Grillo, February 11, 2011; highlights IBM's contribution to NSFNET as part of the celebration of IBM's centennial year (http://www.ibm.com/ibm100/us/en/)
- Merit Network: A history (http://www.merit.edu/about/history/)
- NSFNET Link Letter Archive (http://www.nic.funet.fi/pub/netinfo/NSFNET/Linkletter/), April 1988 (Vol. 1 No. 1) to July 1994 (Vol. 7 No. 1), text only, a web and FTP site provided by the Finnish IT center for science (http://www.csc.fi/english); full copies of volumes 4-7, 1991-1994 (http://hdl.handle.net/2027/mdp.39015035356347) are also available from the Hathi Trust Digital Library
- Reflection on NSFNet (http://www.universalsubtitles.org/es/videos/ap3npBCf4nir/info/reflection-on-nsfnet/)


TELENET
Telenet was a commercial packet-switched network which went into service in 1974.[1] It was the first packet-switched network service available to the general public.[2] Various commercial and government interests paid monthly fees for dedicated lines connecting their computers and local networks to this backbone network. Free public dialup access to Telenet, for those who wished to access these systems, was provided in hundreds of cities throughout the United States.

The original founding company, Telenet Inc., was established by Bolt Beranek and Newman (BBN), which recruited Larry Roberts (former head of the ARPANET) as President of the company, along with Barry Wessler. GTE acquired Telenet in 1979.[3] It was later acquired by Sprint and renamed "Sprintnet". Sprint migrated customers from Telenet to the modern-day Sprintlink[4] IP network, one of many networks composing today's Internet.

Telenet had its first offices in downtown Washington, DC, then moved to McLean, Virginia. It was acquired by GTE while in McLean, and then moved its offices to Reston, Virginia. Under its various names, the company operated a public network, and also sold its packet-switching equipment to other carriers and to large enterprise networks.

History
After the establishment of "value added carriers" was legalized in the U.S., Bolt Beranek and Newman (BBN), the private contractor for the ARPANET, set out to create a private-sector version. In January 1975, Telenet Communications Corporation announced that it had acquired the necessary venture capital after a two-year quest, and on August 16 of the same year it began operating the first public packet-switching network.[5][6]

Coverage
Originally, the public network had switching nodes in seven US cities:[7]
- Washington, D.C. (network operations center as well as switching)
- Boston, Massachusetts
- New York, New York
- Chicago, Illinois
- Dallas, Texas
- San Francisco, California
- Los Angeles, California

The switching nodes were fed by Telenet Access Controller (TAC) terminal concentrators both colocated and remote from the switches. By 1980, there were over 1000 switches in the public network. At that time, the next largest network using Telenet switches was that of Southern Bell, which had approximately 250 switches.

Internal Network Technology


The initial network used statically-defined hop-by-hop routing, using Prime commercial minicomputers as switches, but then migrated to a purpose-built multiprocessing switch based on 6502 microprocessors. Among the innovations of this second-generation switch was a patented arbitrated bus interface that created a switched fabric among the microprocessors.[8] By contrast, a typical microprocessor-based system of the time used a bus; switched fabrics did not become common until about twenty years later, with the advent of PCI Express and HyperTransport. Most interswitch lines ran at 56 kbit/s, with a few, such as New York-Washington, at T1 (i.e., 1.544 Mbit/s). The main internal protocol was a proprietary variant on X.75; Telenet also ran standard X.75 gateways to other packet switching networks.

TELENET Originally, the switching tables could not be altered separately from the main executable code, and topology updates had to be made by deliberately crashing the switch code and forcing a reboot from the network management center. Improvements in the software allowed new tables to be loaded, but the network never used dynamic routing protocols. Multiple static routes, on a switch-by-switch basis, could be defined for fault tolerance. Network management functions continued to run on Prime minicomputers. Its X.25 host interface was the first in the industry and Telenet helped standardize X.25 in the CCITT.
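The fault-tolerance scheme described above, an ordered set of static routes consulted hop by hop with fallback when a link fails, can be illustrated with a small Python sketch. The switch names, topology, and route tables here are hypothetical, chosen only to mirror the mechanism; they are not Telenet's actual configuration.

```python
# Toy model of hop-by-hop forwarding with multiple static routes:
# each switch holds an ordered list of candidate next hops per
# destination and falls back to the next entry when a link is down.
# Topology and names below are hypothetical.

# static_routes[switch][destination] -> ordered next-hop candidates
static_routes = {
    "NYC": {"SFO": ["CHI", "WAS"]},  # prefer NYC->CHI, fall back to NYC->WAS
    "CHI": {"SFO": ["SFO"]},
    "WAS": {"SFO": ["DAL"]},
    "DAL": {"SFO": ["SFO"]},
}

def forward(src, dst, down_links):
    """Trace a path from src to dst, skipping any link listed in down_links."""
    path = [src]
    node = src
    while node != dst:
        for nxt in static_routes[node].get(dst, []):
            if (node, nxt) not in down_links:
                node = nxt
                path.append(node)
                break
        else:
            return None  # no usable static route: traffic is dropped
    return path

print(forward("NYC", "SFO", down_links=set()))             # ['NYC', 'CHI', 'SFO']
print(forward("NYC", "SFO", down_links={("NYC", "CHI")}))  # ['NYC', 'WAS', 'DAL', 'SFO']
```

Note that, as in Telenet's network, there is no dynamic routing protocol in this model: the alternate paths exist only because an operator configured them in advance, and if every configured candidate is down the traffic simply has nowhere to go.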


Accessing the Network


Basic Asynchronous Access
Users could use modems on the Public Switched Telephone Network to dial TAC ports, calling either from "dumb" terminals or from computers emulating such terminals. Organizations with a large number of local terminals could install a TAC on their own site, which used a dedicated line, at up to 56 kbit/s, to connect to a switch at the nearest Telenet location. Supported dialup modems had a maximum speed of 1200 bit/s, and later 4800 bit/s.

Computer Access
Computers supporting the X.25 protocol could connect directly to switching centers. These connections ranged from 2.4 to 56 kbit/s.

Other Access Protocols


Telenet supported remote concentrators for IBM 3270-family intelligent terminals, which communicated via X.25 with Telenet-written software that ran in IBM 370x series front-end processors. Telenet also supported Block Mode Terminal Interfaces (BMTI) for IBM Remote Job Entry terminals supporting the 2780/3780 and HASP Bisync protocols.

PC Pursuit
In the late 1980s, Telenet offered a service called PC Pursuit. For a flat monthly fee, customers could dial into the Telenet network in one city, then dial out on the modems in another city to access bulletin board systems and other services. PC Pursuit was popular among computer hobbyists because it sidestepped long-distance charges. In this sense, PC Pursuit was similar to the Internet. Cities accessible by PC Pursuit
City Code   Area Code(s)   City
AZPHO       602            Phoenix, Arizona
CAGLE       818            Glendale, California
CALAN       213            Los Angeles, California
CODEN       303            Denver, Colorado
CTHAR       203            Hartford, Connecticut
FLMIA       305            Miami, Florida
GAATL       404            Atlanta, Georgia
ILCHI       312, 815       Chicago, Illinois
MABOS       617            Boston, Massachusetts
MIDET       313            Detroit, Michigan
MNMIN       612            Minneapolis, Minnesota
NCRTP       919            Research Triangle Park, North Carolina
NJNEW       201            Newark, New Jersey
NYNYO       212, 718       New York City
OHCLV       216            Cleveland, Ohio
ORPOR       503            Portland, Oregon
PAPHI       215            Philadelphia, Pennsylvania
TXDAL       214, 817       Dallas, Texas
TXHOU       713            Houston, Texas
WIMIL       414            Milwaukee, Wisconsin

References
[1] C. J. P. Moschovitis, H. Poole, T. Schuyler, T. M. Senft, History of the Internet: A Chronology, 1843 to the Present, pp. 79-80 (The Moschovitis Group, Inc., 1999)
[2] Stephen Segaller, NERDS 2.0.1: A Brief History of the Internet, p. 115 (TV Books Publisher, 1998)
[3] Robert Cannon, "Industry :: Genuity" (http://www.cybertelecom.org/industry/genuity.htm), Cybertelecom, retrieved 2011-12-21
[4] "Sprintlink.net" (http://www.sprintlink.net/), Sprintlink.net, retrieved 2011-12-21
[5] "Electronic post for switching data," Timothy Johnson, New Scientist, May 13, 1976
[6] Mathison, S.L., Roberts, L.G., and Walker, P.M., "The history of Telenet and the commercialization of packet switching in the U.S." (http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6194380), IEEE Communications Magazine, May 2012
[7] "Telenet inaugurates service" (http://portal.acm.org/citation.cfm?id=1015671.1015674&coll=GUIDE&dl=GUIDE&CFID=31545796&CFTOKEN=18757936), ACM Computer Communications Review, Stuart L. Mathison, 1975
[8] Byars, S. J.; Carr, WN (31 January), "Patent Bus Interface" (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=/netahtml/PTO/search-adv.htm&r=8&f=G&l=50&d=PTXT&p=1&p=1&S1=908056&OS=908056&RS=908056), US Patent 4,802,161 (U.S. Patent and Trademark Office), retrieved 2007-09-18


UUCP
UUCP is an abbreviation of Unix-to-Unix Copy.[1] The term generally refers to a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers. Specifically, a command named uucp is one of the programs in the suite; it provides a user interface for requesting file copy operations. The UUCP suite also includes uux (user interface for remote command execution), uucico (the communication program that performs the file transfers), uustat (reports statistics on recent activity), uuxqt (execute commands sent from remote machines), and uuname (reports the UUCP name of the local system). Although UUCP was originally developed on Unix and is most closely associated with Unix-like systems, UUCP implementations exist for several non-Unix-like operating systems, including Microsoft's MS-DOS, Digital's VAX/VMS, Commodore's AmigaOS, classic Mac OS, and even CP/M.

Technology
UUCP can use several different types of physical connections and link-layer protocols, but was most commonly used over dial-up connections. Before the widespread availability of Internet connectivity, computers were only connected by smaller private networks within a company or organization. They were also often equipped with modems so they could be used remotely from character-mode terminals via dial-up lines. UUCP uses the computers' modems to dial out to other computers, establishing temporary, point-to-point links between them. Each system in a UUCP network has a list of neighbor systems, with phone numbers, login names, passwords, and so on. When work (file transfer or command execution requests) is queued for a neighbor system, the uucico program typically calls that system to process the work. The uucico program can also poll its neighbors periodically to check for work queued on their side; this permits neighbors without dial-out capability to participate.

Today, UUCP is rarely used over dial-up links, but is occasionally used over TCP/IP.[2][3] One example of the current use of UUCP is in the retail industry by Epicor CRS Retail Systems[4] for transferring batch files between corporate and store systems via TCP and dial-up on SCO OpenServer, Red Hat Linux, and Microsoft Windows (with Cygwin). The number of systems involved, as of early 2006, ran between 1500 and 2000 sites across 60 enterprises. UUCP's longevity can be attributed to its low to zero cost, extensive logging, native failover to dial-up, and persistent queue management.
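The queue-and-poll model described above can be sketched as a toy simulation. All class and method names here are invented for illustration, as is the job format; a real uucico speaks its own wire protocols over the link:

```python
# Toy model of UUCP's store-and-forward scheme: each system keeps a
# per-neighbor work queue and transfers it only when a connection is
# made, either by dialing out or by being polled.

from collections import defaultdict

class UucpNode:
    def __init__(self, name):
        self.name = name
        self.queues = defaultdict(list)   # neighbor name -> queued jobs
        self.received = []

    def queue_job(self, neighbor_name, job):
        """Like uucp/uux: spool work for a neighbor; nothing is sent yet."""
        self.queues[neighbor_name].append(job)

    def call(self, neighbor):
        """Like uucico dialing out: drain our queue for that neighbor and
        pick up anything it has queued for us, so a neighbor without
        dial-out capability can still participate."""
        for job in self.queues.pop(neighbor.name, []):
            neighbor.received.append(job)
        for job in neighbor.queues.pop(self.name, []):
            self.received.append(job)

a = UucpNode("foovax")
b = UucpNode("barbox")
a.queue_job("barbox", "send file report.txt")
b.queue_job("foovax", "rmail job for user")
a.call(b)          # one phone call moves traffic in both directions
print(a.received)
print(b.received)
```

The two-way drain in `call` mirrors why polling matters: a leaf site with no modem dial-out still gets its traffic delivered and collected whenever a neighbor calls in.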

History
UUCP was originally written at AT&T Bell Laboratories by Mike Lesk. By 1978 it was in use on 82 UNIX machines inside the Bell System, primarily for software distribution. It was released in 1979 as part of Version 7 Unix.[5]

The original UUCP was rewritten by AT&T researchers Peter Honeyman, David A. Nowitz, and Brian E. Redman. The rewrite is referred to as HDB or HoneyDanBer uucp, which was later enhanced, bug-fixed, and repackaged as BNU UUCP ("Basic Network Utilities"). Each of these versions was distributed as proprietary software, which inspired Ian Lance Taylor to write a new free software version from scratch in 1991.[6] Taylor UUCP was released under the GNU General Public License and became the most stable and bug-free version. In particular, Taylor UUCP addressed security holes which had allowed some of the original Internet worms to remotely execute unexpected shell commands. Taylor UUCP also incorporates features of all previous versions of UUCP, allowing it to communicate with any other version with the greatest level of compatibility, and can even use configuration file formats similar to those of other versions.

UUCP was also implemented for non-UNIX operating systems, most notably MS-DOS. Packages such as UUSLAVE/GNUUCP (John Gilmore, Garry Paxinos, Tim Pozar), UUPC (Drew Derbyshire) and FSUUCP (Christopher Ambler of IODesign) brought early Internet connectivity to personal computers, expanding the UUCP network beyond the interconnected university systems. FSUUCP formed the basis for many BBS packages such as Galacticomm's Major BBS and Mustang Software's Wildcat! BBS to connect to the UUCP network and exchange email and Usenet traffic. As an example, UFGATE (John Galvin, Garry Paxinos, Tim Pozar) was a package that provided a gateway between networks running Fidonet and UUCP protocols. FSUUCP was notable for being the only other implementation of Taylor's enhanced 'i' protocol, a significant improvement over the standard 'g' protocol used by most UUCP implementations.


UUCP for mail routing


The uucp and uuxqt capabilities could be used to send email between machines, with suitable mail user interface and delivery agent programs. A simple UUCP mail address was formed from the adjacent machine name, an exclamation mark or bang, followed by the user name on the adjacent machine. For example, the address barbox!user would refer to user user on adjacent machine barbox. Mail could furthermore be routed through the network, traversing any number of intermediate nodes before arriving at its destination. Initially, this had to be done by specifying the complete path, with a list of intermediate host names separated by bangs. For example, if machine barbox is not connected to the local machine, but it is known that barbox is connected to machine foovax which does communicate with the local machine, the appropriate address to send mail to would be foovax!barbox!user. User barbox!user might publish their UUCP email address in a form such as !bigsite!foovax!barbox!user. This directs people to route their mail to machine bigsite (presumably a well-known and well-connected machine accessible to everybody) and from there through the machine foovax to the account of user user on barbox. Many users would suggest multiple routes from various large well-known sites, providing even better and perhaps faster connection service from the mail sender.
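The addressing scheme above is easy to mechanize. A minimal sketch follows; the helper names are invented for illustration and are not part of any UUCP distribution:

```python
# Split a bang address into its user and route components, and prefix
# an extra relay hop the way a user publishing a longer route would.

def parse_bang(address):
    """Split 'foovax!barbox!user' into ('user', ['foovax', 'barbox'])."""
    *hops, user = address.lstrip("!").split("!")
    return user, hops

def route_via(relay, address):
    """Prefix another hop, e.g. routing everything through a well-known site."""
    return relay + "!" + address.lstrip("!")

user, hops = parse_bang("foovax!barbox!user")
print(user, hops)
print(route_via("bigsite", "foovax!barbox!user"))  # bigsite!foovax!barbox!user
```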

Bang path
An email address of this form was known as a bang path. Bang paths of eight to ten machines (or hops) were not uncommon in 1981, and late-night dial-up UUCP links would cause week-long transmission times. Bang paths were often selected by both transmission time and reliability, as messages would often get lost. Some hosts went so far as to try to "rewrite" the path, sending mail via "faster" routes; this practice tended to be frowned upon.

The "pseudo-domain" ending .uucp was sometimes used to designate a hostname as being reachable by UUCP networking, although this was never formally in the Internet root as a top-level domain. This would not have made sense anyway, because the DNS system is only appropriate for hosts reachable directly by TCP/IP. Additionally, uucp as a community administers itself and does not mesh well with the administration methods and regulations governing the DNS; .uucp works where it needs to. Some hosts punt mail out of the SMTP queue into uucp queues on gateway machines if a .uucp address is recognized on an incoming SMTP connection.

Usenet traffic was originally transmitted over the UUCP protocol using bang paths. These are still in use within Usenet message format Path header lines. They now have only an informational purpose, and are not used for routing, although they can be used to ensure that loops do not occur. In general, this form of e-mail address has now been superseded by the "@ notation", even by sites still using UUCP.
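The relationship between the two address forms can be illustrated with a simple conversion into the .uucp pseudo-domain style. The function name is invented, and the conversion is deliberately naive: a real gateway would recompute a route from map data rather than simply dropping the intermediate hops:

```python
# Illustrative conversion from bang-path form to the "@ notation" that
# superseded it, using the .uucp pseudo-domain described above.

def bang_to_at(bang_address):
    """'foovax!barbox!user' -> 'user@barbox.uucp' (intermediate hops dropped)."""
    *hops, user = bang_address.lstrip("!").split("!")
    return "%s@%s.uucp" % (user, hops[-1])

print(bang_to_at("!bigsite!foovax!barbox!user"))  # user@barbox.uucp
```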

UUCPNET and mapping


UUCPNET was the name for the totality of the network of computers connected through UUCP. This network was very informal, maintained in a spirit of mutual cooperation between systems owned by thousands of private companies, universities, and so on. Often, particularly in the private sector, UUCP links were established without official approval from the companies' upper management. The UUCP network was constantly changing as new systems and dial-up links were added, others were removed, etc.

The UUCP Mapping Project was a volunteer, largely successful effort to build a map of the connections between machines that were open mail relays and establish a managed namespace. Each system administrator would submit, by e-mail, a list of the systems to which theirs would connect, along with a ranking for each such connection. These submitted map entries were processed by an automatic program that combined them into a single set of files describing all connections in the network. These files were then published monthly in a newsgroup dedicated to this purpose. The UUCP map files could then be used by software such as "pathalias" to compute the best route path from one machine to another for mail, and to supply this route automatically. The UUCP maps also listed contact information for the sites, and so gave sites seeking to join UUCPNET an easy way to find prospective neighbors.
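The core of what pathalias computed can be illustrated with a small shortest-path search over an invented map. This is only a stand-in for the real program, which also handled symbolic link costs, aliases, and a much richer map syntax:

```python
# Tiny Dijkstra-style search: given weighted links between hosts, find
# the cheapest bang path from a local machine to a destination.

import heapq

def best_path(links, src, dst):
    """links: {host: {neighbor: cost}}; returns e.g. 'local!bigsite!dest'."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, host = heapq.heappop(heap)
        if host == dst:
            break
        if d > dist.get(host, float("inf")):
            continue                     # stale heap entry
        for nbr, cost in links.get(host, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, host
                heapq.heappush(heap, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return "!".join(reversed(path))

# Invented map: the dial-up link via bigsite is cheaper than calling
# foovax directly.
uucp_map = {
    "local":   {"bigsite": 1, "foovax": 4},
    "bigsite": {"foovax": 1},
    "foovax":  {"barbox": 2},
}
print(best_path(uucp_map, "local", "barbox"))  # local!bigsite!foovax!barbox
```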


Connections with the Internet


Many UUCP hosts, particularly those at universities, were also connected to the Internet in its early years, and e-mail gateways between Internet SMTP-based mail and UUCP mail were developed. A user at a system with UUCP connections could thereby exchange mail with Internet users, and the Internet links could be used to bypass large portions of the slow UUCP network. A "UUCP zone" was defined within the Internet domain namespace to facilitate these interfaces. With this infrastructure in place, UUCP's strength was that it permitted a site to gain Internet e-mail and Usenet connectivity with only a dial-up modem link to another cooperating computer. This was at a time when true Internet access required a leased data line providing a connection to an Internet Point of Presence, both of which were expensive and difficult to arrange. By contrast, a link to the UUCP network could usually be established with a few phone calls to the administrators of prospective neighbor systems. Neighbor systems were often close enough to avoid all but the most basic charges for telephone calls.

Remote commands
uux is remote command execution over UUCP. The uux command is used to execute a command on a remote system, or to execute a command on the local system using files from remote systems. The command is carried out by the uucico daemon when it next transfers queued work, so execution is not instantaneous.

Decline
UUCP usage began to die out with the rise of ISPs offering inexpensive SLIP and PPP services. The UUCP Mapping Project was formally shut down late in 2000. The UUCP protocol has now mostly been replaced by the Internet TCP/IP-based protocols: SMTP for mail and NNTP for Usenet news. In July 2012, Dutch Internet provider XS4ALL closed down its UUCP service, claiming it was "probably one of the last providers in the world that still offered it"; it still had 13 users at that time, and new users had been refused for several years already.[7]


Last uses and legacy


One surviving feature of UUCP is the chat file format, largely inherited by the expect software package. UUCP was in use over special-purpose high-cost links (e.g. marine satellite links) long after its disappearance elsewhere,[8] and still remains in legacy use. In the mid-2000s, UUCP over TCP/IP (often encrypted, using the SSH protocol[3]) was proposed for use when a computer does not have a fixed IP address but is still willing to run a standard mail transfer agent (MTA) like Sendmail or Postfix.

Bang paths are still in use within the Usenet network, though not for routing; they are used to record the nodes through which a message has passed, rather than to direct where it will go next. "Bang path" is also used as an expression for any explicitly specified routing path between network hosts. That usage is not necessarily limited to UUCP, IP routing, email messaging, or Usenet.

References
[1] UNIX TIME-SHARING SYSTEM: UNIX PROGRAMMER'S MANUAL, Seventh Edition, Volume 1 (http://cm.bell-labs.com/7thEdMan/v7vol1.pdf) (PDF). Murray Hill, New Jersey: Bell Telephone Laboratories, Incorporated. January 1979. Retrieved 2011-07-13.
[2] Ian Lance Taylor (June 2003). "UUCP 'f' Protocol" (http://www.airs.com/ian/uucp-doc/uucp_7.html#SEC99). Retrieved 2008-08-04.
[3] Fabien Penso. "UUCPssh" (http://uucpssh.org/). Retrieved 2009-08-09 [dead as of 2010-01-07].
[4] http://www.epicor.com/www/products/retail/RetailSolutions.htm
[5] Version 7 Unix manual: "UUCP Implementation Description" by D. A. Nowitz, and "A Dial-Up Network of UNIX Systems" by D. A. Nowitz and M. E. Lesk (http://cm.bell-labs.com/7thEdMan/v7vol2b.pdf)
[6] Ian Lance Taylor (September 1991). "Beta release of new UUCP package available" (http://groups.google.com/group/comp.mail.uucp/browse_thread/thread/a59ccd63afcade57). Retrieved 2009-01-19.
[7] Goodbye to UUCP (https://blog.xs4all.nl/2012/07/30/afscheid-van-uucp/), XS4ALL blog. (Dutch)
[8] Randolph Bentson (August 1995). "Linux Goes To Sea" (http://www.linuxjournal.com/article/1111). Retrieved 2009-02-21.

External links
Using & Managing UUCP. Ed Ravin, Tim O'Reilly, Dale Dougherty, and Grace Todino. 1996, O'Reilly & Associates, Inc. ISBN 1-56592-153-4
Mark Horton (1986). RFC 976: UUCP Mail Interchange Format Standard. Internet Engineering Task Force Requests for Comment.
UUCP Internals Frequently Asked Questions (http://www.faqs.org/faqs/uucp-internals/)
Setting up Taylor UUCP + qmail on FreeBSD 5.1 (http://ece.iisc.ernet.in/FAQ)
Taylor UUCP (http://www.airs.com/ian/uucp.html), a GPL-licensed UUCP package.
Taylor UUCP Documentation (http://www.airs.com/ian/uucp-doc/uucp.html), useful information about UUCP in general and the various uucp protocols.
The UUCP Project: History (http://www.uucp.org/history/)
The UUCP Mapping Project (http://www.uucp.org/uumap/)
UUHECNET, a hobbyist UUCP network that offers free feeds (http://www.uuhec.net/)


Usenet
Usenet is a worldwide distributed Internet discussion system. It was developed from the general-purpose UUCP dial-up network architecture. Duke University graduate students Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980.[1] Users read and post messages (called articles or posts, and collectively termed news) to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects, and is the precursor to the various Internet forums that are widely used today. Usenet can be superficially regarded as a hybrid between email and web forums. In modern newsreader software, discussions are threaded, as with web forums and BBSes, though posts are stored on the server sequentially.

One notable difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator. Usenet is distributed among a large, constantly changing conglomeration of servers that store and forward messages to one another in so-called news feeds. Individual users may read messages from and post messages to a local server operated by a commercial usenet provider, their Internet service provider, university, or employer.

A diagram of Usenet servers and clients. The blue, green, and red dots on the servers represent the groups they carry. Arrows between servers indicate newsgroup group exchanges (feeds). Arrows between clients and servers indicate that a user is subscribed to a certain group and reads or submits articles.

Introduction
Usenet is one of the oldest computer network communications systems still in widespread use. It was conceived in 1979 and publicly established in 1980 at the University of North Carolina at Chapel Hill and Duke University,[1] over a decade before the World Wide Web was developed and the general public got access to the Internet. It was originally built on the "poor man's ARPANET," employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name Usenet emphasized its creators' hope that the USENIX organization would take an active role in its operation.[2]

The articles that users post to Usenet are organized into topical categories called newsgroups, which are themselves logically organized into hierarchies of subjects. For instance, sci.math and sci.physics are within the sci hierarchy, for science. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read.[3] In most newsgroups, the majority of the articles are responses to some other article. The set of articles which can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads.

When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") and exchanges articles with them. In this fashion, the article is copied from server to server and (if all goes well) eventually reaches every server in the network. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Some have noted that this seems an inefficient protocol in the era of abundant high-speed network access.

Usenet was designed under conditions when networks were much slower, and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out.[4]

Usenet has significant cultural importance in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ" and "spam".[5]

The format and transmission of Usenet articles is similar to that of Internet e-mail messages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients.[6]

Today, Usenet has diminished in importance with respect to Internet forums, blogs and mailing lists. Usenet differs from such media in several ways: Usenet requires no personal registration with the group concerned; information need not be stored on a remote server; archives are always available; and reading the messages requires not a mail or web client, but a news client. The groups in alt.binaries are still widely used for data transfer.


ISPs, news servers, and newsfeeds


Many Internet service providers, and many other Internet sites, operate news servers for their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that specifically operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system. Today, one uses separate newsreader client software, a program that resembles an email client but accesses Usenet servers instead. Some clients, such as Mozilla Thunderbird and Outlook Express, provide both abilities.

Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer well because of the large amount of data involved, the small customer base (compared to mainstream Internet services such as email and web access), and a disproportionately high volume of customer support incidents (frequently complaining of missing news articles that are not the ISP's fault). Some ISPs outsource news operation to specialist sites, which will usually appear to a user as though the ISP ran the server itself.

Many sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and the alt.binaries hierarchy, which largely carries software, music, videos and images, and accounts for over 99 percent of article data. There are also Usenet providers that specialize in offering service to users whose ISPs do not carry news, or that carry a restricted feed. See also news server operation for an overview of how news systems are implemented.
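A restricted newsfeed of the kind described above comes down to pattern-matching group names. The sketch below uses shell-style wildcards, which approximate the "wildmat" patterns real news servers use for simple cases; the group list itself is invented:

```python
# Filter a carried-group list against blocked wildcard patterns, the
# way a site might omit the alt.binaries hierarchy from its feed.

from fnmatch import fnmatch

carried = ["comp.lang.c", "rec.arts.movies", "alt.binaries.pictures", "sci.math"]
blocked_patterns = ["alt.binaries.*"]

feed = [group for group in carried
        if not any(fnmatch(group, pat) for pat in blocked_patterns)]
print(feed)   # the alt.binaries group is dropped from the feed
```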

Newsreaders
Newsgroups are typically accessed with newsreaders: applications that allow users to read and reply to postings in newsgroups. These applications act as clients to one or more news servers. Newsreaders are available for all major operating systems.[7] Modern mail clients or "communication suites" commonly also have an integrated newsreader. Often, however, these integrated clients are of low quality compared to standalone newsreaders, and incorrectly implement Usenet protocols, standards and conventions. Many of these integrated clients, for example the one in Microsoft's Outlook Express, are disliked by purists because of their misbehavior.[8]

With the rise of the World Wide Web (WWW), web front-ends (web2news) have become more common. Web front-ends have lowered the technical entry barrier to that of one application and no Usenet NNTP server account. There are numerous websites now offering web-based gateways to Usenet groups, although some people have begun filtering messages made by some of the web interfaces for one reason or another.[9][10] Google Groups[11] is one such web-based front end, and some web browsers can access Google Groups via news: protocol links directly.[12]


Moderated and unmoderated newsgroups


A minority of newsgroups are moderated, meaning that messages submitted by readers are not distributed directly to Usenet, but instead are emailed to the moderators of the newsgroup for approval. The moderator is to receive submitted articles, review them, and inject approved articles so that they can be properly propagated worldwide. Articles approved by a moderator must bear the Approved: header line. Moderators ensure that the messages that readers see in the newsgroup conform to the charter of the newsgroup, though they are not required to follow any such rules or guidelines.[13] Typically, moderators are appointed in the proposal for the newsgroup, and changes of moderators follow a succession plan.[14] Historically, a mod.* hierarchy existed before the Usenet reorganization.[15] Now, moderated newsgroups may appear in any hierarchy.

Usenet newsgroups in the Big-8 hierarchy are created by proposals called a Request for Discussion, or RFD. The RFD is required to have the following information: newsgroup name, checkgroups file entry, and moderated or unmoderated status. If the group is to be moderated, then at least one moderator with a valid email address must be provided. Other information which is beneficial but not required includes: a charter, a rationale, and a moderation policy if the group is to be moderated.[16] Discussion of the new newsgroup proposal follows, and is finished with the members of the Big-8 Management Board making the decision, by vote, to either approve or disapprove the new newsgroup.

Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The trade-off between minimal editorial content filtering and propagation speed forms one crux of the Usenet community. One little-cited defense of propagation is canceling a propagated message, but few Usenet users use this command, and some news readers do not offer cancellation commands, in part because article storage expires in relatively short order anyway. The creation of moderated newsgroups often becomes a hot subject of controversy, raising issues regarding censorship and the desire of a subset of users to form an intentional community.
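The Approved: header gate described above can be illustrated with the standard library email parser, since Usenet articles share the basic RFC 822 header format. The function name, addresses, and newsgroup below are invented for illustration:

```python
# A moderated group's server only injects articles bearing an
# Approved: header, which the moderator adds after review.

from email import message_from_string

def may_inject(article_text):
    """True if the article carries an Approved: header line."""
    msg = message_from_string(article_text)
    return msg["Approved"] is not None

submitted = (
    "From: user@barbox.uucp\n"
    "Newsgroups: news.announce.example\n"
    "Subject: hello\n"
    "\n"
    "article body\n"
)
approved = "Approved: moderator@example.org\n" + submitted

print(may_inject(submitted), may_inject(approved))  # False True
```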

Technical details
Usenet is a set of protocols for generating, storing and retrieving news "articles" (which resemble Internet mail messages) and for exchanging them among a readership which is potentially widely distributed. These protocols most commonly use a flooding algorithm which propagates copies throughout a network of participating servers. Whenever a message reaches a server, that server forwards the message to all its network neighbors that haven't yet seen the article. Only one copy of a message is stored per server, and each server makes it available on demand to the (typically local) readers able to access that server. The collection of Usenet servers thus has a certain peer-to-peer character in that they share resources by exchanging them; the granularity of exchange, however, is on a different scale than in a modern peer-to-peer system, and this characteristic excludes the actual users of the system, who connect to the news servers with a typical client-server application, much like an email reader.

RFC 850 was the first formal specification of the messages exchanged by Usenet servers. It was superseded by RFC 1036 and subsequently by RFC 5536 and RFC 5537.

In cases where unsuitable content has been posted, Usenet has support for automated removal of a posting from the whole network by creating a cancel message, although due to a lack of authentication and resultant abuse, this capability is frequently disabled. Copyright holders may still request the manual deletion of infringing material using the provisions of World Intellectual Property Organization treaty implementations, such as the United States Online Copyright Infringement Liability Limitation Act. On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on TCP port 119 for standard, unprotected connections and on TCP port 563 for SSL-encrypted connections, which is offered by only a few sites.
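The flooding algorithm described above can be sketched as a toy in-memory network. The class and function names are hypothetical; real servers exchange articles over NNTP and track seen message-IDs in a history database:

```python
# Toy flooding: a server offers each new article to every neighbor, and
# a server that has already seen the article's message-ID refuses the
# duplicate, which is what stops the flood from looping forever.

class NewsServer:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = {}             # message-id -> article body

    def receive(self, msg_id, body):
        if msg_id in self.seen:    # duplicate: do not store or re-flood
            return
        self.seen[msg_id] = body
        for nbr in self.neighbors:
            nbr.receive(msg_id, body)

def peer(x, y):
    x.neighbors.append(y)
    y.neighbors.append(x)

a, b, c = NewsServer("a"), NewsServer("b"), NewsServer("c")
peer(a, b); peer(b, c); peer(c, a)   # a cycle: dedup prevents loops
a.receive("<1@a.example>", "first post")
print(sorted(s.name for s in (a, b, c) if "<1@a.example>" in s.seen))
```

Even though the three servers form a cycle, each one stores exactly one copy and the article reaches every server, matching the behavior the text describes.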


Organization
The major set of worldwide newsgroups is contained within nine hierarchies, eight of which are operated under consensual guidelines that govern their administration and naming. The current Big Eight are:

comp.*: computer-related discussions (comp.software, comp.sys.amiga)
humanities.*: fine arts, literature, and philosophy (humanities.classics, humanities.design.misc)
misc.*: miscellaneous topics (misc.education, misc.forsale, misc.kids)
news.*: discussions and announcements about news (meaning Usenet, not current events) (news.groups, news.admin)
rec.*: recreation and entertainment (rec.music, rec.arts.movies)
sci.*: science-related discussions (sci.psychology, sci.research)
soc.*: social discussions (soc.college.org, soc.culture.african)
talk.*: talk about various controversial topics (talk.religion, talk.politics, talk.origins)

See also the Great Renaming.

The alt.* hierarchy is not subject to the procedures controlling groups in the Big Eight, and it is as a result less organized. Groups in the alt.* hierarchy tend to be more specialized or specific; for example, there might be a newsgroup under the Big Eight which contains discussions about children's books, but a group in the alt hierarchy may be dedicated to one specific author of children's books. Binaries are posted in alt.binaries.*, making it the largest of all the hierarchies.

Many other hierarchies of newsgroups are distributed alongside these. Regional and language-specific hierarchies such as japan.*, malta.* and ne.* serve specific countries and regions such as Japan, Malta and New England. Companies and projects administer their own hierarchies to discuss their products and offer community technical support, such as the historical gnu.* hierarchy from the Free Software Foundation. Microsoft closed its news server in June 2010, and now provides support for its products over forums.[17]

Some users prefer to use the term "Usenet" to refer only to the Big Eight hierarchies; others include alt.* as well.
The more general term "netnews" incorporates the entire medium, including private organizational news systems.

Informal sub-hierarchy conventions also exist. *.answers groups are typically moderated cross-post groups for FAQs. An FAQ would be posted within one group and cross-posted to the *.answers group at the head of the hierarchy, which some see as a refining of the information in that newsgroup. Some subgroups are recursive, to the point of some silliness in alt.*.


Binary content
Usenet was originally created to distribute text content encoded in the 7-bit ASCII character set. With the help of programs that encode 8-bit values into ASCII, it became practical to distribute binary files as content. Binary posts, due to their size and often-dubious copyright status, were in time restricted to specific newsgroups, making it easier for administrators to allow or disallow the traffic.

A visual example of the many complex steps required to prepare data to be uploaded to Usenet newsgroups. These steps must be done again in reverse to download data from Usenet.

The oldest widely used encoding method for binary content is uuencode, from the Unix UUCP package. In the late 1980s, Usenet articles were often limited to 60,000 characters, and larger hard limits exist today. Files are therefore commonly split into sections that require reassembly by the reader. With the header extensions and the Base64 and Quoted-Printable MIME encodings, there was a new generation of binary transport. In practice, MIME has seen increased adoption in text messages, but it is avoided for most binary attachments. Some operating systems with metadata attached to files use specialized encoding formats. For Mac OS, both Binhex and special MIME types are used. Other lesser known encoding systems that may have been used at one time were BTOA, XX encoding, BOO, and USR encoding. In an attempt to reduce file transfer times, an informal file encoding known as yEnc was introduced in 2001. It achieves about a 30% reduction in data transferred by assuming that most 8-bit characters can safely be transferred across the network without first encoding into the 7-bit ASCII space. The standard method of uploading binary content to Usenet is to first archive the files into RAR archives (for large files usually in 15 MB, 50 MB or 100 MB parts) then create Parchive files. Parity files are used to recreate missing data. This is needed often, as not every part of the files reaches a server. These are all then encoded into yEnc and uploaded to the selected binary groups.
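The overhead behind these encodings is easy to check for Base64; the yEnc line below is only a back-of-the-envelope estimate that assumes one escaped byte per 64, which is an assumption made to illustrate the point rather than a measured yEnc statistic:

```python
# Base64 maps every 3 raw bytes to 4 ASCII characters, so it expands
# binary data by about a third; yEnc's near-8-bit scheme escapes only
# a handful of unsafe byte values and adds just a few percent.

import base64
import os

raw = os.urandom(60000)                  # roughly one old article's worth
b64 = base64.b64encode(raw)
print(round(len(b64) / len(raw), 2))     # ~1.33: the Base64 expansion factor

yenc_estimate = len(raw) * (1 + 1 / 64)  # assumed: one escaped byte per 64
print(round(yenc_estimate / len(raw), 3))
```

The contrast between the two ratios is the whole argument for yEnc in the text above: on large binary postings, escaping a few bytes costs far less than re-encoding every byte into 7-bit ASCII.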

Binary retention time

Each newsgroup is generally allocated a certain amount of storage space for post content. When this storage has been filled, each time a new post arrives, old posts are deleted to make room for the new content. If the network bandwidth available to a server is high but the storage allocation is small, it is possible for a huge flood of incoming content to overflow the allocation and push out everything that was in the group before it. If the flood is large enough, the beginning of the flood will begin to be deleted even before the last part of the flood has been posted.

Binary newsgroups are only able to function reliably if there is sufficient storage allocated to a group to allow readers enough time to download all parts of a binary posting before it is flushed out of the group's storage allocation. This was at one time how posting of undesired content was countered; the newsgroup would be flooded with random garbage data posts, of sufficient quantity to push out all the content to be suppressed. This has been compensated by service providers allocating enough storage to retain everything posted each day, including such spam floods, without deleting anything.

[Table caption: a list of some of the biggest binary groups. With 1317+ days retention, the binary Usenet storage that binsearch.info indexes is more than 9 petabytes (9000 terabytes).[18]]

The average length of time that posts are able to stay in the group before being deleted is commonly called the retention time. Generally the larger Usenet servers have enough capacity to archive several years of binary content even when flooded with new data at the maximum daily speed available. A good binaries service provider must not only accommodate users of fast connections (3 megabit) but also users of slow connections (256 kilobit or less) who need more time to download content over a period of several days or weeks.
Major NSPs have a retention time of more than 4 years,[19] which corresponds to more than 9 petabytes (9000 terabytes) of storage.[20]

In part because of such long retention times, as well as growing Internet upload speeds, Usenet is also used by individual users to store backup data in a practice called Usenet backup, or uBackup.[21] While commercial providers offer easier-to-use online backup services, storing data on Usenet is free of charge (although access to Usenet itself may not be). The method requires the user to manually select, prepare and upload the data. Because anyone can potentially download the backup files, the data is typically encrypted. After the files are uploaded, the uploader does not have any control over them; the files are automatically copied to all Usenet providers, so there will be multiple copies of them spread over different geographical locations around the world, which is desirable in a backup scheme.
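The retention and storage figures quoted above imply a daily feed volume; a quick arithmetic check under those numbers:

```python
# 9 petabytes held for roughly 1317 days of retention implies a
# sustained binary feed on the order of several terabytes per day.

storage_tb = 9000          # ~9 PB, as cited for large providers
retention_days = 1317      # the retention figure quoted with the table
daily_feed_tb = storage_tb / retention_days
print(round(daily_feed_tb, 1))   # ~6.8 TB of new articles per day
```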


Legal issues
While binary newsgroups can be used to distribute completely legal user-created works, open-source software, and public domain material, some binary groups are used to illegally distribute commercial software, copyrighted media, and obscene material. ISP-operated Usenet servers frequently block access to all alt.binaries.* groups both to reduce network traffic and to avoid related legal issues. Commercial Usenet service providers claim to operate as a telecommunications service, and assert that they are not responsible for the user-posted binary content transferred via their equipment. In the United States, Usenet providers can qualify for protection under the DMCA Safe Harbor regulations, provided that they establish a mechanism to comply with and respond to takedown notices from copyright holders.[22] Removal of copyrighted content from the entire Usenet network is a nearly impossible task, due to the rapid propagation between servers and the retention done by each server. Petitioning a Usenet provider for removal only removes it from that one server's retention cache, but not any others. It is possible for a special post cancellation message to be distributed to remove it from all servers, but many providers ignore cancel messages by standard policy, because they can be easily falsified and submitted by anyone.[23][24] For a takedown petition to be most effective across the whole network, it would have to be issued to the origin server to which the content was posted, before it has been propagated to other servers. Removal of the content at this early stage would prevent further propagation, but with modern high-speed links, content can be propagated as fast as it arrives, allowing no time for content review and takedown issuance by copyright holders.[25] Establishing the identity of the person posting illegal content is equally difficult due to the trust-based design of the network.
Like SMTP email, servers generally assume the header and origin information in a post is true and accurate. However, as in SMTP email, Usenet post headers are easily falsified so as to obscure the true identity and location of the message source.[26] In this manner, Usenet is significantly different from modern P2P services; most P2P users distributing content are typically immediately identifiable to all other users by their network address, but the origin information for a Usenet posting can be completely obscured and unobtainable once it has propagated past the original server.[27] Also unlike modern P2P services, the identity of the downloaders is hidden from view. On P2P services a downloader is identifiable to all others by their network address. On Usenet, the downloader connects directly to a server, and only the server knows the address of who is connecting to it. Some Usenet providers do keep usage logs, but not all make this logged information casually available to outside parties such as the Recording Industry Association of America.[28][29]
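The Path: header is the one part of a post that servers themselves extend: each relay prepends its own name as the article propagates, so only the left-most entries (added by servers near the reader) are reliable, while the origin end can be freely falsified by the poster.[26] A toy illustration, with invented hostnames:

```python
# Hostnames below are made up for illustration only.
path = "news.example.net!feeder.example.org!spool.example.com!not-for-mail"

# Split the Path: header into its relay hops, most recent relay first.
hops = path.split("!")

print(hops[0])   # "news.example.net": the last relay, nearest the reader
print(hops[-1])  # "not-for-mail": the claimed origin, easily forged
```

This is why tracing a post backward along the Path becomes progressively less trustworthy the closer one gets to its claimed origin.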


History
Newsgroup experiments first occurred in 1979. Tom Truscott and Jim Ellis of Duke University came up with the idea as a replacement for a local announcement program, and established a link with the nearby University of North Carolina using Bourne shell scripts written by Steve Bellovin. The public release of news was in the form of conventional compiled software, written by Steve Daniel and Truscott.[30] In 1980, Usenet was connected to ARPANET through UC Berkeley, which had connections to both Usenet and ARPANET. Mark Horton, the graduate student who set up the connection, began feeding mailing lists from the ARPANET into Usenet with the "fa" identifier. As a result, the number of people on Usenet increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET.[31] After 32 years, the Usenet news service link at the University of North Carolina at Chapel Hill (news.unc.edu) was finally retired on February 4, 2011.


Network
UUCP networks spread quickly due to the lower costs involved and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1983, the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.[32] As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news.[33] The name UUCPNET became the common name for the overall network. In addition to UUCP, early Usenet traffic was also exchanged with Fidonet and other dial-up BBS networks. Widespread use of Usenet by the BBS community was facilitated by the introduction of UUCP feeds made possible by MS-DOS implementations of UUCP such as UFGATE (UUCP to FidoNet Gateway), FSUUCP and UUPC. The Network News Transfer Protocol, or NNTP, was introduced in 1985 to distribute Usenet articles over TCP/IP as a more flexible alternative to informal Internet transfers of UUCP traffic. Since the Internet boom of the 1990s, almost all Usenet distribution has been over NNTP.[34]

Software
Early versions of Usenet used Duke's A News software. Soon, at UC Berkeley, Matt Glickman and Mark Horton produced an improved version called B News. With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed.[35]

Public venue
Usenet was the initial Internet community and the place for many of the most important public developments in the commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web,[36] where Linus Torvalds announced the Linux project,[37] and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag,[38] which revolutionized the World Wide Web by turning it into a graphical medium.

Internet jargon and history


Many jargon terms now in common use on the Internet originated or were popularized on Usenet.[39] Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet.[40]

"Usenet is like a herd of performing elephants with diarrhea. Massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it."
Gene Spafford, 1992

Decline in growth rate


Sascha Segan of PC Magazine said in 2008, "Usenet has been dying for years[...]" Segan said that some people pointed to the Eternal September in 1993 as the beginning of Usenet's decline. Segan said that the "eye candy" on the World Wide Web and the marketing funds spent by owners of websites convinced Internet users to use profit-making websites instead of Usenet servers. In addition, DejaNews and Google Groups made conversations searchable, and Segan said that this removed the obscurity of previously obscure Internet groups on Usenet. Segan explained that when pornographers and software pirates began putting large files on Usenet by the late 1990s, Usenet disk space and traffic increased accordingly. Internet service providers allocated space to Usenet libraries, and questioned why they needed to host space for pornography and pirated software. Segan said that the hosting of porn and pirated software was "likely when Usenet became truly doomed" and "[i]t's the porn that's putting nails in Usenet's coffin." AOL discontinued Usenet access in 2005. When the State of New York opened an investigation into child pornographers who used Usenet, many ISPs dropped all Usenet access or access to the alt.* hierarchy. Segan concluded, "It's hard to completely kill off something as totally decentralized as Usenet; as long as two servers agree to share the NNTP protocol, it'll continue on in some fashion. But the Usenet I mourn is long gone[...]"[41] In response, John Biggs of TechCrunch said, "Is Usenet dead, as Sascha posits? I don't think so. As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on." Biggs added that while many Internet service providers terminated access, "the real pros know where to go to get their angst-filled, nit-picking, obsessive fix."[42] In May 2010, Duke University, whose implementation had kicked off Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs.[43][44]


Usenet traffic changes


Over time, the amount of Usenet traffic has steadily increased. As of 2010 the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day.[45] However, these averages are minuscule in comparison to the traffic in the binary groups.[46] Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups[45] in which large files are often posted publicly. A small sampling of the change (measured in feed size per day) follows:


Daily Volume   Date          Source
4.5 GB         1996-12       Altopia.com
9 GB           1997-07       Altopia.com
12 GB          1998-01       Altopia.com
26 GB          1999-01       Altopia.com
82 GB          2000-01       Altopia.com
181 GB         2001-01       Altopia.com
257 GB         2002-01       Altopia.com
492 GB         2003-01       Altopia.com
969 GB         2004-01       Altopia.com
1.30 TB        2004-09-30    Octanews.net
1.38 TB        2004-12-31    Octanews.net
1.52 TB        2005-01       Altopia.com
1.34 TB        2005-01-01    Octanews.net
1.30 TB        2005-01-01    Newsreader.com
1.81 TB        2005-02-28    Octanews.net
1.87 TB        2005-03-08    Newsreader.com
2.00 TB        2005-03-11    Various sources
2.27 TB        2006-01       Altopia.com
2.95 TB        2007-01       Altopia.com
3.07 TB        2008-01       Altopia.com
3.80 TB        2008-04-16    Newsdemon.com
4.60 TB        2008-11-01    Giganews.com
4.65 TB        2009-01       Altopia.com
6.00 TB        2009-12       Newsdemon.com
5.42 TB        2010-01       Altopia.com
8.00 TB        2010-09       Newsdemon.com
7.52 TB        2011-01       Altopia.com
8.25 TB        2011-10       Thecubenet.com
9.29 TB        2012-01       Altopia.com

In 2008, Verizon Communications, Time Warner Cable and Sprint Nextel signed an agreement with Attorney General of New York Andrew Cuomo to shut down access to sources of child pornography.[47] Time Warner Cable stopped offering access to Usenet. Verizon reduced its access to the "Big 8" hierarchies. Sprint stopped access to the alt.* hierarchies. AT&T stopped access to the alt.binaries.* hierarchies. Cuomo never specifically named Usenet in his anti-child pornography campaign. David DeJean of PC World said that some worry that the ISPs used Cuomo's campaign as an excuse to end portions of Usenet access, as it is costly for the Internet service providers and not in high demand by customers. In 2008 AOL, which no longer offered Usenet access, and the four providers that responded to the Cuomo campaign were the five largest Internet service providers in the United States; they had more than 50% of the U.S. ISP marketshare.[48] On June 8, 2009, AT&T announced that it would no longer provide access to the Usenet service as of July 15, 2009.[49]

AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and on-line conferencing.[50] The AOL community had played a tremendous role in popularizing Usenet some 11 years earlier.[51] In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009.[52][53] JANET(UK) announced it would discontinue Usenet service effective July 31, 2010, citing Google Groups as an alternative.[54] Microsoft announced that it would discontinue support for its public newsgroups (msnews.microsoft.com) from June 1, 2010, offering web forums as an alternative.[55] Primary reasons cited for the discontinuance of Usenet service by general ISPs include the decline in volume of actual readers due to competition from blogs, along with cost and liability concerns over the increasing proportion of traffic devoted to file-sharing and spam on unused or discontinued groups.[56][57] Some ISPs did not include pressure from Attorney General of New York Andrew Cuomo's aggressive campaign against child pornography among their reasons for dropping Usenet feeds as part of their services.[58] ISPs Cox and Atlantic Communications resisted the 2008 trend, but both eventually dropped their respective Usenet feeds in 2010.[59][60][61]


Archives
Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982.[62] Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever."[63] Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?"[64] by stating that he was, "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18."[65] In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines." [66] In 1985, two news archiving systems and one RFC were posted to the Internet. The first system, called keepnews, by Mark M. Swenson of The University of Arizona, was described as "a program that attempts to provide a sane way of extracting and keeping information that comes over Usenet." The main advantage of this system was to allow users to mark articles as worthwhile to retain.[67] The second system, YA News Archiver by Chuq Von Rospach, was similar to keepnews, but was "designed to work with much larger archives where the wonderful quadratic search time feature of the Unix ... becomes a real problem."[68] Von Rospach in early 1985 posted a detailed RFC for "archiving and accessing usenet articles with keyword lookup." This RFC described a program that could "generate and maintain an archive of Usenet articles and allow looking up articles based on the article-id, subject lines, or keywords pulled out of the article itself." 
Also included was C code for the internal data structure of the system.[69] The desire to have a fulltext search index of archived news articles is not new either, one such request having been made in April 1991 by Alex Martelli who sought to "build some sort of keyword index for [the news archive]."[70] In early May, Mr. Martelli posted a summary of his responses to Usenet, noting that the "most popular suggestion award must definitely go to 'lq-text' package, by Liam Quin, recently posted in alt.sources."[71] Today, the archiving of Usenet has led to a fear of loss of privacy.[72] An archive simplifies ways to profile people. This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial.[73]


Archives by Google Groups and DejaNews


Web-based archiving of Usenet posts began in 1995 at Deja News with a very large, searchable database. In 2001, this database was acquired by Google.[74] Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others,[75] and were originally archived by Henry Spencer at the University of Toronto's Zoology department.[76] The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series[77] and Jürgen Christoffel from GMD.[78] The archive of posts from March 1995 onward was originally started by the company DejaNews (later Deja), which was purchased by Google in February 2001. Google began archiving Usenet posts for itself starting in the second week of August 2000. Already during the DejaNews era the archive had become a popular constant in Usenet culture, and remains so today.

References
[1] Lueg, Christopher; Fisher, Danyel (2003). From Usenet to CoWebs: Interacting with Social Information Spaces. Springer. ISBN 1-85233-532-7, ISBN 978-1-85233-532-8.
[2] Ellis, James; Truscott, Tom (1979). "Invitation to a General Access UNIX Network" (http://www.newsdemon.com/first-official-announcement-usenet.php), in First Official Announcement of USENET. NewsDemon (K&L Technologies, Inc).
[3] Lehnert, Wendy G.; Kopec, Richard (2007). Web 101. Addison Wesley. p. 291.
[4] "Store And Forward Communication: UUCP and FidoNet" (http://www.cs.cmu.edu/~dga/15-849/store_and_forward.html). Carnegie Mellon School of Computer Science. Archived from the original on 2012-06-30.
[5] "USENET Newsgroup Terms - SPAM" (http://www.newsdemon.com/usenet_term_spam.php). Archived from the original on 2012-09-15.
[6] Kozierok, Charles M. (2005). The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocols Reference. No Starch Press. p. 1401.
[7] "Open Directory Usenet Clients" (http://www.dmoz.org/Computers/Software/Internet/Clients/Usenet/). Dmoz.org. October 9, 2008. Archived from the original on 2012-07-30. Retrieved December 14, 2010.
[8] Jain, Dominik (July 30, 2006). "OE-QuoteFix Description" (http://home.in.tum.de/~jain/software/oe-quotefix/). Archived from the original on 2012-09-21. Retrieved June 4, 2007.
[9] "Improve-Usenet" (http://improve-usenet.org). October 13, 2008. Archived from the original on 2012-07-13.
[10] "Improve-Usenet Comments" (http://improve-usenet.org/voices1.html). October 13, 2008. Archived from the original on April 26, 2008. Retrieved June 29, 2009.
[11] "Google Groups" (http://groups.google.com/). Groups.google.com. Archived from the original on 2012-05-25. Retrieved December 14, 2010.
[12] "News: links to Google Groups" (http://mykzilla.blogspot.com/2008/11/news-links-to-google-groups.html). Archived from the original on 2012-07-12.
[13] "Who can force the moderators to obey the group charter?" (http://www.big-8.org/wiki/Moderated_Newsgroups#Who_can_force_the_moderators_to_obey_the_group_charter.3F). Big-8.org. Archived from the original on 2012-08-04. Retrieved December 14, 2010.
[14] "How does a group change moderators?" (http://www.big-8.org/wiki/Moderated_Newsgroups#How_does_a_group_change_moderators.3F). Big-8.org. Archived from the original on 2012-07-19. Retrieved December 14, 2010.
[15] "Early Usenet Newsgroup Hierarchies" (http://www.livinginternet.com/u/ui_early.htm). Livinginternet.com. October 25, 1990. Archived from the original on 2012-09-21. Retrieved December 14, 2010.
[16] "How to Create a New Big-8 Newsgroup" (http://www.big-8.org/wiki/How_to_Create_a_New_Big-8_Newsgroup). Big-8.org. July 7, 2010. Archived from the original on 2012-07-22. Retrieved December 14, 2010.
[17] "Microsoft Responds to the Evolution of Communities" (http://www.microsoft.com/communities/newsgroups/default.mspx), announcement, undated; "Microsoft hitting 'unsubscribe' on newsgroups" (http://news.cnet.com/8301-13860_3-20004109-56.html). CNET, May 4, 2010. Archived from the original on 2012-07-12.

[18] "Usenet storage is more than 9 petabytes (9000 terabytes)" (https://www.binsearch.info/groupinfo.php). binsearch.info. Archived from the original on 2012-09-21. Retrieved June 5, 2012.
[19] "Giganews FAQ - How long are articles available?" (http://www.giganews.com/faq.html#q0.4). Giganews.com. Archived from the original on 2012-09-04. Retrieved October 23, 2012.
[20] "9 petabyte of usenet storage on giganews.com" (http://www.giganews.com/blog/2011/05/announcing-1000-days-retention-prize.html). giganews.com. Archived from the original on 2012-09-21. Retrieved February 14, 2012.
[21] "usenet backup (uBackup)" (http://www.wikihow.com/Backup-Your-Data-on-Usenet-(Ubackup)). Wikihow.com. Archived from the original on 2012-09-18. Retrieved February 14, 2012.
[22] The SuperNews DMCA notifications page (http://www.supernews.com/docs/dmca.html) shows a typical example of Usenet provider DMCA takedown compliance. Archived from the original on 2012-09-10.
[23] "Cancel Messages FAQ" (http://wiki.killfile.org/projects/usenet/faqs/cancel/). Archived from the original on December 12, 2007. Retrieved June 29, 2009. "...Until authenticated cancels catch on, there are no options to avoid forged cancels and allow unforged ones..."
[24] Microsoft knowledge base article stating that many servers ignore cancel messages (http://support.microsoft.com/kb/q164420/). Archived from the original on 2012-07-19.
[25] "Microsoft Word - Surmacz.doc" (http://www.measurement.sk/2005/S1/Surmacz.pdf) (PDF). Retrieved December 14, 2010.
[26] "...every part of a Usenet post may be forged apart from the left most portion of the 'Path:' header..." (http://www.by-users.co.uk/faqs/email/headers/). Archived from the original on 2012-07-23.
[27] "Better living through forgery". news.admin.misc. June 10, 1995. Retrieved June 8, 2012.
[28] "Giganews Privacy Policy" (http://www.giganews.com/legal/privacy.html). Giganews.com. Archived from the original on 2012-07-31. Retrieved December 14, 2010.
[29] "Logging Policy" (http://aioe.org/index.php?logging-policy). Aioe.org. June 9, 2005. Archived from the original on 2012-07-08. Retrieved December 14, 2010.
[30] LaQuey, Tracy (1990). The User's Directory of Computer Networks. Digital Press. p. 386.
[31] Hauben, Michael; Hauben, Rhonda (1998). Netizens: On the History and Impact of Usenet and the Internet, "On the Early Days of Usenet: The Roots of the Cooperative Online Culture". First Monday, vol. 3, no. 8, August 3, 1998.
[32] Haddadi, H. (2006). "Network Traffic Inference Using Sampled Statistics". University College London.
[33] Horton, Mark (December 11, 1990). "Arachnet" (http://communication.ucsd.edu/bjones/Usenet.Hist/Nethist/0111.html). Archived from the original on 2012-09-21. Retrieved June 4, 2007.
[34] Huston, Geoff (1999). ISP Survival Guide: Strategies for Running a Competitive ISP. Wiley. p. 439.
[35] "Unix/Linux news servers" (http://www.newsreaders.com/unix/servers.html). Newsreaders.com. Archived from the original on 2012-09-05. Retrieved December 14, 2010.
[36] Berners-Lee, Tim (August 6, 1991). "WorldWideWeb: Summary". alt.hypertext. Retrieved June 4, 2007.
[37] Torvalds, Linus. "What would you like to see most in minix?". comp.os.minix. Retrieved September 9, 2006.
[38] Andreessen, Marc (March 15, 1993). "NCSA Mosaic for X 0.10 available". comp.infosystems.gopher, comp.infosystems.wais, comp.infosystems, alt.hypertext, comp.windows.x. Retrieved June 4, 2007.
[39] Kaltenbach, Susan (December 2000). "The Evolution of the Online Discourse Community" (http://noonuniverse.com/Linked_work/online_discourse.pdf). "Verb Doubling: Doubling a verb may change its semantics, Soundalike Slang: Punning jargon, The -P convention: A LISPy way to form questions, Overgeneralization: Standard abuses of grammar, Spoken Inarticulations: Sighing and <*sigh*>ing, Anthropomorphization: online components were named 'Homunculi,' 'daemons,' etc., and there were also 'confused' programs. Comparatives: Standard comparatives for design quality"
[40] Campbell, K.K. (October 1, 1994). "Chatting With Martha Siegel of the Internet's Infamous Canter & Siegel" (http://w2.eff.org/legal/cases/Canter_Siegel/c-and-s_summary.article). Electronic Frontier Foundation. Archived from the original on November 25, 2007. Retrieved September 24, 2010.
[41] Segan, Sascha (July 31, 2008). "R.I.P Usenet: 1980-2008" (http://www.pcmag.com/article2/0,2817,2326849,00.asp). PC Magazine, p. 2. Archived from the original on 2012-09-09. Retrieved May 8, 2011.


[42] "Reports of Usenet's Death Are Greatly Exaggerated" (http://techcrunch.com/2008/08/01/the-reports-of-usenets-death-are-greatly-exaggerated/). TechCrunch. August 1, 2008. Archived from the original on 2012-07-16. Retrieved May 8, 2011.
[43] Bonnett, Cara (May 17, 2010). "A Piece of Internet History" (http://news.duke.edu/2010/05/usenet.html). Duke Today. Archived from the original on 2012-07-11. Retrieved May 24, 2010.
[44] Orlowski, Andrew (May 20, 2010). "Usenet's home shuts down today" (http://www.theregister.co.uk/2010/05/20/usenet_duke_server/). The Register. Archived from the original on 2012-09-21. Retrieved May 24, 2010.
[45] "Top 100 text newsgroups by postings" (http://www.newsadmin.com/top100tmsgs.asp). NewsAdmin. Archived from the original on 2012-09-05. Retrieved December 14, 2010.
[46] "Top 100 binary newsgroups by postings" (http://www.newsadmin.com/top100bmsgs.asp). NewsAdmin. Archived from the original on 2012-09-04. Retrieved December 14, 2010.
[47] Rosencrance, Lisa (June 8, 2008). "3 top ISPs to block access to sources of child porn" (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9095778). Computerworld. Archived from the original on 2012-07-22. Retrieved April 30, 2009.
[48] DeJean, David (October 7, 2008). "Usenet: Not Dead Yet", p. 2 (http://www.pcworld.com/businesscenter/article/151989-2/usenet_not_dead_yet.html). PC World. Archived from the original on 2012-09-21. Retrieved April 30, 2009.
[49] "ATT Announces Discontinuation of USENET Newsgroup Services" (http://www.newsdemon.com/blog/2009/06/09/att-announces-discontinuation-of-usenet-newsgroup-services/). NewsDemon. June 9, 2009. Archived from the original on 2012-09-21. Retrieved June 18, 2009.
[50] Hu, Jim (January 25, 2005). "AOL shutting down newsgroups" (http://news.cnet.com/AOL-shutting-down-newsgroups/2100-1032_3-5550036.html). CNET. Archived from the original on 2012-07-23. Retrieved May 1, 2009.
[51] "AOL Pulls Plug on Newsgroup Service" (http://www.betanews.com/article/AOL-Pulls-Plug-on-Newsgroup-Service/1106664611). Betanews.com. Archived from the original on 2012-07-22. Retrieved December 14, 2010.
[52] Bode, Karl (August 31, 2009). "Verizon To Discontinue Newsgroups September 30" (http://www.dslreports.com/shownews/Verizon-To-Discontinue-Newsgroups-September-30-104227). DSLReports. Archived from the original on 2012-07-31. Retrieved October 24, 2009.
[53] "Verizon Newsgroup Service Has Been Discontinued" (http://www22.verizon.com/ResidentialHelp/HighSpeed/General+Support/Top+Questions/QuestionsOne/125159.htm). Verizon Central Support. Archived from the original on 2012-09-21. Retrieved October 24, 2009.
[54] Ukerna.ac.uk (http://www.ukerna.ac.uk/services/news/index.html)
[55] "Microsoft Responds to the Evolution of Communities" (http://www.microsoft.com/communities/newsgroups/default.mspx#EV). microsoft.com. Archived from the original on 2012-09-04. Retrieved September 1, 2011.
[56] "AOL shutting down newsgroups" (http://news.cnet.com/AOL-shutting-down-newsgroups/2100-1032_3-5550036.html). CNET. January 25, 2005. Archived from the original on 2012-07-23. Retrieved September 1, 2011.
[57] "Verizon To Discontinue Newsgroups" (http://www.dslreports.com/shownews/Verizon-To-Discontinue-Newsgroups-September-30-104227). dslreports.com. August 31, 2009. Archived from the original on 2012-07-31. Retrieved September 1, 2011.
[58] "The Comcast Newsgroups Service Discontinued" (http://www.dslreports.com/forum/r21120658-The-Comcast-Newsgroups-Service-Discontinued). dslreports.com. Archived from the original on 2012-07-31. Retrieved September 1, 2011.
[59] "Cox to Drop Free Usenet Service June 30th" (http://www.zeropaid.com/news/88729/cox-to-drop-free-usenet-service-june-30th/). Zeropaid.com. April 22, 2010. Archived from the original on 2012-09-21. Retrieved September 3, 2011.
[60] "Cox Discontinues Usenet, Starting In June" (http://news.slashdot.org/story/10/04/21/210224/Cox-Discontinues-Usenet-Starting-In-June). Geeknet, Inc. April 21, 2010. Archived from the original on 2012-09-21. Retrieved September 1, 2011.

108

USENET
[61] "Cox Communications and Atlantic Broadband Discontinue Usenet Access" (http:/ / www. thundernews. com/ blog/ tag/ isp/ ). thundernews.com. April 27, 2010. Archived (http:/ / archive. is/ 20120912/ http:/ / www. thundernews. com/ blog/ tag/ isp/ ) from the original on 2012-09-12. . Retrieved September 1, 2011. [62] "How to obtain back news items". December 21, 1982. bnews.spanky.138. [63] "Distributed archiving of netnews" (http:/ / groups. google. com/ group/ net. news/ msg/ e8145eace8aa0529?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120708/ http:/ / groups. google. com/ group/ net. news/ msg/ e8145eace8aa0529?dmode=source& output=gplain) from the original on 2012-07-08. . Retrieved December 14, 2010. [64] "Archive of netnews" (http:/ / groups. google. com/ group/ net. news/ msg/ a9ad42b15688b910?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120724/ http:/ / groups. google. com/ group/ net. news/ msg/ a9ad42b15688b910?dmode=source& output=gplain) from the original on 2012-07-24. . Retrieved December 14, 2010. [65] "Re: Archive of netnews" (http:/ / groups. google. com/ group/ net. news/ msg/ 87c9160730b36989?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120715/ http:/ / groups. google. com/ group/ net. news/ msg/ 87c9160730b36989?dmode=source& output=gplain) from the original on 2012-07-15. . Retrieved December 14, 2010. [66] "Automatic access to archives" (http:/ / groups. google. com/ group/ net. general/ msg/ a377e12ab1a829ff?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120712/ http:/ / groups. google. com/ group/ net. general/ msg/ a377e12ab1a829ff?dmode=source& output=gplain) from the original on 2012-07-12. . Retrieved December 14, 2010. [67] "keepnews -- A Usenet news archival system" (http:/ / groups. google. com/ group/ net. sources/ msg/ 737d29fd3af2d5ae?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120717/ http:/ / groups. google. com/ group/ net. 
sources/ msg/ 737d29fd3af2d5ae?dmode=source& output=gplain) from the original on 2012-07-17. . Retrieved December 14, 2010. [68] "YA News Archiver" (http:/ / groups. google. com/ group/ net. sources/ msg/ d5c0cec804f745dd?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120709/ http:/ / groups. google. com/ group/ net. sources/ msg/ d5c0cec804f745dd?dmode=source& output=gplain) from the original on 2012-07-09. . Retrieved December 14, 2010. [69] "RFC usenet article archive program with keyword lookup" (http:/ / groups. google. com/ group/ net. news/ msg/ f26190057834ed87?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120715/ http:/ / groups. google. com/ group/ net. news/ msg/ f26190057834ed87?dmode=source& output=gplain) from the original on 2012-07-15. . Retrieved December 14, 2010. [70] "Looking for fulltext indexing software for archived news" (http:/ / groups. google. com/ group/ news. software. nn/ msg/ 16381a7e3fd611d4?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120921/ http:/ / groups. google. com/ group/ news. software. nn/ msg/ 16381a7e3fd611d4?dmode=source& output=gplain) from the original on 2012-09-21. . Retrieved December 14, 2010. [71] "Summary: search for fulltext indexing software for archived news" (http:/ / groups. google. com/ group/ news. software. nn/ msg/ f503efa9034f1c1d?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120708/ http:/ / groups. google. com/ group/ news. software. nn/ msg/ f503efa9034f1c1d?dmode=source& output=gplain) from the original on 2012-07-08. . Retrieved December 14, 2010. [72] Segan, Sascha (January 1, 1970). "R.I.P Usenet: 1980-2008 - Usenet's Decline - Columns by PC Magazine" (http:/ / www. pcmag. com/ article2/ 0,2817,2326849,00. asp). Pcmag.com. Archived (http:/ / archive. is/ 20120909/ http:/ / www. pcmag. com/ article2/ 0,2817,2326849,00. asp) from the original on 2012-09-09. . Retrieved December 14, 2010. [73] Strawbridge, Matthew (2006). 
Netiquette: Internet Etiquette in the Age of the Blog. Software Reference. p. 53. [74] Cullen, Drew (February 12, 2001). "Google saves Deja.com Usenet service" (http:/ / www. theregister. co. uk/ 2001/ 02/ 12/ google_saves_deja_com_usenet/ ). Archived (http:/ / archive. is/ 20120921/ http:/ / www. theregister. co. uk/ 2001/ 02/ 12/ google_saves_deja_com_usenet/ ) from the original on 2012-09-21. .. The Register. [75] Wiseman, David. "Magi's NetNews Archive Involvement" (http:/ / www. csd. uwo. ca/ ~magi/ personal/ usenet. html), csd.uwo.ca [76] Mieszkowski, Katharine. " "The Geeks Who Saved Usenet" (http:/ / archive. salon. com/ tech/ feature/ 2002/ 01/ 07/ saving_usenet/ index. html). Archived (http:/ / archive. is/ 20120710/ http:/ / archive. salon. com/ tech/ feature/ 2002/ 01/ 07/ saving_usenet/ index. html) from the original on 2012-07-10. .", archive.salon.com (January 7, 2002). [77] Feldman, Ian. ""Usenet on a CD-ROM, no longer a fable"" (http:/ / db. tidbits. com/ article/ 3229). Archived (http:/ / archive. is/ 20120707/ http:/ / db. tidbits. com/ article/ 3229) from the original on 2012-07-07. ., "TidBITS" (February 10, 1992) [78] " "Google Groups Archive Information" (http:/ / groups. google. com/ group/ news. admin. misc/ msg/ 6dec509043e7b3c7?dmode=source& output=gplain). Archived (http:/ / archive. is/ 20120709/ http:/ / groups. google. com/ group/ news. admin. misc/ msg/ 6dec509043e7b3c7?dmode=source& output=gplain) from the original on 2012-07-09. ." (December 21, 2001)

109

USENET

110

Further reading
Bruce Jones, archiver (1997). USENET History mailing list archive (http://communication.ucsd.edu/bjones/Usenet.Hist/Nethist/index.html), covering 1990–1997. communication.ucsd.edu
Michael Hauben, Ronda Hauben, and Thomas Truscott (April 27, 1997). Netizens: On the History and Impact of Usenet and the Internet (Perspectives). Wiley-IEEE Computer Society Press. ISBN 0-8186-7706-6.
Bryan Pfaffenberger (December 31, 1994). The USENET Book: Finding, Using, and Surviving Newsgroups on the Internet. Addison Wesley. ISBN 0-201-40978-X.
Kate Gregory, Jim Mann, Tim Parker, and Noel Estabrook (June 1995). Using Usenet Newsgroups. Que. ISBN 0-7897-0134-0.
Mark Harrison (July 1995). The USENET Handbook (Nutshell Handbook). O'Reilly. ISBN 1-56592-101-1.
Henry Spencer, David Lawrence (January 1998). Managing Usenet. O'Reilly. ISBN 1-56592-198-4.
Don Rittner (June 1997). Rittner's Field Guide to Usenet. MNS Publishing. ISBN 0-937666-50-5.
Konstan, J., Miller, B., Maltz, D., Herlocker, J., Gordon, L., and Riedl, J. (March 1997). "GroupLens: applying collaborative filtering to Usenet news" (http://portal.acm.org/citation.cfm?doid=245108.245126). Communications of the ACM 40 (3): 77–87. doi:10.1145/245108.245126. Retrieved June 29, 2009.
Miller, B., Riedl, J., and Konstan, J. (January 1997). "Experiences with GroupLens: Making Usenet useful again" (http://www.grouplens.org/papers/pdf/usenix97.pdf). Proceedings of the 1997 Usenix Winter Technical Conference.
"20 Year Usenet Timeline" (http://www.google.com/googlegroups/archive_announce_20.html). Google. Retrieved June 27, 2006.
"Web 2.0, Meet Usenet 1.0" (http://www.linux-mag.com/id/2675/). Linux Magazine. Retrieved February 13, 2007.
Schwartz, Randal (June 15, 2006). "Web 2.0, Meet Usenet 1.0" (http://www.linux-mag.com/id/2675/). Retrieved June 4, 2007.
Kleiner, Dmytri; Wyrick, Brian (January 29, 2007). "InfoEnclosure 2.0" (http://www.metamute.org/InfoEnclosure-2.0). Retrieved June 4, 2007.

External links
Usenet information, software, and service providers (http://www.dmoz.org/Computers/Usenet/) at the Open Directory Project
Usenet servers (http://www.dmoz.org/Computers/Software/Internet/Servers/Usenet/) at the Open Directory Project
Public News Servers (http://www.dmoz.org/Computers/Usenet/Public_News_Servers/) at the Open Directory Project
IETF working group USEFOR (http://tools.ietf.org/wg/usefor/) (USEnet article FORmat), tools.ietf.org
son-of-1036 (http://purl.net/xyzzy/home/test/son-of-1.036) (287 KB, historical RFC 1036 bis draft), purl.net
A-News Archive (http://quux.org:70/Archives/usenet-a-news/): early Usenet news articles, 1981 to 1982. quux.org
UTZoo Archive (http://www.skrenta.com/rt/utzoo-usenet/): 2,000,000 articles from the early 1980s to July 1991
"Netscan" (http://web.archive.org/web/*/http://netscan.research.microsoft.com). Archived from the original (http://research.microsoft.com/en-us/groups/scg/) on June 29, 2009. Social Accounting Reporting Tool
Living Internet (http://www.livinginternet.com/u/u.htm): a comprehensive history of the Internet, including Usenet. livinginternet.com
Usenet Glossary (http://www.harley.com/usenet/usenet-tutorial/glossary.html): a comprehensive list of Usenet terminology

USENET "Big 8 Management Board" (http://web.archive.org/web/20080130015435/http://www.big-8.org/). Archived from the original (http://big-8.org/) on January 30, 2008. Group creation and maintenance in the Big 8 hierarchies. Michael Hauben (Fall 1992). "The Evolution Of Usenet News: The Poor Man's Arpanet" (http://www. virtualschool.edu/mon/Internet/HaubenEvolutionNetnews.html). Retrieved May 24, 2010. "Giganews Deconstructs Cuomo's Child Porn 'Crackdown'" (http://www.dslreports.com/shownews/ Giganews-Deconstructs-Cuomos-Child-Porn-Crackdown-98446). Retrieved September 1, 2011. "Latest Child Porn Fight Mostly Empty Rhetoric" (http://www.dslreports.com/shownews/96203). Retrieved September 1, 2011. "N.Y. AG says AOL will curb access to Usenet. It already did" (http://news.cnet.com/ 8301-13578_3-9988278-38.html?hhTest=1=rss&subj=news&tag=2547-1_3-0-20). Retrieved September 1, 2011. "Analysis of Cuomo Crusade shows little results." (http://www.giganews.com/blog/2008/09/ clearing-air-usenet-abuse-eliminating.html). Retrieved September 1, 2011.

111

X.25
X.25 is an ITU-T standard protocol suite for packet switched wide area network (WAN) communication. An X.25 WAN consists of packet-switching exchange (PSE) nodes as the networking hardware, and leased lines, plain old telephone service connections or ISDN connections as physical links. X.25 is a family of protocols that was popular during the 1980s with telecommunications companies and in financial transaction systems such as automated teller machines. X.25 was originally defined by the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) in a series of drafts[1] and finalized in a publication known as The Orange Book in 1976.[2]

X.25 network diagram.

While X.25 has been, to a large extent, replaced by less complex protocols, especially the Internet protocol (IP), the service is still used and available in niche and legacy applications.

History
X.25 is one of the oldest packet-switched services available. It was developed before the OSI Reference Model.[3] The protocol suite is designed as three conceptual layers, which correspond closely to the lower three layers of the seven-layer OSI model.[4] It also supports functionality not found in the OSI network layer.[5][6] X.25 was developed in the ITU-T (formerly CCITT) Study Group VII based upon a number of emerging data network projects.[7] Various updates and additions were worked into the standard, eventually recorded in the ITU series of technical books describing the telecommunication systems. These books were published every fourth year with different-colored covers. The X.25 specification is only part of the larger set of X-Series[8] specifications on public data networks.[9]

"Public data network" was the common name given to the international collection of X.25 providers, whose combined network had large global coverage during the 1980s and into the 1990s.[10] Publicly accessible X.25 networks (CompuServe, Tymnet, Euronet, PSS, Datapac, Datanet 1 and Telenet) were set up in most countries during the 1970s and 1980s to lower the cost of accessing various online services. Beginning in the early 1990s in North America, X.25 networks (dominated by Telenet and Tymnet)[10] began to be replaced by the Frame Relay services offered by national telephone companies.[11] Most systems that required X.25 now use TCP/IP; however, it is possible to transport X.25 over IP when necessary.[12] X.25 networks are still in use throughout the world. A variant called AX.25 is widely used by amateur packet radio. Racal Paknet, now known as Widanet, is still in operation in many regions of the world, running on an X.25 protocol base. In some countries, such as the Netherlands and Germany, it is possible to use a stripped-down version of X.25 via the D-channel of an ISDN-2 (ISDN BRI) connection for low-volume applications such as point-of-sale terminals, but the future of this service in the Netherlands is uncertain. X.25 also remains in heavy use in the aeronautical business (especially in Asia), even though a transition to more modern protocols such as X.400 is unavoidable as X.25 hardware becomes increasingly rare and costly. As recently as March 2006, the National Airspace Data Interchange Network still used X.25 to interconnect remote airfields with Air Route Traffic Control Centers. France is one of the few countries that still has a commercial end-user service, Minitel, which is based on Videotex, which in turn runs over X.25.
In 2002, Minitel had about 9 million users; in 2011 it still accounted for about 2 million users in France, though France Télécom has announced that it will shut the service down completely by 30 June 2012.[13]


Architecture
The general concept of X.25 was to create a universal and global packet-switched network. Much of the X.25 system is a description of the rigorous error correction needed to achieve this, as well as more efficient sharing of capital-intensive physical resources.
The X.25 specification defines only the interface between a subscriber (DTE) and an X.25 network (DCE). X.75, a very similar protocol to X.25, defines the interface between two X.25 networks to allow connections to traverse two or more networks. X.25 does not specify how the network operates internally; many X.25 network implementations used something very similar to X.25 or X.75 internally, but others used quite different protocols. The ISO equivalent protocol to X.25, ISO 8208, is compatible with X.25, but additionally includes provision for two X.25 DTEs to be directly connected to each other with no network in between. By separating the Packet-Layer Protocol, ISO 8208 permits operation over additional networks such as ISO 8802 LLC2 (ISO LAN) and the OSI data link layer.[14]
X.25 originally defined three basic protocol levels or architectural layers. In the original specifications these were referred to as levels and also had a level number, whereas all ITU-T X.25 recommendations and ISO 8208 standards released after 1984 refer to them as layers.[15] The layer numbers were dropped to avoid confusion with the OSI Model layers.[1]
Physical layer: This layer specifies the physical, electrical, functional and procedural characteristics to control the physical link between a DTE and a DCE. Common implementations use X.21, EIA-232, EIA-449 or other serial protocols.
Data link layer: The data link layer consists of the link access procedure for data interchange on the link between a DTE and a DCE. In its implementation, the Link Access Procedure, Balanced (LAPB) is a data link protocol that manages a communication session and controls the packet framing. It is a bit-oriented protocol that provides error correction and orderly delivery.

Packet layer: This layer defines a packet-layer protocol for exchanging control and user data packets to form a packet-switching network based on virtual calls, according to the Packet Layer Protocol.
The X.25 model was based on the traditional telephony concept of establishing reliable circuits through a shared network, but using software to create "virtual calls" through the network. These calls interconnect "data terminal equipment" (DTE) providing endpoints to users, which look like point-to-point connections. Each endpoint can establish many separate virtual calls to different endpoints.
For a brief period, the specification also included a connectionless datagram service, but this was dropped in the next revision. The "fast select with restricted response facility" is intermediate between full call establishment and connectionless communication. It is widely used in query-response transaction applications involving a single request and response limited to 128 bytes of data carried each way. The data is carried in an extended call request packet and the response is carried in an extended field of the call reject packet, with a connection never being fully established.
Closely related to the X.25 protocol are the protocols to connect asynchronous devices (such as dumb terminals and printers) to an X.25 network: X.3, X.28 and X.29. This functionality was performed using a Packet Assembler/Disassembler or PAD (also known as a Triple-X device, referring to the three protocols used).


Relation to the OSI Reference Model


Although X.25 predates the OSI Reference Model (OSIRM), the physical layer of the OSI model corresponds to the X.25 physical layer, the data link layer to the X.25 data link layer, and the network layer to the X.25 packet layer.[9] The X.25 data link layer, LAPB, provides a reliable data path across a data link (or multiple parallel data links, multilink) which may not be reliable itself. The X.25 packet layer provides the virtual call mechanisms, running over X.25 LAPB. The packet layer includes mechanisms to maintain virtual calls and to signal data errors in the event that the data link layer cannot recover from data transmission errors. All but the earliest versions of X.25 include facilities[16] which provide for OSI network layer addressing (NSAP addressing, see below).[17]

User device support


X.25 was developed in the era of computer terminals connecting to host computers, although it can also be used for communications between computers. Instead of dialing directly into the host computer (which would require the host to have its own pool of modems and phone lines, and would require non-local callers to make long-distance calls), the host could have an X.25 connection to a network service provider. Now dumb-terminal users could dial into the network's local PAD (Packet Assembly/Disassembly facility), a gateway device connecting modems and serial lines to the X.25 link as defined by the X.29 and X.3 standards.

Having connected to the PAD, the dumb-terminal user tells the PAD which host to connect to, by giving a phone-number-like address in the X.121 address format (or by giving a host name, if the service provider allows for names that map to X.121 addresses). The PAD then places an X.25 call to the host, establishing a virtual call. Note that X.25 provides for virtual calls, so it appears to be a circuit-switched network, even though in fact the data itself is packet-switched internally, similar to the way TCP provides connections even though the underlying data is packet-switched. Two X.25 hosts could, of course, call one another directly; no PAD is involved in this case. In theory, it does not matter whether the X.25 caller and X.25 destination are both connected to the same carrier, but in practice it was not always possible to make calls from one carrier to another.

A Televideo terminal model 925 made around 1982

For the purpose of flow control, a sliding window protocol is used with a default window size of 2. Acknowledgements may have either local or end-to-end significance. A D bit (Data Delivery bit) in each data packet indicates whether the sender requires end-to-end acknowledgement. When D=1, the acknowledgement has end-to-end significance and must take place only after the remote DTE has acknowledged receipt of the data. When D=0, the network is permitted (but not required) to acknowledge before the remote DTE has acknowledged or even received the data. While the PAD function defined by X.28 and X.29 specifically supported asynchronous character terminals, PAD equivalents were developed to support a wide range of proprietary intelligent communications devices, such as those for IBM Systems Network Architecture (SNA).
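The window mechanism just described can be modeled in a few lines of Python. This is a toy sketch, not an implementation of the X.25 packet layer: the class name, the modulo-8 sequence space, and the acknowledgement interface are illustrative assumptions.

```python
class SlidingWindow:
    """Toy model of X.25 packet-layer flow control: a sender may have at
    most `window` unacknowledged data packets outstanding (default 2)."""

    def __init__(self, window: int = 2, modulo: int = 8):
        self.window, self.modulo = window, modulo
        self.next_seq = 0  # P(S): sequence number of the next packet to send
        self.acked = 0     # lowest sequence number not yet acknowledged

    def can_send(self) -> bool:
        # The number of unacknowledged packets must stay below the window size.
        return (self.next_seq - self.acked) % self.modulo < self.window

    def send(self) -> int:
        assert self.can_send(), "window closed: wait for an acknowledgement"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.modulo
        return seq

    def receive_ack(self, pr: int) -> None:
        # P(R) acknowledges all packets with sequence numbers below pr.
        self.acked = pr

w = SlidingWindow()        # default window size of 2
w.send(); w.send()         # two packets in flight
assert not w.can_send()    # window exhausted until an acknowledgement arrives
w.receive_ack(1)           # peer acknowledges the first packet
assert w.can_send()
```

Whether an acknowledgement may be generated locally by the network or only end-to-end is exactly what the D bit selects; the window bookkeeping itself is the same in both cases.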


Error control
Error recovery procedures at the packet layer assume that the data link layer is responsible for retransmitting data received in error. Packet layer error handling focuses on resynchronizing the information flow in calls, as well as clearing calls that have gone into unrecoverable states:
Level 3 Reset packets, which re-initialize the flow on a virtual call (but do not break the virtual call)
Restart packets, which clear down all virtual calls on the data link and reset all permanent virtual circuits on the data link

Addressing and virtual circuits


X.25 supports two types of virtual circuits, virtual calls (VC) and permanent virtual circuits (PVC). Virtual calls are established on an as-needed basis. For example, a VC is established when a call is placed and torn down after the call is complete. VCs are established through a call establishment and clearing procedure. On the other hand, permanent virtual circuits are preconfigured into the network.[18] PVCs are seldom torn down and thus provide a dedicated connection between end points.

An X.25 Modem once used to connect to the German Datex-P network.

VCs may be established using X.121 addresses. The X.121 address consists of a three-digit data country code (DCC) plus a network digit, together forming the four-digit data network identification code (DNIC), followed by the national terminal number (NTN) of at most ten digits. Note the use of a single network digit, seemingly allowing for only 10 network carriers per country; some countries are assigned more than one DCC to avoid this limitation. Networks often used fewer than the full NTN digits for routing, and made the spare digits available to the subscriber (sometimes called the sub-address), where they could be used to identify applications or for further routing on the subscriber's networks. An NSAP addressing facility was added in the X.25(1984) revision of the specification, enabling X.25 to better meet the requirements of the OSI Connection Oriented Network Service (CONS).[19] Public X.25 networks were not required to make use of NSAP addressing but, to support OSI CONS, were required to carry the NSAP addresses and other ITU-T specified DTE facilities transparently from DTE to DTE.[20] Later revisions allowed multiple addresses in addition to X.121 addresses to be carried on the same DTE-DCE interface: Telex addressing (F.69), PSTN addressing (E.163), ISDN addressing (E.164), Internet Protocol addresses (IANA ICP), and local IEEE 802.2 MAC addresses.[21] PVCs are permanently established in the network and therefore do not require the use of addresses for call setup. PVCs are identified at the subscriber interface by their logical channel identifier (see below). In practice, however, not many of the national X.25 networks supported PVCs.
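The X.121 structure described above can be shown with a short parser. This is a hypothetical helper; the function name and the example address are invented for illustration.

```python
def parse_x121(address: str) -> dict:
    """Split an X.121 international address into its components:
    a 3-digit data country code plus 1 network digit (together the
    4-digit DNIC), followed by up to 10 national terminal number digits."""
    if not address.isdigit() or not 4 <= len(address) <= 14:
        raise ValueError("an X.121 address is 4 to 14 decimal digits")
    return {
        "dcc": address[:3],   # data country code
        "dnic": address[:4],  # data network identification code
        "ntn": address[4:],   # national terminal number (may end in sub-address digits)
    }

# A made-up address: DNIC 2342, NTN 12345678
fields = parse_x121("234212345678")
assert fields["dcc"] == "234" and fields["dnic"] == "2342"
```

Which trailing NTN digits act as a sub-address is network-specific, so a parser like this can only separate the DNIC from the national part.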

One DTE-DCE interface to an X.25 network has a maximum of 4095 logical channels on which it is allowed to establish virtual calls and permanent virtual circuits,[22] although networks are not expected to support a full 4095 virtual circuits.[23] For identifying the channel to which a packet is associated, each packet contains a 12-bit logical channel identifier made up of an 8-bit logical channel number and a 4-bit logical channel group number.[22] Logical channel identifiers remain assigned to a virtual circuit for the duration of the connection.[22] Logical channel identifiers identify a specific logical channel between the DTE (subscriber appliance) and the DCE (network), and have only local significance on the link between the subscriber and the network. The other end of the connection at the remote DTE is likely to have assigned a different logical channel identifier. The range of possible logical channels is split into four groups: channels assigned to permanent virtual circuits, assigned to incoming virtual calls, two-way (incoming or outgoing) virtual calls, and outgoing virtual calls.[24] (Directions refer to the direction of virtual call initiation as viewed by the DTE; they all carry data in both directions.)[25] The ranges allowed a subscriber to be configured to handle significantly differing numbers of calls in each direction while reserving some channels for calls in one direction. All international networks are required to implement support for permanent virtual circuits, two-way logical channels and one-way logical channels outgoing; one-way logical channels incoming is an additional optional facility.[26] DTE-DCE interfaces are not required to support more than one logical channel.[24] Logical channel identifier zero will not be assigned to a permanent virtual circuit or virtual call.[27] The logical channel identifier of zero is used for packets which do not relate to a specific virtual circuit (e.g.
packet layer restart, registration, and diagnostic packets).
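Concretely, the identifier is carried in the first two octets of the packet header: the low four bits of octet 1 hold the group number, and octet 2 holds the channel number. The following sketch packs and unpacks it; the default GFI value of 0x1 (modulo-8 sequencing with Q=0 and D=0) is an illustrative assumption.

```python
def parse_lci(header: bytes) -> int:
    """Extract the 12-bit logical channel identifier: the 4-bit group
    number (low nibble of octet 1) followed by the 8-bit channel number
    (octet 2)."""
    group = header[0] & 0x0F
    channel = header[1]
    return (group << 8) | channel  # 12-bit value, 0..4095

def make_lci_octets(lci: int, gfi: int = 0x1) -> bytes:
    """Pack a 12-bit LCI, together with a 4-bit general format identifier,
    into the first two header octets."""
    if not 0 <= lci <= 4095:
        raise ValueError("LCI must fit in 12 bits")
    return bytes([(gfi << 4) | (lci >> 8), lci & 0xFF])

# LCI 0 is reserved for restart, registration and diagnostic packets.
assert parse_lci(make_lci_octets(0)) == 0
assert parse_lci(make_lci_octets(4095)) == 4095
```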


Billing
In public networks, X.25 was typically billed as a flat monthly service fee depending on link speed, and then a price-per-segment on top of this.[28] Link speeds varied, typically from 2400 bit/s up to 2 Mbit/s, although speeds above 64 kbit/s were uncommon in the public networks. A segment was 64 bytes of data (rounded up, with no carry-over between packets),[29] charged to the caller[30] (or callee in the case of reverse charged calls, where supported).[31] Calls invoking the Fast Select facility (allowing 128 bytes of data in call request, call confirmation and call clearing phases)[32] would generally attract an extra charge, as might use of some of the other X.25 facilities. PVCs would have a monthly rental charge and a lower price-per-segment than VCs, making them cheaper only where large volumes of data are passed.
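Because each packet's data is rounded up to whole segments with no carry-over, the same volume of data costs more when spread over many small packets. A sketch of the segment count follows; the function name and packet sizes are illustrative, and real tariffs varied by network.

```python
def billable_segments(packet_sizes, segment_size=64):
    """Count charging segments: each packet's data is rounded up to a
    whole number of 64-byte segments, with no carry-over between packets."""
    return sum((size + segment_size - 1) // segment_size
               for size in packet_sizes if size > 0)

# Packets of 10, 64 and 65 bytes cost 1 + 1 + 2 = 4 segments,
# not ceil(139 / 64) = 3, because rounding happens per packet.
assert billable_segments([10, 64, 65]) == 4
```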

X.25 packet types


Packet Type (DCE → DTE)      Packet Type (DTE → DCE)      Service: VC   PVC

Call setup and clearing
  Incoming Call              Call Request                 X
  Call Connected             Call Accepted                X
  Clear Indication           Clear Request                X
  Clear Confirmation         Clear Confirmation           X
Data and interrupt
  Data                       Data                         X    X
  Interrupt                  Interrupt                    X    X
  Interrupt Confirmation     Interrupt Confirmation       X    X
Flow control and reset
  RR                         RR                           X    X
  RNR                        RNR                          X    X
  REJ                        REJ                          X    X
  Reset Indication           Reset Request                X    X
  Reset Confirmation         Reset Confirmation           X    X
Restart
  Restart Indication         Restart Request              X    X
  Restart Confirmation       Restart Confirmation         X    X
Diagnostic
  Diagnostic                 (none)                       X    X
Registration
  Registration Confirmation  Registration Request         X    X

X.25 details
The network may allow the selection of the maximum length of the packet data field, in the range 16 to 4096 octets (powers of two only), per virtual circuit by negotiation as part of the call setup procedure. The maximum length may be different at the two ends of the virtual circuit. Data terminal equipment constructs control packets which are encapsulated into data packets. The packets are sent to the data circuit-terminating equipment using the LAPB protocol. Data circuit-terminating equipment strips the layer-2 headers in order to encapsulate the packets into the internal network protocol.
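For illustration, the set of permitted lengths and a simplified selection rule can be written out as follows. This is a sketch under assumptions: real X.25 facility negotiation has more detailed rules, and network_max here stands in for whatever limit a network is provisioned with.

```python
# Permitted maximum data-field lengths: powers of two from 16 to 4096 octets.
VALID_LENGTHS = [2 ** n for n in range(4, 13)]  # 16, 32, ..., 4096

def negotiate_length(requested: int, network_max: int = 1024) -> int:
    """Pick a maximum packet data-field length: it must be one of the
    permitted powers of two, capped by an assumed network limit."""
    if requested not in VALID_LENGTHS:
        raise ValueError("length must be a power of two between 16 and 4096")
    return min(requested, network_max)

assert negotiate_length(4096) == 1024  # capped by the network limit
assert negotiate_length(128) == 128
```

Because the negotiated value may differ at the two ends of the virtual circuit, equipment at each end validates its own maximum independently.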

X.25 facilities
X.25 provides a set of user facilities defined and described in ITU-T Recommendation X.2.[33] The X.2 user facilities fall into five categories: essential facilities; additional facilities; conditional facilities; mandatory facilities; and, optional facilities.

X.25 also provides X.25 and ITU-T specified DTE optional user facilities defined and described in ITU-T Recommendation X.7.[34] The X.7 optional user facilities fall into four categories of user facilities that require: subscription only; subscription followed by dynamic invocation; subscription or dynamic invocation; and, dynamic invocation only.

X.25 protocol versions


The CCITT/ITU-T versions of the protocol specifications are for public data networks (PDN).[35] The ISO/IEC versions address additional features for private networks (e.g. local area network (LAN) use) while maintaining compatibility with the CCITT/ITU-T specifications.[36] The user facilities and other features supported by each version of X.25 and ISO/IEC 8208 have varied from edition to edition.[37] Several major protocol versions of X.25 exist:[38]
CCITT Recommendation X.25 (1976), Orange Book
CCITT Recommendation X.25 (1980), Yellow Book
CCITT Recommendation X.25 (1984), Red Book
CCITT Recommendation X.25 (1988), Blue Book
ITU-T Recommendation X.25 (1993), White Book[39]
ITU-T Recommendation X.25 (1996), Grey Book[40]

The X.25 Recommendation allows many options for each network to choose when deciding which features to support and how certain operations are performed. This means each network needs to publish its own document giving the specification of its X.25 implementation, and most networks required DTE appliance manufacturers to

undertake protocol conformance testing, which included testing for strict adherence to and enforcement of their network-specific options. (Network operators were particularly concerned about the possibility of a badly behaving or misconfigured DTE appliance taking out parts of the network and affecting other subscribers.) Therefore, subscribers' DTE appliances have to be configured to match the specification of the particular network to which they are connecting. Most of these were sufficiently different to prevent interworking if the subscriber did not configure their appliance correctly or the appliance manufacturer did not include specific support for that network. In spite of protocol conformance testing, this often led to interworking problems when initially attaching an appliance to a network. In addition to the CCITT/ITU-T versions of the protocol, four editions of ISO/IEC 8208 exist:[37]
ISO/IEC 8208 : 1987, First Edition, compatible with X.25 (1980) and (1984)
ISO/IEC 8208 : 1990, Second Edition, compatible with 1st Ed. and X.25 (1988)
ISO/IEC 8208 : 1995, Third Edition, compatible with 2nd Ed. and X.25 (1993)
ISO/IEC 8208 : 2000, Fourth Edition, compatible with 3rd Ed. and X.25 (1996)


References
[1] CCITT, Study Group VII, Draft Recommendation X-25, March 1976 [2] History of X.25, CCITT Plenary Assemblies and Book Colors (http:/ / www. itu. int/ ITU-T/ studygroups/ com17/ history. html) [3] (Friend et al. 1988, p.242) [4] (Friend et al. 1988, p.243) [5] ITU-T Recommendation X.28 (http:/ / www. itu. int/ rec/ T-REC-X. 28/ en/ ). [6] ITU-T Recommendation X.3 (http:/ / www. itu. int/ rec/ T-REC-X. 3/ en/ ). [7] "X.25 Virtual Circuits Transpac in France Pre-Internet Data Networking" (http:/ / remi. despres. free. fr/ Publications/ X25-TPC. html). . [8] X-Series recommendations (http:/ / www. itu. int/ rec/ T-REC-X/ en/ ) [9] (Friend et al. 1988, p.230) [10] (Schatt 1991, p.200). [11] (Schatt 1991, p.207). [12] Running X.25 over tcpip using Cisco routers (http:/ / www. techrepublic. com/ article/ running-x25-over-tcpip-on-cisco-routers/ 1056023), 1 Februari 2001, visited 4 April 2011 [13] (French) Presse, Agence France (21 July 2011). "Le Minitel disparatra en juin 2012 [Minitel will disappear in June 2012]" (http:/ / www. lefigaro. fr/ flash-actu/ 2011/ 07/ 21/ 97001-20110721FILWWW00446-le-minitel-disparaitra-en-juin-2012. php) (in French). Le Figaro. . [14] ISO 8208:2000 [15] ISO 8208, Annex B. [16] ITU-T Recommendation X.25 (http:/ / www. itu. int/ rec/ T-REC-X. 25-199610-I/ en/ ), G.3.2 Called address extension facility, pp. 141142. [17] ITU-T Recommendation X.223 (http:/ / www. itu. int/ rec/ T-REC-X. 223/ en/ ), Appendix II. [18] ITU-T Recommendation X.7 (04/2004) (http:/ / www. itu. int/ rec/ T-REC-X. 7-200404-I/ en/ ), pp. 1718. [19] ITU-T Recommendation X.223 (http:/ / www. itu. int/ rec/ T-REC-X. 223/ en/ ). [20] ITU-T Recommendation X.25 (10/96) (http:/ / www. itu. int/ rec/ T-REC-X. 25-199610-I/ en/ ), Annex G, p. 140. [21] ITU-T Recommendation X.213 (http:/ / www. itu. int/ rec/ T-REC-X. 213-200110-I/ en/ ), Annex A. [22] ITU-T Recommendation X.25 (10/96) (http:/ / www. itu. int/ rec/ T-REC-X. 25-199610-I/ en/ ), p. 45. 
[23] ITU-T Recommendation X.283 (12/97) (http://www.itu.int/rec/T-REC-X.283-199712-I/en/), p. 42.
[24] ITU-T Recommendation X.25 (10/96) (http://www.itu.int/rec/T-REC-X.25-199610-I/en/), Annex A, pp. 119-120.
[25] ISO/IEC 8208 : 2000, Fourth Edition, p. 61.
[26] ITU-T Recommendation X.2 (03/2000) (http://www.itu.int/rec/T-REC-X.2-200003-I/en/), p. 4.
[27] ISO/IEC 8208 : 2000, Fourth Edition, 3.7.1, p. 7.
[28] ITU-T Recommendation D.11 (03/91) (http://www.itu.int/rec/T-REC-D.11-199103-I/en/), p. 2.
[29] ITU-T Recommendation D.12 (11/88) (http://www.itu.int/rec/T-REC-D.12-198811-I/en/), p. 1.
[30] ITU-T Recommendation X.7 (04/2004) (http://www.itu.int/rec/T-REC-X.7-200404-I/en/), p. 42.
[31] ITU-T Recommendation D.11 (03/91) (http://www.itu.int/rec/T-REC-D.11-199103-I/en/), p. 3.
[32] ITU-T Recommendation X.7 (04/2004) (http://www.itu.int/rec/T-REC-X.7-200404-I/en/), p. 38.
[33] ITU-T Recommendation X.2 (http://www.itu.int/rec/T-REC-X.2/en/)
[34] ITU-T Recommendation X.7 (http://www.itu.int/rec/T-REC-X.7/en/)
[35] ITU-T Recommendation X.25 (10/96) (http://www.itu.int/rec/T-REC-X.25-199610-I/en/), Summary, p. v.
[36] ISO/IEC 8208 : 2000, Fourth Edition, Section 1: Scope, p. 1.

[37] ISO/IEC 8208 : 2000, Fourth Edition, Annex C.
[38] ITU-T Recommendation X.25 (http://www.itu.int/rec/T-REC-X.25/en/).
[39] ITU-T Recommendation X.25 (1993) White Book (http://www.itu.int/rec/T-REC-X.25-199303-S/en/)
[40] ITU-T Recommendation X.25 (1996) Grey Book (http://www.itu.int/rec/T-REC-X.25-199610-I/en/)


Further reading
Computer Communications, lecture notes by Prof. Chaim Ziegler PhD, Brooklyn College
Motorola Codex (1992). The Basics Book of X.25 Packet Switching. The Basics Book Series (2nd ed.). Reading, MA: Addison-Wesley. ISBN 0-201-56369-X.
Deasington, Richard (1985). X.25 Explained. Computer Communications and Networking (2nd ed.). Chichester, UK: Ellis Horwood. ISBN 978-0-85312-626-3.
Friend, George E.; Fike, John L.; Baker, H. Charles; Bellamy, John C. (1988). Understanding Data Communications (2nd ed.). Indianapolis: Howard W. Sams & Company. ISBN 0-672-27270-9.
Pooch, Udo W.; Greene, William H.; Moss, Gary G. (1983). Telecommunications and Networking. Boston: Little, Brown and Company. ISBN 0-316-71498-4.
Schatt, Stan (1991). Linking LANs: A Micro Manager's Guide. McGraw-Hill. ISBN 0-8306-3755-9.
Thorpe, Nicolas M.; Ross, Derek (1992). X.25 Made Easy. Prentice Hall. ISBN 0-13-972183-5.

External links
Recommendation X.25 (10/96) (http://www.itu.int/rec/T-REC-X.25/en) at ITU-T
Cisco X.25 Reference (http://www.cisco.com/en/US/docs/internetworking/technology/handbook/X25.html)
An X.25 Networking Guide with comparisons to TCP/IP (http://www.farsite.com/X.25/X.25_info/X.25.htm)
X.25 Directory & Informational Resource (http://softtechinfo.com/network/x25.html)
RFCs and other resources by Open Directory (http://search.dmoz.org/cgi-bin/search?search=X.25)


Today's Internet
"Internet" or "internet"?
Publishers have different conventions regarding the capitalization of "Internet" or "internet", when referring to the Internet/internet, as distinct from generic internets, or internetworks. Since the widespread deployment of the Internet Protocol Suite in the early 1980s, the Internet standards-setting bodies and technical infrastructure organizations, such as the Internet Engineering Task Force (IETF), the Internet Society, the Internet Corporation for Assigned Names and Numbers (ICANN), the World Wide Web Consortium, and others, have consistently spelled the name of the world-wide network, the Internet, with an initial capital letter and treated it as a proper noun in the English language. Before the transformation of the ARPANET into the modern Internet, the term internet in its lower case spelling was a common short form of the term internetwork, and this spelling and use may still be found in discussions of networking. Many publications today disregard the historical development and use the term in its common noun spelling, arguing that it has become a generic medium of communication.

Name as Internet versus generic internets


The Internet standards community has historically differentiated between the Internet and an internet (or internetwork), the first being treated as a proper noun with a capital letter, and the latter as a common noun with a lower-case first letter. An internet is any internetwork, that is, any set of interconnected Internet Protocol networks. The distinction is evident in a large number of the Request for Comments documents from the early 1980s, when the transition from the ARPANET to the Internet was in progress, although it was not applied with complete uniformity.[1][2] Another example is IBM's TCP/IP Tutorial and Technical Overview (ISBN 0-7384-2165-0) from 1989, which stated that:
The words internetwork and internet is [sic] simply a contraction of the phrase interconnected network. However, when written with a capital "I," the Internet refers to the worldwide set of interconnected networks. Hence, the Internet is an internet, but the reverse does not apply. The Internet is sometimes called the connected Internet.
The Internet/internet distinction fell out of common use after the Internet Protocol Suite was widely deployed in commercial networks in the 1990s. In the RFC documents that defined the evolving Internet Protocol (IP) standards, the term was introduced as a noun adjunct, apparently a shortening of "internetworking",[3] and is mostly used in this way. As the impetus behind IP grew, it became more common to regard the results of internetworking as entities of their own, and internet became a noun, used both in a generic sense (any collection of computer networks connected through internetworking) and in a specific sense (the collection of computer networks that internetworked with the ARPANET, and later NSFNET, using the IP standards, and that grew into the connectivity service we know today).
In its generic sense, internet is a common noun, a synonym for internetwork; therefore, it has a plural form (first appearing in the RFC series in RFC 870, RFC 871, and RFC 872), and is not capitalized. In its specific sense, it is a proper noun, and therefore, without a plural form and traditionally capitalized.


Argument for common noun usage


In 2002, a New York Times column said that Internet has been changing from a proper noun to a generic term.[4] Words for new technologies, such as Phonograph in the 19th century, are sometimes capitalized at first, later becoming uncapitalized.[4] In 1999, another column suggested that Internet might, like some other commonly used proper nouns, lose its capital letter.[5] Capitalization of the word as an adjective also varies. Some guides specify that the word should be capitalized as a noun but not capitalized as an adjective, e.g., "internet resources".[6][7]

Usage examples
Examples of media publications and news outlets that capitalize the term include The New York Times, the Associated Press, Time, and The Times of India. In addition, many peer-reviewed journals and professional publications such as Communications of the ACM capitalize "Internet," and this style guideline is also specified by the American Psychological Association in its electronic media spelling guide. The Modern Language Association's MLA Handbook does not specifically mention capitalization of Internet, but its consistent practice is to capitalize it.[8] More recently, a significant number of publications have switched to not capitalizing the noun internet. Among them are The Economist, the Financial Times, The Times, the Guardian, the Observer[9] and the Sydney Morning Herald. As of 2011, most publications using "internet" appear to be located outside of North America, but the gap is closing. Wired News, an American news source, adopted the lower-case spelling in 2004.[10] Around April 2010, CNN shifted its house style to adopt the lowercase spelling. As Internet connectivity has expanded, it has started to be seen as a service similar to television, radio, and telephone, and the word has come to be used in this way (e.g. "I have the internet at home" and "I found it on the internet").

References
[1] RFC 871 (1982): "The 'network' composed of the concatenation of such subnets is sometimes called 'a catenet,' though more often -- and less picturesquely -- merely 'an internet.'"
[2] RFC 872 (1982): "[TCP's] next most significant property is that it is designed to operate in a 'catenet' (also known as the, or an, 'internet')"
[3] The form first occurring in the RFC series is "internetworking protocol," RFC 604: "Four of the reserved link numbers are hereby assigned for experimental use in the testing of an internetworking protocol." The first use of "internet" is in RFC 675, in the form "internet packet".
[4] Schwartz, John (29 December 2002). "Who Owns the Internet? You and i Do" (http://www.nytimes.com/2002/12/29/weekinreview/29SCHW.html). The New York Times. Retrieved 2009-04-19. "Allan M. Siegal, a co-author of The New York Times Manual of Style and Usage and an assistant managing editor at the newspaper, said that 'there is some virtue in the theory' that Internet is becoming a generic term, 'and it would not be surprising to see the lowercase usage eclipse the uppercase within a few years.'"
[5] Wilbers, Stephen (13 September 1999). "Errors put a wall between you and your readers". Orange County Register (Santa Ana, California): p.c.20. "If you like being ahead of the game, you might prefer to spell Internet and Web as internet and web, but according to standard usage they should be capitalized. Keep in mind, however, that commonly used proper nouns sometimes lose their capital letters over time and that Internet and Web may someday go the way of the french fry."
[6] E.g. "MIT Libraries House Style" (http://libstaff.mit.edu/publications/housestyle.html). MIT Libraries Staff Web. Last updated 14 August 2008. Retrieved 2009-04-19.
[7] Donovan, Melissa (16 November 2007). "Capitalization" (http://www.writingforward.com/grammar/capitalization). Writing Forward. Retrieved 2009-04-19.
[8] MLA Handbook for Writers of Research Papers (7th ed.). New York: Modern Language Association of America. 2009. ISBN 9781603290241.
[9] "Guardian and Observer style guide" (http://www.guardian.co.uk/styleguide/i#id-3026449). Guardian News and Media Limited. Retrieved 2008-04-19. "internet, net, web, world wide web. See websites."
[10] Long, Tony (16 August 2004). "It's Just the 'internet' Now" (http://www.wired.com/culture/lifestyle/news/2004/08/64596). Wired. Retrieved 2009-04-19. "... what the internet is: another medium for delivering and receiving information."


External links
Internet, Web, and Other Post-Watergate Concerns (http://www.chicagomanualofstyle.org/CMS_FAQ/InternetWebandOtherPost-WatergateConcerns/InternetWebandOtherPost-WatergateConcerns16.html), The Chicago Manual of Style

Internet Protocol Suite


The Internet protocol suite is the set of communications protocols used for the Internet and similar networks, and generally the most popular protocol stack for wide area networks. It is commonly known as TCP/IP, because of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first networking protocols defined in this standard. It is occasionally known as the DoD model, due to the foundational influence of the ARPANET in the 1970s (operated by DARPA, an agency of the United States Department of Defense). TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination. It has four abstraction layers, each with its own protocols.[1][2] From lowest to highest, the layers are:
1. The link layer (commonly Ethernet) contains communication technologies for a local network.
2. The internet layer (IP) connects local networks, thus establishing internetworking.
3. The transport layer (TCP) handles host-to-host communication.
4. The application layer (for example HTTP) contains all protocols for specific data communications services on a process-to-process level (for example how a web browser communicates with a web server).

The TCP/IP model and related protocols are maintained by the Internet Engineering Task Force (IETF).
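A minimal sketch of how this division of labor looks from an application's point of view: the program below is ordinary application-layer code, and everything beneath the socket API (TCP transport, IP internetworking, the link) is supplied by the operating system's stack. The loopback echo server stands in for a real remote peer; the payload is an arbitrary HTTP-style example.

```python
import socket
import threading

def echo_once(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(4096))  # echo the application data back

# Application layer: this program. AF_INET selects the internet layer (IPv4),
# SOCK_STREAM selects TCP at the transport layer; the OS supplies both, plus
# the link layer underneath.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

msg = b"GET / HTTP/1.0\r\n\r\n"       # an application-layer payload
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(msg)
    reply = b""
    while len(reply) < len(msg):      # TCP delivers an in-order byte stream
        reply += c.recv(4096)
srv.close()
```

The application never sees packets, routes, or frames; it only reads and writes bytes, which is precisely the abstraction the four layers exist to provide.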

History
Early research
[Figure: Diagram of the first internetworked connection]
The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the


hosts became responsible. Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The network's design included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using a simple design, it became possible to connect almost any network to the ARPANET, irrespective of its local characteristics, thereby solving Kahn's initial problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string." As a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested. A computer called a router is provided with an interface to each network. It forwards packets back and forth between them.[3] Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.
[Figure: A Stanford Research Institute packet radio van, site of the first three-way internetworked transmission]

Specification
From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification.[4] A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4. The last protocol is still in use today. In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on flag day, January 1, 1983, when the new protocols were permanently activated.[5]

Adoption
In March 1982, the US Department of Defense declared TCP/IP the standard for all military computer networking.[6] In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. Also in 1985, the first Interop conference was held, focusing on network interoperability via further adoption of TCP/IP. It was founded by Dan Lynch, an early Internet activist. From the beginning, it was attended by large corporations, such as IBM and DEC. Interoperability conferences have been held every year since then; every year from 1985 through 1993, the number of attendees tripled. IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing internal protocols (SNA, XNS, etc.). In IBM, from 1984, Barry Appelman's group did TCP/IP development. (Appelman later moved to AOL to be the head of all its development efforts.) They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and MS Windows.[7] The first VM/CMS TCP/IP stack came from the University of Wisconsin.[8]

Back then, most of these TCP/IP stacks were written single-handedly by a few talented programmers. For example, John Romkey of FTP Software was the author of the MIT PC/IP package, the first IBM PC TCP/IP stack.[9] Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively.[10] The spread of TCP/IP was fueled further in June 1989, when AT&T agreed to put into the public domain the TCP/IP code developed for UNIX. Various vendors, including IBM, included this code in their own TCP/IP stacks. Many companies sold TCP/IP stacks for Windows until Microsoft released its own TCP/IP stack in Windows 95. This event came rather late in the evolution of the Internet, but it cemented TCP/IP's dominance over other protocols, which eventually disappeared. These protocols included IBM's SNA, OSI, Microsoft's native NetBIOS, and Xerox's XNS.


Key architectural principles


An early architectural document, RFC 1122, emphasizes architectural principles over layering.[11] End-to-end principle: This principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.[12] Robustness Principle: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." [13] "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features." [14]
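The Robustness Principle can be illustrated with a toy header codec. The "Name: value" format below is invented for illustration (it is not any real protocol): the sender emits only one canonical form, while the receiver also accepts sloppy input whose meaning is still clear.

```python
def emit_header(name: str, value: str) -> str:
    # Conservative sender: exactly one canonical, well-formed shape.
    return f"{name.strip().title()}: {value.strip()}\r\n"

def parse_header(line: str) -> tuple[str, str]:
    # Liberal receiver: tolerate a missing CRLF, stray whitespace, and any
    # letter case -- deviations whose meaning is still unambiguous.
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

assert emit_header("content-type", "text/plain") == "Content-Type: text/plain\r\n"
assert parse_header("CONTENT-TYPE :  text/plain  ") == ("content-type", "text/plain")
```

Note the second half of the principle in the code as well: the sender never exploits the receiver's leniency; it always transmits the strict form.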


Layers in the Internet protocol suite


The Internet protocol suite uses encapsulation to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, being further encapsulated at each level. The "layers" of the protocol suite near the top are logically closer to the user application, while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction to isolate upper layer protocols from the nitty-gritty detail of transmitting bits over, for example, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol. Even when the layers are examined, the assorted architectural documents have fewer and less rigidly defined layers than the OSI model (there is no single TCP/IP architectural model comparable to ISO 7498, the Open Systems Interconnection (OSI) model), and thus provide an easier fit for real-world protocols. In point of fact, one frequently referenced document, RFC 1958, does not contain a stack of layers; the lack of emphasis on layering is a strong difference between the IETF and OSI approaches. It only refers to the existence of the "internetworking layer" and generally to "upper layers"; this document was intended as a 1996 "snapshot" of the architecture: "The Internet and its

[Figure: Two Internet hosts connected via two routers and the corresponding layers used at each hop. The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. Every other detail of the communication is hidden from each process. The underlying mechanisms that transmit data between the host computers are located in the lower protocol layers.]

[Figure: Encapsulation of application data descending through the layers described in RFC 1122]

architecture have grown in evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this process of evolution is one of the main reasons for the technology's success, it nevertheless seems useful to record a snapshot of the current principles of the Internet architecture." RFC 1122, entitled Host Requirements, is structured in paragraphs referring to layers, but the document refers to many other architectural principles not emphasizing layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:
Application layer (process-to-process): This is the scope within which applications create user data and communicate this data to other processes or applications on another or the same host. The communications partners are often called peers. This is where the "higher level" protocols such as SMTP, FTP, SSH, HTTP, etc. operate.
Transport layer (host-to-host): The transport layer constitutes the networking regime between two network hosts, either on the local network or on remote networks separated by routers. The transport layer provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. This is where flow control, error correction, and connection protocols exist, such as TCP. This layer deals with opening and maintaining connections between Internet hosts.
Internet layer (internetworking): The internet layer has the task of exchanging datagrams across network boundaries. It is therefore also referred to as the layer that establishes internetworking; indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next IP router that has the connectivity to a network closer to the final data destination.
Link layer: This layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer describes the protocols used to describe the local network topology and the interfaces needed to effect transmission of Internet layer datagrams to next-neighbor hosts (cf. the OSI data link layer).
The Internet protocol suite and the layered protocol stack design were in use before the OSI model was established. Since then, the TCP/IP model has been compared with the OSI model in books and classrooms, which often results in confusion because the two models use different assumptions, including about the relative importance of strict layering. This abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to, provide. Again, the original OSI model was extended to include connectionless services (OSIRM CL).[15] For example, IP is not designed to be reliable and is a best-effort delivery protocol. This means that all transport layer implementations must choose whether or not to provide reliability and to what degree. UDP provides data integrity (via a checksum) but does not guarantee delivery; TCP provides both data integrity and delivery guarantee (by retransmitting until the receiver acknowledges the reception of the packet). This model lacks the formalism of the OSI model and associated documents, but the IETF does not use a formal model and does not consider this a limitation, as in the comment by David D. Clark, "We reject: kings, presidents and voting. We believe in: rough consensus and running code." Criticisms of this model, which have been made with respect to the OSI model, often do not consider ISO's later extensions to that model:
1. For multiaccess links with their own addressing systems (e.g. Ethernet) an address mapping protocol is needed. Such protocols can be considered to be below IP but above the existing link system.
While the IETF does not use the terminology, this is a subnetwork-dependent convergence facility according to an extension to the OSI model, the internal organization of the network layer (IONL).[16]
2. ICMP & IGMP operate on top of IP but do not transport data like UDP or TCP. Again, this functionality exists as layer management extensions to the OSI model, in its Management Framework (OSIRM MF).[17]


3. The SSL/TLS library operates above the transport layer (it uses TCP) but below application protocols. Again, there was no intention, on the part of the designers of these protocols, to comply with OSI architecture.
4. The link is treated like a black box here. This is fine for discussing IP (since the whole point of IP is that it will run over virtually anything). The IETF explicitly does not intend to discuss transmission systems, which is a less academic but practical alternative to the OSI model.
The following is a description of each layer in the TCP/IP networking model, starting from the lowest level.


Link layer
The link layer is the networking scope of the local network connection to which a host is attached. This regime is called the link in Internet literature. This is the lowest component layer of the Internet protocols, as TCP/IP is designed to be hardware independent. As a result, TCP/IP can be implemented on top of virtually any hardware networking technology. The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on a given link can be controlled in the software device driver for the network card, as well as in firmware or specialized chipsets. These perform data link functions such as adding a packet header to prepare it for transmission, then actually transmitting the frame over a physical medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to data link addressing, such as Media Access Control (MAC); however, all other aspects below that level are implicitly assumed to exist in the link layer, but are not explicitly defined. This is also the layer where packets may be selected to be sent over a virtual private network or other networking tunnel. In this scenario, the link layer data may be considered application data which traverses another instantiation of the IP stack for transmission or reception over another IP connection. Such a connection, or virtual link, may be established with a transport protocol or even an application scope protocol that serves as a tunnel in the link layer of the protocol stack. Thus, the TCP/IP model does not dictate a strict hierarchical encapsulation sequence.
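The layered encapsulation discussed above can be sketched with invented fixed-size headers. Real Ethernet, IP and TCP headers are more elaborate; only the nesting is the point, and every field value below is an arbitrary example.

```python
import struct

payload = b"GET / HTTP/1.0\r\n\r\n"                  # application data
segment = struct.pack("!HH", 49152, 80) + payload    # + toy transport header (src/dst port)
packet = (struct.pack("!4s4s", bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
          + segment)                                 # + toy internet header (src/dst address)
frame = struct.pack("!6s6s", b"\xaa" * 6, b"\xbb" * 6) + packet  # + toy link header (src/dst MAC)

# Each layer treats everything handed down from above as opaque payload:
assert frame[12:] == packet
assert packet[8:] == segment
assert segment[4:] == payload
```

Descending the stack prepends a header at each layer; ascending it on the receiving host strips them off in the opposite order.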

Internet layer
The internet layer has the responsibility of sending packets across potentially multiple networks. Internetworking requires sending data from the source network to the destination network. This process is called routing.[18] In the Internet protocol suite, the Internet Protocol performs two basic functions: Host addressing and identification: This is accomplished with a hierarchical addressing system (see IP address). Packet routing: This is the basic task of sending packets of data (datagrams) from source to destination by sending them to the next network node (router) closer to the final destination. The internet layer is not only agnostic of application data structures at the transport layer, but it also does not distinguish between operation of the various transport layer protocols. So, IP can carry data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively. Some of the protocols carried by IP, such as ICMP (used to transmit diagnostic information about IP transmission) and IGMP (used to manage IP Multicast data), are layered on top of IP but perform internetworking functions. This illustrates the differences in the architecture of the TCP/IP stack of the Internet and the OSI model. The internet layer only provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding the transport layer datagrams to an appropriate next-hop router for further relaying to its destination. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to

locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated by the standardization of Internet Protocol version 6 (IPv6) in 1998; production implementations began to appear in approximately 2006.
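The two facts above can be made concrete: the 32-bit IPv4 address space works out to about four billion hosts, and it is the one-byte protocol field in the IPv4 header that lets IP carry ICMP, IGMP, TCP, UDP and the rest. The snippet hand-builds a minimal 20-byte IPv4 header for illustration (the addresses are arbitrary examples, and the checksum is left zero, so this is not a sendable packet).

```python
import socket
import struct

print(2 ** 32)  # the IPv4 address space: 4294967296, roughly four billion

header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,              # version 4, header length 5 * 32-bit words
    0,                         # type of service
    20,                        # total length (header only, no payload)
    0, 0,                      # identification, flags/fragment offset
    64,                        # time to live
    socket.IPPROTO_ICMP,       # protocol field: 1 = ICMP (IGMP would be 2)
    0,                         # header checksum (omitted in this sketch)
    socket.inet_aton("10.0.0.1"),   # source address
    socket.inet_aton("10.0.0.2"),   # destination address
)
assert len(header) == 20
assert header[9] == socket.IPPROTO_ICMP == 1   # byte 9 carries the protocol number
```

A router forwarding this datagram never inspects anything past the IP header; the protocol field only matters to the destination host, which uses it to hand the payload to the right upper-layer protocol.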

127

Transport layer
The transport layer establishes host-to-host connectivity, meaning it handles the details of data transmission that are independent of the structure of user data and the logistics of exchanging information for any particular specific purpose. Its responsibility includes end-to-end message transfer independent of the underlying network, along with error control, segmentation, flow control, congestion control, and application addressing (port numbers). End-to-end message transmission or connecting applications at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The transport layer can be thought of as a transport mechanism, e.g., a vehicle with the responsibility to make sure that its contents (passengers/goods) reach their destination safely and soundly, unless another protocol layer is responsible for safe delivery. The layer simply establishes a basic data channel that an application uses in its task-specific data exchange. For this purpose the layer establishes the concept of the port, a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service announcements or directory services. Since IP provides only a best-effort delivery, the transport layer is the first layer of the TCP/IP stack to offer reliability. (IP can, however, run over a reliable data link protocol such as the High-Level Data Link Control, HDLC.) For example, TCP is a connection-oriented protocol that addresses numerous reliability issues to provide a reliable byte stream:
data arrives in order
data has minimal error (i.e., correctness)
duplicate data is discarded
lost or discarded packets are resent
traffic congestion control is included

The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.

The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, "unreliable" protocol. Reliability is addressed through error detection using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. The Real-time Transport Protocol (RTP) is a datagram protocol that is designed for real-time data such as streaming audio and video.

The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications. (See List of TCP and UDP port numbers.)
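The contrast with connection-oriented TCP can be seen in a minimal UDP sketch (again Python on the loopback interface; on a real network either datagram could simply be lost, since UDP offers no retransmission):

```python
# Connectionless UDP: each sendto() is an independent, best-effort datagram;
# there is no handshake and no delivery guarantee (loopback used here).
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))         # OS picks an ephemeral port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.settimeout(5)
sender.sendto(b"query", addr)           # one datagram, no connection setup

data, peer = receiver.recvfrom(1024)
receiver.sendto(b"response", peer)      # reply to the sender's source port

reply, _ = sender.recvfrom(1024)
sender.close()
receiver.close()
```

This query/response shape, one small datagram each way with no setup cost, is exactly the pattern that makes UDP attractive for DNS lookups.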


Application layer
The application layer contains the higher-level protocols used by most applications for network communication. Examples of application layer protocols include the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP).[19] Data coded according to application layer protocols are encapsulated into one or (occasionally) more transport layer protocols (such as TCP or UDP), which in turn use lower layer protocols to effect actual data transfer.

Since the IP stack defines no layers between the application and transport layers, the application layer must include any protocols that act like the OSI's presentation and session layer protocols. This is usually done through libraries. Application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate, although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers.

As noted above, layers are not necessarily clearly defined in the Internet protocol suite. Application layer protocols are most often associated with client-server applications, and the most common servers have specific ports assigned to them by the IANA: HTTP has port 80; Telnet has port 23; etc. Clients, on the other hand, tend to use ephemeral ports, i.e. port numbers assigned at random from a range set aside for the purpose.

Transport and lower-level layers are largely unconcerned with the specifics of application layer protocols. Routers and switches do not typically "look inside" the encapsulated traffic to see what kind of application protocol it represents; rather, they just provide a conduit for it. However, some firewall and bandwidth-throttling applications do try to determine what is inside, as with the Resource Reservation Protocol (RSVP).
It is also sometimes necessary for Network Address Translation (NAT) facilities to take account of the needs of particular application layer protocols. (NAT allows hosts on private networks to communicate with the outside world via a single visible IP address using port forwarding, and is an almost ubiquitous feature of modern domestic broadband routers.)
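The split between IANA-assigned server ports and OS-assigned client ports is visible directly from the sockets API. A small sketch, assuming a system with a standard services database (e.g. /etc/services):

```python
# Well-known vs. ephemeral ports: getservbyname() consults the system's
# services database for IANA-assigned ports, while binding to port 0 asks
# the OS for an ephemeral port from its reserved range.
import socket

http_port = socket.getservbyname("http")      # IANA well-known port for HTTP
telnet_port = socket.getservbyname("telnet")  # and for Telnet

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))                      # client-style ephemeral binding
ephemeral_port = s.getsockname()[1]
s.close()
```

A client never needs to know its own ephemeral port in advance; the server learns it from the incoming connection, which is why only the server side needs a standardized number.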

Layer names and number of layers in the literature


The following table shows various networking models. The number of layers varies between three and seven.
- Kurose,[20] Forouzan[21] (five layers; "five-layer Internet model" or "TCP/IP protocol suite"): Application, Transport, Network, Data link, Physical
- Comer,[22] Kozierok[23] (four+one layers; "TCP/IP 5-layer reference model"): Application, Transport, Internet, Data link (Network interface), (Hardware)
- Stallings[24] (five layers; "TCP/IP model"): Application, Host-to-host or transport, Internet, Network access, Physical
- Tanenbaum[25] (five layers; "TCP/IP 5-layer reference model"): Application, Transport, Internet, Data link, Physical
- RFC 1122, Internet STD 3 (1989) (four layers; "Internet model"): Application, Transport, Internet, Link
- Cisco Academy[26] (four layers; "Internet model"): Application, Transport, Internetwork, Network interface
- Mike Padlipsky's 1982 "Arpanet Reference Model" (RFC 871) (three layers; "Arpanet reference model"): Application/Process, Host-to-host, Network interface
- OSI model (seven layers): Application, Presentation, Session, Transport, Network, Data link, Physical

Some of the networking models are from textbooks, which are secondary sources that may contravene the intent of RFC 1122 and other IETF primary sources.[27]


OSI and TCP/IP layering differences


The three top layers in the OSI model (the application layer, the presentation layer and the session layer) are not distinguished separately in the TCP/IP model, where there is just the application layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can run safely over the best-effort UDP transport.

Different authors have interpreted the RFCs differently, about whether the link layer (and the TCP/IP model) covers OSI model layer 1 (physical layer) issues, or whether a hardware layer is assumed below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model, since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.

The session layer roughly corresponds to the Telnet virtual terminal functionality, which is part of text-based protocols such as the HTTP and SMTP TCP/IP model application layer protocols. It also corresponds to TCP and UDP port numbering, which is considered part of the transport layer in the TCP/IP model. Some functions that would have been performed by an OSI presentation layer are realized at the Internet application layer using the MIME standard, which is used in application layer protocols such as HTTP and SMTP.

The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers.
The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant. RFC 3439, addressing Internet architecture, contains a section entitled: "Layering Considered Harmful".[27] Conflicts are apparent also in the original OSI model, ISO 7498, when not considering the annexes to this model (e.g., ISO 7498/4 Management Framework), or the ISO 8648 Internal Organization of the Network layer (IONL). When the IONL and Management Framework documents are considered, the ICMP and IGMP are neatly defined as layer management protocols for the network layer. In like manner, the IONL provides a structure for "subnetwork dependent convergence facilities" such as ARP and RARP. IETF protocols can be encapsulated recursively, as demonstrated by tunneling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunneling at the network layer.
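The recursive encapsulation described above can be sketched abstractly; the header strings below are invented stand-ins for illustration, not real IP or GRE packet formats:

```python
# Toy sketch of recursive encapsulation as used by tunneling protocols such
# as GRE: each layer prepends its own header, and unwrapping reverses the
# order. The "headers" are labels, not real wire formats.
def encapsulate(payload: bytes, header: bytes) -> bytes:
    return header + payload

packet = b"application-data"
for header in (b"TCP|", b"IP|", b"GRE|", b"IP|"):   # inner to outer
    packet = encapsulate(packet, header)

# The outer IP packet now carries a GRE payload that is itself an IP packet.
print(packet)  # b'IP|GRE|IP|TCP|application-data'
```

The point of the sketch is that nothing in the scheme limits how deep the nesting can go, which is why the IETF's protocols compose recursively rather than fitting a fixed layer count.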

Implementations
No specific hardware or software implementation is required by the protocols or the layered model, so there are many. Most computer operating systems in use today, including all consumer-targeted systems, include a TCP/IP implementation. A minimally acceptable implementation includes the following protocols, listed from most essential to least essential: IP, ARP, ICMP, UDP, TCP and sometimes IGMP. In principle, it is possible to support only one transport protocol, such as UDP, but this is rarely done, because it limits usage of the whole implementation. IPv6, beyond its own version of ARP (NDP), ICMP (ICMPv6) and IGMP (IGMPv6), has some additional required functions, and often is accompanied by an integrated IPSec security layer. Other protocols could be easily added later (possibly being implemented entirely in userspace), such as DNS for resolving domain names to IP addresses, or DHCP for automatically configuring network interfaces. Normally, application programmers are concerned only with interfaces in the application layer and often also in the transport layer, while the layers below are services provided by the TCP/IP stack in the operating system. Most IP

implementations are accessible to programmers through sockets and APIs. Unique implementations include Lightweight TCP/IP, an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines. Microcontroller firmware in the network adapter typically handles link issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical components below the link layer, typically using an application-specific integrated circuit (ASIC) chipset for each network interface or other physical standard. High-performance routers are to a large extent based on fast non-programmable digital electronics, carrying out link-level switching.
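As a small illustration of the socket API boundary, the following sketch queries the host's stack for concrete endpoint addresses ("localhost" and port 80 are arbitrary example inputs):

```python
# The sockets API is the usual programmer-facing surface of the OS TCP/IP
# stack: getaddrinfo() asks the stack to turn a host name and port into
# endpoint addresses, hiding whether IPv4 or IPv6 is used underneath.
import socket

infos = socket.getaddrinfo("localhost", 80, type=socket.SOCK_STREAM)
for family, socktype, proto, canonname, sockaddr in infos:
    print(family, sockaddr)   # e.g. AF_INET ('127.0.0.1', 80)
```

An application written against this interface does not change when the stack beneath it gains IPv6 support; it simply starts receiving additional address results.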


References
[1] RFC 1122, Requirements for Internet Hosts - Communication Layers, R. Braden (ed.), October 1989
[2] RFC 1123, Requirements for Internet Hosts - Application and Support, R. Braden (ed.), October 1989
[3] RFC 1812, Requirements for IP Version 4 Routers, F. Baker (June 1995)
[4] RFC 675, Specification of Internet Transmission Control Protocol, V. Cerf et al. (December 1974)
[5] Internet History (http://www.livinginternet.com/i/ii.htm)
[6] Ronda Hauben, "From the ARPANET to the Internet" (http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt), TCP Digest (UUCP), retrieved 2007-07-05

[7] Wollongong (http://support.microsoft.com/kb/108007)
[8] A Short History of Internet Protocols at CERN (http://www.weblab.isti.cnr.it/education/ssfs/lezioni/slides/archives/cern.htm)
[9] About | "romkey" (http://www.romkey.com/about/)
[10] Barry Appelman
[11] Architectural Principles of the Internet (ftp://ftp.rfc-editor.org/in-notes/rfc1958.txt), RFC 1958, B. Carpenter, June 1996
[12] Rethinking the design of the Internet: The end to end arguments vs. the brave new world (http://www.csd.uoc.gr/~hy558/papers/Rethinking_2001.pdf), Marjory S. Blumenthal, David D. Clark, August 2001
[13] Internet Protocol, DARPA Internet Program Protocol Specification, p. 23, September 1981, Jon Postel (ed.) (http://www.ietf.org/rfc/rfc0791.txt?number=791)
[14] Requirements for Internet Hosts - Communication Layers, p. 13, October 1989, R. Braden (ed.) (http://tools.ietf.org/html/rfc1122#page-12)
[15] OSI: Reference Model Addendum 1: Connectionless-mode Transmission, ISO 7498/AD1, May 1986
[16] Information processing systems - Open Systems Interconnection - Internal organization of the Network Layer (http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=16011), ISO 8648:1988
[17] Information processing systems - Open Systems Interconnection - Basic Reference Model - Part 4: Management framework (http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=14258), ISO 7498-4:1989
[18] IP Packet Structure (http://www.comsci.us/datacom/ippacket.html)
[19] W. Richard Stevens, TCP/IP Illustrated: The Protocols (http://www.kohala.com/start/tcpipiv1.html), February 1994, ISBN 0-201-63346-9
[20] James F. Kurose, Keith W. Ross, Computer Networking: A Top-Down Approach, 2008, ISBN 0-321-49770-8
[21] Behrouz A. Forouzan, Data Communications and Networking, 2003
[22] Douglas E. Comer, Internetworking with TCP/IP: Principles, Protocols and Architecture, Pearson Prentice Hall, 2005, ISBN 0-13-187671-6
[23] Charles M. Kozierok, The TCP/IP Guide, No Starch Press, 2005
[24] William Stallings, Data and Computer Communications, Prentice Hall, 2006, ISBN 0-13-243310-9
[25] Andrew S. Tanenbaum, Computer Networks, Prentice Hall, 2002, ISBN 0-13-066102-3
[26] Mark Dye, Mark A. Dye, Wendell, Network Fundamentals: CCNA Exploration Companion Guide, 2007, ISBN 1-58713-208-7



[27] R. Bush; D. Meyer (December 2002), Some Internet Architectural Guidelines and Philosophy (http://www.ietf.org/rfc/rfc3439.txt), Internet Engineering Task Force


Further reading
Douglas E. Comer. Internetworking with TCP/IP - Principles, Protocols and Architecture. ISBN 86-7991-142-9
Joseph G. Davies and Thomas F. Lee. Microsoft Windows Server 2003 TCP/IP Protocols and Services. ISBN 0-7356-1291-9
Forouzan, Behrouz A. (2003). TCP/IP Protocol Suite (2nd ed.). McGraw-Hill. ISBN 0-07-246060-1
Craig Hunt. TCP/IP Network Administration. O'Reilly (1998). ISBN 1-56592-322-7
Maufer, Thomas A. (1999). IP Fundamentals. Prentice Hall. ISBN 0-13-975483-0
Ian McLean. Windows(R) 2000 TCP/IP Black Book. ISBN 1-57610-687-X
Ajit Mungale. Pro .NET 1.1 Network Programming. ISBN 1-59059-345-6
W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols. ISBN 0-201-63346-9
W. Richard Stevens and Gary R. Wright. TCP/IP Illustrated, Volume 2: The Implementation. ISBN 0-201-63354-X
W. Richard Stevens. TCP/IP Illustrated, Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. ISBN 0-201-63495-3
Andrew S. Tanenbaum. Computer Networks. ISBN 0-13-066102-3
Clark, D. (1988). "The Design Philosophy of the DARPA Internet Protocols" (http://www.cs.princeton.edu/~jrex/teaching/spring2005/reading/clark88.pdf). SIGCOMM '88 Symposium proceedings on Communications architectures and protocols (ACM): 106-114. doi:10.1145/52324.52336. Retrieved 2011-10-16.

External links
Internet History (http://www.livinginternet.com/i/ii.htm) - Pages on Robert Kahn, Vinton Cerf, and TCP/IP (reviewed by Cerf and Kahn)
RFC 675 (http://www.ietf.org/rfc/rfc0675.txt) - Specification of Internet Transmission Control Program, December 1974 Version
TCP/IP State Transition Diagram (http://www.night-ray.com/TCPIP_State_Transition_Diagram.pdf) (PDF)
RFC 1180 - A TCP/IP Tutorial, from the Internet Engineering Task Force (January 1991)
TCP/IP FAQ (http://www.itprc.com/tcpipfaq/)
The TCP/IP Guide (http://www.tcpipguide.com/free/) - A comprehensive look at the protocols and the procedures/processes involved
A Study of the ARPANET TCP/IP Digest (http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt)
TCP/IP Sequence Diagrams (http://www.eventhelix.com/RealtimeMantra/Networking/)
The Internet in Practice (http://www.searchandgo.com/articles/internet/internet-practice-4.php)
TCP/IP - Directory & Informational Resource (http://softtechinfo.com/network/tcpip.html)
Daryl's TCP/IP Primer (http://www.ipprimer.com/) - Intro to TCP/IP LAN administration, conversational style
Introduction to TCP/IP (http://www.linux-tutorial.info/MContent-142)
TCP/IP commands from command prompt (http://blog.webgk.com/2007/10/dns-tcpip-commands-from-command-prompt.html)
cIPS (http://sourceforge.net/projects/cipsuite/) - Robust TCP/IP stack for embedded devices without an operating system


Internet access
Internet access is the means by which individual terminals, computers, mobile devices, and local area networks are connected to the global Internet. Internet access is usually sold by Internet Service Providers (ISPs) that use many different technologies offering a wide range of data rates to the end user. Consumer use first became popular through dial-up connections in the 1980s and 1990s. By the first decade of the 21st century, many consumers had switched away from dial-up to dedicated connections, most Internet access products were being marketed using the term "broadband", and broadband penetration was being treated as a key economic indicator.[1][2]

History
The Internet began as a network funded by the U.S. government to support projects within the government and at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies.[3][4][5] Use by a wider audience only came in 1995, when restrictions on the use of the Internet to carry commercial traffic were lifted.[6]

In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s and grew to support 100 and 1000 Mbit/s, while modem data rates grew from 1200 and 2400 bit/s in the 1980s to 28 and 56 kbit/s by the mid to late 1990s. Initially dial-up connections were made from terminals or computers running terminal emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers (NASs) supporting the Serial Line Internet Protocol (SLIP) and later the Point-to-Point Protocol (PPP) extended the Internet protocols and made the full range of Internet services available to dial-up users, subject only to limitations imposed by the lower data rates available using dial-up.

Broadband Internet access, often shortened to just broadband and also known as high-speed Internet access, refers to services that provide bit rates considerably higher than those available using a 56 kbit/s modem. In the U.S. National Broadband Plan of 2009, the Federal Communications Commission (FCC) defined broadband access as "Internet access that is always on and faster than the traditional dial-up access",[7] although the FCC has defined it differently through the years.[8] The term broadband was originally a reference to multi-frequency communication, as opposed to narrowband or baseband. Broadband is now a marketing term that telephone, cable, and other companies use to sell their more expensive higher data rate products.[9] Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and the connection does not tie up a telephone line.[10] Broadband provides improved access to Internet services such as:
- Faster World Wide Web browsing
- Faster downloading of documents, photographs, videos, and other large files
- Telephony, radio, television, and videoconferencing
- Virtual private networks and remote system administration
- Online gaming, especially massively multiplayer online role-playing games, which are interaction-intensive

In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue.[11] In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries[1] and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the numbers of subscriptions were roughly equal at about 130 million each. In 2010, in the OECD countries, over 90% of the Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.[12]

The broadband technologies in widest use are ADSL and cable Internet access. Newer technologies include VDSL and optical fibre extended closer to the subscriber in both telephone and cable plants. Fibre-optic communication, while only recently being used in fibre-to-the-premises and fibre-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology.

In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless and satellite Internet are often used in rural, undeveloped, or other hard-to-serve areas where wired Internet is not readily available. Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless, e.g., Motorola Canopy. Starting in roughly 2006, mobile broadband access has been increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.


Availability
In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for connecting users' laptops to local area networks (LANs). Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee-based. Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks. Mobile broadband access allows smart phones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made.

Data rates
Data rate units (SI):

Bit rates:
- kilobit per second (kbit/s) = 1,000 bit/s = 125 bytes/s
- megabit per second (Mbit/s) = 1,000 kbit/s = 125 kB/s
- gigabit per second (Gbit/s) = 1,000 Mbit/s = 125 MB/s
- terabit per second (Tbit/s) = 1,000 Gbit/s = 125 GB/s
- petabit per second (Pbit/s) = 1,000 Tbit/s = 125 TB/s

Byte rates:
- kilobyte per second (kB/s) = 8,000 bit/s = 1,000 bytes/s
- megabyte per second (MB/s) = 8,000 kbit/s = 1,000 kB/s
- gigabyte per second (GB/s) = 8,000 Mbit/s = 1,000 MB/s
- terabyte per second (TB/s) = 8,000 Gbit/s = 1,000 GB/s
- petabyte per second (PB/s) = 8,000 Tbit/s = 1,000 TB/s
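These conversions amount to two constant factors, as a brief sketch shows (the function names are illustrative helpers, not a standard API):

```python
# SI data-rate conversions: SI prefixes scale by 1,000 and 1 byte = 8 bits.
def kbit_per_s_to_bit_per_s(kbit_s):
    return kbit_s * 1_000

def mbit_per_s_to_kB_per_s(mbit_s):
    # 1 Mbit/s = 1,000,000 bit/s = 125,000 bytes/s = 125 kB/s
    return mbit_s * 125

dialup_bits = kbit_per_s_to_bit_per_s(56)       # a V.90 dial-up modem
basic_broadband_kB = mbit_per_s_to_kB_per_s(4)  # FCC 2010 downstream floor
```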

The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection to between 220 kbit/s (V.42bis) and 320 kbit/s (V.44).[13] However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s.[14]

Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s.[15] In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate, which ranged from about 1.5 to 2 Mbit/s.[16] A 2006 Organization for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s.[1] And in 2010 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 4 Mbit/s downstream (from the Internet to the user's computer) and 1 Mbit/s upstream (from the user's computer to the Internet).[17] The trend is to raise the threshold of the broadband definition as higher data rate services become available.[18]

The higher data rate dial-up modems and many broadband services are "asymmetric", supporting much higher data rates for download (toward the user) than for upload (toward the Internet). Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate.
In practice, these maximum data rates are not always reliably available to the customer.[19] Actual end-to-end data rates can be lower due to a number of factors.[20] Physical link quality can vary with distance and, for wireless access, with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used, not just on the first or last link providing Internet access to the end-user.

Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms that automatically throttle back on the bandwidth being used during periods of network congestion. This is fair in the sense that all users that experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable.

When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks.
However, overuse can lead to concerns about fairness and network neutrality or even charges of censorship, when some types of traffic are severely or completely blocked.


Technologies
Access technologies generally use a modem, which converts digital data to analog for transmission over analog networks such as the telephone and cable networks.[10]

Local Area Networks


Local area networks (LANs) provide Internet access to computers and other devices in a limited area such as a home, school, computer laboratory, or office building, usually at relatively high data rates that typically range from 10 to 1000 Mbit/s.[21] There are wired and wireless LANs. Ethernet over twisted-pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past. Most Internet access today is through a LAN, often a very small LAN with just one or two devices attached. While LANs are an important form of Internet access, this raises the question of how, and at what data rate, the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections.

Dial-up access
Dial-up access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO), where it is switched to another phone line that connects to another modem at the remote end of the connection.[22] Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no infrastructure beyond the already existing telephone network. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (toward the end user) and 34 or 48 kbit/s upstream (toward the global Internet).[10]
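The practical effect of a 56 kbit/s ceiling can be seen with a little arithmetic, sketched here as an idealized calculation that ignores protocol overhead and compression:

```python
# At 56 kbit/s a modem moves at most 56,000 bits per second, so even a
# one-megabyte file takes minutes (real-world throughput is usually lower).
def transfer_seconds(size_bytes, rate_bit_per_s):
    return size_bytes * 8 / rate_bit_per_s

dialup_seconds = transfer_seconds(1_000_000, 56_000)       # ~143 s over dial-up
broadband_seconds = transfer_seconds(1_000_000, 10_000_000)  # under 1 s at 10 Mbit/s
```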

Broadband access
The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. These technologies use wires or fiber-optic cables, in contrast to wireless broadband, described later.

Multilink dial-up

Multilink dial-up provides increased bandwidth by bonding two or more dial-up connections together and treating them as a single data channel.[23] It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking; of course, any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL, and other technologies became available. Diamond and other vendors created special modems to support multilinking.[24]

Integrated Services Digital Network (ISDN)

Integrated Services Digital Network (ISDN), a switched telephone service capable of transporting voice and digital data, is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies.[25]

Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s.

Leased lines

Leased lines are dedicated lines used primarily by ISPs, business, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers. Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created.[26]

T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0) to 1.5 Mbit/s (DS1 or T1) to 45 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic, or use all 24 channels for clear-channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 and 1,500 kbit/s.
T-carrier lines require special termination equipment that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP.[27] In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s) on an E1 (2.0 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.4 Mbit/s).

Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high data rate digital bit streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical), which carries 155.520 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads, each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four, providing OC-12c (622.080 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams.[26]

The 1, 10, 40, and 100 Gigabit Ethernet (GbE, 10GbE, 40GbE, and 100GbE) IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances to 100 m and over optical fiber at distances to 40 km.[28]

Cable Internet access

Cable Internet or cable modem access provides Internet access via Hybrid Fiber Coaxial wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. In a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end."
The cable company then connects to the Internet using a variety of means, usually fiber optic cable or digital satellite and microwave transmissions.[29] Like DSL, broadband cable provides a continuous connection with an ISP. Downstream, the direction toward the user, bit rates can be as much as 400 Mbit/s for business connections and 100 Mbit/s for residential service in some countries. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 20 Mbit/s. Broadband cable access tends to service fewer business customers because existing television cable networks tend to service residential buildings, and commercial buildings do not always include wiring for coaxial cable networks.[30] In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.[29]

Digital subscriber line (DSL, ADSL, SDSL, and VDSL)
DSL technologies:
  ADSL: ANSI T1.413 Issue 2; ITU G.992.1 (G.DMT); ITU G.992.2 (G.Lite)
  ADSL2: ITU G.992.3; ITU G.992.4; ITU G.992.3 Annex J; ITU G.992.3 Annex L
  ADSL2+: ITU G.992.5; ITU G.992.5 Annex M
  HDSL: ITU G.991.1
  HDSL2
  IDSL
  MSDSL
  PDSL
  RADSL
  SDSL
  SHDSL: ITU G.991.2
  UDSL
  VDSL: ITU G.993.1
  VDSL2: ITU G.993.2

Digital Subscriber Line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication.[10] These frequency bands are subsequently separated by filters installed at the customer's premises.

DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean Asymmetric Digital Subscriber Line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e. toward the service provider) is lower than that in the downstream direction (i.e. toward the customer), hence the designation asymmetric.[31] With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal.[32]

Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1)[33] is a DSL standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires[34] and up to 85 Mbit/s down- and upstream on coaxial cable.[35] VDSL is capable of supporting applications

such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection. VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL.[36] Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved only at a range of about 300 meters, and performance degrades as distance and loop attenuation increase.

DSL Rings

DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.[37]

Fiber to the home

Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN).[38] These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with how close to the end user the delivery on fiber comes. All of these delivery methods are similar to the hybrid fiber-coaxial (HFC) systems used to provide cable Internet access. The use of optical fiber offers much higher data rates over relatively longer distances.
Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, POTS) for final delivery to customers.[39] Australia has begun rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses.[40] Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country).[41][42][43][44]

Power-line Internet

Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission. Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low-population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s.[45]

Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all powerline protocols must detect existing usage and avoid interfering with it.[45]

Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used, so a repeater must be installed on each transformer.[45] In the U.S. a transformer serves a small cluster of one to a few houses. In Europe, it is more common for a somewhat larger transformer to serve a cluster of 10 to 100 houses. Thus a typical U.S.
city requires an order of magnitude more repeaters than a comparable European city.[46]

ATM and Frame Relay

Asynchronous Transfer Mode (ATM) and Frame Relay are wide area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates.[47][48]
While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs, and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did.
Wireless broadband access


Wireless broadband is used to provide both fixed and mobile Internet access.

Wi-Fi

Wi-Fi is the popular name for a "wireless local area network" that uses one of the IEEE 802.11 standards. It is a trademark of the Wi-Fi Alliance. Individual homes and businesses often use Wi-Fi to connect laptops and smart phones to the Internet. Wi-Fi hotspots may be found in coffee shops and various other public establishments, and Wi-Fi is also used to create campus-wide and city-wide wireless networks.[49][50][51]
Wi-Fi networks are built using one or more wireless routers called access points. "Ad hoc" computer-to-computer Wi-Fi networks are also possible. A Wi-Fi network is connected to the larger Internet using DSL, cable modem, or other Internet access technologies. Data rates range from 6 to 600 Mbit/s. Wi-Fi service range is fairly short, typically 20 to 250 meters (65 to 820 feet). Both data rate and range are quite variable, depending on the Wi-Fi protocol, location, frequency, building construction, and interference from other devices.[52] Using directional antennas and careful engineering, Wi-Fi can be extended to operate over distances of up to several kilometers; see Wireless ISP below.

Wireless ISP

Wireless ISPs typically employ low-cost 802.11 Wi-Fi radio systems to link up remote locations over great distances, but may use other higher-power radio communications systems as well. Traditional 802.11b is an unlicensed omnidirectional service designed to span between 100 and 150 meters (300 to 500 ft). By focusing the radio signal using a directional antenna, 802.11b can operate reliably over a distance of many kilometres, although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are significantly slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather, and line-of-sight problems.[53]

Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. There are currently a number of companies that provide this service.[54] Motorola Canopy and other proprietary technologies offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX.

WiMAX
WiMAX (Worldwide Interoperability for Microwave Access) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. WiMAX enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL".[55] The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates.[56] Mobility support was added in 2005, and a 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi local area network (LAN). WiMAX signals also penetrate building walls much more effectively than Wi-Fi.

Satellite broadband

Satellites can provide fixed, portable, and mobile Internet access. It is among the most expensive forms of broadband Internet access, but may be the only choice available in remote areas.[57] Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. Satellite communication typically requires a clear line of sight, will not work well through trees and other vegetation, is adversely affected by moisture, rain, and snow (known as rain fade), and may require a fairly large, carefully aimed, directional antenna.

Satellites in geostationary Earth orbit (GEO) operate in a fixed position 35,786 km (22,236 miles) above the earth's equator. Even at the speed of light (about 300,000 km/s or 186,000 miles per second), it takes a quarter of a second for a radio signal to travel from the earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds.
This latency is large when compared to other forms of Internet access, which have typical latencies ranging from 0.015 to 0.2 seconds. Long latencies can make some applications that require a real-time response, such as video conferencing, voice over IP, multiplayer games, and remote control of equipment, impracticable via satellite. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the earth's polar regions.[58] HughesNet and ViaSat are GEO systems.

Satellites in Low Earth orbit (LEO, below 2,000 km or 1,243 miles) and Medium Earth orbit (MEO, between 2,000 and 35,786 km or 1,243 and 22,236 miles) are less common, operate at lower altitudes, and are not fixed in their position above the earth. Lower altitudes allow lower latencies and make real-time interactive Internet applications feasible. LEO systems include Globalstar and Iridium. The O3b Satellite Constellation is a proposed MEO system with a latency of 125 ms. COMMStellation is a LEO system, scheduled for launch in 2015, that is expected to have a latency of just 7 ms.
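The quarter-second figure follows directly from the altitude and the speed of light; a small illustrative calculation (using the rounded constants quoted above, and ignoring switching and routing delays) shows where it comes from:

```python
# Sketch of GEO satellite propagation delay from the figures quoted above.

GEO_ALTITUDE_KM = 35_786        # geostationary altitude above the equator
SPEED_OF_LIGHT_KM_S = 299_792   # about 300,000 km/s

def propagation_delay_s(traversals: int) -> float:
    """Delay for `traversals` earth-satellite hops, ignoring routing delays."""
    return traversals * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S

print(round(propagation_delay_s(2), 3))  # earth -> satellite -> earth: ~0.239 s
print(round(propagation_delay_s(4), 3))  # full request/response round trip: ~0.477 s
```

Adding switching and routing delays on top of the four-hop round trip accounts for the 0.75 to 1.25 second totals given above.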

Mobile broadband

Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers to computers, mobile phones (called "cell phones" in North America and South Africa), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection, a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used.[59]

Roughly every ten years, new mobile phone technology and infrastructure becomes available that involves a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel frequency bandwidth in Hertz. These transitions are referred to as generations:
Second generation (2G) from 1991, the first mobile data services:
  GSM CSD (2G): 9.6 kbit/s
  GSM GPRS (2.5G): 56 to 115 kbit/s
  GSM EDGE (2.75G): up to 237 kbit/s
Third generation (3G) from 2001:
  UMTS W-CDMA: 0.4 Mbit/s down and up
  UMTS HSPA: 14.4 Mbit/s down; 5.8 Mbit/s up
  UMTS TDD: 16 Mbit/s down and up
  CDMA2000 1xRTT: 0.3 Mbit/s down; 0.15 Mbit/s up
  CDMA2000 EV-DO: 2.5 to 4.9 Mbit/s down; 0.15 to 1.8 Mbit/s up
  GSM EDGE-Evolution: 1.6 Mbit/s down; 0.5 Mbit/s up
Fourth generation (4G) from 2006:
  HSPA+: 21 to 672 Mbit/s down; 5.8 to 168 Mbit/s up
  Mobile WiMAX (802.16): 37 to 365 Mbit/s down; 17 to 376 Mbit/s up
  LTE: 100 to 300 Mbit/s down; 50 to 75 Mbit/s up
  LTE-Advanced: 100 Mbit/s moving at higher speeds; up to 1 Gbit/s stationary or moving at low speeds
  MBWA (802.20): 80 Mbit/s
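To put these rates in perspective, a short illustrative calculation (the 100 MB file size is an arbitrary assumption, not from the article) shows how long a transfer would take at a few of the listed peak rates:

```python
# Hypothetical illustration: time to transfer a 100 MB file at a few of the
# peak rates listed above. The file size is an arbitrary assumption, and
# real-world rates are lower than these peaks.

def download_seconds(rate_mbits: float, file_mb: float = 100) -> float:
    """Seconds to move `file_mb` megabytes at `rate_mbits` Mbit/s (8 bits per byte)."""
    return file_mb * 8 / rate_mbits

for label, peak in [("GPRS (2.5G)", 0.115), ("EV-DO (3G)", 4.9), ("LTE (4G)", 300.0)]:
    print(f"{label} at {peak} Mbit/s: {download_seconds(peak):.0f} s")
```

The same transfer that takes nearly two hours at the 2.5G peak rate completes in seconds at the 4G peak rate.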
The download (to the user) and upload (to the Internet) data rates given above are peak or maximum rates, and end users will typically experience lower data rates. WiMAX (described in more detail above) was originally developed to deliver fixed wireless service, with wireless mobility added in 2005. CDMA2000 EV-DO and MBWA (Mobile Broadband Wireless Access) are no longer being actively developed. In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage.[60]

Local Multipoint Distribution Service

Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz.[61] Originally designed for digital television transmission (DTV), it was conceived as a fixed wireless, point-to-multipoint technology for use in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s.[62] Distance is typically limited to about 1.5 miles (2.4 km), but links of up to 5 miles (8 km) from the base station are possible in some circumstances.[63] LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards.

Pricing
Dial-up users pay the costs for making local or long distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per-minute or traffic-based charges and connect-time limits imposed by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some wireless community networks continue the tradition of providing free Internet access.

Fixed broadband Internet access is often sold under an "unlimited" or flat-rate pricing model, with price determined by the maximum data rate chosen by the customer rather than a per-minute or traffic-based charge. Per-minute and traffic-based charges and traffic caps are common for mobile broadband Internet access.

With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly, and for some ISPs the flat-rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80 to 90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03.[64]

Some ISPs estimate that about 5% of their users consume about 50% of the total bandwidth.[65] To ensure these high-bandwidth users do not slow down the network for everyone, some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off peak" pricing, and bandwidth or traffic caps.[66][67] In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps.[65] Time Warner experimented with usage-based pricing in Beaumont, Texas.[68] An effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned.[69]
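As a hypothetical illustration of why the flat-rate model has survived despite heavy users, the estimated $0.03-per-gigabyte marginal cost quoted above can be applied to assumed monthly usage figures (the usage numbers below are illustrative assumptions, not from the article's sources):

```python
# Hypothetical illustration of the flat-rate economics described above.
# The $0.03/GB marginal cost is the 2011 estimate quoted in the text;
# the usage figures are assumptions for illustration only.

MARGINAL_COST_PER_GB = 0.03

def monthly_traffic_cost(gigabytes: float) -> float:
    """Marginal cost to an ISP of carrying a subscriber's monthly traffic."""
    return gigabytes * MARGINAL_COST_PER_GB

print(round(monthly_traffic_cost(10), 2))   # light user, 10 GB/month: $0.30
print(round(monthly_traffic_cost(500), 2))  # heavy streamer, 500 GB/month: $15.00
```

Even the hypothetical heavy user adds only a few dollars of marginal cost, which is consistent with fixed costs dominating at 80 to 90% of the total.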
Growth in number of users


Worldwide Internet users

                                      2006         2011(a)
  World population                    6.5 billion  7 billion
  Not using the Internet(b)           82%          65%
  Using the Internet(b)               18%          35%
  Users in the developing world(b)    8%           22%
  Users in the developed world(b)     10%          13%
  Users in China(b)                   2%           8%

  (a) Estimate. (b) Share of world population.
  Source: International Telecommunications Union.[60]

Internet users by region

                                        2006(b)   2011(a,b)
  Africa                                3%        13%
  Americas                              39%       56%
  Arab States                           11%       29%
  Asia and Pacific                      11%       27%
  Commonwealth of Independent States    13%       48%
  Europe                                50%       74%

  (a) Estimate. (b) Share of regional population.
  Source: International Telecommunications Union.[60]

Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.45 billion in 2011.[60] With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia,[70] Africa, Latin America, the Caribbean, and the Middle East. There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011.[71] In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.[60]
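These figures imply a steep compound growth rate. As a rough illustration (assuming, hypothetically, steady exponential growth between the 1993 and 2011 estimates):

```python
# Rough illustration only: the compound annual growth rate implied by the
# user counts above, assuming (hypothetically) steady exponential growth.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

growth = cagr(10e6, 2.45e9, 2011 - 1993)  # 10 million (1993) -> 2.45 billion (2011)
print(f"{growth:.1%}")  # about 36% per year, sustained for 18 years
```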
Digital Divide
Despite its tremendous growth, Internet access is not distributed equally within or between countries.[60][72] The digital divide refers to the gap between people with effective access to information and communications technology (ICT) and those with very limited or no access. The gap between people with Internet access and those without is one of many aspects of the digital divide.[73] Whether someone has access to the Internet can depend greatly on financial status and geographical location, as well as government policies. Low-income, rural, and minority populations have received special scrutiny as the technological "have-nots."[74]

Government policies play a tremendous role in bringing Internet access to, or limiting access for, underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011.[75] In countries such as North Korea and Cuba there is relatively little access to the Internet due to the governments' fear of the political instability that might accompany the benefits of access to the global Internet.[76] The U.S. trade embargo is another barrier limiting Internet access in Cuba.[77]

In the United States, billions of dollars have been invested in efforts to narrow the digital divide and bring Internet access to more people in low-income and rural areas. The Obama administration has continued this commitment to narrowing the digital divide through the use of stimulus funding.[74] The National Center for Education Statistics reported that 98% of all U.S.
classroom computers had Internet access in 2008, with roughly one computer with Internet access available for every three students. The percentage and ratio of students to computers was about the same for rural schools (98% and one computer for every 2.9 students).[78]

Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries 74% of households had a computer and 71% had Internet access.[60] When buying computers was legalized in Cuba in 2007, private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007).[79][80]

Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. Providing Internet access to more people in the world will allow them to take advantage of the political, social, economic, educational, and career opportunities available over the Internet.[72] Several of the 67 principles adopted at the World Summit on the Information Society convened by the United

Nations in Geneva in 2003 directly address the digital divide.[81] To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world.
Rural access
One of the great challenges for Internet access in general, and for broadband access in particular, is to provide service to potential customers in areas of low population density, such as farmers, ranchers, and small towns. In cities, where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.[82] Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.[19]

Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.[83] The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[84]

The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, fewer than 1,000 households had reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.[85]

Access as a human right


Several countries have adopted laws that make Internet access a right, by requiring the state to work to ensure that Internet access is broadly available and/or by preventing states from unreasonably restricting an individual's access to information and the Internet:

Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."[86]

Estonia: In 2000, the parliament passed a law declaring Internet access a fundamental human right and launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the 21st century.[87]

Finland: By July 2010, every person in Finland was to have the right to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications.
By 2015, access to a 100 Mbit/s connection will be a legal right.[88]

France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.[89]

Greece: Article 5A of the Constitution of Greece states that all persons have a right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion, and access to

electronically transmitted information.[90]

Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabyte per second throughout Spain.[91]

In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses, and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights:[92]

1. We, the representatives of the peoples of the world, assembled in Geneva from 10-12 December 2003 for the first phase of the World Summit on the Information Society, declare our common desire and commitment to build a people-centred, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights.

3. We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs.

The WSIS Declaration of Principles makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating:

4.
We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organisation. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers.[92]

A poll of 27,973 adults in 26 countries, including 14,306 Internet users,[93] conducted for the BBC World Service between 30 November 2009 and 7 February 2010, found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right.[94] 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion.[95]

The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly include several that bear on the question of the right to Internet access:[96]

67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an enabler of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole.
In this regard, the Special Rapporteur encourages other Special Procedures mandate holders to engage on the issue of the Internet with respect to their particular mandates. 78. While blocking and filtering measures deny users access to specific content on the Internet, States have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating


intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.

79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.

85. Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States. Each State should thus develop a concrete and effective policy, in consultation with individuals from all sections of society, including the private sector and relevant Government ministries, to make the Internet widely available, accessible and affordable to all segments of population.

These statements, opinions, and recommendations have led to the suggestion that Internet access itself is or should become a fundamental human right.[97][98]


Natural disasters and access


Natural disasters disrupt internet access in profound ways. This is important not only for telecommunication companies who own the networks and the businesses who use them, but for emergency crews and displaced citizens as well. The situation is worsened when hospitals or other buildings necessary to disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters could be put to use in planning or recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages.[99]

One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study on local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable.[100] At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted.[101] Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at network edges, where important emergency organizations such as hospitals and government agencies are mostly located.[100] Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service.[100] The company Cisco has developed a Network Emergency Response Vehicle (NERV), a truck that makes portable communications possible for emergency responders despite traditional networks being disrupted.[102]

A second way natural disasters destroy internet connectivity is by severing submarine cables, the fiber-optic cables placed on the ocean floor that provide international internet connection.
The 2006 undersea earthquake near Taiwan (Richter magnitude 7.2) cut six out of seven international cables connected to that country and caused a tsunami that wiped out one of its cable landing stations.[103][104] The impact slowed or disabled internet connection for five days within the Asia-Pacific region, as well as between the region and the United States and Europe.[105]

With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012.[106][107] AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone, so that, in theory, a natural disaster would not affect more than one availability zone.[108] This theory holds as long as human error is not added to the mix. The major storm of June 2012 only disabled the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram.[109][110]
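The multi-zone redundancy described above can be sketched as a simple failover rule. This is only an illustration of the idea, not AWS's actual mechanism; the zone names and the health map are hypothetical:

```python
def pick_serving_zone(zone_health, preferred):
    """Serve from the preferred availability zone while it is healthy;
    otherwise fail over to any other healthy zone in the region.
    Returns None only when every zone is down (a region-wide outage)."""
    if zone_health.get(preferred):
        return preferred
    for zone, healthy in zone_health.items():
        if healthy:
            return zone
    return None

# A single-zone outage: traffic moves to a surviving backup zone.
health = {"zone-a": False, "zone-b": True, "zone-c": True}
serving = pick_serving_zone(health, "zone-a")
```

Real deployments layer health checks, DNS failover, and data replication on top of a rule like this, and, as the June 2012 outage showed, the rule only helps if the backup zones themselves are configured correctly.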


References
[1] The 34 OECD countries are: Australia, Austria, Belgium, Canada, Chile, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom and the United States. OECD members (http:/ / www. oecd. org/ pages/ 0,3417,en_36734052_36761800_1_1_1_1_1,00. html), accessed 31 April 2012 [2] "OECD Broadband Report Questioned" (http:/ / www. websiteoptimization. com/ bw/ 0705/ ). Website Optimization. . Retrieved June 6, 2009. [3] Ben Segal (1995). A Short History of Internet Protocols at CERN (http:/ / www. cern. ch/ ben/ TCPHIST. html). . [4] Rseaux IP Europens (RIPE) [5] "Internet History in Asia" (http:/ / www. apan. net/ meetings/ busan03/ cs-history. htm). 16th APAN Meetings/Advanced Network Conference in Busan. . Retrieved 25 December 2005. [6] "Retiring the NSFNET Backbone Service: Chronicling the End of an Era" (http:/ / www. merit. edu/ networkresearch/ projecthistory/ nsfnet/ nsfnet_article. php), Susan R. Harris and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996 [7] "What is Broadband?" (http:/ / www. broadband. gov/ about_broadband. html/ ). The National Broadband Plan. US Federal Communications Commission. . Retrieved July 15, 2011. [8] "Inquiry Concerning the Deployment of Advanced Telecommunications Capability to All Americans in a Reasonable and Timely Fashion, and Possible Steps to Accelerate Such Deployment Pursuant to Section 706 of the Telecommunications Act of 1996, as Amended by the Broadband Data Improvement Act" (http:/ / transition. fcc. gov/ Daily_Releases/ Daily_Business/ 2010/ db0806/ FCC-10-148A1. pdf). GN Docket No. 10-159, FCC-10-148A1. Federal Communications Commission. August 6, 2010. . Retrieved July 12, 2011. [9] Naveen Bisht; James Connor. "Broadband to the Home: Trends and Issues" (http:/ / books. google. 
com/ books?id=ipmF3npkMngC& pg=PA1). Broadband Services, Applications, and Networks: Enabling Technologies and Business Models. International Engineering Consortium. p.1. ISBN978-1-931695-24-4. . [10] "How Broadband Works" (http:/ / www. explainthatstuff. com/ howbroadbandworks. html), Chris Woodford, Explain that Stuff, 20 August 2008. Retrieved 19 January. [11] Jeffrey A. Hart; Robert R. Reed; Franois Bar (November 1992). "The building of the internet: Implications for the future of broadband networks". Telecommunications Policy 16 (8): 666689. doi:10.1016/0308-5961(92)90061-S. [12] The Future of the Internet Economy: A Statistical Profile (http:/ / www. oecd. org/ dataoecd/ 24/ 5/ 48255770. pdf), Organization for Economic Co-Operation and Development (OECD), June 2011 [13] Willdig, Karl; Patrik Chen (August 1994). "What You Need to Know about Modems" (http:/ / fndcg0. fnal. gov/ Net/ modm8-94. txt). . Retrieved 2008-03-02. [14] Mitronov, Pavel (2001-06-29). "Modem compression: V.44 against V.42bis" (http:/ / www. digit-life. com/ articles/ compressv44vsv42bis/ ). Digit-Life.com. . Retrieved 2008-03-02. [15] "Birth of Broadband" (http:/ / www. itu. int/ osg/ spu/ publications/ birthofbroadband/ faq. html). ITU. September 2003. . Retrieved July 12, 2011. [16] "Recommendation I.113, Vocabulary of Terms for Broadband aspects of ISDN" (http:/ / www. itu. int/ rec/ dologin_pub. asp?lang=e& id=T-REC-I. 113-199706-I!!PDF-E). ITU-T. June 1997 (originally 1988). . Retrieved 19 July 2011. [17] "Sixth Broadband Deployment Report" (http:/ / hraunfoss. fcc. gov/ edocs_public/ attachmatch/ FCC-10-129A1. pdf). FCC. . Retrieved July 23, 2010. [18] Patel, Nilay (March 19, 2008). "FCC redefines "broadband" to mean 768kbit/s, "fast" to mean "kinda slow"" (http:/ / www. engadget. com/ 2008/ 03/ 19/ fcc-redefines-broadband-to-mean-768kbps-fast-to-mean-kinda/ ). Engadget. . Retrieved June 6, 2009. 
[19] "Virgin Medias ultrafast 100Mb broadband now available to over four million UK homes" (http:/ / mediacentre. virginmedia. com/ Stories/ Virgin-Media-s-ultrafast-100Mb-broadband-now-available-to-over-four-million-UK-homes-211c. aspx). News release. Virgin Media. June 10, 2011. . Retrieved August 18, 2011. [20] Tom Phillips (August 25, 2010). "'Misleading' BT broadband ad banned" (http:/ / www. metro. co. uk/ tech/ 839014-misleading-bt-broadband-ad-banned). UK Metro. . Retrieved July 24, 2011. [21] Gary A. Donahue (June 2007). Network Warrior (http:/ / shop. oreilly. com/ product/ 9780596101510. do). O'Reilly. p.600. ISBN0-596-10151-1. . [22] Dean, Tamara (2010). Network+ Guide to Networks, 5th Ed. [23] "Bonding: 112K, 168K, and beyond " (http:/ / www. 56k. com/ reports/ bonding. shtml), 56K.com [24] "Diamond 56k Shotgun Modem" (http:/ / www. maximumpc. com/ article/ features/ top_tech_blunders_10_products_massively_failed), maximumpc.com [25] William Stallings (1999). ISDN and Broadband ISDN with Frame Relay and ATM (http:/ / www. pearsonhighered. com/ educator/ product/ ISDN-and-Broadband-ISDN-with-Frame-Relay-and-ATM-4E/ 9780139737442. page) (4th ed.). Prentice Hall. p.542. ISBN0139737448. . [26] Telecommunications and Data Communications Handbook (http:/ / www. wiley. com/ WileyCDA/ WileyTitle/ productCd-0470396075. html), Ray Horak, 2nd edition, Wiley-Interscience, 2008, 791 p., ISBN 0-470-39607-5 [27] Dean, Tamara (2009). Network+ Guide to Networks (http:/ / www. cengage. com/ search/ productOverview. do?N=0& Ntk=P_Isbn13& Ntt=9781423902454) (5th ed.). Course Technology, Cengage Learning. ISBN1-4239-0245-9. . pp 312-315.

[28] "IEEE 802.3 Ethernet Working Group" (http:/ / www. ieee802. org/ 3/ ), web page, IEEE 802 LAN/MAN Standards Committee, accessed 8 May 2012 [29] Dean, Tamara (2009). Network+ Guide to Networks (http:/ / www. cengage. com/ search/ productOverview. do?N=0& Ntk=P_Isbn13& Ntt=9781423902454) (5th ed.). Course Technology, Cengage Learning. ISBN1-4239-0245-9. . p 322. [30] Dean, Tamara (2009). Network+ Guide to Networks (http:/ / www. cengage. com/ search/ productOverview. do?N=0& Ntk=P_Isbn13& Ntt=9781423902454) (5th ed.). Course Technology, Cengage Learning. ISBN1-4239-0245-9. . p 323. [31] "ADSL Theory" (http:/ / whirlpool. net. au/ wiki/ ?tag=ADSL_Theory), Australian broadband news and information, Whirlpool, accessed 3 May 2012 [32] "SDSL" (http:/ / docwiki. cisco. com/ wiki/ Digital_Subscriber_Line#SDSL), Internetworking Technology Handbook, Cisco DocWiki, 17 December 2009, accessed 3 May 2012 [33] "KPN starts VDSL trials" (http:/ / www. kpn. com/ artikel. htm?contentid=2895). KPN. . [34] "VDSL Speed" (http:/ / computer. howstuffworks. com/ vdsl2. htm). HowStuffWorks. . [35] "Industrial VDSL Ethernet Extender Over Coaxial Cable, ED3331" (http:/ / www. etherwan. com/ Product/ ViewProduct. asp?View=64). EtherWAN. . [36] "New ITU Standard Delivers 10x ADSL Speeds: Vendors applaud landmark agreement on VDSL2" (http:/ / www. itu. int/ newsroom/ press_releases/ 2005/ 06. html). News release (International Telecommunication Union). 27 May 2005. . Retrieved 22 September 2011. [37] Sturgeon, Jamie (October 18, 2010). "A smarter route to high-speed Net" (http:/ / www. financialpost. com/ entrepreneur/ smarter+ route+ high+ speed/ 3687154/ story. html). FP Entrepreneur (National Post). . Retrieved January 7, 2011. [38] "FTTH Council - Definition of Terms" (http:/ / ftthcouncil. eu/ documents/ Reports/ FTTH-Definitions-Revision_January_2009. pdf). FTTH Council. January 9, 2009. . Retrieved September 1, 2011. [39] "FTTx Primer" (http:/ / www. fiopt. com/ primer. 
php), Fiopt Communication Services (Calgary), July 2008 [40] "Big gig: NBN to be 10 times faster" (http:/ / www. abc. net. au/ news/ 2010-08-12/ big-gig-nbn-to-be-10-times-faster/ 941408), Emma Rodgers, ABC News, Australian Broadcasting Corporation, 12 August 2010 [41] "Italy gets fiber back on track" (http:/ / www. telecomseurope. net/ content/ italy-gets-fiber-back-track), Michael Carroll, TelecomsEMEA.net, 20 September 2010 [42] "Pirelli Broadband Solutions, the technology partner of fastweb network Ngan" (http:/ / www. freevoipcallsolution. com/ 2010/ 08/ pirelli-broadband-solutions-technology. html), 2 August 2010 [43] "Telecom Italia rolls out 100 Mbps FTTH services in Catania" (http:/ / www. fiercetelecom. com/ story/ telecom-italia-rolls-out-100-mbps-ftth-services-catania/ 2010-11-03?utm_medium=rss& utm_source=rss), Sean Buckley, FierceTelecom, 3 November 2010 [44] "SaskTel Announces 2011 Network Investment and Fiber to the Premises Program" (http:/ / www. sasktel. com/ about-us/ news/ 2011-news-releases/ sasktel-announces-2011-network-investment-and-fiber-to-the-premises. html), SaskTel, Saskatchewan Telecommunications Holding Corporation, 5 April 2011 [45] "How Broadband Over Powerlines Works" (http:/ / computer. howstuffworks. com/ bpl. htm), Robert Valdes, How Stuff Works, accessed 5 May 2012 [46] "North American versus European distribution systems" (http:/ / electrical-engineering-portal. com/ north-american-versus-european-distribution-systems), Edvard, Technical articles, Electrical Engineering Portal, 17 November 2011 [47] B-ISDN asynchronous transfer mode functional characteristics (http:/ / www. itu. int/ rec/ dologin_pub. asp?lang=e& id=T-REC-I. 150-199902-I!!PDF-E& type=items), ITU-T Recommendation I.150, February 1999, International Telecommunications Union [48] "Frame Relay" (http:/ / searchenterprisewan. techtarget. 
com/ definition/ frame-relay), Margaret Rouse, TechTarget, September 2005 [49] "Wi-Fi (wireless networking technology)" (http:/ / www. britannica. com/ EBchecked/ topic/ 1473553/ Wi-Fi). Encyclopdia Britannica. . Retrieved 2010-02-03. [50] Lemstra, Wolter; Hayes, Vic; Groenewegen, John (2010), The Innovation Journey of Wi-Fi: The Road To Global Success, Cambridge University Press, ISBN0-521-19971-9. [51] Discover and Learn (http:/ / www. wi-fi. org/ discover-and-learn), The Wi-Fi Alliance, , retrieved 6 May 2012. [52] "802.11n Delivers Better Range" (http:/ / www. wi-fiplanet. com/ tutorials/ article. php/ 3680781). Wi-Fi Planet. 2007-05-31. . [53] Joshua Bardwell; Devin Akin (2005). Certified Wireless Network Administrator Official Study Guide (http:/ / books. google. com/ books?id=QnMunBGVDuMC& printsec=frontcover& dq=cwna+ official+ study+ guide& hl=en& ei=EJaXTpSaFMPSiALTu4HCDQ& sa=X& oi=book_result& ct=result& resnum=1& ved=0CDAQ6AEwAA#v=onepage& q& f=false) (Third ed.). McGraw-Hill. p.418. ISBN978-0-07-225538-6. . [54] "Member Directory" (http:/ / www. wispa. org/ member-directory), Wireless Internet Service Providers Association (WISPA), accessed 5 May 2012 [55] "WiMax Forum - Technology" (http:/ / www. wimaxforum. org/ technology/ ). . Retrieved 2008-07-22. [56] Carl Weinschenk (April 16, 2010). "Speeding Up WiMax". IT Business Edge. "Today the initial WiMax system is designed to provide 30 to 40 megabit-per-second data rates." [57] "Internet in the Sky" (http:/ / iml. jou. ufl. edu/ projects/ Fall99/ Coffey/ ), D.J. Coffey, accessed 8 May 2012 [58] "How does satellite Internet operate?" (http:/ / computer. howstuffworks. com/ question606. htm), How Stuff Works, Retrieved 5 March 2009.

[59] Mustafa Ergen (2009). Mobile Broadband: including WiMAX and LTE (http:/ / www. springerlink. com/ content/ 978-0-387-68189-4). Springer Science+Business Media. ISBN978-0-387-68189-4. . [60] "The World in 2011: ITC Facts and Figures" (http:/ / www. itu. int/ ITU-D/ ict/ facts/ 2011/ material/ ICTFactsFigures2011. pdf), International Telecommunications Unions (ITU), Geneva, 2011 [61] "Local Multipoint Distribution Service (LDMS)" (http:/ / www. cse. wustl. edu/ ~jain/ cis788-99/ ftp/ lmds/ index. html), Vinod Tipparaju, November 23, 1999 [62] "LMDS: Broadband Out of Thin Air " (http:/ / www. angelfire. com/ nd/ ramdinchacha/ DEC00. html), Niraj K Gupta, from My Cell, Voice & Data, December 2000 [63] "Review and Analysis of Local Multipoint Distribution System (LMDS) to Deliver Voice, Data, Internet, and Video Services" (http:/ / www. ijest. info/ docs/ IJEST09-01-01. pdf), S.S. Riaz Ahamed, International Journal of Engineering Science and Technology, Vol. 1(1), October 2009, pp. 1-7 [64] "What is a fair price for Internet service?" (http:/ / www. theglobeandmail. com/ news/ technology/ gadgets-and-gear/ hugh-thompson/ what-is-a-fair-price-for-internet-service/ article1890596/ ), Hugh Thompson, Globe and Mail (Toronto), 1 February 2011 [65] Hansell, Saul (January 17, 2008). "Time Warner: Download Too Much and You Might Pay $30 a Movie" (http:/ / bits. blogs. nytimes. com/ 2008/ 01/ 17/ time-warner-download-too-much-and-you-might-pay-30-a-movie/ ?ref=technology). The New York Times. . Retrieved June 6, 2009. [66] "On- and Off-Peak Quotas" (http:/ / www. comparebroadband. com. au/ article_64_On--and-Off-Peak-Quotas. htm), Compare Broadband, 12 July 2009 [67] Cauley, Leslie (April 20, 2008). "Comcast opens up about how it manages traffic" (http:/ / abcnews. go. com/ Technology/ Story?id=4692338& page=1). ABC News. . Retrieved June 6, 2009. [68] Lowry, Tom (March 31, 2009). "Time Warner Cable Expands Internet Usage Pricing" (http:/ / www. businessweek. 
com/ technology/ content/ mar2009/ tc20090331_726397. htm?campaign_id=rss_daily). BusinessWeek. . Retrieved June 6, 2009. [69] Axelbank, Evan (April 16, 2009). "Time Warner Drops Internet Plan" (http:/ / rochesterhomepage. net/ fulltext?nxd_id=85011). Rochester Homepage. . Retrieved December 6, 2010. [70] "The lives of Asian youth" (http:/ / www. synovate. com/ changeagent/ index. php/ site/ full_story/ the_lives_of_asian_youth/ ), Change Agent, August 2005 [71] Giga.com (http:/ / gigaom. com/ 2010/ 07/ 09/ worldwide-broadband-subscribers/ ) Nearly Half a Billion Broadband Subscribers [72] Amir Hatem Ali, A. (2011). "The power of social media in developing nations" (http:/ / harvardhrj. com/ wp-content/ uploads/ 2009/ 09/ 185-220. pdf), Human Rights Journal, Harvard Law School, Vol. 24, Issue 1 (2011), pp. 185-219 [73] Wattal, S.; Yili Hong; Mandviwalla, M.; Jain, A., "Technology Diffusion in the Society: Analyzing Digital Divide in the Context of Social Class (http:/ / ieeexplore. ieee. org/ xpl/ freeabs_all. jsp?arnumber=5718600)", Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS), pp.1-10, 47 January 2011, ISBN 978-0-7695-4282-9 [74] McCollum, S., "Getting Past the 'Digital Divide'" (http:/ / www. tolerance. org/ magazine/ number-39-spring-2011/ getting-past-digital-divide), Teaching Tolerance, No. 39 (Spring 2011), pp. 46-49, and Education Digest, Vol. 77 No. 2 (October 2011), pp. 52-55 [75] Definitions of World Telecommunication/ICT Indicators, March 2010 (http:/ / www. itu. int/ ITU-D/ ict/ material/ TelecomICT_Indicators_Definition_March2010_for_web. pdf), International Telecommunication Union, March 2010. Accessed on 21 October 2011. [76] Zeller Jr, Tom (October 23, 2006). "LINK BY LINK; The Internet Black Hole That Is North Korea" (http:/ / query. nytimes. com/ gst/ fullpage. html?res=9E0CEEDF173FF930A15753C1A9609C8B63& n=Top/ Reference/ Times Topics/ People/ K/ Kim Jong Il). The New York Times. . Retrieved May 5, 2010. 
[77] The state of the Internet in Cuba, January 2011 (http:/ / som. csudh. edu/ fac/ lpress/ cuba/ chapters/ lpdraft2. docx), Larry Press, Professor of Information Systems at California State University, January 2011 [78] "Table 108: Number and internet access of instructional computers and rooms in public schools, by selected school characteristics: Selected years, 1995 through 2008" (http:/ / nces. ed. gov/ programs/ digest/ d10/ tables/ dt10_108. asp), 2010 Tables and Figures, National Center for Education Statistics, U.S. Department of Education, August 2010, accessed 28 April 2012 [79] "Changes in Cuba: From Fidel to Raul Castro" (http:/ / books. google. com/ books?id=Q2qQZfkOCNsC& pg=PA114& lpg=PA114& dq=Private+ ownership+ of+ computers+ in+ Cuba& source=bl& ots=bKMn5ZraA6& sig=8CcYmtODxcyXSr9LxtjatH_vkdE& hl=en& ei=ydWPTuKbLcaWtweR_qCNDA& sa=X& oi=book_result& ct=result& resnum=4& ved=0CDAQ6AEwAw#v=onepage& q=Private ownership computers& f=false), Perceptions of Cuba: Canadian and American policies in comparative perspective, Lana Wylie, University of Toronto Press Incorporated, 2010, p. 114, ISBN 978-1-4426-4061-0 [80] "Cuba to keep internet limits" (http:/ / www. allbusiness. com/ media-telecommunications/ internet-www/ 11795551-1. html). Agence France-Presse (AFP). 9 February 2009. . [81] "Declaration of Principles" (http:/ / www. itu. int/ wsis/ docs/ geneva/ official/ dop. html), WSIS-03/GENEVA/DOC/4-E, World Summit on the Information Society, Geneva, 12 December 2003 [82] Scott, Aaron (August 11, 2011). "Trends in broadband adoption" (http:/ / www. pewinternet. org/ Reports/ 2010/ Home-Broadband-2010/ Part-1/ Broadband-adoption-among-African-Americans-grew-significantly-between-2009-and-2010. aspx). Home Broadband 2010. Pew Internet & American Life Project. . Retrieved December 23, 2011. [83] Wireless World: Wi-Fi now in rural areas (http:/ / www. physorg. com/ news71497509. html) July 7, 2006

[84] "Tegola project linking Skye, Knoydart and Loch Hourne" (http:/ / www. tegola. org. uk). . Retrieved 2010-03-16. [85] "Broadband for Rural Nova Scotia" (http:/ / www. gov. ns. ca/ econ/ broadband/ ), Economic and Rural Development, Nova Soctia, Canada, access 27 April 2012 [86] "Judgement 12790 of the Supreme Court" (http:/ / 200. 91. 68. 20/ pj/ scij/ busqueda/ jurisprudencia/ jur_texto_sentencia. asp?nValor2=483874& tem1=013141& param7=0& lResultado=3& nValor1=1& strTipM=T& strLib=LIB), File 09-013141-0007-CO, 30 July 2010. ( English translation (http:/ / www. google. com/ translate_c?langpair=en& u=http:/ / 200. 91. 68. 20/ pj/ scij/ busqueda/ jurisprudencia/ jur_texto_sentencia. asp?nValor2=483874& tem1=013141& param7=0& lResultado=3& nValor1=1& strTipM=T& strLib=LIB)) [87] "Estonia, where being wired is a human right" (http:/ / www. csmonitor. com/ 2003/ 0701/ p07s01-woeu. html), Colin Woodard, Christian Science Monitor, 1 July 2003 [88] "Finland makes 1Mb broadband access a legal right" (http:/ / news. cnet. com/ 8301-17939_109-10374831-2. html), Don Reisinger, CNet News, 14 October 2009 [89] "Top French Court Declares Internet Access 'Basic Human Right'" (http:/ / www. foxnews. com/ story/ 0,2933,525993,00. html). London Times (Fox News). 12 June 2009. . [90] Constitution of Greece As revised by the parliamentary resolution of May 27th 2008 of the VIIIth Revisionary Parliament (http:/ / www. hellenicparliament. gr/ UserFiles/ f3c70a23-7696-49db-9148-f24dce6a27c8/ 001-156 aggliko. pdf), English language translation, Hellenic Parliament [91] Sarah Morris (17 November 2009). "Spain govt to guarantee legal right to broadband" (http:/ / www. reuters. com/ article/ idUSLH61554320091117). Reuters. . [92] Klang, Mathias; Murray, Andrew (2005). Human Rights in the Digital Age (http:/ / www. psypress. com/ 9781904385318). Routledge. p.1. . [93] For the BBC poll Internet users are those who used the Internet within the previous six months. 
[94] "BBC Internet Poll: Detailed Findings" (http:/ / news. bbc. co. uk/ 1/ shared/ bsp/ hi/ pdfs/ 08_03_10_BBC_internet_poll. pdf), BBC World Service, 8 March 2010 [95] "Internet access is 'a fundamental right'" (http:/ / news. bbc. co. uk/ 2/ hi/ 8548190. stm), BBC News, 8 March 2010 [96] "VI. Conclusions and recommendations" (http:/ / www2. ohchr. org/ english/ bodies/ hrcouncil/ docs/ 17session/ A. HRC. 17. 27_en. pdf), Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, Human Rights Council, Seventeenth session Agenda item 3, United Nations General Assembly, 16 May 2011 [97] "Can the Internet be a Human Right?" (http:/ / www. du. edu/ gsis/ hrhw/ volumes/ 2004/ best-2004. pdf), Michael L. Best, Human rights & Human Welfare, Vol. 4 (2004) [98] Kravets, David (June 3, 2011). "U.N. Report Declares Internet Access a Human Right" (http:/ / www. wired. com/ threatlevel/ 2011/ 06/ internet-a-human-right/ ). Wired. . [99] Measuring the Resilience of the Global Internet Infrastructure System (http:/ / www. stevens-tech. edu/ csr/ fileadmin/ csr/ Publications/ Omer_Measuring_the_Resilience_of_the_Global_Internet__Infrastructure. pdf), 2009 3rd Annual IEEE Systems Conference, 156-162. [100] Inference of Network-Service Disruption upon Natural Disasters (http:/ / users. ece. gatech. edu/ ~jic/ katrina. pdf), accessed 12/05/12. [101] Impact of Hurricane Katrina on Internet Infrastructure (http:/ / www. renesys. com/ tech/ presentations/ pdf/ Renesys-Katrina-Report-9sep2005. pdf), Renesys Report, 9 September 2005, accessed 12/05/2012. [102] Cisco trucks help restore internet after disasters (http:/ / abclocal. go. com/ kgo/ story?section=news/ business& id=8867345), ABC News report, 30 October 2012, accessed 12/05/2012. [103] Taiwans Earthquake and Tsunami Caused Internet accesss Interference (http:/ / www. telkom. co. 
id/ media-corner/ press-release/ taiwan-s-earthquake-and-tsunami-caused-internet-access-s-interference. html), Telkom Indonesia Press Release, 27 December 2006, accessed 12/05/2012. [104] Impact of Taiwan Earthquake on Internet Access (http:/ / www. ust. hk/ itsc/ channel/ 2007feb/ earthquake. html), Choy, C. (2007). Channel, The Hong Kong University of Science & Technology, 46. Accessed 12/05/2012. [105] Understanding and Mitigating Catastrophic Disruption and Attack (http:/ / www. noblis. org/ NewsPublications/ Publications/ TechnicalPublications/ SigmaJournal/ Documents/ Sigma_RE_UnderstandingAndMitigating. pdf), Masi, D., Smith E., Fischer M. Telecommunications and Cybersecurity, Noblis. Accessed 12/05/2012. [106] Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region (http:/ / aws. amazon. com/ message/ 65648/ ), AWS message, 29 April 2011, accessed 12/05/2012. [107] [ https:/ / aws. amazon. com/ message/ 67457/ Summary of the AWS Service Event in the US East Region], AWS message, 2 July 2012, accessed 12/05/2012. [108] AWS is down: Why the sky is falling (http:/ / justinsb. posterous. com/ aws-down-why-the-sky-is-falling), justinsb's posterous, 21 April 2011, accessed 12/05/2012. [109] Amazon Web Services June 2012 Outage Explained (http:/ / cloud-computing-today. com/ 2012/ 06/ 18/ amazon-web-services-june-2012-outage-explained/ ), Cloud Computing Today, 18 June 2012, accessed 12/05/2012. [110] Will Natural Disasters Kill the Cloud? (http:/ / crashcloud. com/ will-natural-disasters-kill-cloud/ ), CrashCloud, 21 August 2012, accessed 12/05/2012.


External links
European broadband (http://ec.europa.eu/information_society/eeurope/i2010/digital_divide/index_en.htm#European_broadband_portal)
Corporate vs. Community Internet (http://www.alternet.org/story/22216/), AlterNet, June 14, 2005, on the clash between US cities' attempts to expand municipal broadband and corporate attempts to defend their markets
Broadband data (http://www.google.com/publicdata/directory#!q=broadband), from Google public data
US National Broadband Maps (http://broadbandmap.gov)

Broadband Internet access


Internet access is the means by which individual terminals, computers, mobile devices, and local area networks are connected to the global Internet. Internet access is usually sold by Internet Service Providers (ISPs) that use many different technologies offering a wide range of data rates to the end user. Consumer use first became popular through dial-up connections in the 1980s and 1990s. By the first decade of the 21st century, many consumers had switched away from dial-up to dedicated connections, most Internet access products were being marketed using the term "broadband", and broadband penetration was being treated as a key economic indicator.[1][2]

History
The Internet began as a network funded by the U.S. government to support projects within the government and at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies.[3][4][5] Use by a wider audience only came in 1995, when restrictions on the use of the Internet to carry commercial traffic were lifted.[6]

In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s and grew to support 100 and 1000 Mbit/s, while modem data rates grew from 1200 and 2400 bit/s in the 1980s to 28 and 56 kbit/s by the mid to late 1990s. Initially, dial-up connections were made from terminals or computers running terminal emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers (NASs) supporting the Serial Line Internet Protocol (SLIP) and later the Point-to-Point Protocol (PPP) extended the Internet protocols to dial-up users and made the full range of Internet services available to them, subject only to the limitations imposed by the lower data rates of dial-up.

Broadband Internet access, often shortened to just broadband and also known as high-speed Internet access, is a service that provides bit rates considerably higher than those available using a 56 kbit/s modem. In the U.S.
National Broadband Plan of 2009, the Federal Communications Commission (FCC) defined broadband access as "Internet access that is always on and faster than the traditional dial-up access",[7] although the FCC has defined it differently over the years.[8]

The term broadband was originally a reference to multi-frequency communication, as opposed to narrowband or baseband. Broadband is now a marketing term that telephone, cable, and other companies use to sell their more expensive, higher data rate products.[9] Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and the connection does not tie up a phone line.[10]

Broadband provides improved access to Internet services such as:

Faster World Wide Web browsing
Faster downloading of documents, photographs, videos, and other large files
Telephony, radio, television, and videoconferencing
Virtual private networks and remote system administration
Online gaming, especially massively multiplayer online role-playing games, which are interaction-intensive

In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue.[11] In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries[1] and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the numbers of subscriptions were roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.[12]

The broadband technologies in widest use are ADSL and cable Internet access. Newer technologies include VDSL and optical fibre extended closer to the subscriber in both telephone and cable plants. Fibre-optic communication, while only recently being used in fibre-to-the-premises and fibre-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology.

In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless and satellite Internet are often used in rural, undeveloped, or other hard-to-serve areas where wired Internet is not readily available. Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless, e.g., Motorola Canopy. Starting in roughly 2006, mobile broadband access has been increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.


Availability
In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for connecting users' laptops to local area networks (LANs). Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee-based. Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks. Mobile broadband access allows smart phones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made.

Data rates



Data rate units (SI)

Bit rates:
  Kilobit per second (10^3)   kbit/s = 1,000 bit/s   = 125 bytes/s
  Megabit/s (10^6)            Mbit/s = 1,000 kbit/s  = 125 kB/s
  Gigabit/s (10^9)            Gbit/s = 1,000 Mbit/s  = 125 MB/s
  Terabit/s (10^12)           Tbit/s = 1,000 Gbit/s  = 125 GB/s
  Petabit/s (10^15)           Pbit/s = 1,000 Tbit/s  = 125 TB/s

Byte rates:
  Kilobyte per second (10^3)  kB/s   = 8,000 bit/s   = 1,000 bytes/s
  Megabyte/s (10^6)           MB/s   = 8,000 kbit/s  = 1,000 kB/s
  Gigabyte/s (10^9)           GB/s   = 8,000 Mbit/s  = 1,000 MB/s
  Terabyte/s (10^12)          TB/s   = 8,000 Gbit/s  = 1,000 GB/s
  Petabyte/s (10^15)          PB/s   = 8,000 Tbit/s  = 1,000 TB/s
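The relationships in the table above reduce to two conversion rules: each SI prefix step is a factor of 1,000, and a byte is 8 bits. A minimal sketch of the arithmetic, with hypothetical helper names not taken from the source:

```python
# Conversion rules from the data rate units table:
# an SI prefix step is a factor of 1,000, and 1 byte = 8 bits,
# so e.g. 1 Mbit/s = 1,000 kbit/s = 125 kB/s.

SI_PREFIX = {"k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12, "P": 1e15}

def to_bits_per_second(value, unit):
    """Parse a rate like (2, "Mbit/s") or (125, "kB/s") into bit/s."""
    prefix, rest = unit[0], unit[1:]
    bits = value * SI_PREFIX[prefix]
    if rest.startswith("B"):  # a byte unit: multiply by 8 bits per byte
        bits *= 8
    return bits

print(to_bits_per_second(1, "Mbit/s"))  # 1000000.0
print(to_bits_per_second(125, "kB/s"))  # 1000000.0 -- the same rate
```

As the table shows, dividing any bit rate by 8 gives the equivalent byte rate.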

The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection to 220 (V.42bis) or 320 (V.44) kbit/s.[13] However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s.[14]

Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s.[15] In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate, which ranged from about 1.5 to 2 Mbit/s.[16] A 2006 Organization for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s.[1] In 2010 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 4 Mbit/s downstream (from the Internet to the user's computer) and 1 Mbit/s upstream (from the user's computer to the Internet).[17] The trend is to raise the threshold of the broadband definition as higher data rate services become available.[18]

The higher data rate dial-up modems and many broadband services are "asymmetric", supporting much higher data rates for download (toward the user) than for upload (toward the Internet). Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate.
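Since a download's duration is just the file size in bits divided by the data rate, the gap between dial-up and a 4 Mbit/s broadband connection is easy to quantify. A hypothetical back-of-envelope illustration (the function name is an assumption, not from the source):

```python
def transfer_seconds(file_megabytes, rate_mbit_s):
    """Idealized transfer time: size in bits divided by bits per second."""
    return file_megabytes * 8 / rate_mbit_s

# A 5 MB photo over a 56 kbit/s dial-up link vs. a 4 Mbit/s broadband link:
print(round(transfer_seconds(5, 0.056)))  # 714 seconds, about 12 minutes
print(transfer_seconds(5, 4.0))           # 10.0 seconds
```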
In practice, these maximum data rates are not always reliably available to the customer.[19] Actual end-to-end data rates can be lower due to a number of factors.[20] Physical link quality can vary with distance and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used and not just on the first or last link providing Internet access to the end-user. Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high quality

streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms that automatically throttle back on the bandwidth being used during periods of network congestion. This is fair in the sense that all users that experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable. When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality, or even charges of censorship, when some types of traffic are severely or completely blocked.
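The contended-service model described above can be made concrete: an ISP provisions shared backhaul capacity equal to the sum of subscribed rates divided by a contention ratio. A hypothetical sketch (the 50:1 ratio and the function name are illustrative assumptions, not from the source):

```python
def provisioned_capacity_mbit_s(subscribers, plan_rate_mbit_s, contention_ratio):
    """Backhaul capacity an ISP provisions for a pool of shared subscribers."""
    return subscribers * plan_rate_mbit_s / contention_ratio

# 1,000 subscribers on 10 Mbit/s plans, shared at an assumed 50:1 contention:
print(provisioned_capacity_mbit_s(1000, 10, 50))  # 200.0 Mbit/s
# Each user can still burst to 10 Mbit/s, but if everyone transmits at once
# the pool supports only 0.2 Mbit/s per subscriber on average.
```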


Technologies
Access technologies generally use a modem, which converts digital data to analog for transmission over analog networks such as the telephone and cable networks.[10]

Local Area Networks


Local area networks (LANs) provide Internet access to computers and other devices in a limited area such as a home, school, computer laboratory, or office building, usually at relatively high data rates that typically range from 10 to 1000 Mbit/s.[21] There are wired and wireless LANs. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past. Most Internet access today is through a LAN, often a very small LAN with just one or two devices attached. While LANs are an important form of Internet access, this raises the question of how, and at what data rate, the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections.

Dial-up access
Dial-up access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO), where it is switched to another phone line that connects to another modem at the remote end of the connection.[22] Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no new infrastructure beyond the already existing telephone network. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (toward the end user) and 34 or 48 kbit/s upstream (toward the global Internet).[10]



Broadband access
The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. These technologies use wires or fiber optic cables, in contrast to the wireless broadband technologies described later.

Multilink dial-up

Multilink dial-up provides increased bandwidth by bonding two or more dial-up connections together and treating them as a single data channel.[23] It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking, and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL, and other technologies became available. Diamond and other vendors created special modems to support multilinking.[24]

Integrated Services Digital Network (ISDN)

Integrated Services Digital Network (ISDN), a switched telephone service capable of transporting voice and digital data, is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies.[25] Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s.

Leased lines

Leased lines are dedicated lines used primarily by ISPs, businesses, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers.
Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created.[26]

T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0) to 1.5 Mbit/s (DS1 or T1) to 45 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic, or use all 24 channels for clear channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 and 1,500 kbit/s. T-carrier lines require special termination equipment that may be separate from or integrated into a router or switch, and which may be purchased or leased from an ISP.[27] In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s each) on an E1 (2.0 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.4 Mbit/s).

Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high data rate digital bit streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical), which carries 155.520 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads, each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four, providing OC-12c (622.080 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s).
The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams.[26] The 1, 10, 40, and 100 Gigabit Ethernet (GbE, 10 GbE, 40 GbE, and 100 GbE) IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances up to 100 m and over optical fiber at distances up to 40 km.[28]
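The digital hierarchies above are simple multiples of two base rates: the 64 kbit/s DS0 (also the ISDN "B" channel) and the 51.84 Mbit/s OC-1. A short illustrative sketch of that arithmetic (helper names are assumptions, not from the source):

```python
DS0_KBIT_S = 64      # one voice/data channel; also the ISDN "B" channel rate
OC1_MBIT_S = 51.84   # SONET STS-1 / OC-1 base line rate

def bundle_kbit_s(channels):
    """Aggregate rate of n bonded 64 kbit/s channels."""
    return channels * DS0_KBIT_S

def oc_rate_mbit_s(n):
    """Line rate of a SONET OC-n signal: n multiples of OC-1."""
    return n * OC1_MBIT_S

print(bundle_kbit_s(2))    # 128   -- ISDN-BRI, both B channels bonded
print(bundle_kbit_s(24))   # 1536  -- T1/DS1, ~1.5 Mbit/s
print(bundle_kbit_s(32))   # 2048  -- E1, 2.0 Mbit/s
for n in (3, 12, 48, 192, 768):
    print(f"OC-{n}: {oc_rate_mbit_s(n):,.2f} Mbit/s")  # OC-3: 155.52, ...
```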

Cable Internet access

Cable Internet or cable modem access provides Internet access via hybrid fiber-coaxial wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. In a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end". The cable company then connects to the Internet using a variety of means, usually fiber optic cable or digital satellite and microwave transmissions.[29] Like DSL, broadband cable provides a continuous connection with an ISP.

Downstream, the direction toward the user, bit rates can be as much as 400 Mbit/s for business connections, and 100 Mbit/s for residential service in some countries. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 20 Mbit/s. Broadband cable access tends to serve fewer business customers because existing television cable networks tend to serve residential buildings, and commercial buildings do not always include wiring for coaxial cable networks.[30] In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.[29]

Digital subscriber line (DSL, ADSL, SDSL, and VDSL)
DSL technologies and their standards:
  ADSL: ANSI T1.413 Issue 2; ITU G.992.1 (G.DMT); ITU G.992.2 (G.Lite)
  ADSL2: ITU G.992.3; ITU G.992.4; ITU G.992.3 Annex J; ITU G.992.3 Annex L
  ADSL2+: ITU G.992.5; ITU G.992.5 Annex M
  HDSL: ITU G.991.1
  SHDSL: ITU G.991.2
  VDSL: ITU G.993.1
  VDSL2: ITU G.993.2
  Other variants: HDSL2, IDSL, MSDSL, PDSL, RADSL, SDSL, UDSL

Digital Subscriber Line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication.[10] These frequency bands are subsequently separated by filters installed at the customer's premises.

DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean Asymmetric Digital Subscriber Line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e., toward the service provider) is lower than that in the downstream direction (i.e., toward the customer), hence the designation asymmetric.[31] With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal.[32]

Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1)[33] is a DSL standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires[34] and up to 85 Mbit/s down- and upstream on coaxial cable.[35] VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection. VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL.[36] Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved at a range of about 300 meters, and performance degrades as distance and loop attenuation increase.
DSL Rings

DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.[37]

Fiber to the home

Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN).[38] These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with how close to the end user the delivery on fiber comes. All of these delivery methods are similar to hybrid fiber-coaxial (HFC) systems used to provide cable Internet access.

The use of optical fiber offers much higher data rates over relatively longer distances. Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, POTS) for final delivery to customers.[39] Australia has already begun rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses.[40] Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country).[41][42][43][44]

Power-line Internet

Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission. Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low-population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires.
Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s.[45] Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all powerline protocols must detect existing usage and avoid interfering with it.[45] Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used, and so a repeater must be installed on each transformer.[45] In the U.S. a transformer serves a small cluster of from one to a few houses. In


Europe, it is more common for a somewhat larger transformer to serve a larger cluster of from 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city.[46]

ATM and Frame Relay

Asynchronous Transfer Mode (ATM) and Frame Relay are wide area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates.[47][48] While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs, and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did.


Wireless broadband access


Wireless broadband is used to provide both fixed and mobile Internet access.

Wi-Fi

Wi-Fi is the popular name for a "wireless local area network" that uses one of the IEEE 802.11 standards. It is a trademark of the Wi-Fi Alliance. Individual homes and businesses often use Wi-Fi to connect laptops and smart phones to the Internet. Wi-Fi hotspots may be found in coffee shops and various other public establishments. Wi-Fi is used to create campus-wide and city-wide wireless networks.[49][50][51]
Wi-Fi networks are built using one or more wireless routers called access points. "Ad hoc" computer-to-computer Wi-Fi networks are also possible. A Wi-Fi network is connected to the larger Internet using DSL, cable modem, and other Internet access technologies. Data rates range from 6 to 600 Mbit/s. Wi-Fi service range is fairly short, typically 20 to 250 meters (65 to 820 feet). Both data rate and range are quite variable, depending on the Wi-Fi protocol, location, frequency, building construction, and interference from other devices.[52] Using directional antennas and careful engineering, Wi-Fi can be extended to operate over distances of up to several kilometers; see Wireless ISP below.

Wireless ISP

Wireless ISPs typically employ low-cost 802.11 Wi-Fi radio systems to link up remote locations over great distances, but may use other higher-power radio communications systems as well. Traditional 802.11b is an unlicensed omnidirectional service designed to span between 100 and 150 meters (300 to 500 ft). By focusing the radio signal using a directional antenna, 802.11b can operate reliably over a distance of many kilometres (miles), although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are significantly slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather, and line-of-sight problems.[53]

Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. There are currently a number of companies that provide this service.[54] Motorola Canopy and other proprietary technologies offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX.

WiMAX





WiMAX (Worldwide Interoperability for Microwave Access) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. WiMAX enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL".[55] The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates.[56] Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi local area network (LAN). WiMAX signals also penetrate building walls much more effectively than Wi-Fi.

Satellite broadband

[Image: Satellite Internet access via VSAT in Ghana]

Satellites can provide fixed, portable, and mobile Internet access. It is among the most expensive forms of broadband Internet access, but may be the only choice available in remote areas.[57] Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. Satellite communication typically requires a clear line of sight, will not work well through trees and other vegetation, is adversely affected by moisture, rain, and snow (known as rain fade), and may require a fairly large, carefully aimed, directional antenna. Satellites in geostationary Earth orbit (GEO) operate in a fixed position 35,786 km (22,236 miles) above the earth's equator. Even at the speed of light (about 300,000 km/s or 186,000 miles per second), it takes a quarter of a second for a radio signal to travel from the earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds.
This latency is large when compared to other forms of Internet access, which have typical latencies that range from 0.015 to 0.2 seconds. Long latencies can make applications that require a real-time response, such as video conferencing, voice over IP, multiplayer games, and remote control of equipment, impracticable via satellite. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the earth's polar regions.[58] HughesNet and ViaSat are GEO systems.

Satellites in low Earth orbit (LEO, below 2,000 km or 1,243 miles) and medium Earth orbit (MEO, between 2,000 and 35,786 km or 1,243 and 22,236 miles) are less common, operate at lower altitudes, and are not fixed in their position above the earth. Lower altitudes allow lower latencies and make real-time interactive Internet applications feasible. LEO systems include Globalstar and Iridium. The O3b Satellite Constellation is a proposed MEO system with a latency of 125 ms. COMMStellation is a LEO system, scheduled for launch in 2015, that is expected to have a latency of just 7 ms.
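The quarter-second GEO figure above follows directly from the orbit geometry and the speed of light. A minimal sketch of the propagation-delay arithmetic (ignoring switching and routing delays; helper names are illustrative, not from the source):

```python
C_KM_S = 299_792          # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786  # geostationary orbit altitude above the equator

def one_way_hop_s(altitude_km):
    """Ground -> satellite -> ground: the signal covers twice the altitude."""
    return 2 * altitude_km / C_KM_S

def min_round_trip_s(altitude_km):
    """A request and its reply each make a full ground-satellite-ground hop."""
    return 2 * one_way_hop_s(altitude_km)

print(f"{one_way_hop_s(GEO_ALTITUDE_KM):.3f} s")    # 0.239 s -- the "quarter second"
print(f"{min_round_trip_s(GEO_ALTITUDE_KM):.3f} s") # 0.477 s before other delays
```

The same functions show why lower orbits help: a 1,000 km LEO hop takes under 7 ms each way.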

Mobile broadband

Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers to computers, mobile phones (called "cell phones" in North America and South Africa), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection, a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used.[59]

Roughly every ten years, new mobile phone technology and infrastructure become available that involve a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel frequency bandwidth in Hertz. These transitions are referred to as generations:
Second generation (2G) from 1991: first mobile data services
  GSM CSD (2G): 9.6 kbit/s
  GSM GPRS (2.5G): 56 to 115 kbit/s
  GSM EDGE (2.75G): up to 237 kbit/s
Third generation (3G) from 2001:
  UMTS W-CDMA: 0.4 Mbit/s down and up
  UMTS HSPA: 14.4 Mbit/s down; 5.8 Mbit/s up
  UMTS TDD: 16 Mbit/s down and up
  CDMA2000 1xRTT: 0.3 Mbit/s down; 0.15 Mbit/s up
  CDMA2000 EV-DO: 2.5 to 4.9 Mbit/s down; 0.15 to 1.8 Mbit/s up
  GSM EDGE-Evolution: 1.6 Mbit/s down; 0.5 Mbit/s up
Fourth generation (4G) from 2006:
  HSPA+: 21 to 672 Mbit/s down; 5.8 to 168 Mbit/s up
  Mobile WiMAX (802.16): 37 to 365 Mbit/s down; 17 to 376 Mbit/s up
  LTE: 100 to 300 Mbit/s down; 50 to 75 Mbit/s up
  LTE-Advanced: 100 Mbit/s moving at higher speeds; up to 1 Gbit/s not moving or moving at low speeds
  MBWA (802.20): 80 Mbit/s


The download (to the user) and upload (to the Internet) data rates given above are peak or maximum rates; end users will typically experience lower data rates. WiMAX (described in more detail above) was originally developed to deliver fixed wireless service, with wireless mobility added in 2005. CDMA2000 EV-DO and MBWA (Mobile Broadband Wireless Access) are no longer being actively developed. In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage.[60]

Local Multipoint Distribution Service

Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz.[61] Originally designed for digital television transmission (DTV), it was conceived as a fixed wireless, point-to-multipoint technology for use in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s.[62] Distance is typically limited to about 1.5 miles (2.4 km), but links of up to 5 miles (8 km) from the base station are possible in some circumstances.[63] LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards.

Pricing
Dial-up users pay the costs for making local or long distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per-minute or traffic-based charges and connect-time limits imposed by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some wireless community networks continue the tradition of providing free Internet access.

Fixed broadband Internet access is often sold under an "unlimited" or flat-rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per-minute or traffic-based charge. Per-minute and traffic-based charges and traffic caps are common for mobile broadband Internet access.

With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly, and for some ISPs the flat-rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80-90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03.[64]

Some ISPs estimate that about 5% of their users consume about 50% of the total bandwidth.[65] To ensure these high-bandwidth users do not slow down the network for everyone, some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off-peak" pricing, and bandwidth or traffic caps.[66][67] In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps.[65] Time Warner experimented with usage-based pricing in Beaumont, Texas.[68] An effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned.[69]
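Using the $0.03-per-gigabyte estimate above, the marginal traffic cost of even a heavy user can be sketched (a hypothetical illustration; the 250 GB monthly figure is an assumed example, not from the source):

```python
COST_PER_GB = 0.03  # estimated 2011 cost to transmit one gigabyte (from the text)

def marginal_monthly_cost(gigabytes):
    """Marginal cost to an ISP of carrying one subscriber's monthly traffic."""
    return gigabytes * COST_PER_GB

# An assumed heavy user moving 250 GB/month costs only a few dollars in traffic,
# consistent with fixed costs representing 80-90% of the cost of service:
print(round(marginal_monthly_cost(250), 2))  # 7.5
```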


Growth in number of users


Worldwide Internet users

                                 2006         2011*
World population                 6.5 billion  7 billion
Not using the Internet†          82%          65%
Using the Internet†              18%          35%
Users in the developing world†   8%           22%
Users in the developed world†    10%          13%
Users in China†                  2%           8%

* Estimate. † Share of world population.
Source: International Telecommunications Union.[60]

Internet users by region

                                     2006†   2011*†
Africa                               3%      13%
Americas                             39%     56%
Arab States                          11%     29%
Asia and Pacific                     11%     27%
Commonwealth of Independent States   13%     48%
Europe                               50%     74%

* Estimate. † Share of regional population.
Source: International Telecommunications Union.[60]

Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.45 billion in 2011.[60] With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia,[70] Africa, Latin America, the Caribbean, and the Middle East. There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011.[71] In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.[60]


Digital Divide
Despite its tremendous growth, Internet access is not distributed equally within or between countries.[60][72] The digital divide refers to the gap between people with effective access to information and communications technology (ICT) and those with very limited or no access. The gap between people with Internet access and those without is one of many aspects of the digital divide.[73]

[Image: Internet users in 2010 as a percentage of a country's population. Source: International Telecommunications Union.]

Whether someone has access to the Internet can depend greatly on financial status and geographical location, as well as government policies. Low-income, rural, and minority populations have received special scrutiny as the technological "have-nots."[74] Government policies play a tremendous role in bringing Internet access to, or limiting access for, underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011.[75] In countries such as North Korea and Cuba there is relatively little access to the Internet due to the governments' fear of the political instability that might accompany the benefits of access to the global Internet.[76] The U.S. trade embargo is another barrier limiting Internet access in Cuba.[77]

In the United States, billions of dollars have been invested in efforts to narrow the digital divide and bring Internet access to more people in low-income and rural areas of the United States. The Obama administration has continued this commitment through the use of stimulus funding.[74] The National Center for Education Statistics reported that 98% of all U.S.
classroom computers had Internet access in 2008 with roughly one computer with Internet access available for every three students. The percentage and ratio of students to computers was the same for rural schools (98% and 1 computer for every 2.9 students).[78] Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries the figures were 74% of households had a computer and 71% had Internet access.[60] When buying computers was legalized in Cuba in 2007, the private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007).[79][80] Internet access has changed the way in which many people think and has become an integral part of peoples economic, political, and social lives. Providing Internet access to more people in the world allow will them to take advantage of the political, social, economic, educational, and career opportunities available over the Internet.[72] Several of the 67 principles adopted at the World Summit on the Information Society convened by the United

Broadband Internet access Nations in Geneva in 2003, directly address the digital divide.[81] To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world.


Rural access
One of the great challenges for Internet access in general, and for broadband access in particular, is to provide service to potential customers in areas of low population density, such as farmers, ranchers, and small towns. In cities, where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.[82] Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.[19]

Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.[83] The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[84]

The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, fewer than 1,000 households had reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.[85]
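The line-of-sight constraint mentioned above is stricter than simply seeing the far antenna: a microwave link also needs clearance around the direct path (the first Fresnel zone), which is why hills and foliage degrade WISP service. The formula below is the standard first-Fresnel-zone approximation; the particular link distance and frequency are made-up values for illustration only.

```python
import math

def fresnel_radius_m(distance_km: float, freq_ghz: float) -> float:
    """Radius (metres) of the first Fresnel zone at the midpoint of a link.

    From r = sqrt(wavelength * d1 * d2 / (d1 + d2)) with d1 = d2 = distance/2,
    which reduces to ~8.657 * sqrt(distance_km / freq_ghz).
    """
    return 8.657 * math.sqrt(distance_km / freq_ghz)

# Hypothetical 5 km rural link at 5.8 GHz (a common unlicensed WISP band):
radius = fresnel_radius_m(5, 5.8)
clearance = 0.6 * radius  # rule of thumb: keep ~60% of the zone obstruction-free
print(f"first Fresnel zone: {radius:.1f} m; needed clearance: {clearance:.1f} m")
```

Even on this short link the path needs several metres of clearance at mid-span, so a single tree line or low ridge can be enough to break service in the terrain the Tegola project had to work around.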

Access as a human right


Several countries have adopted laws that make Internet access a right, by requiring the state to work to ensure that Internet access is broadly available and/or by preventing states from unreasonably restricting an individual's access to information and the Internet:

Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."[86]

Estonia: In 2000, the parliament passed a law declaring Internet access a fundamental human right and launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the 21st century.[87]

Finland: By July 2010, every person in Finland was to have the right to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications. And by 2015, access to a 100 Mbit/s connection will be a legal right.[88]

France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.[89]

Greece: Article 5A of the Constitution of Greece states that all persons have a right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion of, and access to electronically transmitted information.[90]

Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabit per second throughout Spain.[91]

In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights:[92]

1. We, the representatives of the peoples of the world, assembled in Geneva from 10–12 December 2003 for the first phase of the World Summit on the Information Society, declare our common desire and commitment to build a people-centred, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights.

3. We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs.
The WSIS Declaration of Principles makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating:

4. We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organisation. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers.[92]

A poll of 27,973 adults in 26 countries, including 14,306 Internet users,[93] conducted for the BBC World Service between 30 November 2009 and 7 February 2010, found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right.[94] 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion.[95]

The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly include several that bear on the question of the right to Internet access:[96]

67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders.
By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an enabler of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole. In this regard, the Special Rapporteur encourages other Special Procedures mandate holders to engage on the issue of the Internet with respect to their particular mandates.

78. While blocking and filtering measures deny users access to specific content on the Internet, States have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating


intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.

79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.

85. Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States. Each State should thus develop a concrete and effective policy, in consultation with individuals from all sections of society, including the private sector and relevant Government ministries, to make the Internet widely available, accessible and affordable to all segments of the population.

These statements, opinions, and recommendations have led to the suggestion that Internet access itself is or should become a fundamental human right.[97][98]


Natural disasters and access


Natural disasters disrupt internet access in profound ways. This is important not only for the telecommunication companies that own the networks and the businesses that use them, but for emergency crews and displaced citizens as well. The situation is worsened when hospitals or other buildings necessary to disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters can be put to use in planning and recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages.[99]

One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study of local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable.[100] At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted.[101] Of those unreachable subnets, 73% were disrupted for four weeks or longer, and 57% were at network edges, where important emergency organizations such as hospitals and government agencies are mostly located.[100] Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service.[100] The company Cisco has revealed a Network Emergency Response Vehicle (NERV), a truck that makes portable communications possible for emergency responders despite traditional networks being disrupted.[102]

A second way natural disasters destroy internet connectivity is by severing submarine cables: the fiber-optic cables placed on the ocean floor that provide international internet connections.
The 2006 undersea earthquake near Taiwan (Richter scale 7.2) cut six out of seven international cables connected to that country and caused a tsunami that wiped out one of its cable landing stations.[103][104] The impact slowed or disabled internet connection for five days within the Asia-Pacific region, as well as between the region and the United States and Europe.[105]

With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012.[106][107] AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone. Theoretically, a natural disaster would not affect more than one availability zone.[108] The theory holds only as long as human error is not added to the mix. The major storm of June 2012 disabled only the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram.[109][110]
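The region/availability-zone design described above can be sketched as a simple failover policy: keep replicas in distinct zones and serve from the highest-priority replica whose zone is still up. This is an illustrative sketch only, not the AWS API; the zone names and replica roles are hypothetical.

```python
# Illustrative availability-zone failover; zone names are hypothetical.
REPLICAS = {
    "primary":   "us-east-1a",
    "secondary": "us-east-1b",
    "tertiary":  "us-east-1c",
}

def pick_serving_replica(healthy_zones: set) -> str:
    """Return the first replica (by priority) whose zone is still healthy,
    or None if every zone is down."""
    for role in ("primary", "secondary", "tertiary"):
        if REPLICAS[role] in healthy_zones:
            return role
    return None  # every zone down: a region-wide or correlated failure

# A storm takes out the primary's zone; traffic fails over to the secondary:
assert pick_serving_replica({"us-east-1b", "us-east-1c"}) == "secondary"
# Correlated failures (as in the June 2012 incident) defeat the design:
assert pick_serving_replica(set()) is None
```

The sketch makes the failure mode of the June 2012 outage concrete: the policy is only as good as the health of the backup zones, so an operational mistake that takes down the secondary and tertiary replicas leaves nothing to fail over to.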


References
[1] The 34 OECD countries are: Australia, Austria, Belgium, Canada, Chile, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom and the United States. OECD members (http:/ / www. oecd. org/ pages/ 0,3417,en_36734052_36761800_1_1_1_1_1,00. html), accessed 31 April 2012 [2] "OECD Broadband Report Questioned" (http:/ / www. websiteoptimization. com/ bw/ 0705/ ). Website Optimization. . Retrieved June 6, 2009. [3] Ben Segal (1995). A Short History of Internet Protocols at CERN (http:/ / www. cern. ch/ ben/ TCPHIST. html). . [4] Rseaux IP Europens (RIPE) [5] "Internet History in Asia" (http:/ / www. apan. net/ meetings/ busan03/ cs-history. htm). 16th APAN Meetings/Advanced Network Conference in Busan. . Retrieved 25 December 2005. [6] "Retiring the NSFNET Backbone Service: Chronicling the End of an Era" (http:/ / www. merit. edu/ networkresearch/ projecthistory/ nsfnet/ nsfnet_article. php), Susan R. Harris and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996 [7] "What is Broadband?" (http:/ / www. broadband. gov/ about_broadband. html/ ). The National Broadband Plan. US Federal Communications Commission. . Retrieved July 15, 2011. [8] "Inquiry Concerning the Deployment of Advanced Telecommunications Capability to All Americans in a Reasonable and Timely Fashion, and Possible Steps to Accelerate Such Deployment Pursuant to Section 706 of the Telecommunications Act of 1996, as Amended by the Broadband Data Improvement Act" (http:/ / transition. fcc. gov/ Daily_Releases/ Daily_Business/ 2010/ db0806/ FCC-10-148A1. pdf). GN Docket No. 10-159, FCC-10-148A1. Federal Communications Commission. August 6, 2010. . Retrieved July 12, 2011. [9] Naveen Bisht; James Connor. "Broadband to the Home: Trends and Issues" (http:/ / books. google. 
com/ books?id=ipmF3npkMngC& pg=PA1). Broadband Services, Applications, and Networks: Enabling Technologies and Business Models. International Engineering Consortium. p.1. ISBN978-1-931695-24-4. . [10] "How Broadband Works" (http:/ / www. explainthatstuff. com/ howbroadbandworks. html), Chris Woodford, Explain that Stuff, 20 August 2008. Retrieved 19 January. [11] Jeffrey A. Hart; Robert R. Reed; Franois Bar (November 1992). "The building of the internet: Implications for the future of broadband networks". Telecommunications Policy 16 (8): 666689. doi:10.1016/0308-5961(92)90061-S. [12] The Future of the Internet Economy: A Statistical Profile (http:/ / www. oecd. org/ dataoecd/ 24/ 5/ 48255770. pdf), Organization for Economic Co-Operation and Development (OECD), June 2011 [13] Willdig, Karl; Patrik Chen (August 1994). "What You Need to Know about Modems" (http:/ / fndcg0. fnal. gov/ Net/ modm8-94. txt). . Retrieved 2008-03-02. [14] Mitronov, Pavel (2001-06-29). "Modem compression: V.44 against V.42bis" (http:/ / www. digit-life. com/ articles/ compressv44vsv42bis/ ). Digit-Life.com. . Retrieved 2008-03-02. [15] "Birth of Broadband" (http:/ / www. itu. int/ osg/ spu/ publications/ birthofbroadband/ faq. html). ITU. September 2003. . Retrieved July 12, 2011. [16] "Recommendation I.113, Vocabulary of Terms for Broadband aspects of ISDN" (http:/ / www. itu. int/ rec/ dologin_pub. asp?lang=e& id=T-REC-I. 113-199706-I!!PDF-E). ITU-T. June 1997 (originally 1988). . Retrieved 19 July 2011. [17] "Sixth Broadband Deployment Report" (http:/ / hraunfoss. fcc. gov/ edocs_public/ attachmatch/ FCC-10-129A1. pdf). FCC. . Retrieved July 23, 2010. [18] Patel, Nilay (March 19, 2008). "FCC redefines "broadband" to mean 768kbit/s, "fast" to mean "kinda slow"" (http:/ / www. engadget. com/ 2008/ 03/ 19/ fcc-redefines-broadband-to-mean-768kbps-fast-to-mean-kinda/ ). Engadget. . Retrieved June 6, 2009. 
[19] "Virgin Medias ultrafast 100Mb broadband now available to over four million UK homes" (http:/ / mediacentre. virginmedia. com/ Stories/ Virgin-Media-s-ultrafast-100Mb-broadband-now-available-to-over-four-million-UK-homes-211c. aspx). News release. Virgin Media. June 10, 2011. . Retrieved August 18, 2011. [20] Tom Phillips (August 25, 2010). "'Misleading' BT broadband ad banned" (http:/ / www. metro. co. uk/ tech/ 839014-misleading-bt-broadband-ad-banned). UK Metro. . Retrieved July 24, 2011. [21] Gary A. Donahue (June 2007). Network Warrior (http:/ / shop. oreilly. com/ product/ 9780596101510. do). O'Reilly. p.600. ISBN0-596-10151-1. . [22] Dean, Tamara (2010). Network+ Guide to Networks, 5th Ed. [23] "Bonding: 112K, 168K, and beyond " (http:/ / www. 56k. com/ reports/ bonding. shtml), 56K.com [24] "Diamond 56k Shotgun Modem" (http:/ / www. maximumpc. com/ article/ features/ top_tech_blunders_10_products_massively_failed), maximumpc.com [25] William Stallings (1999). ISDN and Broadband ISDN with Frame Relay and ATM (http:/ / www. pearsonhighered. com/ educator/ product/ ISDN-and-Broadband-ISDN-with-Frame-Relay-and-ATM-4E/ 9780139737442. page) (4th ed.). Prentice Hall. p.542. ISBN0139737448. . [26] Telecommunications and Data Communications Handbook (http:/ / www. wiley. com/ WileyCDA/ WileyTitle/ productCd-0470396075. html), Ray Horak, 2nd edition, Wiley-Interscience, 2008, 791 p., ISBN 0-470-39607-5 [27] Dean, Tamara (2009). Network+ Guide to Networks (http:/ / www. cengage. com/ search/ productOverview. do?N=0& Ntk=P_Isbn13& Ntt=9781423902454) (5th ed.). Course Technology, Cengage Learning. ISBN1-4239-0245-9. . pp 312-315.



[28] "IEEE 802.3 Ethernet Working Group" (http:/ / www. ieee802. org/ 3/ ), web page, IEEE 802 LAN/MAN Standards Committee, accessed 8 May 2012 [29] Dean, Tamara (2009). Network+ Guide to Networks (http:/ / www. cengage. com/ search/ productOverview. do?N=0& Ntk=P_Isbn13& Ntt=9781423902454) (5th ed.). Course Technology, Cengage Learning. ISBN1-4239-0245-9. . p 322. [30] Dean, Tamara (2009). Network+ Guide to Networks (http:/ / www. cengage. com/ search/ productOverview. do?N=0& Ntk=P_Isbn13& Ntt=9781423902454) (5th ed.). Course Technology, Cengage Learning. ISBN1-4239-0245-9. . p 323. [31] "ADSL Theory" (http:/ / whirlpool. net. au/ wiki/ ?tag=ADSL_Theory), Australian broadband news and information, Whirlpool, accessed 3 May 2012 [32] "SDSL" (http:/ / docwiki. cisco. com/ wiki/ Digital_Subscriber_Line#SDSL), Internetworking Technology Handbook, Cisco DocWiki, 17 December 2009, accessed 3 May 2012 [33] "KPN starts VDSL trials" (http:/ / www. kpn. com/ artikel. htm?contentid=2895). KPN. . [34] "VDSL Speed" (http:/ / computer. howstuffworks. com/ vdsl2. htm). HowStuffWorks. . [35] "Industrial VDSL Ethernet Extender Over Coaxial Cable, ED3331" (http:/ / www. etherwan. com/ Product/ ViewProduct. asp?View=64). EtherWAN. . [36] "New ITU Standard Delivers 10x ADSL Speeds: Vendors applaud landmark agreement on VDSL2" (http:/ / www. itu. int/ newsroom/ press_releases/ 2005/ 06. html). News release (International Telecommunication Union). 27 May 2005. . Retrieved 22 September 2011. [37] Sturgeon, Jamie (October 18, 2010). "A smarter route to high-speed Net" (http:/ / www. financialpost. com/ entrepreneur/ smarter+ route+ high+ speed/ 3687154/ story. html). FP Entrepreneur (National Post). . Retrieved January 7, 2011. [38] "FTTH Council - Definition of Terms" (http:/ / ftthcouncil. eu/ documents/ Reports/ FTTH-Definitions-Revision_January_2009. pdf). FTTH Council. January 9, 2009. . Retrieved September 1, 2011. [39] "FTTx Primer" (http:/ / www. fiopt. com/ primer. 
php), Fiopt Communication Services (Calgary), July 2008 [40] "Big gig: NBN to be 10 times faster" (http:/ / www. abc. net. au/ news/ 2010-08-12/ big-gig-nbn-to-be-10-times-faster/ 941408), Emma Rodgers, ABC News, Australian Broadcasting Corporation, 12 August 2010 [41] "Italy gets fiber back on track" (http:/ / www. telecomseurope. net/ content/ italy-gets-fiber-back-track), Michael Carroll, TelecomsEMEA.net, 20 September 2010 [42] "Pirelli Broadband Solutions, the technology partner of fastweb network Ngan" (http:/ / www. freevoipcallsolution. com/ 2010/ 08/ pirelli-broadband-solutions-technology. html), 2 August 2010 [43] "Telecom Italia rolls out 100 Mbps FTTH services in Catania" (http:/ / www. fiercetelecom. com/ story/ telecom-italia-rolls-out-100-mbps-ftth-services-catania/ 2010-11-03?utm_medium=rss& utm_source=rss), Sean Buckley, FierceTelecom, 3 November 2010 [44] "SaskTel Announces 2011 Network Investment and Fiber to the Premises Program" (http:/ / www. sasktel. com/ about-us/ news/ 2011-news-releases/ sasktel-announces-2011-network-investment-and-fiber-to-the-premises. html), SaskTel, Saskatchewan Telecommunications Holding Corporation, 5 April 2011 [45] "How Broadband Over Powerlines Works" (http:/ / computer. howstuffworks. com/ bpl. htm), Robert Valdes, How Stuff Works, accessed 5 May 2012 [46] "North American versus European distribution systems" (http:/ / electrical-engineering-portal. com/ north-american-versus-european-distribution-systems), Edvard, Technical articles, Electrical Engineering Portal, 17 November 2011 [47] B-ISDN asynchronous transfer mode functional characteristics (http:/ / www. itu. int/ rec/ dologin_pub. asp?lang=e& id=T-REC-I. 150-199902-I!!PDF-E& type=items), ITU-T Recommendation I.150, February 1999, International Telecommunications Union [48] "Frame Relay" (http:/ / searchenterprisewan. techtarget. 
com/ definition/ frame-relay), Margaret Rouse, TechTarget, September 2005 [49] "Wi-Fi (wireless networking technology)" (http:/ / www. britannica. com/ EBchecked/ topic/ 1473553/ Wi-Fi). Encyclopdia Britannica. . Retrieved 2010-02-03. [50] Lemstra, Wolter; Hayes, Vic; Groenewegen, John (2010), The Innovation Journey of Wi-Fi: The Road To Global Success, Cambridge University Press, ISBN0-521-19971-9. [51] Discover and Learn (http:/ / www. wi-fi. org/ discover-and-learn), The Wi-Fi Alliance, , retrieved 6 May 2012. [52] "802.11n Delivers Better Range" (http:/ / www. wi-fiplanet. com/ tutorials/ article. php/ 3680781). Wi-Fi Planet. 2007-05-31. . [53] Joshua Bardwell; Devin Akin (2005). Certified Wireless Network Administrator Official Study Guide (http:/ / books. google. com/ books?id=QnMunBGVDuMC& printsec=frontcover& dq=cwna+ official+ study+ guide& hl=en& ei=EJaXTpSaFMPSiALTu4HCDQ& sa=X& oi=book_result& ct=result& resnum=1& ved=0CDAQ6AEwAA#v=onepage& q& f=false) (Third ed.). McGraw-Hill. p.418. ISBN978-0-07-225538-6. . [54] "Member Directory" (http:/ / www. wispa. org/ member-directory), Wireless Internet Service Providers Association (WISPA), accessed 5 May 2012 [55] "WiMax Forum - Technology" (http:/ / www. wimaxforum. org/ technology/ ). . Retrieved 2008-07-22. [56] Carl Weinschenk (April 16, 2010). "Speeding Up WiMax". IT Business Edge. "Today the initial WiMax system is designed to provide 30 to 40 megabit-per-second data rates." [57] "Internet in the Sky" (http:/ / iml. jou. ufl. edu/ projects/ Fall99/ Coffey/ ), D.J. Coffey, accessed 8 May 2012 [58] "How does satellite Internet operate?" (http:/ / computer. howstuffworks. com/ question606. htm), How Stuff Works, Retrieved 5 March 2009.



[59] Mustafa Ergen (2009). Mobile Broadband: including WiMAX and LTE (http:/ / www. springerlink. com/ content/ 978-0-387-68189-4). Springer Science+Business Media. ISBN978-0-387-68189-4. . [60] "The World in 2011: ITC Facts and Figures" (http:/ / www. itu. int/ ITU-D/ ict/ facts/ 2011/ material/ ICTFactsFigures2011. pdf), International Telecommunications Unions (ITU), Geneva, 2011 [61] "Local Multipoint Distribution Service (LDMS)" (http:/ / www. cse. wustl. edu/ ~jain/ cis788-99/ ftp/ lmds/ index. html), Vinod Tipparaju, November 23, 1999 [62] "LMDS: Broadband Out of Thin Air " (http:/ / www. angelfire. com/ nd/ ramdinchacha/ DEC00. html), Niraj K Gupta, from My Cell, Voice & Data, December 2000 [63] "Review and Analysis of Local Multipoint Distribution System (LMDS) to Deliver Voice, Data, Internet, and Video Services" (http:/ / www. ijest. info/ docs/ IJEST09-01-01. pdf), S.S. Riaz Ahamed, International Journal of Engineering Science and Technology, Vol. 1(1), October 2009, pp. 1-7 [64] "What is a fair price for Internet service?" (http:/ / www. theglobeandmail. com/ news/ technology/ gadgets-and-gear/ hugh-thompson/ what-is-a-fair-price-for-internet-service/ article1890596/ ), Hugh Thompson, Globe and Mail (Toronto), 1 February 2011 [65] Hansell, Saul (January 17, 2008). "Time Warner: Download Too Much and You Might Pay $30 a Movie" (http:/ / bits. blogs. nytimes. com/ 2008/ 01/ 17/ time-warner-download-too-much-and-you-might-pay-30-a-movie/ ?ref=technology). The New York Times. . Retrieved June 6, 2009. [66] "On- and Off-Peak Quotas" (http:/ / www. comparebroadband. com. au/ article_64_On--and-Off-Peak-Quotas. htm), Compare Broadband, 12 July 2009 [67] Cauley, Leslie (April 20, 2008). "Comcast opens up about how it manages traffic" (http:/ / abcnews. go. com/ Technology/ Story?id=4692338& page=1). ABC News. . Retrieved June 6, 2009. [68] Lowry, Tom (March 31, 2009). "Time Warner Cable Expands Internet Usage Pricing" (http:/ / www. businessweek. 
com/ technology/ content/ mar2009/ tc20090331_726397. htm?campaign_id=rss_daily). BusinessWeek. . Retrieved June 6, 2009. [69] Axelbank, Evan (April 16, 2009). "Time Warner Drops Internet Plan" (http:/ / rochesterhomepage. net/ fulltext?nxd_id=85011). Rochester Homepage. . Retrieved December 6, 2010. [70] "The lives of Asian youth" (http:/ / www. synovate. com/ changeagent/ index. php/ site/ full_story/ the_lives_of_asian_youth/ ), Change Agent, August 2005 [71] Giga.com (http:/ / gigaom. com/ 2010/ 07/ 09/ worldwide-broadband-subscribers/ ) Nearly Half a Billion Broadband Subscribers [72] Amir Hatem Ali, A. (2011). "The power of social media in developing nations" (http:/ / harvardhrj. com/ wp-content/ uploads/ 2009/ 09/ 185-220. pdf), Human Rights Journal, Harvard Law School, Vol. 24, Issue 1 (2011), pp. 185-219 [73] Wattal, S.; Yili Hong; Mandviwalla, M.; Jain, A., "Technology Diffusion in the Society: Analyzing Digital Divide in the Context of Social Class (http:/ / ieeexplore. ieee. org/ xpl/ freeabs_all. jsp?arnumber=5718600)", Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS), pp.1-10, 47 January 2011, ISBN 978-0-7695-4282-9 [74] McCollum, S., "Getting Past the 'Digital Divide'" (http:/ / www. tolerance. org/ magazine/ number-39-spring-2011/ getting-past-digital-divide), Teaching Tolerance, No. 39 (Spring 2011), pp. 46-49, and Education Digest, Vol. 77 No. 2 (October 2011), pp. 52-55 [75] Definitions of World Telecommunication/ICT Indicators, March 2010 (http:/ / www. itu. int/ ITU-D/ ict/ material/ TelecomICT_Indicators_Definition_March2010_for_web. pdf), International Telecommunication Union, March 2010. Accessed on 21 October 2011. [76] Zeller Jr, Tom (October 23, 2006). "LINK BY LINK; The Internet Black Hole That Is North Korea" (http:/ / query. nytimes. com/ gst/ fullpage. html?res=9E0CEEDF173FF930A15753C1A9609C8B63& n=Top/ Reference/ Times Topics/ People/ K/ Kim Jong Il). The New York Times. . Retrieved May 5, 2010. 
[77] The state of the Internet in Cuba, January 2011 (http:/ / som. csudh. edu/ fac/ lpress/ cuba/ chapters/ lpdraft2. docx), Larry Press, Professor of Information Systems at California State University, January 2011 [78] "Table 108: Number and internet access of instructional computers and rooms in public schools, by selected school characteristics: Selected years, 1995 through 2008" (http:/ / nces. ed. gov/ programs/ digest/ d10/ tables/ dt10_108. asp), 2010 Tables and Figures, National Center for Education Statistics, U.S. Department of Education, August 2010, accessed 28 April 2012 [79] "Changes in Cuba: From Fidel to Raul Castro" (http:/ / books. google. com/ books?id=Q2qQZfkOCNsC& pg=PA114& lpg=PA114& dq=Private+ ownership+ of+ computers+ in+ Cuba& source=bl& ots=bKMn5ZraA6& sig=8CcYmtODxcyXSr9LxtjatH_vkdE& hl=en& ei=ydWPTuKbLcaWtweR_qCNDA& sa=X& oi=book_result& ct=result& resnum=4& ved=0CDAQ6AEwAw#v=onepage& q=Private ownership computers& f=false), Perceptions of Cuba: Canadian and American policies in comparative perspective, Lana Wylie, University of Toronto Press Incorporated, 2010, p. 114, ISBN 978-1-4426-4061-0 [80] "Cuba to keep internet limits" (http:/ / www. allbusiness. com/ media-telecommunications/ internet-www/ 11795551-1. html). Agence France-Presse (AFP). 9 February 2009. . [81] "Declaration of Principles" (http:/ / www. itu. int/ wsis/ docs/ geneva/ official/ dop. html), WSIS-03/GENEVA/DOC/4-E, World Summit on the Information Society, Geneva, 12 December 2003 [82] Scott, Aaron (August 11, 2011). "Trends in broadband adoption" (http:/ / www. pewinternet. org/ Reports/ 2010/ Home-Broadband-2010/ Part-1/ Broadband-adoption-among-African-Americans-grew-significantly-between-2009-and-2010. aspx). Home Broadband 2010. Pew Internet & American Life Project. . Retrieved December 23, 2011. [83] Wireless World: Wi-Fi now in rural areas (http:/ / www. physorg. com/ news71497509. html) July 7, 2006



[84] "Tegola project linking Skye, Knoydart and Loch Hourne" (http://www.tegola.org.uk). Retrieved 2010-03-16.
[85] "Broadband for Rural Nova Scotia" (http://www.gov.ns.ca/econ/broadband/), Economic and Rural Development, Nova Scotia, Canada, accessed 27 April 2012
[86] "Judgement 12790 of the Supreme Court" (http://200.91.68.20/pj/scij/busqueda/jurisprudencia/jur_texto_sentencia.asp?nValor2=483874&tem1=013141&param7=0&lResultado=3&nValor1=1&strTipM=T&strLib=LIB), File 09-013141-0007-CO, 30 July 2010. (English translation (http://www.google.com/translate_c?langpair=en&u=http://200.91.68.20/pj/scij/busqueda/jurisprudencia/jur_texto_sentencia.asp?nValor2=483874&tem1=013141&param7=0&lResultado=3&nValor1=1&strTipM=T&strLib=LIB))
[87] "Estonia, where being wired is a human right" (http://www.csmonitor.com/2003/0701/p07s01-woeu.html), Colin Woodard, Christian Science Monitor, 1 July 2003
[88] "Finland makes 1Mb broadband access a legal right" (http://news.cnet.com/8301-17939_109-10374831-2.html), Don Reisinger, CNET News, 14 October 2009
[89] "Top French Court Declares Internet Access 'Basic Human Right'" (http://www.foxnews.com/story/0,2933,525993,00.html). London Times (Fox News). 12 June 2009.
[90] Constitution of Greece as revised by the parliamentary resolution of May 27th 2008 of the VIIIth Revisionary Parliament (http://www.hellenicparliament.gr/UserFiles/f3c70a23-7696-49db-9148-f24dce6a27c8/001-156 aggliko.pdf), English language translation, Hellenic Parliament
[91] Sarah Morris (17 November 2009). "Spain govt to guarantee legal right to broadband" (http://www.reuters.com/article/idUSLH61554320091117). Reuters.
[92] Klang, Mathias; Murray, Andrew (2005). Human Rights in the Digital Age (http://www.psypress.com/9781904385318). Routledge. p. 1.
[93] For the BBC poll, Internet users are those who used the Internet within the previous six months.
[94] "BBC Internet Poll: Detailed Findings" (http://news.bbc.co.uk/1/shared/bsp/hi/pdfs/08_03_10_BBC_internet_poll.pdf), BBC World Service, 8 March 2010
[95] "Internet access is 'a fundamental right'" (http://news.bbc.co.uk/2/hi/8548190.stm), BBC News, 8 March 2010
[96] "VI. Conclusions and recommendations" (http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf), Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, Human Rights Council, Seventeenth session, Agenda item 3, United Nations General Assembly, 16 May 2011
[97] "Can the Internet be a Human Right?" (http://www.du.edu/gsis/hrhw/volumes/2004/best-2004.pdf), Michael L. Best, Human Rights & Human Welfare, Vol. 4 (2004)
[98] Kravets, David (3 June 2011). "U.N. Report Declares Internet Access a Human Right" (http://www.wired.com/threatlevel/2011/06/internet-a-human-right/). Wired.
[99] Measuring the Resilience of the Global Internet Infrastructure System (http://www.stevens-tech.edu/csr/fileadmin/csr/Publications/Omer_Measuring_the_Resilience_of_the_Global_Internet__Infrastructure.pdf), 2009 3rd Annual IEEE Systems Conference, 156-162.
[100] Inference of Network-Service Disruption upon Natural Disasters (http://users.ece.gatech.edu/~jic/katrina.pdf), accessed 12/05/2012.
[101] Impact of Hurricane Katrina on Internet Infrastructure (http://www.renesys.com/tech/presentations/pdf/Renesys-Katrina-Report-9sep2005.pdf), Renesys Report, 9 September 2005, accessed 12/05/2012.
[102] Cisco trucks help restore internet after disasters (http://abclocal.go.com/kgo/story?section=news/business&id=8867345), ABC News report, 30 October 2012, accessed 12/05/2012.
[103] Taiwan's Earthquake and Tsunami Caused Internet Access's Interference (http://www.telkom.co.id/media-corner/press-release/taiwan-s-earthquake-and-tsunami-caused-internet-access-s-interference.html), Telkom Indonesia Press Release, 27 December 2006, accessed 12/05/2012.
[104] Impact of Taiwan Earthquake on Internet Access (http://www.ust.hk/itsc/channel/2007feb/earthquake.html), Choy, C. (2007). Channel, The Hong Kong University of Science & Technology, 46. Accessed 12/05/2012.
[105] Understanding and Mitigating Catastrophic Disruption and Attack (http://www.noblis.org/NewsPublications/Publications/TechnicalPublications/SigmaJournal/Documents/Sigma_RE_UnderstandingAndMitigating.pdf), Masi, D., Smith, E., Fischer, M., Telecommunications and Cybersecurity, Noblis. Accessed 12/05/2012.
[106] Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region (http://aws.amazon.com/message/65648/), AWS message, 29 April 2011, accessed 12/05/2012.
[107] Summary of the AWS Service Event in the US East Region (https://aws.amazon.com/message/67457/), AWS message, 2 July 2012, accessed 12/05/2012.
[108] AWS is down: Why the sky is falling (http://justinsb.posterous.com/aws-down-why-the-sky-is-falling), justinsb's posterous, 21 April 2011, accessed 12/05/2012.
[109] Amazon Web Services June 2012 Outage Explained (http://cloud-computing-today.com/2012/06/18/amazon-web-services-june-2012-outage-explained/), Cloud Computing Today, 18 June 2012, accessed 12/05/2012.
[110] Will Natural Disasters Kill the Cloud? (http://crashcloud.com/will-natural-disasters-kill-cloud/), CrashCloud, 21 August 2012, accessed 12/05/2012.


External links
European broadband portal (http://ec.europa.eu/information_society/eeurope/i2010/digital_divide/index_en.htm#European_broadband_portal)
Corporate vs. Community Internet (http://www.alternet.org/story/22216/), AlterNet, June 14, 2005, on the clash between US cities' attempts to expand municipal broadband and corporate attempts to defend their markets
Broadband data (http://www.google.com/publicdata/directory#!q=broadband), from Google Public Data
US National Broadband Maps (http://broadbandmap.gov)

Languages used on the Internet


Languages used on the Internet provides information on the number of Internet users and the number of Web sites on the Internet by language.

Languages used
Most web pages on the Internet are in English. A study by W3Techs found that, as of December 2011, more than 56% of all websites used English as their content language. Other languages used on at least 2% of websites were German, Russian, Japanese, Spanish, Chinese, French, Italian and Portuguese. Note that these figures cover only the one million most-visited websites according to Alexa.com (about 0.27% of all websites as of December 2011), and that in most cases the language is identified from a site's home page alone. As a consequence, the figures significantly overstate the share of many languages (especially English) compared with the true figures for the web as a whole, which remain unknown but which some sources estimate at below 50% for English (see, for instance, NET.LANG: Towards a multilingual cyberspace [1]). The use of English online has increased by around 281% over the past ten years, far less than the growth of Spanish (743%), Chinese (1,277%), Russian (1,826%) or Arabic (2,501%) over the same period.[2] The non-English-language Internet is expanding rapidly, with English being used by only 27% of users worldwide. A study on the presence of Romance languages on the Internet, published by the Latin Union in collaboration with FUNREDES, found that as of November 2007, 45% of web pages were written in English, 7.8% in Spanish, 4.41% in French, 2.66% in Italian, 1.39% in Portuguese, 0.28% in Romanian and 5.9% in German.[3]


Internet users by language

Estimates of the number of Internet users by language as of 31 May 2011:[4]

Percentage of Internet users by language

Rank   Language     Internet users   Percentage
1      English      565,004,000      27%
2      Chinese      509,965,000      25%
3      Spanish      164,969,000       8%
4      Japanese      99,182,000       5%
5      Portuguese    82,587,000       4%
6      German        75,423,000       4%
7      Arabic        65,365,000       3%
8      French        59,779,000       3%
9      Russian       59,700,000       3%
10     Korean        39,440,000       2%
       Others       350,557,000      17%


Content languages for websites

Estimates of the percentages of websites using various content languages as of 30 December 2011:[5]

Rank   Language     Percentage
1      English      56.6%
2      German        6.5%
3      Russian       4.8%
4      Japanese      4.7%
5      Spanish       4.6%
6      Chinese       4.5%
7      French        3.9%
8      Italian       2.1%
9      Portuguese    2.0%
10     Polish        1.4%
11     Arabic        1.3%
12     Dutch         1.1%
13     Turkish       1.1%
14     Swedish       0.7%
15     Persian       0.7%
16     Czech         0.5%
17     Romanian      0.4%
18     Korean        0.3%
19     Greek         0.3%
20     Hungarian     0.3%
21     Thai          0.3%
22     Vietnamese    0.3%
23     Danish        0.3%
24     Indonesian    0.3%
25     Finnish       0.2%
26     Norwegian     0.2%
27     Bulgarian     0.2%
28     Slovak        0.2%
29     Hebrew        0.1%
30     Croatian      0.1%
31     Lithuanian    0.1%
32     Serbian       0.1%
33     Catalan       0.1%
34     Slovenian     0.1%
35     Ukrainian     0.1%

All other languages are used in less than 0.1% of websites.

References
[1] http://net-lang.net/lang_en
[2] Rotaru, Alexandru. "The foreign language internet is good for business" (http://www.scottmclay.co.uk/foreign-language-internet-good-business/). Retrieved 21 June 2011.
[3] "Limbile și culturile pe Internet - Studiu 2007 (Languages and cultures on the Internet - 2007 study)" (http://dtil.unilat.org/LI/2007/ro/resultados_ro.htm) (in Romanian). Latin Union. Retrieved 2011-02-02.
[4] "Number of Internet Users by Language" (http://www.internetworldstats.com/stats7.htm), Internet World Stats, Miniwatts Marketing Group, 31 May 2011, accessed 22 April 2012
[5] "Usage of content languages for websites" (http://w3techs.com/technologies/overview/content_language/all). W3Techs.com. Retrieved 30 December 2011.

External links
Internet World Stats (http://www.internetworldstats.com)
Internet World Stats: Global Internet usage by language (http://www.internetworldstats.com/stats7.htm), latest statistics
Internet World Stats: Internet users in the world by years (http://www.internetworldstats.com/emarketing.htm)
European Travel Commission New Media Trend Watch (http://www.newmediatrendwatch.com/)
Estimation of English and non-English Language Use on the WWW, 2000 (http://arxiv.org/abs/cs.cl/0006032), web pages, 32 Latin-character languages
World GDP by Language (http://unicode.org/notes/tn13/)
Internet Users Map 2005 (http://explomap.free.fr/?p=3)
CIA - The World Factbook (https://www.cia.gov/library/publications/the-world-factbook/)
Writing the Web's Future in Many Languages - NYTimes.com (http://www.nytimes.com/2008/12/31/technology/internet/31hindi.html?partner=rss&emc=rss)
English translation of the 23rd CNNIC Statistical Survey Report on Internet Usage in China (http://www.nanjingmarketinggroup.com/knowledge/23rd-report-internet-development-in-China)
List of CNNIC statistical reports (http://www.cnnic.cn/en/index/0O/02/index.htm)

Measuring Linguistic Diversity on the Internet (http://www.uis.unesco.org/ev.php?ID=6341_201&ID2=DO_TOPIC), UNESCO (2006)
Twelve years of measuring linguistic diversity in the Internet (http://unesdoc.unesco.org/images/0018/001870/187016e.pdf), UNESCO (2009)
Language Observatory (http://gii2.nagaokaut.ac.jp/gii/blog/lopdiary.php/), Japan Science and Technology Agency
FUNREDES Observatory of linguistic and cultural diversity on the Internet (http://funredes.org/lc/english/inicio/)


List of countries by number of Internet subscriptions


This is a list of countries by number of Internet users, mostly for the year 2011. Internet users are persons who used the Internet from any device (including mobile phones) within the last 12 months. Estimates are derived either from household surveys or from Internet subscription data.[1] Note: All United Nations member states are included, except South Sudan. Taiwan is listed as a sovereign country.
Rank   Country                              Internet users[2]   Percentage of population[3]   Year
1      China                                513,100,000         38.4     2011
2      United States                        245,203,319         78.3     2011
3      India                                150,000,000         12.4     2011
4      Japan                                101,228,736         80.0     2011
5      Brazil                               81,798,000          42.2     2011
6      Germany                              67,364,898          82.7     2011
7      Russia                               63,450,004          44.3     2011
8      Indonesia                            55,000,000          22.4     2011
9      United Kingdom                       52,731,209          84.1     2011
10     France                               50,290,226          77.2     2011
11     Nigeria                              45,039,711          26.5     2011
12     Mexico                               42,000,000          36.5     2011
13     Korea, South                         40,329,660          82.7     2011
14     Iran                                 42,000,000          53.30    2012
15     Italy                                35,800,000          58.7     2011
16     Turkey                               35,000,000          44.4     2012
17     Vietnam                              30,858,742          34.1     2011
18     Spain                                30,654,678          65.6     2011
19     Pakistan                             29,128,970          15.5     2011
20     Argentina                            28,000,000          67.0     2011
21     Canada                               27,757,540          81.6     2011
22     Colombia                             25,000,000          55.9     2011
23     Poland                               23,852,486          62.0     2011
24     Egypt                                21,691,776          26.4     2011
25     Australia                            19,554,832          89.8     2011
26     Malaysia                             17,723,000          61.7     2011
27     Taiwan                               16,685,471          72.00    2011
28     Philippines                          33,600,000          33.0     2011
29     Morocco                              16,303,864          51.00    2011
30     Thailand                             15,812,676          23.70    2011
31     Netherlands                          15,371,396          92.30    2011
32     Ukraine                              13,811,220          30.60    2011
33     Saudi Arabia                         12,412,559          47.50    2011
34     Kenya                                11,744,181          28.00    2011
35     Venezuela                            11,115,096          40.22    2011
36     Peru                                 10,675,864          36.50    2011
37     South Africa                         10,290,847          21.00    2011
38     Romania                              9,642,383           44.02    2011
39     Chile                                9,116,147           53.89    2011
40     Uzbekistan                           8,494,837           30.20    2011
41     Sweden                               8,270,742           91.00    2011
42     Belgium                              8,136,552           78.00    2011
43     Bangladesh                           7,928,527           5.00     2011
44     Kazakhstan                           7,787,031           45.00    2011
45     Czech Republic                       7,435,798           72.97    2011
46     Switzerland                          6,688,285           85.20    2011
47     Austria                              6,557,389           79.80    2011
48     Sudan                                6,375,892           19.00    2011
49     Portugal                             5,950,449           55.30    2011
50     Hungary                              5,885,877           59.00    2011
51     Greece                               5,702,872           53.00    2011
52     Tanzania                             5,470,903           12.00    2011
53     Nepal                                5,425,269           20.38    2011
-      Hong Kong                            5,306,268           74.50    2011
54     Israel                               5,231,136           70.00    2011
55     Algeria                              5,131,601           14.00    2011
56     Syria                                5,066,494           22.50    2011
57     Denmark                              4,976,899           90.00    2011
58     Ecuador                              4,712,306           31.40    2011
59     Finland                              4,700,192           89.37    2011
60     Azerbaijan                           4,698,640           50.00    2011
61     Norway                               4,408,931           93.97    2011
62     Uganda                               4,237,239           13.01    2011
63     Tunisia                              4,156,012           39.10    2011
64     Slovakia                             4,077,107           74.44    2011
65     Singapore                            3,935,090           75.00    2011
66     Belarus                              3,825,957           39.60    2011
67     New Zealand                          3,689,698           86.00    2011
68     Bulgaria                             3,617,754           51.00    2011
69     United Arab Emirates                 3,604,065           70.00    2011
70     Yemen                                3,597,097           14.91    2011
71     Ireland                              3,588,244           76.82    2011
72     Dominican Republic                   3,534,610           35.50    2011
73     Ghana                                3,401,619           14.11    2011
74     Sri Lanka                            3,192,587           15.00    2011
75     Croatia                              3,170,498           70.71    2011
76     Serbia                               3,085,054           42.20    2011
77     Bolivia                              3,035,605           30.00    2011
78     Angola                               2,592,409           14.78    2011
79     Cuba                                 2,575,587           23.23    2011
80     Bosnia and Herzegovina               2,329,415           60.00    2011
-      Palestinian Authority                2,324,141           55.00    2011
81     Lithuania                            2,299,873           65.05    2011
82     Jordan                               2,271,387           34.90    2011
83     Senegal                              2,212,665           17.50    2011
84     Lebanon                              2,154,413           52.00    2011
85     Oman                                 2,059,012           68.00    2011
86     Costa Rica                           1,927,648           42.12    2011
87     Kuwait                               1,925,956           74.20    2011
88     Zimbabwe                             1,897,236           15.70    2011
-      Puerto Rico                          1,779,211           48.00    2011
89     Uruguay                              1,700,587           51.40    2011
90     Georgia                              1,676,596           36.56    2011
91     Guatemala                            1,621,250           11.73    2011
92     Qatar                                1,594,060           86.20    2011
93     Latvia                               1,580,335           71.68    2011
94     Zambia                               1,543,829           11.50    2011
95     Paraguay                             1,543,715           23.90    2011
96     Iraq                                 1,519,979           5.00     2011
97     Afghanistan                          1,487,878           5.00     2011
98     Panama                               1,477,617           42.70    2011
99     Albania                              1,467,387           49.00    2011
100    Slovenia                             1,440,066           72.00    2011
101    Moldova                              1,403,766           38.00    2011
102    Honduras                             1,294,827           15.90    2011
103    Macedonia                            1,177,845           56.70    2011
104    Kyrgyzstan                           1,090,155           20.00    2011
105    El Salvador                          1,074,012           17.69    2011
106    Tajikistan                           993,824             13.03    2011
107    Mozambique                           986,801             4.30     2011
108    Cameroon                             985,565             5.00     2011
109    Estonia                              981,467             76.50    2011
110    Libya                                980,307             17.00    2011
111    Ethiopia                             974,476             1.10     2011
112    Bahrain                              935,323             77.00    2011
113    Jamaica                              903,540             31.50    2011
114    Congo, Democratic Republic of the    860,554             1.20     2011
115    Haiti                                807,615             8.37     2010
116    Rwanda                               795,930             7.00     2011
117    Trinidad and Tobago                  677,583             55.20    2011
118    Cyprus                               646,298             57.68    2011
119    Mongolia                             626,664             20.00    2011
120    Nicaragua                            600,628             10.60    2011
121    Laos                                 582,949             9.00     2011
122    Burma                                529,198             0.98     2011
123    Malawi                               528,779             3.33     2011
124    Burkina Faso                         502,544             3.00     2011
125    Côte d'Ivoire                        473,092             2.20     2011
126    Luxembourg                           457,451             90.89    2011
127    Cambodia                             455,753             3.10     2011
128    Mauritius                            455,649             34.95    2011
129    Armenia                              453,952             15.30    2009
130    Madagascar                           406,998             1.90     2011
131    Eritrea                              368,248             6.20     2011
-      Macau                                332,342             58.00    2011
132    Benin                                326,376             3.50     2011
133    Mali                                 300,648             2.00     2011
134    Iceland                              295,567             95.02    2011
135    Malta                                282,648             69.22    2011
136    Montenegro                           264,723             40.00    2011
137    Namibia                              257,710             12.00    2011
138    Turkmenistan                         249,875             5.00     2011
139    Swaziland                            248,458             18.13    2011
140    Fiji                                 247,275             28.00    2011
141    Guyana                               238,326             32.00    2011
142    Congo, Republic of the               237,660             5.60     2011
143    Togo                                 237,020             3.50     2011
144    Brunei                               225,058             56.00    2011
145    Barbados                             205,756             71.77    2011
146    Niger                                205,423             1.30     2011
147    Chad                                 204,420             1.90     2011
148    Bahamas, The                         203,653             65.00    2011
149    Gambia, The                          195,433             10.87    2011
150    Suriname                             177,011             32.00    2011
151    Cape Verde                           165,152             32.00    2011
152    Bhutan                               148,770             21.00    2011
153    Mauritania                           147,674             4.50     2011
154    Botswana                             144,578             7.00     2011
155    Guinea                               137,813             1.30     2011
156    Maldives                             134,300             34.00    2011
-      French Polynesia                     133,127             49.00    2011
-      New Caledonia                        128,138             50.00    2011
157    Gabon                                126,133             8.00     2011
158    Somalia                              124,071             1.25     2011
159    Papua New Guinea                     123,752             2.00     2011
160    Liberia                              113,603             3.00     2011
161    Burundi                              113,400             1.11     2011
162    Central African Republic             108,901             2.20     2011
163    Lesotho                              81,323              4.22     2011
-      Guam                                 80,684              50.64    2009
164    Antigua and Barbuda                  72,065              82.00    2011
165    Andorra                              68,708              81.00    2011
166    Saint Lucia                          67,854              42.00    2011
-      Bermuda                              60,668              88.34    2011
-      Aruba                                60,557              57.07    2011
-      Guernsey                             54,418              83.63    2011
167    Djibouti                             52,995              7.00     2011
168    Saint Vincent and the Grenadines     44,674              43.01    2011
169    Belize                               44,033              14.00    2010
170    Guinea-Bissau                        42,663              2.67     2011
-      Faroe Islands                        39,774              80.73    2011
171    Comoros                              39,704              5.50     2011
172    Equatorial Guinea                    39,042              6.00     2010
173    Seychelles                           38,497              43.16    2011
174    Saint Kitts and Nevis                37,922              76.00    2010
175    Dominica                             37,443              51.31    2011
-      Greenland                            36,909              64.00    2011
176    São Tomé and Príncipe                36,191              20.16    2011
177    Grenada                              36,076              33.46    2010
-      Cayman Islands                       35,694              69.47    2011
178    Solomon Islands                      34,313              6.00     2011
-      Jersey                               33,898              36.00    2011
179    Liechtenstein                        30,959              85.00    2011
-      Virgin Islands, U.S.                 29,234              27.40    2009
180    Tonga                                26,479              25.00    2011
181    Monaco                               22,940              75.00    2010
182    Micronesia, Federated States of      21,431              20.00    2010
183    Vanuatu                              19,620              8.00     2010
-      Gibraltar                            18,821              65.00    2011
184    San Marino                           15,781              49.60    2011
185    Samoa                                13,440              7.00     2010
186    Sierra Leone                         13,344              0.26     2009
-      Virgin Islands, British              10,967              37.00    2010
187    Kiribati                             10,074              10.00    2011
188    East Timor                           10,039              0.90     2011
-      Anguilla                             7,176               48.60    2010
189    Palau                                5,438               26.97    2004
190    Tuvalu                               3,163               30.00    2011
-      Falkland Islands[4]                  2,908               96.38    2011
-      Saint Helena                         2,541               33.00    2011
191    Marshall Islands                     2,288               3.55     2009
-      Mayotte                              1,802               1.21     2000
-      Montserrat                           1,383               26.90    2011
-      Wallis and Futuna                    1,337               8.68     2011
-      Niue                                 1,125               74.48    2009
192    Korea, North[5]                      605                 0.00     2009
193    Nauru                                556                 6.00     2010
-      Ascension                            318                 36.00    2011
-      Tokelau[6]                           20                  1.50     2003

Worldwide Internet users

                                  2006         2011(a)
World population                  6.5 billion  7 billion
Not using the Internet(b)         82%          65%
Using the Internet(b)             18%          35%
Users in the developing world(b)  8%           22%
Users in the developed world(b)   10%          13%
Users in China(b)                 2%           8%

(a) Estimate. (b) Share of world population.
Source: International Telecommunications Union.[7]

Internet users by region

                                    2006(b)   2011(a,b)
Africa                              3%        13%
Americas                            39%       56%
Arab States                         11%       29%
Asia and Pacific                    11%       27%
Commonwealth of Independent States  13%       48%
Europe                              50%       74%

(a) Estimate. (b) Share of regional population.
Source: International Telecommunications Union.[8]


References
[1] Definitions of World Telecommunication/ICT Indicators, March 2010 (http://www.itu.int/ITU-D/ict/material/TelecomICT_Indicators_Definition_March2010_for_web.pdf), International Telecommunication Union, March 2010. Accessed 30 September 2011.
[2] Population estimates obtained from Internet World Stats (http://www.internetworldstats.com/list2.htm), Miniwatts Marketing Group, accessed 6 October 2012. Population for India obtained from (http://www.bbc.co.uk/news/technology-20297872), Internet and Mobile Association of India (IAMAI), accessed 12 November 2012. Population for the Falkland Islands obtained from World Population Prospects: The 2010 Revision (http://esa.un.org/unpd/wpp/index.htm), Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat, accessed 19 August 2012. Population for Ascension Island obtained from About Ascension (http://www.ascension-island.gov.ac/about), Ascension Island Government, accessed 19 August 2012.
[3] Percentage of Individuals using the Internet 2000-2011 (http://www.itu.int/ITU-D/ict/statistics/material/excel/Individuals using the Internet2000-2011.xls), International Telecommunication Union, accessed 19 August 2012.
[4] Population for 2010 was used.
[5] The figure of 605 is the mean number of Internet users given a percentage between 0.0000000 and 0.0049999, both of which round down to 0.00.
[6] Population for 31 March 2010 was used.
[7] "The World in 2011: ICT Facts and Figures" (http://www.itu.int/ITU-D/ict/facts/2011/material/ICTFactsFigures2011.pdf), International Telecommunications Union (ITU), Geneva, 2011
[8] "Internet Users" (http://www.itu.int/ITU-D/ict/statistics/at_glance/KeyTelecom.html), Key ICT indicators for the ITU/BDT regions, International Telecommunications Union (ITU), Geneva, 16 November 2011

Internet users per 100 inhabitants, 2001-2011. Source: International Telecommunications Union.[9]

Internet users in 2010 as a percentage of a country's population. Source: International Telecommunications Union.

Number of Internet users in 2010. Source: International Telecommunications Union.

[9] "Internet users per 100 inhabitants 2001-2011" (http://www.itu.int/ITU-D/ict/statistics/material/excel/2011/Internet_users_01-11.xls), International Telecommunications Union, Geneva, accessed 4 April 2012


List of countries by number of broadband Internet subscriptions


Worldwide broadband subscriptions

                     2007(a)       2011(a,b)
World population     6.6 billion   7.0 billion
Fixed broadband      5.3%          8.5%
  Developing world   2.3%          4.8%
  Developed world    18.3%         25.7%
Mobile broadband     4.0%          17.0%
  Developing world   0.8%          8.5%
  Developed world    18.5%         56.5%

(a) Share of regional population. (b) Estimate.
Source: International Telecommunications Union.[1]

Broadband subscriptions

                                      2007(a)   2011(a,b)
Fixed subscriptions:
  Africa                              0.1%      0.2%
  Americas                            11.0%     15.5%
  Arab States                         0.9%      2.2%
  Asia and Pacific                    3.3%      6.2%
  Commonwealth of Independent States  2.3%      9.6%
  Europe                              18.4%     25.8%
Mobile subscriptions:
  Africa                              0.2%      3.8%
  Americas                            6.4%      30.5%
  Arab States                         0.8%      13.3%
  Asia and Pacific                    3.1%      10.7%
  Commonwealth of Independent States  0.2%      14.9%
  Europe                              14.7%     54.1%

(a) Share of regional population. (b) Estimate.
Source: International Telecommunications Union.[1]


This is a list of countries by number of broadband Internet subscriptions.

Countries by number of fixed broadband subscriptions


The following is a list of countries by fixed (wired) broadband Internet subscriptions and penetration rate, compiled by the International Telecommunication Union, mostly for the year 2010. It refers to subscriptions for high-speed access to the public Internet (a TCP/IP connection) at downstream speeds equal to, or greater than, 256 kbit/s. This can include, for example, cable modem, DSL, fibre-to-the-home/building and other fixed (wired) broadband subscriptions. The total is measured irrespective of the method of payment, and excludes subscriptions that access data communications (including the Internet) via mobile cellular networks.[2] Note: Because an Internet subscription may be shared by many people, the penetration rate does not reflect the actual level of access to broadband Internet of the population.
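As a rough illustration of the ITU counting rule described above (downstream speed of at least 256 kbit/s, fixed lines only, mobile-cellular subscriptions excluded), here is a minimal sketch; the subscription records and field names are hypothetical examples, not ITU data:

```python
# Sketch of the counting rule: a subscription counts as fixed broadband
# if it is not mobile-cellular and its downstream speed is >= 256 kbit/s.
# The records below are hypothetical, for illustration only.

BROADBAND_THRESHOLD_KBITS = 256

subscriptions = [
    {"tech": "dsl",    "down_kbits": 8192,  "mobile": False},
    {"tech": "cable",  "down_kbits": 20480, "mobile": False},
    {"tech": "dialup", "down_kbits": 56,    "mobile": False},  # too slow
    {"tech": "3g",     "down_kbits": 3600,  "mobile": True},   # mobile, excluded
]

fixed_broadband = [
    s for s in subscriptions
    if not s["mobile"] and s["down_kbits"] >= BROADBAND_THRESHOLD_KBITS
]
print(len(fixed_broadband))  # 2  (only the DSL and cable lines qualify)
```

Note that the method of payment plays no role in the rule, which is why the sketch never inspects it.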

Millions of DSL lines by country at the end of 2005, compared with the previous year.

Worldwide major ADSL operators by number of lines, at the end of the first half of 2005.

Rank   Country or territory                 Fixed broadband      Per 100          Year
                                            subscriptions[3]     inhabitants[3]
1      China                                126,337,000          9.42             2010
2      United States                        85,723,155           27.62            2010
3      Japan                                34,044,729           26.91            2010
4      Germany                              26,089,800           31.70            2010
5      France                               21,345,000           34.00            2010
6      United Kingdom                       19,579,823           31.56            2010
7      Korea, South                         17,193,570           35.68            2010
8      Russia                               15,700,000           10.98            2010
9      Brazil                               13,266,310           6.81             2010
10     Italy                                13,259,398           21.90            2010
11     Mexico                               11,325,022           9.98             2010
12     India                                10,990,000           0.9              2010
13     Spain                                10,534,492           22.86            2010
14     Canada                               10,138,741           29.81            2010
15     Turkey                               7,079,792            9.73             2010
16     Netherlands                          6,330,000            38.10            2010
17     Australia                            5,385,000            24.18            2010
18     Taiwan                               5,265,026            22.68            2010
19     Poland                               4,960,528            12.96            2010
20     Argentina                            3,862,354            9.56             2010
21     Vietnam                              3,631,396            4.13             2010
22     Belgium                              3,373,143            31.49            2010
23     Thailand                             3,188,618            4.61             2010
24     Sweden                               2,987,008            31.85            2010
25     Romania                              2,980,000            13.87            2010
26     Ukraine                              2,954,556            6.50             2010
27     Switzerland                          2,908,119            37.94            2010
28     Colombia                             2,594,055            5.60             2010
29     Greece                               2,257,110            19.87            2010
-      Hong Kong                            2,111,109            29.93            2010
30     Denmark                              2,092,379            37.70            2010
31     Malaysia                             2,078,500            7.32             2010
32     Portugal                             2,052,930            19.23            2010
33     Austria                              2,002,000            23.85            2010
34     Hungary                              1,956,218            19.59            2010
35     Indonesia                            1,900,300            0.79             2010
36     Israel                               1,864,900            25.14            2010
37     Chile                                1,788,490            10.45            2010
38     Norway                               1,723,678            35.30            2010
39     Philippines                          1,722,400            1.85             2010
40     Belarus                              1,665,889            17.36            2010
41     Venezuela                            1,556,485            5.37             2010
42     Finland                              1,532,700            28.57            2010
43     Czech Republic                       1,521,000            14.50            2010
44     Saudi Arabia                         1,496,607            5.45             2010
45     Egypt                                1,449,904            1.79             2010
46     Kazakhstan                           1,426,800            8.90             2010
47     Singapore                            1,268,800            24.94            2010
48     New Zealand                          1,089,000            24.93            2010
49     Bulgaria                             1,088,286            14.52            2010
50     Ireland                              941,405              21.06            2010
51     Peru                                 911,635              3.14             2010
52     Algeria                              900,000              2.54             2010
53     Serbia                               858,219              11.18            2010
54     Croatia                              803,823              18.25            2010
55     United Arab Emirates                 786,818              10.47            2010
56     South Africa                         743,000              1.48             2010
57     Slovakia                             694,414              12.71            2010
58     Lithuania                            684,057              20.58            2010
-      Puerto Rico                          551,520              14.71            2010
59     Pakistan                             531,787              0.31             2010
60     Iran                                 500,000              0.68             2010
61     Morocco                              498,682              1.56             2010
62     Slovenia                             492,115              24.25            2010
63     Tunisia                              481,810              4.60             2010
64     Azerbaijan                           460,000              5.01             2010
65     Latvia                               434,876              19.31            2010
66     Uruguay                              367,480              10.91            2010
67     Dominican Republic                   360,039              3.63             2010
68     Estonia                              336,323              25.08            2010
69     Bosnia and Herzegovina               307,489              8.18             2010
70     Costa Rica                           288,236              6.19             2010
71     Panama                               275,639              7.84             2010
72     Moldova                              269,067              7.53             2010
73     Guatemala                            259,000              1.80             2010
74     Macedonia, Republic of               256,943              12.47            2010
75     Georgia                              253,916              5.83             2010
76     Sri Lanka                            228,316              1.09             2010
77     Lebanon                              200,000              4.73             2010
78     Ecuador                              197,890              1.37             2010
79     Jordan                               195,784              3.16             2010
80     Cyprus                               194,455              17.62            2010
81     El Salvador                          175,274              2.83             2010
82     Luxembourg                           168,368              33.18            2010
83     Sudan                                164,500              0.38             2010
84     Trinidad and Tobago                  145,028              10.81            2010
85     Qatar                                144,057              8.19             2010
-      Macao                                131,372              24.16            2010
86     Jamaica                              116,685              4.26             2010
87     Malta                                116,569              27.99            2010
88     Iceland                              109,212              34.11            2010
89     Albania                              105,519              3.29             2010
90     Nigeria                              99,108               0.06             2010
91     Bolivia                              95,937               0.97             2010
92     Uzbekistan                           89,100               0.32             2010
93     Armenia                              85,177               2.75             2010
94     Yemen                                84,000               0.35             2010
95     Mauritius                            79,227               6.10             2010
96     Senegal                              78,647               0.63             2010
97     Honduras                             76,000               1.00             2010
98     Libya                                72,800               1.15             2010
99     Mongolia                             71,709               2.60             2010
100    Bahrain                              67,625               5.36             2010
101    Syria                                67,564               0.33             2010
102    Bangladesh                           60,000               0.04             2010
103    Nepal                                58,435               0.20             2010
104    Barbados                             56,190               20.56            2010
-      Palestinian Authority                55,648               1.49             2007
105    Uganda                               54,804               0.16             2010
106    Montenegro                           52,400               8.30             2010
107    Ghana                                50,082               0.21             2010
108    Nicaragua                            47,600               0.82             2010
109    Kuwait                               46,000               1.68             2010
110    Oman                                 45,449               1.63             2010
-      Bermuda                              40,100               61.75            2010
-      New Caledonia                        38,196               15.23            2010
111    Cambodia                             35,666               0.25             2010
112    Zimbabwe                             33,000               0.26             2010
-      French Polynesia                     32,247               11.91            2010
-      French Guiana                        30,186               13.39            2009
113    Paraguay                             28,147               0.44             2010
114    Bahamas                              24,702               7.20             2010
115    Andorra                              24,502               28.87            2010
116    Fiji                                 23,250               2.70             2010
117    Liechtenstein                        23,000               63.83            2010
118    Brunei                               21,699               5.44             2010
119    Saint Lucia                          20,180               11.58            2010
120    Angola                               20,000               0.10             2010
-      Aruba                                19,217               17.88            2010
-      Cayman Islands                       18,852               33.53            2010
121    Burma                                16,400               0.03             2010
-      Faroe Islands                        16,269               33.40            2010
122    Cape Verde                           15,971               3.22             2010
123    Suriname                             15,672               2.99             2010
124    Kyrgyzstan                           15,400               0.29             2010
125    Maldives                             15,148               4.80             2010
126    Mozambique                           14,633               0.06             2010
127    Saint Kitts and Nevis                14,600               27.86            2010
128    Grenada                              14,437               13.82            2010
129    Burkina Faso                         14,193               0.09             2010
130    Monaco                               13,800               38.98            2010
131    Saint Vincent and the Grenadines     12,502               11.43            2010
-      Greenland                            12,328               21.52            2010
132    Laos                                 12,025               0.19             2010
133    Botswana                             11,978               0.60             2010
134    Guyana                               11,193               1.48             2010
-      Gibraltar                            10,433               35.68            2010
135    Zambia                               10,267               0.08             2010
136    San Marino                           10,100               32.03            2010
137    Namibia                              9,640                0.42             2010
138    Dominica                             9,391                13.86            2010
-      Virgin Islands, U.S.                 9,100                8.34             2010
139    Belize                               8,915                2.86             2010
140    Bhutan                               8,675                1.20             2010
141    Congo, Democratic Republic of the    8,673                0.01             2010
142    Djibouti                             8,058                0.91             2010
143    Côte d'Ivoire                        7,900                0.04             2010
144    Antigua and Barbuda                  7,119                8.03             2010
145    Mauritania                           6,624                0.19             2010
146    Seychelles                           6,278                7.26             2010
147    Papua New Guinea                     6,100                0.09             2010
148    Madagascar                           5,359                0.03             2010
149    Malawi                               5,120                0.03             2010
150    Tajikistan                           4,700                0.07             2010
151    Kenya                                4,155                0.01             2010
152    Ethiopia                             4,107                0.00             2010
153    Gabon                                4,082                0.27             2010
154    Togo                                 3,852                0.06             2010
155    Niger                                3,707                0.02             2010
156    Cuba                                 3,706                0.03             2010
-      Anguilla                             3,653                24.79            2008
157    Benin                                3,569                0.04             2010
158    Tanzania                             3,150                0.01             2010
-      Guam                                 3,000                1.67             2010
159    Rwanda                               2,640                0.02             2010
160    Mali                                 2,314                0.02             2010
161    Solomon Islands                      2,000                0.37             2010
-      Cook Islands                         1,675                8.26             2010
162    Swaziland                            1,626                0.14             2010
163    Afghanistan                          1,500                0.00             2010
164    Equatorial Guinea                    1,186                0.17             2010
-      Falkland Islands                     1,174                38.91            2010
-      Wallis and Futuna                    1,061                7.82             2010
165    Cameroon                             1,000                0.01             2010
166    Tonga                                1,000                0.96             2010
167    Micronesia, Federated States of      998                  0.90             2010
168    Kiribati                             900                  0.90             2010
169    Turkmenistan                         723                  0.01             2010
170    São Tomé and Príncipe                582                  0.35             2010
-      Saint Helena                         537                  13.04            2010
171    Guinea                               500                  0.01             2010
172    Timor-Leste                          500                  0.04             2010
173    Vanuatu                              500                  0.21             2010
174    Lesotho                              400                  0.02             2010
175    Nauru                                400                  3.90             2010
176    Gambia                               350                  0.02             2010
177    Tuvalu                               320                  3.26             2010
-      Ascension Island                     313                  N/A              2010
178    Palau                                239                  1.17             2010
179    Burundi                              200                  0.00             2010
180    Samoa                                200                  0.11             2010
181    Liberia                              186                  0.00             2010
182    Chad                                 150                  0.00             2010
183    Comoros                              150                  0.02             2010
184    Congo, Republic of the               124                  0.00             2010
185    Eritrea                              118                  0.00             2010
186    Iraq                                 77                   0.00             2010
-      Montserrat                           50                   0.84             2010


Countries by number of broadband Internet users


Abbreviations:
p.p.: population penetration
Total p.p.: total population penetration = total subscribers / total country population
H.p.: household penetration = total home subscribers / total number of households in the country
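The penetration measures defined above are simple ratios, usually quoted as percentages. As an illustration (all figures here are hypothetical, not taken from the table):

```python
# Illustration of the two penetration measures defined above.
# All input figures are hypothetical, for demonstration only.

def total_pp(total_subscribers: int, population: int) -> float:
    """Total population penetration, as a percentage."""
    return 100.0 * total_subscribers / population

def household_penetration(home_subscribers: int, households: int) -> float:
    """Household penetration, as a percentage."""
    return 100.0 * home_subscribers / households

# A country with 3.2 million subscribers and 16 million people:
print(round(total_pp(3_200_000, 16_000_000), 1))            # 20.0
# 4.5 million home subscriptions across 6 million households:
print(round(household_penetration(4_500_000, 6_000_000), 1))  # 75.0
```

Because a single subscription is often shared, household penetration can be much higher than total population penetration for the same country.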
Rank World[4] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 Switzerland China [4] [6] Country DSL p.p. Cable p.p. Other p.p. Total p.p. 4.5% 6.3% 10.9% 7.3% [6] 27.9% 29.7% [6] 24.1% 5.9% 5.4% [4] N/A N/A 14.6% 4.2% 3.2% 1.6% 6.4% 10.6% N/A 1.9% N/A 0.0% 2.2% 4.1% 16.9% 0.3% 14.8% 4.1% N/A 4.0% 2.3% 13.2% N/A N/A 6.4% N/A 10.4% N/A N/A 1.7% 14.8% 0.3% 0.1% 0.0% 17.9% N/A 1.5% N/A 0.5% 0.1% 0.1% 0.0% 0.1% 0.9% 0.1% N/A 1.4% 2.5% 0.1% N/A N/A 7.9% N/A 0.8% 7.0% 7.7% 27.2% 26.3% 31.3% 31.4% 30.5% 34.4% H.p. Total subscribers N/A N/A N/A N/A N/A N/A N/A N/A 523,070,000 Date Dec. 2010

116,650,000 Mar. 2011 [5] 83,344,927 33,537,796 25,599,360 20,257,000 18,827,700 16,789,170 16,500,000 16,500,000 [10] Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Mar. 2011 Dec. 2011 May. 2012 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Mar. 2011 Jun. 2010 Jun. 2010 Dec. 2009 Jun. 2010 Dec. 2010 Jun. 2010 Oct. 2010 Dec. 2009 Jun. 2010 Dec. 2009 Jun. 2010

United States Japan [6]

Germany France

[6]

United Kingdom South Korea Russia Brazil India Italy [7] [6]

11.5% 27.2%[8] 8.1% 0.9% 21.3% 10.1% 22.2% 30.1% 12.3% 37.8% 23.4% 21.6% 13.1% 11.2% 30.0% 64.6% 3.4% 31.8% 13.4% 37.1% N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A

[9]

4.7% N/A 20.9%

[11]

14,310,000 12,849,074 10,843,812 10,261,933 10,138,741 9,098,755 6,245,000 5,167,000 5,001,000 4,982,882 4,507,703 3,237,052 3,210,700 3,013,000 2,966,384 2,934,762 2,894,830

[6] [6]

Mexico Spain

7.8% 18.0% 13.1% 9.0 [6] 22.0% 19.3% [4] 10.1% 7.6%

[6] [6]

Canada Turkey

[6]

Netherlands Australia [6]

Republic of China (Taiwan) Poland [6] [12]

Argentina Belgium

6.4% 16.7% N/A 3.3% 17.5% 3.7% 25.9%

[6] [13]

Malaysia Vietnam Sweden

[4]

[6] [4] [6]

Romania

List of countries by number of broadband Internet subscriptions

190
34.3% 5.1% 19.9% 37.3% 18.9% 1.18% 23.0% 4.1% 18.7% 10.2% 1.9% 34.2% 24.3% [18] 2,417,654 2,309,688 2,252,653 2,062,000 2,013,528 2,009,593 1,921,445 1,872,900 1,870,149 1,729,575 1,722,407 1,653,837 1,560,000 1,460,149 1,446,900 1,407,500 1,048,518 1,029,063 912,323 907,859 651,268 437,207 400,000 382,948 300,000 242,134 169,757 122,522 106,258 101,000 N/A N/A Dec. 2009 Mar. 2010 Jan. 2011 Jun. 2010 Jun. 2010 April 2012 Jun. 2010 Dec. 2009 Jun. 2010 Jun. 2010 Dec. 2009 Jun. 2010 Dec. 2007 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Dec. 2007 2008 Dec. 2010 2008 Oct. 2010 Jun. 2010 Jun. 2010 Jun. 2010 Dec. 2007 Dec. 2007 Dec. 2008

27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 N/A N/A

Hong Kong Colombia Greece

[4]

19.3% 3.3% 19.9% 22.3% 10.5% N/A 15.9% 1.8% 8.2% 5.4%

N/A 1.6% 0.0% 10.0% 7.6% N/A 6.9% N/A 8.7% 4.8% N/A 9.2% N/A N/A 4.5% 4.3% 1.5% N/A N/A 3.9 1.6% N/A N/A N/A N/A N/A 5.8% 0.1% 0.0% N/A N/A N/A

N/A 0.2% 0.0% 5.1% 0.7% N/A 0.2% N/A 1.8% 0.0% N/A 4.9% N/A N/A 1.5% 0.3% 0.1% N/A N/A 0.1 3.4% N/A N/A N/A N/A N/A 0.3% 0.2% 2.8% N/A N/A N/A

N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A 70.02% N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A 93.8% 99.9%

[14]

[15] [6]

Denmark Portugal Pakistan Austria

[6] [16]

[6] [4] [6]

Ukraine

Hungary Chile [6]

Philippines Norway Israel [6]

[4]

1.6% 20.2% 14.9%

[17] [19] [6]

Venezuela

N/A 7.7% 21.8%

5.0% 13.7% 26.4% 24.5% 23.00% 3.3% 20.3% 12.0% 0.5% 0.18% [18] [22]

Czech Republic Finland [6]

New Zealand Croatia Peru [20]

[6]

23.0% N/A N/A [6] 16.3 7.0% N/A N/A N/A N/A

[21]

Republic of Ireland Slovakia Egypt [6]

[17] [22]

Indonesia Uruguay Iran [22]

[23]

11.0% 0.41% [22]

Ecuador

[24] [6]

N/A 28.0% 0.8% 30.5% N/A N/A N/A

1.6% 34.1% 1.1% 33.3% 1.0% [18]

Luxembourg Bolivia Iceland [25]

[6] [17] [17] [26]

Belarus

Monaco

38.0% N/A

Singapore

List of countries by number of broadband Internet subscriptions

191
82.5% Dec. 2007

N/A

Macau

[17]

N/A

N/A

N/A

N/A

N/A

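The penetration rates used throughout this list follow the definitions given above; a quick sketch of the arithmetic (the subscriber and population figures below are illustrative, not taken from the table):

```python
def population_penetration(subscribers: int, population: int) -> float:
    """Total population penetration = total subscribers / total country population."""
    return subscribers / population

def household_penetration(home_subscribers: int, households: int) -> float:
    """Household penetration = total home subscribers / total number of households."""
    return home_subscribers / households

# Example: 16,500,000 subscribers in a country of 142,000,000 people.
print(f"{population_penetration(16_500_000, 142_000_000):.1%}")  # → 11.6%
```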
References
[1] "Key Global Telecom Indicators for the World Telecommunication Service Sector" (http://www.itu.int/ITU-D/ict/statistics/at_glance/keytelecom.html), International Telecommunication Union (ITU), Geneva, 2011.
[2] Definitions of World Telecommunication/ICT Indicators, March 2010 (http://www.itu.int/ITU-D/ict/material/TelecomICT_Indicators_Definition_March2010_for_web.pdf), International Telecommunication Union. Retrieved 30 September 2011.
[3] Fixed broadband subscriptions (http://www.itu.int/ITU-D/ict/statistics/material/excel/2010/FixedBroadbandInternetSubscriptions_00-10.xls), International Telecommunication Union. Retrieved 8 April 2012.
[4] "World Broadband Statistics: Short Report Q4 2010" (http://broadband.cti.gr/el/download/broadbandshortreport2010.pdf). Point Topic. March 2010. Retrieved January 25, 2011.
[5] http://china-screen-news.com/2011/04/china-telcos-announce-march-2011-subscribers-totals/
[6] Total fixed and wireless broadband subscriptions by country (June 2010) (http://www.oecd.org/dataoecd/22/15/39574806.xls) and Fixed and wireless broadband subscriptions per 100 inhabitants (June 2010) (http://www.oecd.org/dataoecd/21/35/39574709.xls). OECD. Retrieved January 25, 2011.
[7] "Residential Broadband Statistics" (http://www.acm-consulting.com/data-downloads/cat_view/16-broadband.html). AC&M-Consulting. 23 June 2011. Retrieved 19 July 2011.
[8] By September 2010.
[9] "Dados informativos - Banda Larga Fixa" (http://www.anatel.gov.br/Portal/verificaDocumentos/documento.asp?numeroPublicacao=257088&assuntoPublicacao=Dados informativos - Banda Larga&caminhoRel=null&filtro=1&documentoPath=257088.pdf) (in Portuguese) (Press release). ANATEL. 2011-02-23. Retrieved 2011-03-19.
[10] http://www.teleco.com.br/blarga.asp
[11] http://www.trai.gov.in/WriteReadData/PressRealease/Document/PR-TSD-May12.pdf
[12] "Accesos a Internet" (http://www.indec.mecon.ar/nuevaweb/cuadros/14/internet_03_11.pdf) (in Spanish) (PDF) (Press release). INDEC. March 15, 2011. Retrieved March 16, 2011.
[13] http://www.internetworldstats.com/asia/my.htm
[14] "Informe de Conectividad 1T-2010" (http://www.mintic.gov.co/mincom/documents/portal/documents/root/informes del sector/informes de conectividad/INFORMEDECONECTIVIDAD1T-2010.pdf) (in Spanish) (PDF) (Press release). Ministerio de Tecnologías de la Información y las Comunicaciones. August 2010. Retrieved January 25, 2011.
[15] http://news.in.gr/science-technology/article/?aid=1231106170
[16] Telecom Indicators (http://www.pta.gov.pk/index.php?option=com_content&view=article&id=269:telecom-indicators&catid=124:industry-report&Itemid=599), Pakistan Telecommunication Authority.
[17] World Broadband Statistics Report Q4 2009 (http://point-topic.com/contentDownload/dslanalysis/world broadband statistics q407.pdf), Point Topic.
[18] Total population penetration rate obtained using July 2007 population estimates from the CIA's The World Factbook (https://www.cia.gov/library/publications/the-world-factbook/fields/2119.html), updated March 6, 2008.
[19] "Venezuela logra crecimiento del 18% en banda ancha durante primer semestre de 2010" (http://www.hispanicbusiness.com/marketwire/espanol/2010/12/16/venezuela_logra_crecimiento_del_18_en.htm). HispanicBusiness.com. December 16, 2010. Retrieved January 25, 2011.
[20] "HAKOM - Croatian Post and Electronic Communications Agency" (http://www.hakom.hr/default.aspx?id=60).
[21] "Banda Ancha Crece 9,4% en el Perú durante primer semestre de 2010" (http://www.tecnologiahechapalabra.com/comunicaciones/internet/articulo.asp?i=5319) (in Spanish). Tecnología Hecha Palabra. December 24, 2010. Retrieved January 25, 2011.
[22] http://www.itu.int/ITU-D/icteye/Reporting/ShowReportFrame.aspx?ReportName=/WTI/InformationTechnologyPublic&RP_intYear=2008&RP_intLanguageID=1
[23] "Cifras de los servicios más solicitados" (http://www.ursec.gub.uy/scripts/templates/portada_principal.asp?nota=portadahome.asp) (in Spanish). URSEC. Retrieved April 25, 2011.
[24] "Ecuador registra un incremento del 57% en el acceso a Internet" (http://www.conatel.gob.ec/site_conatel/index.php?option=com_content&view=article&id=1336:ecuador-registra-un-incremento-del-57-en-el-acceso-a-internet&catid=46:noticias-articulos&Itemid=184) (in Spanish). SENATEL. Retrieved March 19, 2011.
[25] "Bases para un Plan Nacional de Banda Ancha de Internet" (http://att.gob.bo/attachments/807_ATT Parte 2.ppt) (in Spanish). Autoridad de Fiscalización y Control Social de Telecomunicaciones y Transportes. November 9, 2010. Retrieved March 19, 2011.
[26] "S'poreans are biggest online shoppers in Asia-Pacific region: Poll". The Straits Times. 10 June 2009. "Singapore is now the most wired nation on earth, with the household broadband penetration rate hitting 99.9 per cent last December".

Internet governance
Internet governance is the development and application of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. This article describes how the Internet was and is currently governed, some of the controversies that occurred along the way, and the ongoing debates about how the Internet should or should not be governed in the future. Internet governance should not be confused with e-governance, which refers to technology-driven governance.

Background
The Internet is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body. However, to help ensure interoperability, several key technical and policy aspects of the underlying core infrastructure and the principal namespaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), headquartered in Marina del Rey, California. ICANN oversees the assignment of globally unique identifiers on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. This seeks to create a globally unified namespace to ensure the global reach of the Internet. ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. However, the National Telecommunications and Information Administration, an agency of the United States Department of Commerce, continues to have final approval over changes to the DNS root zone.[1][2] This authority over the root zone file makes ICANN one of a few bodies with global, centralized influence over the otherwise distributed Internet.[3] On 16 November 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to open an ongoing, non-binding conversation among multiple stakeholders about the future of Internet governance.[4] Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.[5][6]
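One concrete trace of this coordination is the registry of well-known transport-protocol port numbers, which IANA (operating under ICANN) assigns and which most operating systems ship as a local services database. A minimal sketch, assuming a typical Unix-like system where the standard library can read that database:

```python
import socket

# The well-known transport-protocol port numbers come from the IANA
# registry administered under ICANN; most operating systems install a
# local copy (e.g. /etc/services) that the standard library consults.
for name in ("http", "https", "smtp", "domain"):
    port = socket.getservbyname(name, "tcp")
    print(f"{name:<7} tcp/{port}")
```

The values printed (80 for http, 443 for https, and so on) come from the locally installed copy of the registry, not from the network, which illustrates why a globally coordinated assignment is needed: every host must agree on them in advance.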

Definition
The definition of Internet governance has been contested by differing groups across political and ideological lines. One of the main debates concerns the authority and participation of certain actors, such as national governments, corporate entities and civil society, to play a role in the Internet's governance. A working group established after the United Nations-initiated World Summit on the Information Society (WSIS) proposed the following definition of Internet governance as part of its June 2005 report: Internet governance is the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.[7] Law professor Yochai Benkler developed a conceptualization of Internet governance based on the idea of three "layers" of governance: the "physical infrastructure" layer through which information travels; the "code" or "logical" layer that controls the infrastructure; and the "content" layer, which contains the information transmitted through the network.[8]

History
To understand how the Internet is managed today, it is necessary to know some of the main events of Internet governance.

Formation and growth of the network


The original ARPANET, one of the components which eventually evolved into the Internet, connected four sites: the University of California, Los Angeles; the University of California, Santa Barbara; the Stanford Research Institute; and the University of Utah. The IMPs (Interface Message Processors), the network's interface minicomputers, were built during 1969 by Bolt, Beranek and Newman in accordance with a proposal by the US Department of Defense Advanced Research Projects Agency, which funded the system as an experiment. By 1973 it connected many more systems and included satellite links to Hawaii and Scandinavia, and a further link from Norway to London. ARPANET continued to grow in size, becoming more a utility than a research project; for this reason, in 1975 it was transferred to the US Defense Communications Agency.

During the development of ARPANET, a numbered series of Request for Comments (RFC) memos documented technical decisions and methods of working as they evolved. The standards of today's Internet are still documented by RFCs, produced through the very process which evolved on ARPANET.

Outside the USA, the dominant technology was X.25. The International Packet Switched Service, created during 1978, used X.25 and extended to Europe, Australia, Hong Kong, Canada, and the USA. It allowed individual users and companies to connect to a variety of mainframe systems, including CompuServe. Between 1979 and 1984, a system known as the Unix to Unix Copy Program (UUCP) grew to connect 940 hosts, using methods such as X.25 links, ARPANET connections, and leased lines. Usenet news, a distributed discussion system, was a major use of UUCP.

The Internet protocol suite, developed between 1973 and 1977 with funding from ARPA, was intended to hide the differences between underlying networks and allow many different applications to be used over the same network. RFC 801 describes how the US Department of Defense organized the replacement of ARPANET's Network Control Program by the new Internet Protocol during January 1983.
During the same year, the military systems were moved to a distinct MILNET, and the Domain Name System was invented to manage the names and addresses of computers on the "ARPA Internet". The familiar top-level domains .gov, .mil, .edu, .org, .net, .com, and .int, and the two-letter country-code top-level domains, were deployed during 1984.

Between 1984 and 1986 the US National Science Foundation created the NSFNET backbone, using TCP/IP, to connect its supercomputing facilities. The combined network became generally known as the Internet. By the end of 1989 Australia, Germany, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, and the United Kingdom had connected to the Internet, which by then contained over 160,000 hosts.

During 1990 ARPANET was formally terminated, and during 1991 the NSF ended its restrictions on commercial use of its part of the Internet. Commercial network providers began to interconnect, extending the Internet. Today almost all Internet infrastructure is provided and owned by the private sector. Traffic is exchanged between these networks, at major interconnection points, in accordance with established Internet standards and commercial agreements.
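The Domain Name System introduced then is hierarchical: a name is read right to left, from the root through the top-level domain down to the host. A small illustrative sketch of that decomposition (not part of any DNS implementation):

```python
# Decompose a fully qualified domain name into the chain of zones the
# DNS traverses, from the top-level domain down to the full name.
def hierarchy(fqdn: str) -> list[str]:
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

print(hierarchy("www.example.com"))
# → ['com', 'example.com', 'www.example.com']
```

Resolution delegates along exactly this chain: the root servers know who is authoritative for com, the com servers know who is authoritative for example.com, and so on.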

Governors
During 1979 the Internet Configuration Control Board was founded by DARPA to oversee the network's development. During 1984 it was renamed the Internet Advisory Board (IAB), and during 1986 it became the Internet Activities Board.

The Internet Engineering Task Force (IETF) was formed during 1986 by the US Government to develop and promote Internet standards. It consisted initially of researchers, but by the end of the year participation was available to anyone, and its business was performed largely by email.

From the early days of the network until his death during 1998, Jon Postel oversaw address allocation and other Internet protocol numbering and assignments in his capacity as Director of the Computer Networks Division at the Information Sciences Institute of the University of Southern California, under a contract from the Department of Defense. This function eventually became known as the Internet Assigned Numbers Authority (IANA), and as it expanded to include management of the global Domain Name System (DNS) root servers, a small organization grew. Postel also served as RFC Editor.

Allocation of IP addresses was delegated to four Regional Internet Registries (RIRs):
American Registry for Internet Numbers (ARIN) for North America
Réseaux IP Européens Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia
Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region
Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region
In 2004 a new RIR, AfriNIC, was created to manage allocations for Africa.

After Jon Postel's death during 1998, the IANA became part of the Internet Corporation for Assigned Names and Numbers (ICANN), a newly created Californian non-profit corporation, initiated during September 1998 by the US Government and awarded a contract by the US Department of Commerce.
Initially two board members were elected by the Internet community at large, though this was changed by the rest of the board during 2002 in a little-attended public meeting in Accra, Ghana.[9]

During 1992 the Internet Society (ISOC) was founded, with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. The IAB was renamed the Internet Architecture Board and became part of ISOC. The Internet Engineering Task Force also became part of the ISOC. The IETF is currently overseen by the Internet Engineering Steering Group (IESG), and longer-term research is carried on by the Internet Research Task Force, overseen by the Internet Research Steering Group. During 2002, a restructuring of the Internet Society gave more control to its corporate members.

At the first World Summit on the Information Society (WSIS) in Geneva in 2003, the topic of Internet governance was discussed. ICANN's status as a private corporation under contract to the U.S. government created controversy among other governments, especially Brazil, China, South Africa and some Arab states. Since no general agreement existed even on the definition of what comprised Internet governance, United Nations Secretary-General Kofi Annan initiated a Working Group on Internet Governance (WGIG) to clarify the issues and report before the second part of the World Summit on the Information Society in Tunis in 2005. After much controversial debate, during which the US delegation refused to consider surrendering US control of the root zone file, participants agreed on a compromise to allow for wider international debate on the policy principles. They agreed to establish an Internet Governance Forum, to be convened by the United Nations Secretary-General before the end of the second quarter of 2006.[10] The Greek government volunteered to host the first such meeting.

Globalization and governance controversy


The position of the US Department of Commerce as the controller of the Internet gradually attracted criticism from those who felt that control should be more international. A hands-off philosophy by the US Department of Commerce helped limit this criticism, but this was undermined in 2005 when the Bush administration intervened to help kill the .xxx top-level domain proposal. When the IANA functions were given to a new US non-profit corporation called ICANN, controversy increased. ICANN's decision-making process was criticised by some observers as secretive and unaccountable. When the directors' posts which had previously been elected by the "at-large" community of Internet users were abolished, some feared that ICANN would become illegitimate and its qualifications questionable, since it was losing its character as a neutral governing body. ICANN stated that it was merely streamlining decision-making processes and developing a structure suitable for the modern Internet. Other topics of controversy included the creation and control of generic top-level domains (.com, .org, and possible new ones, such as .biz or .xxx), the control of country-code domains, proposals for a large increase in ICANN's budget and responsibilities, and a proposed "domain tax" to pay for the increase.
There were also suggestions that individual governments should have more control, or that the International Telecommunication Union or the United Nations should have a function in Internet governance.[11] One such proposal, resulting from a September 2011 summit between India, Brazil, and South Africa (IBSA), would seek to move internet governance into their sphere of dominance.[12] The move is a reaction to a perception that the principles of the 2005 Tunis Agenda for the Information Society have not been met.[12][13] The statement calls for the subordination of independent technical organizations such as ICANN and the ITU to a political organization operating under the auspices of the United Nations.[12]

References
[1] Klein, Hans (2004). "ICANN and Non-Territorial Sovereignty: Government Without the Nation State" (http://www.ip3.gatech.edu/research/KLEIN_ICANN+Sovereignty.doc). Internet and Public Policy Project, Georgia Institute of Technology.
[2] Packard, Ashley (2010). Digital Media Law. Wiley-Blackwell. p. 65. ISBN 978-1-4051-8169-3.
[3] Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. p. 61. ISBN 978-0-262-01459-5.
[4] Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. p. 67. ISBN 978-0-262-01459-5.
[5] Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. pp. 79-80. ISBN 978-0-262-01459-5.
[6] DeNardis, Laura. "The Emerging Field of Internet Governance" (http://ssrn.com/abstract=1678343) (September 17, 2010). Yale Information Society Project Working Paper Series.
[7] "Report of the Working Group on Internet Governance (WGIG)" (http://www.wgig.org/docs/WGIGREPORT.pdf), June 2005, p. 4.
[8] Yochai Benkler, "From Consumers to Users: Shifting the Deeper Structures of Regulation Towards Sustainable Commons and User Access" (http://www.law.indiana.edu/fclj/pubs/v52/no3/benkler1.pdf), 52 Fed. Comm. L.J. 561 (2000).
[9] "Net governance chief will step down next year" (http://business.highbeam.com/437235/article-1G1-106850793/net-governance-chief-step-down-next-year), David McGuire, Washingtonpost.com, 28 May 2002.
[10] Gore, Innocent. "Internet governance: U.S., Developing countries strike deal", Africa News Service, Nov. 21, 2005. Gale Canada in Context. Web. 29 Oct 2011.
[11] Goldsmith, Jack; Wu, Tim (2006). Who Controls the Internet? Illusions of a Borderless World. New York: Oxford University Press. p. 171. ISBN 978-0-19-515266-1.
[12] "Recommendations from the IBSA (India-Brazil-South Africa) Multistakeholder meeting on Global Internet Governance" (http://www.culturalivre.org.br/artigos/IBSA_recommendations_Internet_Governance.pdf), 1-2 September 2011, Rio de Janeiro, Brazil.
[13] "Tunis Agenda for the Information Society" (http://www.itu.int/wsis/docs2/tunis/off/6rev1.html), World Summit on the Information Society, 18 November 2005.

Further reading
Ruling the Root: Internet Governance and the Taming of Cyberspace (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10205/) by Milton Mueller, MIT Press, 2002. The definitive study of DNS and ICANN's early history.
Protocol Politics (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11893/) by Laura DeNardis, MIT Press, 2009. IP addressing and the migration to IPv6.
"One History of DNS" (http://www.byte.org/one-history-of-dns.pdf) by Ross W. Rader, April 2001. Contains historic facts about DNS and explains the reasons behind the so-called "DNS war".
"The Emerging Field of Internet Governance" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1678343) by Laura DeNardis, September 2010. Suggests a framework for understanding problems in Internet governance.
Launching the DNS War: Dot-Com Privatization and the Rise of Global Internet Governance (http://www.scribd.com/doc/58805571/Launching-the-DNS-War-Dot-Com-Privatization-and-the-Rise-of-Global-Internet-Governance) by Craig Simon, December 2006. Ph.D. dissertation containing an extensive history of the events which sparked the so-called "DNS war".
"Habermas@discourse.net: Toward a Critical Theory of Cyberspace" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=363840) by A. Michael Froomkin, 116 Harv. L. Rev. 749 (2003). Argues that the Internet standards process undertaken by the IETF fulfils Jürgen Habermas's conditions for the best practical discourse.
Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. ISBN 978-0-262-01459-5.
Dutton, William H.; Peltu, Malcolm (March 2007). "The emerging Internet governance mosaic: Connecting the pieces". Information Polity: The International Journal of Government & Democracy in the Information Age 12 (1/2): 63-81. ISSN 1570-1255.

External links
Internet Governance Project (http://www.internetgovernance.org/)
Diplo Internet Governance Community (http://www.diplointernetgovernance.org/)
Global Internet Governance Academic Network (GigaNet) (http://www.gig-net.org/)
"The Politics and Issues of Internet Governance" (http://www.institut-gouvernance.org/en/analyse/fiche-analyse-265.html), Milton L. Mueller, April 2007, analysis from the Institute of Research and Debate on Governance (Institut de recherche et débat sur la gouvernance)
ICANN, the Internet Corporation for Assigned Names and Numbers (http://www.icann.org)
World Summit on the Information Society: Geneva 2003 and Tunis 2005 (http://www.itu.int/wsis)
The Internet Governance Forum (IGF) and Working Group on Internet Governance (WGIG) (http://www.wgig.org)
The Future of Global Internet Governance (http://www.iit.cnr.it/en/taxonomy/term/334/), Institute of Informatics and Telematics, Consiglio Nazionale delle Ricerche (IIT-CNR), Pisa
CircleID: Internet Governance (http://www.circleid.com/topics/internet_governance)
"United States cedes control of the internet - but what now? - Review of an extraordinary meeting" (http://www.theregister.co.uk/2006/07/27/ntia_icann_meeting/), Kieren McCarthy, The Register, July 2006
APC Internet Rights Charter (http://www.apc.org/en/pubs/about-apc/apc-internet-rights-charter), Association for Progressive Communications, November 2006


Common uses
Timeline of popular Internet services
2011
Google+, a social networking system by Google, integrating several of the company's existing services, such as Google Buzz and Picasa Web Albums.
Duolingo, a free language-learning website and crowdsourced text-translation platform.

2010
OnLive, a cloud gaming platform in which video games are synchronized, rendered, and stored on remote servers and delivered via the Internet.
Diaspora, a free personal web server that implements a distributed social networking service.
Flattr, a microdonation system.

2009
Google Docs, a free, web-based word processor, spreadsheet, presentation, form, and data storage service offered by Google, goes out of beta.
Wolfram Alpha, the answer engine, launches.
Kickstarter, an online threshold-pledge system for funding creative projects.
Web 2.0 Suicide Machine, a service that automatically deletes private content and friend relationships from many social networking sites at once.

2008
Encyclopedia of Life, a free, online collaborative encyclopedia intended to document all of the 1.8 million living species known to science.
GitHub, a web-based hosting service for software development projects that use the Git revision control system.
TinEye, a reverse image search engine.
Spotify, a DRM-based music streaming service offering unlimited streaming of selected music from a range of major and independent record labels.
Jinni, a semantic search and recommendation engine for movies, TV shows and short films.
Amazon Elastic Compute Cloud (EC2), a cloud computing platform that allows users to rent virtual computers on which to run their own applications.
Dropbox, a file hosting service that uses cloud computing to let users store, synchronize, and share files and folders across the Internet.

2007
Google Street View, a technology featured in Google Maps and Google Earth that provides panoramic views from various positions along many streets in the world.
Kindle, the e-book reader by Amazon.com, is launched together with its e-book virtual bookshop. In July 2010 Amazon announced that e-book sales for its Kindle reader outnumbered sales of hardcover books.
Tumblr, a microblogging platform that allows users to post text, images, videos, links, quotes and audio to their tumblelog, a short-form blog.
Experience Project, a free social networking website of online communities premised on connecting people through shared life experiences.
SoundCloud, an online audio distribution platform which allows collaboration, promotion and distribution of audio recordings.

2006
WikiLeaks, an international non-profit organisation that publishes submissions of private, secret, and classified media from anonymous news sources and news leaks.
Twitter, a social networking and microblogging service that enables its users to send and read other users' updates, known as tweets: text-based posts of up to 140 characters in length.
IMSLP, the International Music Score Library Project.
Spokeo, a social network aggregator website that collects data from many online and offline sources, raising many privacy concerns.
YouPorn, a free pornographic video sharing website.
Mint.com, a free web-based personal financial management service for the US and Canada.
Khan Academy, a free educational video repository with more than 3,000 micro-lectures, automated exercises, and tutoring.

2005
YouTube, a video sharing website.
Google Earth, a virtual globe computer program.
Megaupload, a file hosting service that allows users to upload and download files.
Musopen, an online music library of copyright-free (public domain) music.
OpenID, an open standard that describes how users can be authenticated in a decentralized manner.
eyeOS, an open source web desktop following the cloud computing concept.
Etsy, an e-commerce website focused on handmade or vintage items as well as art and craft supplies.
Pandora Radio, an online radio and music recommendation system based on the Music Genome Project.

2004
OpenStreetMap, a collaborative project to create a free editable map of the world.
Podcast: a downloadable audio file for listening to on a portable media player, somewhat like a radio program that can be saved and heard at one's convenience. "Podcast" is a portmanteau of the words "iPod" and "broadcast". Podcasting began to catch hold in late 2004, though the ability to distribute audio and video files easily has existed since before the dawn of the Internet.
Facebook, a social networking website.
World of Warcraft (WoW), a massively multiplayer online role-playing game (MMORPG).
Flickr, a photo and video sharing website.

2003
Skype, a software application that allows users to make voice calls over the Internet.
iTunes, an online store that sells music and videos in downloadable form.
MySpace, a social networking website.
Steam, a digital distribution, digital rights management, multiplayer and communications platform for computer games developed by Valve Corporation.
Second Life, a virtual world.
4chan is created.
CouchSurfing, a hospitality exchange network.
The Pirate Bay, a Swedish website that hosts torrent files.

2002
Tor, a system intended to enable online anonymity, is released.
Last.fm, a music recommender system.
LinkedIn, a business-oriented social networking site, is founded; it launches the following year.
TinyURL, a URL shortening service.

Skyscanner, a flight search engine that allows users to browse for flights via price and location.
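At its core, a URL shortening service of the TinyURL kind maps a short key to a stored long URL. The sketch below is a toy illustration: the base-62 id encoding, the sho.rt domain, and the in-memory table are invented for the example and are not TinyURL's actual design.

```python
import string

ALPHABET = string.digits + string.ascii_letters  # a base-62 alphabet

def encode(n: int) -> str:
    """Encode a row id as a short base-62 key (illustrative scheme)."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

urls = {}  # key -> long URL; a real service would use a database

def shorten(url: str) -> str:
    key = encode(len(urls) + 1)   # next sequential id
    urls[key] = url
    return "https://sho.rt/" + key  # hypothetical short domain

short = shorten("https://en.wikipedia.org/wiki/URL_shortening")
print(short, "->", urls[short.rsplit("/", 1)[1]])
```

Redirecting is then just a lookup: the service extracts the key from the requested path and issues an HTTP redirect to the stored URL.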

2001
Wikipedia, an online encyclopedia.
StumbleUpon, a discovery engine that uses collaborative filtering to recommend web content to its users.
PartyPoker.com, a set of online poker card rooms.
Meetup, an online social networking portal that facilitates offline group meetings in various localities around the world.

2000
Blogger, a blog publishing service that allows private or multi-user blogs with time-stamped entries.
Geocaching.com starts its activity; this outdoor sport with online support can be considered one of the early forms of geosocial networking.
deviantArt, an online community showcasing various forms of user-made artwork.
TripAdvisor, a travel site that helps customers gather travel information, post reviews and opinions of travel-related content, and engage in interactive travel forums.

1999
RSS, the first version of the web feed formats used to publish frequently updated works, is created at Netscape.
Monster.com, an employment website.
SourceForge, a web-based source code repository. It acts as a centralized location for software developers to control and manage open source software development.
SETI@home, an internet-based public volunteer computing project. Its purpose is to analyze radio signals, searching for signs of extraterrestrial intelligence.
Napster (now defunct), an online music peer-to-peer file sharing service.


1998
Google Inc. launched a search engine for web sites of the World Wide Web, subsequently extending search facilities to many types of media, including books, magazines, forums, email, and news.
Yahoo! Groups, a community-driven Internet communication tool, a hybrid between an electronic mailing list and an Internet forum, starts off as Yahoo! Clubs.
PayPal, an e-commerce business allowing payments and money transfers to be made through the Internet.

1997
Babel Fish launched by AltaVista. It was the first language translation service for web content, with technology provided by SYSTRAN.
Netflix, an American corporation that offers both on-demand video streaming over the internet and flat-rate online video rental.
Go Daddy, an Internet domain registrar and Web hosting company.
About.com, an online resource for original information and advice.

1996
Ultima Online (UO), a graphical massively multiplayer online role-playing game (MMORPG).
Internet Archive, an archive of periodically cached versions of websites.
Hotmail, a free web-based email service.
Ticketmaster, a ticket sales and distribution company, goes online and sells its first ticket through its platform.
Shopzilla, a price comparison service.

1995
eBay, an auction and shopping website.
Wiki: A website that anyone can edit.
Craigslist, a centralized network of online communities, featuring free online classified advertisements.
AltaVista, a web search engine owned by Yahoo!. It was once one of the most popular search engines, but its popularity declined with the rise of Google.

1994
Amazon.com is an online retailer, best known for selling books, but now sells all kinds of goods.
GeoCities, a free web hosting service, now defunct, founded as Beverly Hills Internet (BHI) by David Bohnett and John Rezner.
The Yahoo! website started off as a web directory and soon became a web portal offering all kinds of Internet services.
Match.com, an online dating company.
FedEx.com launches, the first transportation web site to offer online package tracking.


1993
Blog: A blog (a contraction of the term weblog) is a type of website which resembles an online diary. Entries are commonly displayed in reverse-chronological order. Originally hand-coded, there are now blogging tools (a kind of content management system) to facilitate searching and linking to other blogs.
CDDB, a database for software applications to look up audio CD (compact disc) information over the Internet.
Hutchison Paging email gateway allows emails to be sent to message pagers in the UK. The same system worked with Orange mobile phones when they were launched in 1994; emails would arrive as texts.

1992
HTML was developed by Tim Berners-Lee, a British engineer, while working at CERN. It was devised so that reports from CERN, including photographs, graphs and tables, could be shared (served) across the web.
Veronica (search engine) provides an index of files on Gopher servers.

1991
arXiv, an open access archive for electronic preprints of scientific papers.
Gopher: A hypertext system which was soon largely replaced by the World Wide Web.

1990
ARPANET was retired and merged into the NSFNET.
IMDb, the Internet Movie Database.
The Archie search engine lists names of files on FTP sites.

1988
Internet Relay Chat (IRC): A form of real-time Internet text messaging (chat) or synchronous conferencing. It is mainly designed for group communication in discussion forums, called channels, but also allows one-to-one communication via private message.

1986
LISTSERV, the first electronic mailing list software application.

1983
Internet: A global computer network which was created by interconnecting various existing networks with the TCP/IP protocol suite.

1982
First standardization of the Simple Mail Transfer Protocol, a network transmission standard for the transport of email.


1979
Usenet: A distributed threaded discussion and file sharing system; a collection of forums known as newsgroups, that was a precursor to today's web-based forums. One notable difference from a BBS or web forum is that there is no central system owner. Usenet is distributed among a large, constantly changing conglomeration of servers which store and forward messages to one another.

1978
MUD: The first real-time, multi-player MUD adventure game was developed by Roy Trubshaw and Richard Bartle at Essex University, England.

1973
E-mail: First proposal for standardization of electronic mail message format, in RFC 561.
ARPANET made its first international connections, to University College London (England) and NORSAR (Norway).

1971
FTP: File Transfer Protocol.
Project Gutenberg, a volunteer effort to digitize and archive cultural works.

1969
Telnet: A system for logging in, over a network, to a computer situated in another location.
ARPANET's first link connected the University of California, Los Angeles to the Stanford Research Institute, laying the groundwork for the Internet; the first remote login attempt crashed on the 'g' of the word 'login'.

1960s
Email: Electronic mail applications are developed on timesharing mainframe computers for communication between system users.
The beginning of the Internet can be traced back to 1962, when RAND, America's military think tank, tackled the problem of how communications could survive the aftermath of a nuclear attack; the thinking was prompted by the Cuban Missile Crisis.


Email
Electronic mail, commonly referred to as email or e-mail, is a method of exchanging digital messages from an author to one or more recipients. Modern email operates across the Internet or other computer networks.

Some early email systems required that the author and the recipient both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver and store messages. Neither the users nor their computers are required to be online simultaneously; they need connect only briefly, typically to an email server, for as long as it takes to send or receive messages.

The at sign, a part of every SMTP email address[1]

Historically, the term electronic mail was used generically for any electronic document transmission. For example, several writers in the early 1970s used the term to describe fax document transmission.[2][3] As a result, it is difficult to find the first citation for the use of the term with the more specific meaning it has today.

An Internet email message[4] consists of three components: the message envelope, the message header, and the message body. The message header contains control information, including, minimally, an originator's email address and one or more recipient addresses. Usually descriptive information is also added, such as a subject header field and a message submission date/time stamp.

Originally a text-only (7-bit ASCII and others) communications medium, email was extended to carry multi-media content attachments, a process standardized in RFC 2045 through 2049. Collectively, these RFCs have come to be called Multipurpose Internet Mail Extensions (MIME).

Electronic mail predates the inception of the Internet, and was in fact a crucial tool in creating it,[5] but the history of modern, global Internet email services reaches back to the early ARPANET. Standards for encoding email messages were proposed as early as 1973 (RFC 561).
Conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today. Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP), first published as Internet standard 10 (RFC 821) in 1982. In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself.

Spelling
Electronic mail has several English spelling options that occasionally prove cause for vehement disagreement.[6][7]
email is the form required by IETF Requests for Comment and working groups[8] and increasingly by style guides.[9][10][11] This spelling also appears in most dictionaries.[12][13][14][15][16][17]
e-mail is a form previously recommended by some prominent journalistic and technical style guides. According to Corpus of Contemporary American English data, this is the form that appears most frequently in edited, published American English writing.[18]
mail was the form used in the original RFC; the service is referred to as mail and a single piece of electronic mail is called a message.[19][20][21]
eMail, capitalizing only the letter M, was common among ARPANET users and the early developers of Unix, CMS, AppleLink, eWorld, AOL, GEnie, and Hotmail.

EMail is a traditional form that has been used in RFCs for the "Author's Address",[20][21] and is expressly required "for historical reasons".[22]
E-mail is sometimes used, capitalizing the initial letter E as in similar abbreviations like E-piano, E-guitar, A-bomb, H-bomb, and C-section.[23]

There is also some variety in the plural form of the term. In US English email is used as a mass noun (like the term mail for items sent through the postal system), but in British English it is more commonly used as a count noun with the plural emails.


Origin
Precursors
Sending text messages electronically could be said to date back to the Morse code telegraph of the mid-1800s, and to the 1939 New York World's Fair, where IBM sent a letter of congratulations from San Francisco to New York on an IBM radio-type, calling it a high-speed substitute for mail service in the world of tomorrow.[24] Teleprinters were used in Germany during World War II,[25] and their use spread until the late 1960s, when there was a worldwide Telex network. Additionally, there was the similar but incompatible American TWX, which remained important until the late 1980s.[26]

By the early 1970s, the United States Department of Defense AUTODIN network provided message service between 1,350 terminals, handling 30 million messages per month, with an average message length of approximately 3,000 characters. AUTODIN was supported by 18 large computerized switches, and was connected to the United States General Services Administration Advanced Record System, which provided similar services to roughly 2,500 terminals.[27]

Host-based mail systems


With the introduction of MIT's Compatible Time-Sharing System (CTSS) in 1961,[28] for the first time multiple users were able to log into a central system[29] from remote dial-up terminals, and to store and share files on the central disk.[30] Informal methods of using this to pass messages developed and were expanded to create the first true email system: MIT's CTSS MAIL, in 1965.[31]

Other early time-sharing systems soon had their own email applications:
1972: Unix mail program[32][33]
1972: APL Mailbox by Larry Breed[34][35]
1974: The PLATO IV Notes on-line message board system was generalized to offer 'personal notes' (email) in August 1974.[27][36]
1978: EMAIL at University of Medicine and Dentistry of New Jersey[37]
1981: PROFS by IBM[38][39]
1982: ALL-IN-1[40] by Digital Equipment Corporation

Although similar in concept, all these original email systems had widely different features and ran on incompatible systems. They allowed communication only between users logged into the same host or "mainframe", although this could be hundreds or even thousands of users within an organization.


Email networks
Soon systems were developed to link compatible mail programs between different organisations over dialup modems or leased lines, creating local and global networks. In 1971 the first ARPANET email was sent,[41] and through RFC 561, RFC 680, RFC 724 and finally 1977's RFC 733, it became a standardized working system. Other separate networks were also being created, including:
Unix mail was networked by 1978's uucp,[42] which was also used for USENET newsgroup postings
IBM mainframe email was linked by BITNET in 1981[43]
IBM PCs running DOS in 1984 could link with FidoNet for email and shared bulletin board posting

LAN email systems


In the early 1980s, networked personal computers on LANs became increasingly important. Server-based systems similar to the earlier mainframe systems were developed. Again, these systems initially allowed communication only between users logged into the same server infrastructure. Examples include:
cc:Mail
Lantastic
WordPerfect Office
Microsoft Mail
Banyan VINES
Lotus Notes

Eventually these systems could also be linked between different organizations, as long as they ran the same email system and proprietary protocol.[44]

Attempts at interoperability
Early interoperability among independent systems included:
ARPANET, the forerunner of today's Internet, which defined the first protocols for dissimilar computers to exchange email
uucp implementations for non-Unix systems, used as an open "glue" between differing mail systems, primarily over dialup telephones
CSNet, which used dial-up telephone access to link additional sites to the ARPANET and then the Internet

Later efforts at interoperability standardization included:
Novell briefly championed the open MHS protocol but abandoned it after purchasing the non-MHS WordPerfect Office (renamed GroupWise)
The Coloured Book protocols, used on UK academic networks until 1992
X.400, promoted by major vendors in the 1980s and early 1990s and mandated for government use under GOSIP, but abandoned by all but a few in favor of Internet SMTP by the mid-1990s

From SNDMSG to MSG


In the early 1970s, Ray Tomlinson updated an existing utility called SNDMSG so that it could copy messages (as files) over the network. Lawrence Roberts, the project manager for the ARPANET development, took the idea of READMAIL, which dumped all "recent" messages onto the user's terminal, and wrote a program for TENEX in TECO macros called RD which permitted accessing individual messages.[45] Barry Wessler then updated RD and called it NRD.[46]

Marty Yonke rewrote NRD to combine reading, access to SNDMSG for sending, and a help system, and called the utility WRD, which was later known as BANANARD. John Vittal then updated this version to include three important commands: Move (a combined save/delete command), Answer (which determined to whom a reply should be sent) and Forward (which sent an email to a person who was not already a recipient). The system was called MSG. With the inclusion of these features, MSG is considered to be the first integrated modern email program, from which many other applications have descended.[45]


Rise of ARPANET mail


The ARPANET computer network made a large contribution to the development of email. There is one report that indicates experimental inter-system email transfers began shortly after its creation in 1969.[31] Ray Tomlinson is generally credited as having sent the first email across a network, initiating the use of the "@" sign to separate the names of the user and the user's machine in 1971, when he sent a message from one Digital Equipment Corporation DEC-10 computer to another DEC-10. The two machines were placed next to each other.[47][48]

Tomlinson's work was quickly adopted across the ARPANET, which significantly increased the popularity of email. For many years, email was the killer app of the ARPANET and then the Internet.

Most other networks had their own email protocols and address formats; as the influence of the ARPANET and later the Internet grew, central sites often hosted email gateways that passed mail between the Internet and these other networks. Internet email addressing is still complicated by the need to handle mail destined for these older networks. Some well-known examples of these were UUCP (mostly Unix computers), BITNET (mostly IBM and VAX mainframes at universities), FidoNet (personal computers), DECNET (various networks) and CSNet, a forerunner of NSFNet.

An example of an Internet email address that routed mail to a user at a UUCP host:
hubhost!middlehost!edgehost!user@uucpgateway.somedomain.example.com
This was necessary because in early years UUCP computers did not maintain (and could not consult central servers for) information about the location of all hosts they exchanged mail with, but rather only knew how to communicate with a few network neighbors; email messages (and other data such as Usenet News) were passed along in a chain among hosts who had explicitly agreed to share data with each other. (Eventually the UUCP Mapping Project would provide a form of network routing database for email.)
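Such a hybrid address can be read mechanically: everything after the "@" names the Internet gateway, and the part before it is an explicit hop-by-hop UUCP route whose final "!" component names the user. A small illustrative Python sketch (the parsing function is a hypothetical helper, not a real library API):

```python
# Illustrative parser for a hybrid UUCP/Internet address of the form
# hostA!hostB!...!user@gateway: the gateway receives the message over
# SMTP, then forwards it hop by hop along the bang path.
def parse_uucp_route(address):
    local, _, gateway = address.partition("@")   # split off the Internet gateway
    *hops, user = local.split("!")               # last component is the user
    return hops, user, gateway

hops, user, gateway = parse_uucp_route(
    "hubhost!middlehost!edgehost!user@uucpgateway.somedomain.example.com"
)
```

Here `hops` lists the UUCP relays in delivery order, mirroring the chain-of-neighbors forwarding described above.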

Operation overview
The diagram to the right shows a typical sequence of events[49] that takes place when Alice composes a message using her mail user agent (MUA). She enters the email address of her correspondent, and hits the "send" button.


1. Her MUA formats the message in email format and uses the Submission Protocol (a profile of the Simple Mail Transfer Protocol (SMTP), see RFC 6409) to send the message to the local mail submission agent (MSA), in this case smtp.a.org, run by Alice's internet service provider (ISP).
2. The MSA looks at the destination address provided in the SMTP protocol (not from the message header), in this case bob@b.org. An Internet email address is a string of the form localpart@exampledomain. The part before the @ sign is the local part of the address, often the username of the recipient, and the part after the @ sign is a domain name or a fully qualified domain name. The MSA resolves a domain name to determine the fully qualified domain name of the mail exchange server in the Domain Name System (DNS).
3. The DNS server for the b.org domain, ns.b.org, responds with any MX records listing the mail exchange servers for that domain, in this case mx.b.org, a message transfer agent (MTA) server run by Bob's ISP.
4. smtp.a.org sends the message to mx.b.org using SMTP. This server may need to forward the message to other MTAs before the message reaches the final message delivery agent (MDA).
5. The MDA delivers it to the mailbox of the user bob.
6. Bob presses the "get mail" button in his MUA, which picks up the message using either the Post Office Protocol (POP3) or the Internet Message Access Protocol (IMAP4).

That sequence of events applies to the majority of email users. However, there are many alternative possibilities and complications to the email system:
Alice or Bob may use a client connected to a corporate email system, such as IBM Lotus Notes or Microsoft Exchange. These systems often have their own internal email format and their clients typically communicate with the email server using a vendor-specific, proprietary protocol. The server sends or receives email via the Internet through the product's Internet mail gateway, which also does any necessary reformatting.
If Alice and Bob work for the same company, the entire transaction may happen completely within a single corporate email system.
Alice may not have a MUA on her computer but instead may connect to a webmail service.
Alice's computer may run its own MTA, so avoiding the transfer at step 1.
Bob may pick up his email in many ways, for example logging into mx.b.org and reading it directly, or by using a webmail service.
Domains usually have several mail exchange servers so that they can continue to accept mail when the main mail exchange server is not available.
Email messages are not secure if email encryption is not used correctly.
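The distinction in step 2 between the SMTP envelope and the message header can be sketched with Python's standard library email package. The addresses below are the article's placeholder examples; the commented-out smtplib call marks where a real submission to an MSA would happen:

```python
from email.message import EmailMessage

# Build the message Alice's MUA would hand to the MSA. Addresses are the
# article's illustrative placeholders (alice@a.org, bob@b.org).
msg = EmailMessage()
msg["From"] = "alice@a.org"
msg["To"] = "bob@b.org"
msg["Subject"] = "Lunch?"
msg.set_content("Meet at noon?")

# The SMTP *envelope* recipient list is supplied separately from the
# header; a Bcc recipient appears here but not in the message data.
envelope_rcpts = ["bob@b.org", "carol@b.org"]  # carol is effectively Bcc'd

# A real submission would look roughly like this (not executed here,
# since no MSA is assumed):
# import smtplib
# with smtplib.SMTP("smtp.a.org", 587) as s:
#     s.send_message(msg, from_addr="alice@a.org", to_addrs=envelope_rcpts)

raw = msg.as_string()
```

Note that `raw` (what travels as message data) never mentions carol, even though the envelope delivers to her: exactly the Bcc behavior described in the header-fields section.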

Many MTAs used to accept messages for any recipient on the Internet and do their best to deliver them. Such MTAs are called open mail relays. This was very important in the early days of the Internet when network connections were unreliable. If an MTA couldn't reach the destination, it could at least deliver it to a relay closer to the destination. The relay stood a better chance of delivering the message at a later time. However, this mechanism proved to be exploitable by people sending unsolicited bulk email and as a consequence very few modern MTAs are open mail relays, and many MTAs don't accept messages from open mail relays because such messages are very likely to be spam.


Message format
The Internet email message format is now defined by RFC 5322, with multi-media content attachments being defined in RFC 2045 through RFC 2049, collectively called Multipurpose Internet Mail Extensions or MIME. RFC 5322 replaced the earlier RFC 2822 in 2008, and in turn RFC 2822 in 2001 replaced RFC 822, which had been the standard for Internet email for nearly 20 years. Published in 1982, RFC 822 was based on the earlier RFC 733 for the ARPANET.[50]

Internet email messages consist of two major sections:
Header - Structured into fields such as From, To, CC, Subject, Date, and other information about the email.
Body - The basic content, as unstructured text; sometimes containing a signature block at the end. This is exactly the same as the body of a regular letter.

The header is separated from the body by a blank line.
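The header/blank-line/body layout can be demonstrated with Python's standard email parser; the message content here is invented purely for illustration:

```python
from email import message_from_string
from email.policy import default

# A minimal RFC 5322 message: header fields, then a blank line, then the body.
raw = (
    "From: alice@a.org\r\n"
    "To: bob@b.org\r\n"
    "Subject: Hello\r\n"
    "\r\n"
    "This is the body.\r\n"
)

# Parsing splits on the first blank line: everything above becomes
# header fields, everything below becomes the body.
msg = message_from_string(raw, policy=default)
```

`msg["Subject"]` yields a header value, while `msg.get_content()` yields the body text, reflecting the two sections described above.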

Message header
Each message has exactly one header, which is structured into fields. Each field has a name and a value. RFC 5322 specifies the precise syntax. Informally, each line of text in the header that begins with a printable character begins a separate field. The field name starts in the first character of the line and ends before the separator character ":". The separator is then followed by the field value (the "body" of the field). The value is continued onto subsequent lines if those lines have a space or tab as their first character. Field names and values are restricted to 7-bit ASCII characters. Non-ASCII values may be represented using MIME encoded words.

Header fields
Email header fields can be multi-line, and each line should be at most 78 characters long and in no event more than 998 characters long.[51] Header fields defined by RFC 5322 can only contain US-ASCII characters; for encoding characters in other sets, a syntax specified in RFC 2047 can be used.[52] Recently the IETF EAI working group has defined some standards-track extensions,[53][54] replacing previous experimental extensions, to allow UTF-8 encoded Unicode characters to be used within the header. In particular, this allows email addresses to use non-ASCII characters. Such characters must only be used by servers that support these extensions.

The message header must include at least the following fields:[55]
From: The email address, and optionally the name of the author(s). In many email clients not changeable except through changing account settings.
Date: The local time and date when the message was written. Like the From: field, many email clients fill this in automatically when sending. The recipient's client may then display the time in the format and time zone local to him/her.

The message header should include at least the following fields:[56]
Message-ID: Also an automatically generated field; used to prevent multiple delivery and for reference in In-Reply-To: (see below).

In-Reply-To: Message-ID of the message that this is a reply to. Used to link related messages together. This field only applies to reply messages.

RFC 3864 describes registration procedures for message header fields at the IANA; it provides for permanent[57] and provisional[58] message header field names, including also fields defined for MIME, netnews, and http, and referencing relevant RFCs.

Common header fields for email include:
To: The email address(es), and optionally name(s) of the message's recipient(s). Indicates primary recipients (multiple allowed); for secondary recipients see Cc: and Bcc: below.
Subject: A brief summary of the topic of the message. Certain abbreviations are commonly used in the subject, including "RE:" and "FW:".
Bcc: Blind carbon copy; addresses added to the SMTP delivery list but not (usually) listed in the message data, remaining invisible to other recipients.
Cc: Carbon copy; many email clients will mark email in your inbox differently depending on whether you are in the To: or Cc: list.
Content-Type: Information about how the message is to be displayed, usually a MIME type.
Precedence: Commonly with values "bulk", "junk", or "list"; used to indicate that automated "vacation" or "out of office" responses should not be returned for this mail, e.g. to prevent vacation notices from being sent to all other subscribers of a mailing list. Sendmail uses this header to affect prioritization of queued email, with "Precedence: special-delivery" messages delivered sooner. With modern high-bandwidth networks delivery priority is less of an issue than it once was. Microsoft Exchange respects a fine-grained automatic response suppression mechanism, the X-Auto-Response-Suppress header.[59]
References: Message-ID of the message that this is a reply to, and the Message-ID of the message the previous reply was a reply to, etc.
Reply-To: Address that should be used to reply to the message.
Sender: Address of the actual sender acting on behalf of the author listed in the From: field (secretary, list manager, etc.).
Archived-At: A direct link to the archived form of an individual email message.[60]

Note that the To: field is not necessarily related to the addresses to which the message is delivered. The actual delivery list is supplied separately to the transport protocol, SMTP, which may or may not originally have been extracted from the header content. The "To:" field is similar to the addressing at the top of a conventional letter, which is delivered according to the address on the outer envelope. In the same way, the "From:" field does not have to be the real sender of the email message. Some mail servers apply email authentication systems to messages being relayed. Data pertaining to the server's activity is also part of the header, as defined below.

SMTP defines the trace information of a message, which is saved in the header using the following two fields:[61]
Received: When an SMTP server accepts a message it inserts this trace record at the top of the header (last to first).
Return-Path: When the delivery SMTP server makes the final delivery of a message, it inserts this field at the top of the header.

Other header fields that are added on top of the header by the receiving server may be called trace fields, in a broader sense:[62]
Authentication-Results: When a server carries out authentication checks, it can save the results in this field for consumption by downstream agents.[63]
Received-SPF: Stores the results of SPF checks.[64]
Auto-Submitted: Is used to mark automatically generated messages.[65]
VBR-Info: Claims VBR whitelisting.[66]
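The folding rule described above (a field value continued onto a line whose first character is a space or tab) is undone by parsers, which join the physical lines back into one logical value. A small sketch using Python's email package with its modern default policy; the message text is invented:

```python
from email import message_from_string
from email.policy import default

# The Subject value is folded across two physical lines; the continuation
# line begins with a space, so it belongs to the same header field.
raw = (
    "Subject: Quarterly report,\r\n"
    " final draft\r\n"
    "From: alice@a.org\r\n"
    "\r\n"
    "Body\r\n"
)

msg = message_from_string(raw, policy=default)
# With the default policy, header access returns the unfolded logical value.
subject = str(msg["Subject"])
```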


Message body
Content encoding
Email was originally designed for 7-bit ASCII.[67] Most email software is 8-bit clean but must assume it will communicate with 7-bit servers and mail readers. The MIME standard introduced character set specifiers and two content transfer encodings to enable transmission of non-ASCII data: quoted printable for mostly 7-bit content with a few characters outside that range, and base64 for arbitrary binary data. The 8BITMIME and BINARY extensions were introduced to allow transmission of mail without the need for these encodings, but many mail transport agents still do not support them fully. In some countries, several encoding schemes coexist; as a result, by default, a message in a non-Latin alphabet language appears in unreadable form (the only exception is coincidence, when the sender and receiver use the same encoding scheme). Therefore, for international character sets, Unicode is growing in popularity.

Plain text and HTML
Most modern graphic email clients allow the use of either plain text or HTML for the message body at the option of the user. HTML email messages often include an automatically generated plain text copy as well, for compatibility reasons. Advantages of HTML include the ability to include in-line links and images, set apart previous messages in block quotes, wrap naturally on any display, use emphasis such as underlines and italics, and change font styles. Disadvantages include the increased size of the email, privacy concerns about web bugs, abuse of HTML email as a vector for phishing attacks and the spread of malicious software.[68] Some web based mailing lists recommend that all posts be made in plain text, with 72 or 80 characters per line,[69][70] for all the above reasons, but also because they have a significant number of readers using text-based email clients such as Mutt.
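The two MIME content transfer encodings mentioned above can be tried directly with Python's quopri and base64 modules; the sample string is arbitrary:

```python
import base64
import quopri

text = "Héllo, wörld"            # a few characters outside 7-bit ASCII
data = text.encode("utf-8")

# Quoted-printable: mostly-ASCII text stays readable; bytes outside the
# 7-bit range become =XX escape sequences (é in UTF-8 is =C3=A9).
qp = quopri.encodestring(data)

# Base64: arbitrary binary data mapped onto a 7-bit-safe alphabet,
# at the cost of roughly 33% size expansion.
b64 = base64.b64encode(data)

# Both encodings round-trip losslessly.
assert quopri.decodestring(qp) == data
assert base64.b64decode(b64) == data
```

This illustrates the trade-off in the text: quoted-printable suits mostly 7-bit content with a few exceptions, while base64 suits arbitrary binary data.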
Some Microsoft email clients allow rich formatting using RTF, but unless the recipient is guaranteed to have a compatible email client this should be avoided.[71] In order to ensure that HTML sent in an email is rendered properly by the recipient's client software, an additional header must be specified when sending: "Content-type: text/html". Most email programs send this header automatically.
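A message carrying both a plain-text copy and an HTML body, as described above, is a multipart/alternative MIME structure. A minimal sketch with Python's standard EmailMessage class (content invented for illustration):

```python
from email.message import EmailMessage

# An HTML message with a plain-text alternative, as many clients
# generate automatically for compatibility.
msg = EmailMessage()
msg["Subject"] = "Formatted note"
msg.set_content("Hello in plain text")                       # text/plain part
msg.add_alternative("<p><b>Hello</b> in HTML</p>", subtype="html")
```

After `add_alternative`, the message as a whole becomes multipart/alternative, and a receiving client picks whichever part it can render; `get_body(preferencelist=...)` models that choice.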

Servers and client applications


Messages are exchanged between hosts using the Simple Mail Transfer Protocol with software programs called mail transfer agents (MTAs); and delivered to a mail store by programs called mail delivery agents (MDAs, also sometimes called local delivery agents, LDAs). Users can retrieve their messages from servers using standard protocols such as POP or IMAP, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Novell GroupWise, Lotus Notes or Microsoft Exchange servers.

The interface of an email client, Thunderbird.

Webmail interfaces allow users to access their mail with any standard web browser, from any computer, rather than relying on an email client. Programs used by users for retrieving, reading, and managing email are called mail user agents (MUAs). Mail can be stored on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent email clients use their own proprietary format and require conversion software to transfer email between them. Server-side storage is often in a proprietary format, but since access is through a standard protocol such as IMAP, moving email from one server to another can be done with any MUA supporting the protocol.

Accepting a message obliges an MTA to deliver it,[72] and when a message cannot be delivered, that MTA must send a bounce message back to the sender, indicating the problem.
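The mbox storage format mentioned above can be read and written with Python's mailbox module. This sketch uses a temporary file, and the message contents are invented:

```python
import os
import tempfile
import mailbox
from email.message import EmailMessage

# An mbox file concatenates messages in one plain-text file; the mailbox
# module handles the "From " separator lines for us.
path = os.path.join(tempfile.mkdtemp(), "inbox.mbox")

msg = EmailMessage()
msg["From"] = "alice@a.org"
msg["Subject"] = "Saved mail"
msg.set_content("Archived in an mbox file.")

box = mailbox.mbox(path)
box.add(msg)          # append the message to the mailbox file
box.flush()
box.close()

# Re-open the file and list the stored subjects.
reread = mailbox.mbox(path)
subjects = [m["Subject"] for m in reread]
reread.close()
```

Because mbox is a documented plain-text format, any MUA that understands it can read the same file, which is the portability point made above.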


Filename extensions
Upon reception of email messages, email client applications save messages in operating system files in the file system. Some clients save individual messages as separate files, while others use various database formats, often proprietary, for collective storage. A historical standard of storage is the mbox format. The specific format used is often indicated by special filename extensions:
eml - Used by many email clients including Microsoft Outlook Express, Windows Mail and Mozilla Thunderbird. The files are plain text in MIME format, containing the email header as well as the message contents and attachments in one or more of several formats.
emlx - Used by Apple Mail.
msg - Used by Microsoft Office Outlook and OfficeLogic Groupware.
mbx - Used by Opera Mail, KMail, and Apple Mail based on the mbox format.

Some applications (like Apple Mail) leave attachments encoded in messages for searching while also saving separate copies of the attachments. Others separate attachments from messages and save them in a specific directory.

URI scheme mailto:


The mailto: URI scheme, as registered with the IANA, defines a scheme for SMTP email addresses. Though its use is not strictly defined, URLs of this form are intended to open the new message window of the user's mail client when the URL is activated, with the address defined by the URL in the To: field.[73]
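A mailto: URL is assembled by percent-encoding the header values. The sketch below does this with Python's standard library; the address and the helper's name are illustrative, not part of any real API:

```python
from urllib.parse import quote

def mailto(address, subject=None, body=None):
    """Build a mailto: URL; header values are percent-encoded."""
    url = "mailto:" + quote(address, safe="@")
    params = []
    if subject:
        params.append("subject=" + quote(subject))
    if body:
        params.append("body=" + quote(body))
    if params:
        url += "?" + "&".join(params)
    return url

print(mailto("info@example.org", subject="Hello world"))
# prints: mailto:info@example.org?subject=Hello%20world
```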

Types
Main article: Email types

Web-based email (webmail)


This is the type of email that most users are familiar with. Many free email providers host their services as web-based email (e.g. Hotmail, Yahoo!, Gmail, AOL). This allows users to log into the email account by using an Internet browser to send and receive their email. Its main disadvantage is the need to be connected to the Internet while using it. Other software tools exist which integrate parts of the webmail functionality into the OS (e.g. creating messages directly from third-party applications via MAPI).


POP3 email services


POP3 is the acronym for Post Office Protocol 3, one of the leading email account types on the Internet. In a POP3 email account, messages are downloaded to the user's computer and then deleted from the mail server. This makes it difficult to save and view messages on multiple computers; in addition, messages sent from one computer are not copied to the Sent Items folder on other computers. The messages are deleted from the server to make room for more incoming messages. POP supports simple download-and-delete requirements for access to remote mailboxes (termed maildrop in the POP RFC).[3] Although most POP clients have an option to leave messages on the server after downloading a copy of them, most email clients using POP3 simply connect, retrieve all messages, store them on the user's computer as new messages, delete them from the server, and then disconnect. Other protocols, notably IMAP (Internet Message Access Protocol), provide more complete and complex remote access to typical mailbox operations. Many email clients support POP as well as IMAP to retrieve messages; however, fewer Internet service providers (ISPs) support IMAP.
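The download-and-delete model maps directly onto Python's standard poplib module. This is a sketch only: the host, user, and password are placeholders, and the function is shown but not run against any real server.

```python
import poplib
from email import message_from_bytes
from email.policy import default

def fetch_and_delete(host, user, password):
    """Typical POP3 session: download every message, then delete each
    one from the server (the default download-and-delete model)."""
    box = poplib.POP3_SSL(host)        # placeholder server details
    box.user(user)
    box.pass_(password)
    messages = []
    count, _size = box.stat()
    for i in range(1, count + 1):
        _resp, lines, _octets = box.retr(i)
        messages.append(message_from_bytes(b"\r\n".join(lines), policy=default))
        box.dele(i)                    # marked for deletion; applied at QUIT
    box.quit()
    return messages
```

Leaving messages on the server, as some POP clients allow, would simply omit the `dele` call.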

IMAP email servers


IMAP stands for Internet Message Access Protocol and is an alternative to POP3. With an IMAP account, the user has access to mail folders on the mail server and can use any computer to read messages from anywhere. The client shows the headers of messages, including the sender and the subject, and the user can choose to download only those messages they need to read. Because mail is usually kept on the mail server, it is safer and can be backed up on the server.
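The contrast with POP3 is visible in a small sketch using Python's standard imaplib module, which peeks at message headers while leaving the full messages on the server. The host and credentials are placeholders; the function is illustrative and not run here.

```python
import imaplib

def list_subjects(host, user, password):
    """IMAP keeps mail on the server; fetch only selected header fields
    and leave the messages (and their attachments) where they are."""
    with imaplib.IMAP4_SSL(host) as imap:      # placeholder server details
        imap.login(user, password)
        imap.select("INBOX", readonly=True)    # read-only: nothing is changed
        _typ, data = imap.search(None, "ALL")
        headers = []
        for num in data[0].split():
            # PEEK avoids marking the message as read.
            _typ, parts = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
            headers.append(parts[0][1].decode(errors="replace"))
        return headers
```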

MAPI email servers


Messaging Application Programming Interface (MAPI) is a messaging architecture and a Component Object Model based API for Microsoft Windows.

Use
Flaming
Flaming occurs when a person sends a message with angry or antagonistic content. The term is derived from the use of the word "incendiary" to describe particularly heated email discussions. Flaming is assumed to be more common today because of the ease and impersonality of email communications: confrontations in person or via telephone require direct interaction, where social norms encourage civility, whereas typing a message to another person is an indirect interaction, so civility may be forgotten. Flaming is generally looked down upon by Internet communities, as it is considered rude and unproductive.

Email bankruptcy
Also known as "email fatigue", email bankruptcy is when a user ignores a large number of email messages after falling behind in reading and answering them. The reason for falling behind is often information overload and a general sense that there is so much information that it is not possible to read it all. As a solution, people occasionally send a boilerplate message explaining that their email inbox is being cleared out. Harvard University law professor Lawrence Lessig is credited with coining this term, but he may only have popularized it.[74]


In business
Email was widely accepted by the business community as the first broad electronic communication medium and was the first 'e-revolution' in business communication. Email is very simple to understand, and like postal mail, it solves two basic problems of communication: logistics and synchronization (see below). LAN-based email is also an emerging form of usage for business. It not only allows the business user to download mail when offline, it also allows a small business to have multiple users' email IDs with just one email connection.

Pros
The problem of logistics: Much of the business world relies upon communications between people who are not physically in the same building, area or even country; setting up and attending an in-person meeting, telephone call, or conference call can be inconvenient, time-consuming, and costly. Email provides a way to exchange information between two or more people with no set-up costs that is generally far less expensive than physical meetings or phone calls.
The problem of synchronisation: With real-time communication by meetings or phone calls, participants have to work on the same schedule, and each participant must spend the same amount of time in the meeting or call. Email allows asynchrony: each participant may control their schedule independently.

Cons
Most business workers today spend from one to two hours of their working day on email: reading, ordering, sorting, 're-contextualizing' fragmented information, and writing email.[75] The use of email is increasing due to rising levels of globalisation, labour division and outsourcing, amongst other things. Email can lead to some well-known problems:
Loss of context: The context in which a message was written is lost forever; there is no way to get it back. Information in context (as in a newspaper) is much easier and faster to understand than unedited and sometimes unrelated fragments of information. Communicating in context can only be achieved when both parties have a full understanding of the context and issue in question.
Information overload: Email is a push technology; the sender controls who receives the information. Convenient availability of mailing lists and use of "copy all" can lead to people receiving unwanted or irrelevant information of no use to them.
Inconsistency: Email can duplicate information. This can be a problem when a large team is working on documents and information while not in constant contact with the other members of the team.
Liability: Statements made in an email can be deemed legally binding and be used against a party in a court of law.[76]
Despite these disadvantages, email has become the most widely used medium of communication within the business world. In fact, a 2010 study on workplace communication [77] found that 83% of U.S. knowledge workers felt that email was critical to their success and productivity at work.[78]

Research on email marketing
Research suggests that email marketing can be viewed as useful by consumers if it contains information such as special sales offerings and new product information; offering interesting hyperlinks or generic information on consumer trends is less useful.[79] This research by Martin et al. (2003) also shows that if consumers find email marketing useful, they are likely to visit a store, thereby overcoming limitations of Internet marketing such as not being able to touch or try on a product.


Problems
Attachment size limitation
Email messages may have one or more attachments. Attachments serve the purpose of delivering binary or text files of unspecified size. In principle, there is no intrinsic technical restriction in the SMTP protocol limiting the size or number of attachments. In practice, however, email service providers implement various limitations on the permissible size of files or the size of an entire message. Furthermore, for technical reasons, a small attachment often increases in size when sent,[80] which can confuse senders trying to assess whether a file can be sent by email, and can result in the message being rejected. As larger and larger files are created and traded, many users are forced either to upload and download their files using an FTP server or, more popularly, to use online file-sharing facilities or services, usually over web-friendly HTTP, in order to send and receive them.
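The growth in size comes largely from Base64 transfer encoding, which represents every 3 bytes of a binary attachment as 4 ASCII characters. The effect is easy to measure with Python's standard library:

```python
import base64

payload = bytes(range(256)) * 1024      # 256 KiB of arbitrary binary data
encoded = base64.b64encode(payload)

# Base64 maps 3 input bytes to 4 output characters, so an attachment
# grows by roughly one third when encoded for email transport.
overhead = len(encoded) / len(payload)
print(f"{overhead:.2f}x")               # prints: 1.33x
```

Line breaks and MIME headers add a little more on top of the one-third figure, which is why a file that just fits under a provider's limit may still bounce.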

Information overload
A December 2007 New York Times blog post described information overload as "a $650 Billion Drag on the Economy",[81] and the New York Times reported in April 2008 that "E-MAIL has become the bane of some people's professional lives" due to information overload, yet "none of the current wave of high-profile Internet start-ups focused on email really eliminates the problem of email overload because none helps us prepare replies".[82] GigaOm posted a similar article in September 2010, highlighting research [83] that found 57% of knowledge workers were overwhelmed by the volume of email they received.[78] Technology investors reflect similar concerns.[84] In October 2010, CNN published an article titled "Happy Information Overload Day" that compiled research on email overload from IT companies and productivity experts. According to Basex, the average knowledge worker receives 93 emails a day; subsequent studies have reported higher numbers.[85] Marsha Egan, an email productivity expert, called email technology both a blessing and a curse in the article. "Everyone just learns that they have to have it dinging and flashing and open just in case the boss e-mails," she said. "The best gift any group can give each other is to never use e-mail urgently. If you need it within three hours, pick up the phone."[86]

Spamming and computer viruses


The usefulness of email is being threatened by four phenomena: email bombardment, spamming, phishing, and email worms. Spamming is unsolicited commercial (or bulk) email. Because of the minuscule cost of sending email, spammers can send hundreds of millions of email messages each day over an inexpensive Internet connection. Hundreds of active spammers sending this volume of mail results in information overload for many computer users, who receive voluminous unsolicited email each day.[87][88] Email worms use email as a way of replicating themselves into vulnerable computers. Although the first email worm affected UNIX computers, the problem is most common today on the more popular Microsoft Windows operating system. The combination of spam and worm programs results in users receiving a constant drizzle of junk email, which reduces the usefulness of email as a practical tool. A number of anti-spam techniques mitigate the impact of spam. In the United States, Congress has also passed a law, the CAN-SPAM Act of 2003, attempting to regulate such email. Australia also has very strict spam laws restricting the sending of spam from an Australian ISP,[89] but its impact has been minimal since most spam comes from countries that seem reluctant to regulate the sending of spam.


Email spoofing
Email spoofing occurs when the header information of an email is altered to make the message appear to come from a known or trusted source. It is often used as a ruse to collect personal information.

Email bombing
Email bombing is the intentional sending of large volumes of messages to a target address. The overloading of the target email address can render it unusable and can even cause the mail server to crash.

Privacy concerns
Today it can be important to distinguish between Internet and internal email systems. Internet email may travel and be stored on networks and computers without the sender's or the recipient's control. During the transit time it is possible that third parties read or even modify the content. Internal mail systems, in which the information never leaves the organizational network, may be more secure, although information technology personnel and others whose function may involve monitoring or managing may be accessing the email of other employees.

Without some security precautions, email privacy can be compromised because:
- email messages are generally not encrypted;
- email messages have to go through intermediate computers before reaching their destination, meaning it is relatively easy for others to intercept and read messages;
- many Internet service providers (ISPs) store copies of email messages on their mail servers before they are delivered, and these backups can remain on their servers for up to several months, despite deletion from the mailbox;
- the "Received:" fields and other information in the email can often identify the sender, preventing anonymous communication.

There are cryptography applications that can serve as a remedy to one or more of the above. For example, virtual private networks or the Tor anonymity network can be used to encrypt traffic from the user machine to a safer network, while GPG, PGP, SMEmail,[90] or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server. Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this. Finally, attached files share many of the same hazards as those found in peer-to-peer filesharing: attached files may contain trojans or viruses.

Tracking of sent mail


The original SMTP mail service provides limited mechanisms for tracking a transmitted message, and none for verifying that it has been delivered or read. It requires that each mail server must either deliver it onward or return a failure notice (bounce message), but both software bugs and system failures can cause messages to be lost. To remedy this, the IETF introduced Delivery Status Notifications (delivery receipts) and Message Disposition Notifications (return receipts); however, these are not universally deployed in production. (A complete message tracking mechanism was also defined, but it never gained traction; see RFCs 3885 through 3888.) Many ISPs now deliberately disable non-delivery reports (NDRs) and delivery receipts due to the activities of spammers:
- Delivery reports can be used to verify whether an address exists and so is available to be spammed.
- If the spammer uses a forged sender email address (email spoofing), then the innocent email address that was used can be flooded with NDRs from the many invalid email addresses the spammer may have attempted to mail. These NDRs then constitute spam from the ISP to the innocent user.

There are a number of systems that allow the sender to see if messages have been opened.[91][92][93][94] The receiver could also let the sender know that the emails have been opened through an "Okay" button. A check sign can appear in the sender's screen when the receiver's "Okay" button is pressed.
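A return receipt is requested simply by adding a header to the outgoing message. The sketch below uses the Disposition-Notification-To header defined for Message Disposition Notifications; the addresses are placeholders, and the recipient's client remains free to honour, prompt about, or ignore the request:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"               # placeholder addresses
msg["To"] = "bob@example.net"
msg["Subject"] = "Quarterly report"
# Ask the receiving client to send a Message Disposition Notification
# (return receipt) when the message is displayed; compliance is optional.
msg["Disposition-Notification-To"] = "alice@example.org"
msg.set_content("Please confirm that you have read this message.")
```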


U.S. government
The U.S. federal government has been involved in email in several different ways. Starting in 1977, the U.S. Postal Service (USPS) recognized that electronic mail and electronic transactions posed a significant threat to First Class mail volumes and revenue. Therefore, the USPS initiated an experimental email service known as E-COM. Electronic messages were transmitted to a post office, printed out, and delivered as hard copy. To take advantage of the service, an individual had to transmit at least 200 messages. The delivery time of the messages was the same as First Class mail and cost 26 cents. Both the Postal Regulatory Commission and the Federal Communications Commission opposed E-COM. The FCC concluded that E-COM constituted common carriage under its jurisdiction and the USPS would have to file a tariff.[95] Three years after initiating the service, USPS canceled E-COM and attempted to sell it off.[96][97][98][99][100][101] The early ARPANET dealt with multiple email clients that had various, and at times incompatible, formats. For example, in Multics, the "@" sign meant "kill line" and anything before the "@" sign was ignored, so Multics users had to use a command-line option to specify the destination system.[31] The Department of Defense's DARPA desired to have uniformity and interoperability for email and therefore funded efforts to drive towards unified interoperable standards. This led to David Crocker, John Vittal, Kenneth Pogran, and Austin Henderson publishing RFC 733, "Standard for the Format of ARPA Network Text Message" (November 21, 1977), which was apparently not effective. In 1979, a meeting was held at BBN to resolve incompatibility issues. Jon Postel recounted the meeting in RFC 808, "Summary of Computer Mail Services Meeting Held at BBN on 10 January 1979" (March 1, 1982), which includes an appendix listing the varying email systems at the time.
This, in turn, led to the release of David Crocker's RFC 822, "Standard for the Format of ARPA Internet Text Messages" (August 13, 1982).[102] The National Science Foundation took over operations of the ARPANET and Internet from the Department of Defense, and initiated NSFNET, a new backbone for the network. A part of the NSFNET AUP forbade commercial traffic.[103] In 1988, Vint Cerf arranged for an interconnection of MCI Mail with NSFNET on an experimental basis. The following year CompuServe email interconnected with NSFNET. Within a few years the commercial traffic restriction was removed from NSFNET's AUP, and NSFNET was privatised. In the late 1990s, the Federal Trade Commission grew concerned with fraud transpiring in email, and initiated a series of procedures on spam, fraud, and phishing.[104] In 2004, FTC jurisdiction over spam was codified into law in the form of the CAN-SPAM Act.[105] Several other U.S. federal agencies have also exercised jurisdiction, including the Department of Justice and the Secret Service. NASA has provided email capabilities to astronauts aboard the Space Shuttle and International Space Station since 1991, when a Macintosh Portable was used aboard Space Shuttle mission STS-43 to send the first email via AppleLink.[106][107][108] Today astronauts aboard the International Space Station have email capabilities via wireless networking throughout the station and are connected to the ground at 3 Mbit/s Earth to station and 10 Mbit/s station to Earth, comparable to home DSL connection speeds.[109]


Notes
[1] "RFC 5321 Simple Mail Transfer Protocol" (http:/ / tools. ietf. org/ html/ rfc5321#section-2. 3. 11). Network Working Group. . Retrieved 2010-02=October 2008. [2] Ron Brown, Fax invades the mail market, New Scientist (http:/ / books. google. com/ books?id=Ry64sjvOmLkC& pg=PA218), Vol. 56, No. 817 (Oct., 26, 1972), pages 218-221. [3] Herbert P. Luckett, What's News: Electronic-mail delivery gets started, Popular Science (http:/ / books. google. com/ books?id=cKSqa8u3EIoC& pg=PA85), Vol. 202, No. 3 (March 1973); page 85 [4] Unless explicitly qualified, any technical descriptions in this article will refer to current Internet e-mail rather than to earlier email systems. [5] See (Partridge 2008) for early history of email, from origins through 1991. [6] Long, Tony (23 October 2000). A Matter of (Wired News) Style (http:/ / www. nettime. org/ Lists-Archives/ nettime-bold-0010/ msg00471. html). Wired magazine. . [7] Readers on (Wired News) Style (http:/ / www. wired. com/ culture/ lifestyle/ news/ 2000/ 10/ 39651). Wired magazine. 24 October 2000. . [8] "RFC Editor Terms List" (http:/ / www. rfc-editor. org/ rfc-style-guide/ terms-online. txt). IETF. . [9] Yahoo style guide (http:/ / styleguide. yahoo. com/ word-list/ e) [10] AP Stylebook editors share big changes (http:/ / www. aces2011. org/ sessions/ 18/ the-ap-stylebook-editors-visit-aces-2011/ ) from the American Copy Editors Society [11] Gerri Berendzen; Daniel Hunt. "AP changes e-mail to email" (http:/ / www. aces2011. org/ sessions/ 18/ the-ap-stylebook-editors-visit-aces-2011/ ). 15th National Conference of the American Copy Editors Society (2011, Phoenix). ACES. . Retrieved 23 March 2011. [12] AskOxford Language Query team. "What is the correct way to spell 'e' words such as 'email', 'ecommerce', 'egovernment'?" (http:/ / www. askoxford. com/ asktheexperts/ faq/ aboutspelling/ email). FAQ. Oxford University Press. . Retrieved 4 September 2009. 
"We recommend email, as this is now by far the most common form" [13] Reference.com (http:/ / dictionary. reference. com/ browse/ email) [14] Random House Unabridged Dictionary, 2006 [15] The American Heritage Dictionary of the English Language, Fourth Edition [16] Princeton University WordNet 3.0 [17] The American Heritage Science Dictionary, 2002 [18] ""Email" or "e-mail"" (http:/ / english. stackexchange. com/ questions/ 1925/ email-or-e-mail). English Language & Usage Stack Exchange. August 25, 2010. . Retrieved September 26, 2010. [19] RFC 821 (rfc821) Simple Mail Transfer Protocol (http:/ / www. faqs. org/ rfcs/ rfc821. html) [20] RFC 1939 (rfc1939) Post Office Protocol Version 3 (http:/ / www. faqs. org/ rfcs/ rfc1939. html) [21] RFC 3501 (rfc3501) Internet Message Access Protocol version 4rev1 (http:/ / www. faqs. org/ rfcs/ rfc3501. html) [22] "RFC Style Guide", Table of decisions on consistent usage in RFC (http:/ / www. rfc-editor. org/ rfc-style-guide/ terms-online. txt) [23] Excerpt from the FAQ list of the Usenet newsgroup alt.usage.english (http:/ / alt-usage-english. org/ excerpts/ fxhowdoy. html) [24] "The Watsons: IBM's Troubled Legacy" (http:/ / hbswk. hbs. edu/ item/ 4138. html) [25] See File:Gestapo anti-gay telex.jpg [26] "Telex and TWX History" (http:/ / www. baudot. net/ docs/ kimberlin--telex-twx-history. pdf), Donald E. Kimberlin, 1986 [27] USPS Support Panel, Louis T Rader, Chair, Chapter IV: Systems, Electronic Message Systems for the U.S. Postal Service (http:/ / books. google. com/ books?id=5TQrAAAAYAAJ& pg=PA27), National Academy of Sciences, Washington, D.C., 1976; pages 27-35. [28] "CTSS, Compatible Time-Sharing System" (September 4, 2006), University of South Alabama, USA-CTSS (http:/ / www. cis. usouthal. edu/ faculty/ daigle/ project1/ ctss. htm). [29] an IBM 7094 [30] Tom Van Vleck, "The IBM 7094 and CTSS" (September 10, 2004), Multicians.org (Multics), web: Multicians-7094 (http:/ / www. multicians. org/ thvv/ 7094. html). 
[31] Tom Van Vleck. "The History of Electronic Mail" (http:/ / www. multicians. org/ thvv/ mail-history. html). . [32] Version 3 Unix mail(1) manual page from 10/25/1972 (http:/ / minnie. tuhs. org/ cgi-bin/ utree. pl?file=V3/ man/ man1/ mail. 1) [33] Version 6 Unix mail(1) manual page from 2/21/1975 (http:/ / minnie. tuhs. org/ cgi-bin/ utree. pl?file=V6/ usr/ man/ man1/ mail. 1) [34] APL Quotations and Anecdotes (http:/ / www. jsoftware. com/ papers/ APLQA. htm), including Leslie Goldsmith's story of the Mailbox [35] History of the Internet, including Carter/Mondale use of email (http:/ / www. actewagl. com. au/ Education/ communications/ Internet/ historyOfTheInternet/ InternetOnItsInfancy. aspx) [36] David Wooley, PLATO: The Emergence of an Online Community (http:/ / www. thinkofit. com/ plato/ dwplato. htm#pnotes), 1994. [37] Stromberg, Joseph (22 February 2012). "A Piece of Email History Comes to the American History Museum" (http:/ / blogs. smithsonianmag. com/ aroundthemall/ 2012/ 02/ a-piece-of-email-history-comes-to-the-american-history-museum/ ). Smithsonian Institution. . Retrieved 11 June 2012. [38] "...PROFS changed the way organizations communicated, collaborated and approached work when it was introduced by IBMs Data Processing Division in 1981..." (http:/ / www. ibm. com/ ibm100/ us/ en/ icons/ networkbus/ ), IBM.com [39] "1982 The National Security Council (NSC) staff at the White House acquires a prototype electronic mail system, from IBM, called the Professional Office System (PROFs)...." (http:/ / www. fas. org/ spp/ starwars/ offdocs/ reagan/ chron. txt), fas.org

Email
[40] Gordon Bell's timeline of Digital Equipment Corporation (https:/ / research. microsoft. com/ en-us/ um/ people/ gbell/ Digital/ timeline/ 1982. htm) [41] Ray Tomlinson. "The First Network Email" (http:/ / openmap. bbn. com/ ~tomlinso/ ray/ firstemailframe. html). . [42] Version 7 Unix manual: "UUCP Implementation Description" by D. A. Nowitz, and "A Dial-Up Network of UNIX Systems" by D. A. Nowitz and M. E. Lesk (http:/ / cm. bell-labs. com/ 7thEdMan/ vol2/ uucp. bun) [43] "BITNET History" (http:/ / www. livinginternet. com/ u/ ui_bitnet. htm), livinginternet.com [44] with various vendors supplying gateway software to link these incompatible systems [45] Email History (http:/ / www. livinginternet. com/ e/ ei. htm) [46] "The Technical Development of Internet Email" (http:/ / www. ir. bbn. com/ ~craig/ email. pdf) Craig Partridge, AprilJune 2008, p.5 [47] The First Email (http:/ / openmap. bbn. com/ ~tomlinso/ ray/ firstemailframe. html) [48] Wave New World,Time Magazine, October 19, 2009, p.48 [49] How E-mail Works (http:/ / www. webcastr. com/ videos/ informational/ how-email-works. html) (internet video). howstuffworks.com. 2008. . [50] Simpson, Ken (October 3, 2008). "An update to the email standards" (http:/ / blog. mailchannels. com/ 2008/ 10/ update-to-email-standards. html). Mail Channels Blog Entry. . [51] P. Resnick, Ed. (October 2008). "RFC 5322, Internet Message Format" (http:/ / tools. ietf. org/ html/ rfc5322). IETF. . [52] Moore, K (November 1996). "MIME (Multipurpose Internet Mail Extensions) Part Three: Message Header Extensions for Non-ASCII Text" (http:/ / tools. ietf. org/ html/ rfc2047). IETF. . Retrieved 2012-01-21. [53] A Yang, Ed. (February 2012). "RFC 6532, Internationalized Email Headers" (http:/ / tools. ietf. org/ html/ rfc6532). IETF. ISSN2070-1721. . [54] J. Yao, Ed., W. Mao, Ed. (February 2012). "RFC 6531, SMTP Extension for Internationalized Email Addresses" (http:/ / tools. ietf. org/ html/ rfc6531). IETF. ISSN2070-1721. . 
[55] RFC 5322, 3.6. Field Definitions (http:/ / tools. ietf. org/ html/ rfc5322#section-3. 6) [56] RFC 5322, 3.6.4. Identification Fields (http:/ / tools. ietf. org/ html/ rfc5322#section-3. 6. 4) [57] http:/ / www. iana. org/ assignments/ message-headers/ perm-headers. html [58] http:/ / www. iana. org/ assignments/ message-headers/ prov-headers. html [59] Microsoft, Auto Response Suppress, 2010, microsoft reference (http:/ / msdn. microsoft. com/ en-us/ library/ ee219609(v=EXCHG. 80). aspx), 2010 Sep 22 [60] RFC 5064 (http:/ / tools. ietf. org/ html/ rfc5064) [61] John Klensin (October 2008). "Trace Information" (https:/ / tools. ietf. org/ html/ rfc5321& #035;section-4. 4). Simple Mail Transfer Protocol (https:/ / tools. ietf. org/ html/ rfc5321). IETF. sec.4.4. RFC 5321. . [62] John Levine (14 January 2012). "Trace headers" (http:/ / www. ietf. org/ mail-archive/ web/ apps-discuss/ current/ msg04115. html). email message. IETF. . Retrieved 16 January 2012. "there are many more trace headers than those two" [63] This extensible field was defined by RFC 5451, that also defined a IANA registry of Email Authentication Parameters (http:/ / www. iana. org/ assignments/ email-auth/ email-auth. xml). [64] RFC 4408. [65] Defined in RFC 3834, and updated by RFC 5436. [66] RFC 5518. [67] Craig Hunt (2002). TCP/IP Network Administration. O'Reilly Media. p.70. ISBN978-0-596-00297-8. [68] "Email policies that prevent viruses" (http:/ / advosys. ca/ papers/ mail-policies. html). . [69] "When posting to a RootsWeb mailing list..." (http:/ / helpdesk. rootsweb. com/ listadmins/ plaintext. html) [70] "...Plain text, 72 characters per line..." (http:/ / www. openbsd. org/ mail. html) [71] How to Prevent the Winmail.dat File from Being Sent to Internet Users (http:/ / support. microsoft. 
com/ kb/ 138053) [72] In practice, some accepted messages may nowadays not be delivered to the recipient's InBox, but instead to a Spam or Junk folder which, especially in a corporate environment, may be inaccessible to the recipient [73] RFC 2368 section 3 : by Paul Hoffman in 1998 discusses operation of the "mailto" URL. [74] Barrett, Grant (December 23, 2007). "All We Are Saying." (http:/ / www. nytimes. com/ 2007/ 12/ 23/ weekinreview/ 23buzzwords. html?ref=weekinreview). New York Times. . Retrieved 2007-12-24. [75] "Email Right to Privacy Why Small Businesses Care" (http:/ / www. smallbiztrends. com/ 2007/ 06/ email-has-right-to-privacy-why-small-businesses-care. html). Anita Campbell. 2007-06-19. . [76] C. J. Hughes (February 17, 2011). "E-Mail May Be Binding, State Court Rules" (http:/ / www. nytimes. com/ 2011/ 02/ 20/ realestate/ 20posting. html). New York Times. . Retrieved 2011-02-20. [77] http:/ / www. plantronics. com/ north_america/ en_US/ howwework/ [78] By Om Malik, GigaOm. " Is Email a Curse or a Boon? (http:/ / gigaom. com/ collaboration/ is-email-a-curse-or-a-boon/ )" September 22, 2010. Retrieved October 11, 2010. [79] Martin, Brett A. S., Joel Van Durme, Mika Raulas, and Marko Merisavo (2003), "E-mail Marketing: Exploratory Insights from Finland" (http:/ / www. basmartin. com/ wp-content/ uploads/ 2010/ 08/ Martin-et-al-2003. pdf), Journal of Advertising Research, 43 (3), 293-300. [80] "Exchange 2007: Attachment Size Increase,..." (http:/ / technet. microsoft. com/ en-us/ magazine/ 2009. 01. exchangeqa. aspx?pr=blog). TechNet Magazine, Microsoft.com US. 2010-03-25. .



Further reading


Cemil Betanov, Introduction to X.400, Artech House, ISBN 0-89006-597-7.
Marsha Egan, "Inbox Detox and The Habit of Email Excellence" (http://www.inboxdetox.com), Acanthus Publishing, ISBN 978-0-9815589-8-1.
Lawrence Hughes, Internet e-mail Protocols, Standards and Implementation, Artech House, ISBN 0-89006-939-5.
Kevin Johnson, Internet Email Protocols: A Developer's Guide, Addison-Wesley Professional, ISBN 0-201-43288-9.
Pete Loshin, Essential Email Standards: RFCs and Protocols Made Practical, John Wiley & Sons, ISBN 0-471-34597-0.
Partridge, Craig (April–June 2008). "The Technical Development of Internet Email" (http://www.ir.bbn.com/~craig/email.pdf) (PDF). IEEE Annals of the History of Computing (Berlin: IEEE Computer Society) 30 (2). ISSN 1934-1547.
Sara Radicati, Electronic Mail: An Introduction to the X.400 Message Handling Standards, McGraw-Hill, ISBN 0-07-051104-7.
John Rhoton, Programmer's Guide to Internet Mail: SMTP, POP, IMAP, and LDAP, Elsevier, ISBN 1-55558-212-5.
John Rhoton, X.400 and SMTP: Battle of the E-mail Protocols, Elsevier, ISBN 1-55558-165-X.
David Wood, Programming Internet Mail, O'Reilly, ISBN 1-56592-479-7.
Yoram M. Kalman & Sheizaf Rafaeli, Online Pauses and Silence: Chronemic Expectancy Violations in Written Computer-Mediated Communication (http://rafaeli.net/KalmanRafaeliChronemics2011.pdf), Communication Research, Vol. 38, pp. 54–69, 2011.

External links
E-mail (http://www.dmoz.org/Computers/Internet/E-mail/) at the Open Directory Project
IANA's list of standard header fields (http://www.iana.org/assignments/message-headers/perm-headers.html)
The History of Email (http://emailhistory.org/) is Dave Crocker's attempt at capturing the sequence of 'significant' occurrences in the evolution of email; a collaborative effort that also cites this page.
The History of Electronic Mail (http://www.multicians.org/thvv/mail-history.html) is a personal memoir by the implementer of an early email system
The Official MCI Mail Blog! (http://mcimail.blogspot.com/) a blog about MCI Mail, one of the early commercial electronic mail services


Web content
Web content is the textual, visual or aural content that is encountered as part of the user experience on websites. It may include, among other things: text, images, sounds, videos and animations. In Information Architecture for the World Wide Web, Lou Rosenfeld and Peter Morville write, "We define content broadly as 'the stuff in your Web site.' This may include documents, data, applications, e-services, images, audio and video files, personal Web pages, archived e-mail messages, and more. And we include future stuff as well as present stuff."[1]

Beginnings of web content


While the Internet began with a U.S. government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory CERN proposed the concept of linking documents with hypertext. But it was not until Mosaic, the forerunner of the famous Netscape Navigator, appeared that the Internet became more than a file-serving system. The use of hypertext, hyperlinks and a page-based model of sharing information, introduced with Mosaic and later Netscape, helped to define web content and the formation of websites. Today we largely categorize websites by the type of content they contain.

The page concept


Web content is dominated by the "page" concept. Having its beginnings in an academic setting dominated by type-written pages, the idea of the web was to link directly from one academic paper to another. This was a completely revolutionary idea in the late 1980s and early 1990s, when the closest thing to a link was a citation in the midst of a typewritten paper, naming the reference either at the bottom of the page or on the last page. When it became possible for any person to write and own a Mosaic page, the concept of a "home page" blurred the idea of a page.[2] It was possible for anyone to own a "Web page" or a "home page" which, in many cases, comprised many physical pages in spite of being called "a page". People often cited their "home page" to provide credentials, links to anything that a person supported, or any other individual content a person wanted to publish. Even though "the web" may be the resource we commonly use to "get to" particular locations online, many different protocols[3] are invoked to access embedded information. When we are given an address, such as http://www.youtube.com, we expect to see a range of web pages, but in each page we have embedded tools to watch "video clips".

HTML web content


Even though we may embed various protocols within web pages, the "web page" composed of "html" (or some variation) content is still the dominant way in which we share content. And while there are many web pages with localized proprietary structure (most usually, business websites), many millions of websites abound that are structured according to a common core idea. Blogs are a type of website that contain mainly web pages authored in html (although the blogger may be totally unaware that the web pages are composed using html due to the blogging tool that may be in use). Millions of people use blogs online; a blog is now the new "home page", that is, a place where a persona can reveal personal information and build a concept of who this persona is. Even though a blog may be written for other purposes, such as promoting a business, the core of a blog is that it is written by a "person" and that person reveals information from her/his perspective.

Search engine sites are composed mainly of html content, but also have a typically structured approach to revealing information. A search engine results page (SERP) displays a heading, usually the name of the search engine, and then a list of websites and their addresses. What is being listed are the results of a query defined by keywords. The results page lists webpages that are connected in some way with the keywords used in the query. Discussion boards are sites composed of "textual" content organized by html or some variation that can be viewed in a web browser. The driving mechanism of a discussion board is that users are registered and, once registered, can write posts. Often a discussion board is made up of posts asking some type of question, to which other users may provide answers. E-commerce sites are largely composed of textual material embedded with graphics displaying a picture of the item(s) for sale. However, extremely few sites are composed page-by-page using some variant of HTML. Generally, webpages are composed as they are being served from a database to a customer using a web browser. The user sees the mainly text document arriving as a webpage to be viewed in a web browser. E-commerce sites are usually organized by software known as a "shopping cart".


A wider view of web content


While there are many millions of pages that are predominantly composed of HTML, or some variation, in general we view data, applications, e-services, images (graphics), audio and video files, personal web pages, archived e-mail messages, and many more forms of file and data systems as belonging to websites and web pages. While there are many hundreds of ways to deliver information on a website, there is a common body of knowledge about search engine optimization that advises how anything other than text should be delivered. Search engines are currently text-based and are one of the common ways people using a browser locate sites of interest.

Content is king
The phrase can be interpreted to mean that, without original and desirable content, or consideration for the rights and commercial interests of content creators, any media venture is likely to fail through lack of appealing content, regardless of other design factors. Content can mean any creative work, such as text, graphics, images or video. "Content is King" is a current meme when organizing or building a website[4] (although Andrew Odlyzko in "Content is Not King" argues otherwise). Text content is particularly important for search engine placement. Without original text content, most search engines will be unable to match search terms to the content of a site.

Content management
Because websites are often complex, the term "content management" appeared in the late 1990s, identifying a method, or in some cases a tool, for organizing all the diverse elements to be contained on a website.[5] Content management often means that within a business there is a range of people with distinct content-related roles, such as content author, editor, publisher, and administrator. It may also mean a content management system through which those different roles cooperate to operate the system and organize the information for a website. Even though a business may organize to collect, contain and represent that information online, content needs to be organized in such a manner as to provide the reader (browser) with an overall "customer experience" that is easy to use: the site can be navigated with ease, and the website can fulfill the role assigned to it by the business, that is, to sell to customers, to market products and services, or to inform customers.


Geo targeting of web content


Geo targeting of web content in internet marketing and geomarketing is the method of determining the geolocation (the physical location) of a website visitor with geolocation software and delivering different content to that visitor based on his or her location, such as country, region/state, city, metro code/zip code, organization, Internet Protocol (IP) address, ISP or other criteria.

Different content by choice

A typical example of different content by choice in geo targeting is the FedEx website at FedEx.com, where users first select their country location and are then presented with different site or article content depending on their selection.

Automated different content

With automated different content in internet marketing and geomarketing, the delivery of different content based on the visitor's geolocation and other personal information is automated.
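The automated branch of this lookup-then-deliver logic can be sketched in a few lines. Everything below is invented for illustration: the prefixes (drawn from documentation-reserved IP ranges), region codes, and messages stand in for a real geolocation database keyed by IP ranges.

```python
# Toy prefix table standing in for a real geolocation database.
# Prefixes use documentation-reserved ranges (RFC 5737); regions are examples.
GEO_TABLE = {
    "203.0.113.": "AU",
    "198.51.100.": "US",
}

# Region-specific content variants (hypothetical).
CONTENT = {
    "AU": "Prices shown in AUD",
    "US": "Prices shown in USD",
}

def region_for(ip: str, default: str = "US") -> str:
    """Match the visitor's IP against the prefix table; fall back to a default."""
    for prefix, region in GEO_TABLE.items():
        if ip.startswith(prefix):
            return region
    return default

def content_for(ip: str) -> str:
    """Deliver the content variant for the visitor's detected region."""
    return CONTENT[region_for(ip)]

assert content_for("203.0.113.7") == "Prices shown in AUD"
```

A production system would consult a maintained IP-geolocation database and typically also honor an explicit country choice by the user, as in the FedEx example above.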

References
[1] Information Architecture for the World Wide Web (http://www.oreilly.com/catalog/infotecture/), second edition, page 219.
[2] NetValley (http://www.netvalley.com/archives/mirrors/davemarsh-timeline-1.htm)
[3] Internet Tutorial - What Is the World Wide Web? (http://www.centerspan.org/tutorial/www.htm#tour)
[4] "Content is king - importance of good content - content is king - page content" (http://www.akamarketing.com/content-is-king.html). akamarketing. Retrieved 2009-04-15.
[5] Is Content King? (http://www.skyrme.com/updates/u59_f2.htm)

File sharing
File sharing is the practice of distributing or providing access to digitally stored information, such as computer programs, multimedia (audio, images and video), documents, or electronic books. It may be implemented in a variety of ways. Common methods of storage, transmission and dispersion include manual sharing utilizing removable media, centralized servers on computer networks, World Wide Web-based hyperlinked documents, and the use of distributed peer-to-peer networking.

Types of file sharing


Peer-to-peer file sharing
Users can use software that connects to a peer-to-peer network to search for shared files on the computers of other users (i.e. peers) connected to the network. Files of interest can then be downloaded directly from other users on the network. Typically, large files are broken down into smaller chunks, which may be obtained from multiple peers and then reassembled by the downloader. This is done while the peer is simultaneously uploading the chunks it already has to other peers.
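The chunk-and-reassemble idea above can be illustrated with a minimal sketch. The chunk size, manifest format, and function names are invented for illustration and do not follow any particular protocol; real systems (BitTorrent, for example) use much larger pieces and also handle peer discovery and uploading.

```python
import hashlib

CHUNK_SIZE = 4  # tiny, for illustration; real protocols use pieces of many KiB

def make_manifest(data: bytes) -> list[str]:
    """Hash each fixed-size chunk so a downloader can verify pieces independently."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def assemble(pieces: dict[int, bytes], manifest: list[str]) -> bytes:
    """Reassemble chunks fetched (possibly out of order) from multiple peers,
    rejecting any piece whose hash does not match the manifest."""
    out = []
    for index, digest in enumerate(manifest):
        piece = pieces[index]
        if hashlib.sha256(piece).hexdigest() != digest:
            raise ValueError(f"corrupt piece {index}")
        out.append(piece)
    return b"".join(out)

original = b"hello, peer-to-peer world"
manifest = make_manifest(original)
# Pretend each chunk arrived from a different peer, out of order.
received = {i: original[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
            for i in reversed(range(len(manifest)))}
assert assemble(received, manifest) == original
```

Per-piece hashing is what lets a downloader accept chunks from untrusted peers: a bad piece is detected and re-requested without discarding the rest of the file.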

History
Files were first exchanged on removable media. Computers were able to access remote files using filesystem mounting, bulletin board systems (1978), Usenet (1979), and FTP servers (1985). Internet Relay Chat (1988) and Hotline (1997) enabled users to communicate remotely through chat and to exchange files. The MP3 encoding, which was standardized in 1991 and which substantially reduced the size of audio files, grew to widespread use in the late 1990s. In 1998, MP3.com and Audiogalaxy were established, the Digital Millennium Copyright Act was unanimously passed, and the first MP3 player devices were launched.

In June 1999, Napster was released as a centralized unstructured peer-to-peer system,[1] requiring a central server for indexing and peer discovery. It is generally credited as being the first peer-to-peer file sharing system. Gnutella, eDonkey2000, and Freenet were released in 2000, as MP3.com and Napster were facing litigation. Gnutella, released in March, was the first decentralized file sharing network. In the gnutella network, all connecting software was considered equal, and therefore the network had no central point of failure. In July, Freenet was released and became the first anonymity network. In September the eDonkey2000 client and server software was released.

In 2001, Kazaa and Poisoned for the Mac were released. Kazaa's FastTrack network was distributed, though unlike gnutella, it assigned more traffic to 'supernodes' to increase routing efficiency. The network was proprietary and encrypted, and the Kazaa team made substantial efforts to keep other clients such as Morpheus off the FastTrack network. In July 2001, Napster was sued by several recording companies and lost in A&M Records, Inc. v. Napster, Inc.[2] In the case of Napster, it was ruled that an online service provider could not use the "transitory network transmission" safe harbor in the DMCA if it had control of the network with a server.[3] Shortly after its loss in court, Napster was shut down to comply with a court order. This drove users to other P2P applications, and file sharing continued its growth.[4] The Audiogalaxy Satellite client grew in popularity, and the LimeWire client and BitTorrent protocol were released. Until its decline in 2004, Kazaa was the most popular file sharing program despite bundled malware and legal battles in the Netherlands, Australia, and the United States.
In 2002, a Tokyo district court ruling shut down File Rogue, and the Recording Industry Association of America (RIAA) filed a lawsuit that effectively shut down Audiogalaxy. From 2002 through 2003, a number of BitTorrent services were established, including Suprnova.org, isoHunt, TorrentSpy, and The Pirate Bay. In 2002, the RIAA also began filing lawsuits against Kazaa users. As a result of such lawsuits, many universities added file sharing regulations to their school administrative codes (though some students found ways to circumvent them outside school hours). With the shutdown of eDonkey in 2005, eMule became the dominant client of the eDonkey network. In 2006, police raids took down the Razorback2 eDonkey server and temporarily took down The Pirate Bay.


In 2009, the Pirate Bay trial ended in a guilty verdict for the primary founders of the tracker. The decision was appealed, leading to a second guilty verdict in November 2010. In October 2010, Limewire was forced to shut down following a court order in Arista Records LLC v. Lime Group LLC, but the gnutella network remains active through open source clients like Frostwire and gtk-gnutella. Furthermore, multi-protocol file sharing software such as MLDonkey and Shareaza adapted in order to support all the major file sharing protocols, so users no longer had to install and configure multiple file sharing programs. On 19 January 2012, the United States Department of Justice shut down the popular domain of Megaupload (established 2005). The file sharing site claimed to serve over 50,000,000 visitors a day.[5] Kim Dotcom (formerly Kim Schmitz) was arrested in New Zealand and is awaiting extradition.[6] The takedown of the world's largest and most popular file sharing site was not well received, with hacker group Anonymous bringing down several sites associated with the take-down.[5] In the days following, other file sharing sites began to cease services; Filesonic blocked public downloads on January 22, with Fileserve following suit on January 23.

Demonstrators protesting The Pirate Bay raid, 2006.


Legality of file sharing


The legal debate surrounding file sharing has caused many lawsuits. In the United States, some of these lawsuits have even reached the Supreme Court, in MGM v. Grokster. In that case, the Supreme Court ruled that the creators of P2P networks can be held responsible if the intent of their program is clearly to infringe on copyright laws. On the other hand, file sharing is not necessarily illegal, even if the works being shared are covered by copyright. For example, some artists may choose to support freeware, shareware, open source, or anti-copyright licensing, and advocate the use of file sharing as a free promotional tool. Nearly all freeware and open source software may be shared, under the rules specified in the license for that specific piece of software. Content in the public domain can also be freely shared.

Ethics of file sharing


In 2004 there were an estimated 70 million people participating in online file sharing.[7] According to a CBS News poll in 2009, 58% of Americans who follow the file sharing issue, considered it acceptable "if a person owns the music CD and shares it with a limited number of friends and acquaintances"; with 18 to 29 year olds this percentage reached as much as 70%.[8]

Effects of file sharing


According to David Glenn, writing in The Chronicle of Higher Education, "A majority of economic studies have concluded that file sharing hurts sales".[9] A literature review by Professor Peter Tschmuck found 22 independent studies on the effects of music file sharing. "Of these 22 studies, 14 (roughly two-thirds) conclude that unauthorized downloads have a 'negative or even highly negative impact' on recorded music sales. Three of the studies found no significant impact while the remaining five found a positive impact."[10][11][12] A study by economists Felix Oberholzer-Gee and Koleman Strumpf in 2004 concluded that music file sharing's effect on sales was "statistically indistinguishable from zero".[13][14] This research was disputed by other economists, most notably Stan Liebowitz, who said Oberholzer-Gee and Strumpf had made multiple assumptions about the music industry "that are just not correct."[13][15][16] In June 2010, Billboard reported that Oberholzer-Gee and Strumpf had "changed their minds", now finding that "no more than 20% of the recent decline in sales is due to sharing".[17] However, citing Nielsen SoundScan as their source, the co-authors maintained that illegal downloading had not deterred people from being original. "In many creative industries, monetary incentives play a reduced role in motivating authors to remain creative. Data on the supply of new works are consistent with the argument that file sharing did not discourage authors and publishers. Since the advent of file sharing, the production of music, books, and movies has increased sharply."[18] Glenn Peoples of Billboard disputed the underlying data, saying "SoundScan's number for new releases in any given year represents new commercial titles, not necessarily new creative works."[19] The RIAA likewise responded that "new releases" and "new creative works" are two separate things.
"[T]his figure includes re-releases, new compilations of existing songs, and new digital-only versions of catalog albums. SoundScan has also steadily increased the number of retailers (especially non-traditional retailers) in their sample over the years, better capturing the number of new releases brought to market. What Oberholzer and Strumpf found was better ability to track new album releases, not greater incentive to create them."[20] A 2006 study prepared by Birgitte Andersen and Marion Frenz, published by Industry Canada, was "unable to discover any direct relationship between P2P file-sharing and CD purchases in Canada".[21] The results of this survey were similarly criticized by academics, and a subsequent re-evaluation of the same data by Dr. George R. Barker of the Australian National University reached the opposite conclusion.[22] "In total, 75% of P2P downloaders responded that if P2P were not available they would have purchased either through paid sites only (9%), CDs only (17%) or through CDs and pay sites (49%). Only 25% of people say they would not have bought the music if it were not

available on P2P for free. This clearly suggests P2P network availability is reducing music demand of 75% of music downloaders which is quite contrary to Andersen and Frenz's much published claim."[23]


Market dominance
A paper in the journal Management Science found that file sharing decreased the chance of survival for albums ranked low on music charts and increased exposure for albums ranked high on the charts, allowing popular and well-known artists to remain on the charts more often. This had a negative impact on new and lesser-known artists while promoting the work of already popular artists and celebrities.[24] A more recent study that examined pre-release file sharing of music albums, using BitTorrent software, also discovered positive impacts for "established and popular artists but not newer and smaller artists." According to Robert G. Hammond of North Carolina State University, an album that leaked one month early would see a modest increase in sales. "This increase in sales is small relative to other factors that have been found to affect album sales." "File-sharing proponents commonly argue that file sharing democratizes music consumption by 'leveling the playing field' for new/small artists relative to established/popular artists, by allowing artists to have their work heard by a wider audience, lessening the advantage held by established/popular artists in terms of promotional and other support. My results suggest that the opposite is happening, which is consistent with evidence on file-sharing behavior."[25] Billboard cautioned that this research looked only at the pre-release period and not continuous file sharing following a release date. "The problem in believing piracy helps sales is deciding where to draw the line between legal and illegal. ... Implicit in the study is the fact that both buyers and sellers are required in order for pre-release file sharing to have a positive impact on album sales. Without iTunes, Amazon and Best Buy, file-sharers would be just file sharers rather than purchasers.
If you carry out the 'file sharing should be legal' argument to its logical conclusion, today's retailers will be tomorrow's file-sharing services that integrate with their respective cloud storage services."[26]

Availability
Many argue that file sharing has forced the owners of entertainment content to make it more widely available legally, through fees or advertising-supported on-demand services on the internet, rather than remaining static with TV, radio, DVDs, CDs, and the theater. Paid content has exceeded illegal file sharing in aggregate North American internet traffic since at least 2009.[27] As content becomes more available for paid streaming and legal action continues against illegal file sharing, illegal file sharing will likely decline further.[28]

References
[1] "Reliable distributed systems: technologies, Web services, and applications - Kenneth P. Birman - Google Books" (http://books.google.ca/books?id=KeIENcC2BPwC&pg=PA532&lpg=PA532&dq=napster+first#PPA532,M1). Books.google.ca. Retrieved 2012-01-20.
[2] Menta, Richard (December 9, 1999). "RIAA Sues Music Startup Napster for $20 Billion" (http://www.mp3newswire.net/stories/napster.html). MP3 Newswire.
[3] "EFF: What Peer-to-Peer Developers Need to Know about Copyright Law" (http://w2.eff.org/IP/P2P/p2p_copyright_wp.php). W2.eff.org. Retrieved 2012-01-20.
[4] Menta, Richard (July 20, 2001). "Napster Clones Crush Napster. Take 6 out of the Top 10 Downloads on CNet" (http://www.mp3newswire.net/stories/2001/topclones.html). MP3 Newswire.
[5] "Department of Justice site hacked after Megaupload shutdown, Anonymous claims credit" (http://www.washingtonpost.com/business/economy/department-of-justice-site-hacked-after-megaupload-shutdown-anonymous-claims-credit/2012/01/20/gIQAl5MNEQ_story.html?tid=pm_business_pop). Washington Post. Retrieved 2012-01-30.
[6] Schneider, Joe (2012-01-24). "Megaupload's Dotcom in Custody as New Zealand Awaits Extradition Request" (http://www.bloomberg.com/news/2012-01-24/megaupload-s-dotcom-in-custody-as-new-zealand-awaits-extradition-request.html). Bloomberg.com. Retrieved 2012-01-30.
[7] "Law professors examine ethical controversies of peer-to-peer file sharing" (http://news-service.stanford.edu/news/2004/march17/fileshare-317.html). News-service.stanford.edu. 2004-03-17. Retrieved 2012-01-20.

[8] "Poll: Young Say File Sharing OK" (http://www.cbsnews.com/stories/2003/09/18/opinion/polls/main573990.shtml). CBS News. 2009-02-11. Retrieved 2012-01-20.
[9] Glenn, David. Dispute Over the Economics of File Sharing Intensifies, Chronicle.com, July 17, 2008.
[10] Hart, Terry. More Evidence for Copyright Protection (http://www.copyhype.com/2012/02/more-evidence-for-copyright-protection/), copyhype.com, February 1, 2012. "The literature review looked at a 23rd study but did not classify it here since the author presented a mixed conclusion: the overall effect of unauthorized downloads is insignificant, but for unknown artists, there is a 'strongly negative' effect on recorded music sales."
[11] Sokolov, Daniel AJ. Wissenschaftler: Studien über Tauschbörsen unbrauchbar (http://www.heise.de/ct/meldung/Wissenschaftler-Studien-ueber-Tauschboersen-unbrauchbar-1020532.html), c't magazine, June 11, 2010.
[12] Tschmuck, Peter. The Economics of Music File Sharing: A Literature Overview, Vienna Music Business Research Days, University of Music and Performing Arts, Vienna, June 9-10, 2010.
[13] Levine, Robert. Free Ride: How the Internet Is Destroying the Culture Business and How the Culture Business Can Fight Back, Bodley Head, February 2011.
[14] Oberholzer, Felix; Koleman Strumpf. "The Effect of File Sharing on Record Sales: An Empirical Analysis" (http://www.unc.edu/~cigar/papers/FileSharing_March2004.pdf) (PDF). Retrieved 2008-06-13.
[15] Liebowitz, Stan J. How Reliable is the Oberholzer-Gee and Strumpf Paper on File-Sharing?. SSRN 1014399.
[16] Liebowitz, Stan J. "The Key Instrument in the Oberholzer-Gee/Strumpf File-Sharing Paper is Defective" (http://musicbusinessresearch.files.wordpress.com/2010/06/paper-stan-j-liebowitz1.pdf) (PDF). Retrieved 2008-06-13.
[17] Peoples, Glenn. Researchers Change Tune, Now Say P2P Has Negative Impact (http://www.billboard.biz/bbbiz/content_display/industry/e3i82a006de3290b1a63323f3e4ee910ca9). Billboard. June 22, 2010.
[18] Oberholzer & Strumpf. "File Sharing and Copyright", NBER Innovation Policy & the Economy, Vol. 10, No. 1, 2010. "Artists receive a significant portion of their remuneration not in monetary form; many of them enjoy fame, admiration, social status, and free beer in bars, suggesting a reduction in monetary incentives might possibly have a reduced impact on the quantity and quality of artistic production."
[19] Peoples, Glenn. Analysis: Are Musicians Losing the Incentive to Create? (http://www.billboard.biz/bbbiz/content_display/industry/e3ic193b6eacf48409b52f1ab027d2d2b6c). Billboard. July 26, 2010.
[20] Friedlander, Joshua P. & Lamy, Jonathan. Illegal Downloading = Fewer Musicians (http://www.ifpi.org/content/library/view_35.pdf), ifpi.org, July 19, 2010.
[21] The Impact of Music Downloads and P2P File-Sharing on the Purchase of Music: A Study for Industry Canada (http://strategis.ic.gc.ca/epic/site/ippd-dppi.nsf/en/h_ip01456e.html), Birgitte Andersen and Marion Frenz.
[22] Peoples, Glenn. A New Look at an Old Survey Finds P2P Hurts Music Purchases (http://www.billboard.biz/bbbiz/industry/digital-and-mobile/business-matters-a-new-look-at-an-old-survey-1006083952.story). Billboard. February 2, 2012.
[23] Barker, George R. Evidence of the Effect of Free Music Downloads on the Purchase of Music CDs (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1990153). Social Science Research Network. January 23, 2012.
[24] Bhattacharjee, Sudip; Gopal, Ram D.; Lertwachara, Kaveepan; Marsden, James R. & Telang, Rahul. The Effect of Digital Sharing Technologies on Music Markets: A Survival Analysis of Albums on Ranking Charts (http://mansci.journal.informs.org/content/53/9/1359.full.pdf+html). Management Science, 2007.
[25] Hammond, Robert G. "Profit Leak? Pre-Release File Sharing and the Music Industry" (http://www4.ncsu.edu/~rghammon/Hammond_File_Sharing_Leak.pdf), May 2012. "File sharing benefits mainstream albums such as pop music but not albums in niche genres such as indie music. ... Further, the finding that file sharing redistributes sales toward established/popular artists is inconsistent with claims made by proponents of file sharing that file sharing democratizes music consumption."
[26] Peoples, Glenn. Business Matters: Pre-release File Sharing Helps Album Sales, Says a Study. So Why Not Replicate This Legally? (http://www.billboard.biz/bbbiz/industry/record-labels/business-matters-pre-release-file-sharing-1007125352.story). Billboard. May 22, 2012.
[27] Global Internet Phenomena Report - Spring 2011 (http://www.wired.com/images_blogs/epicenter/2011/05/SandvineGlobalInternetSpringReport2011.pdf). Sandvine, Waterloo, Ontario, Canada. May 12, 2011.
[28] Singel, Ryan. Most content online is now paid for, thanks to Netflix (http://www.cnn.com/2011/TECH/web/05/18/netflix.piracy.wired/index.html). Wired via CNN.com. May 18, 2011.


Further reading
Levine, Robert. Free Ride: How the Internet Is Destroying the Culture Business and How the Culture Business Can Fight Back, Bodley Head, February 2011.
Ghosemajumder, Shuman. Advanced Peer-Based Technology Business Models (http://shumans.com/p2p-business-models.pdf). MIT Sloan School of Management, 2002.
Silverthorne, Sean. Music Downloads: Pirates- or Customers? (http://hbswk.hbs.edu/item.jhtml?id=4206&t=innovation). Harvard Business School Working Knowledge, 2004.
Ralf Steinmetz, Klaus Wehrle (Eds). Peer-to-Peer Systems and Applications (http://www.peer-to-peer.info/). ISBN 3-540-29192-X, Lecture Notes in Computer Science, Volume 3485, September 2005.
Stephanos Androutsellis-Theotokis and Diomidis Spinellis. A survey of peer-to-peer content distribution technologies (http://www.spinellis.gr/pubs/jrnl/2004-ACMCS-p2p/html/AS04.html). ACM Computing Surveys, 36(4):335-371, December 2004. doi:10.1145/1041680.1041681.
Stefan Saroiu, P. Krishna Gummadi, and Steven D. Gribble. A Measurement Study of Peer-to-Peer File Sharing Systems (http://www.cs.ucsb.edu/~almeroth/classes/F02.276/papers/p2p-measure.pdf). Technical Report UW-CSE-01-06-02, Department of Computer Science & Engineering, University of Washington, Seattle, WA, USA.
Selected Papers (http://www.cs.huji.ac.il/labs/danss/p2p/resources.html), a collection of academic papers.

External links
Digital Britain (http://interactive.bis.gov.uk/digitalbritain/digital-economy-bill/.)

Search
A web search engine is designed to search for information on the World Wide Web. The search results are generally presented in a list, often referred to as search engine results pages (SERPs). The results may be web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.

History
Timeline (full list)

Year  Engine          Current status
1993  W3Catalog       Inactive
      Aliweb          Inactive
1994  WebCrawler      Active, Aggregator
      Go.com          Active, Yahoo Search
      Lycos           Active
1995  AltaVista       Active, Yahoo Search
      Daum            Active
      Magellan        Inactive
      Excite          Active
      SAPO            Active
      Yahoo!          Active, Launched as a directory
1996  Dogpile         Active, Aggregator
      Inktomi         Acquired by Yahoo!
      HotBot          Active (lycos.com)
      Ask Jeeves      Active (rebranded ask.com)
1997  Northern Light  Inactive
      Yandex          Active
1998  Google          Active
      MSN Search      Active as Bing
1999  AlltheWeb       Inactive (URL redirected to Yahoo!)
      GenieKnows      Active, rebranded Yellowee.com
      Naver           Active
      Teoma           Active
      Vivisimo        Inactive
2000  Baidu           Active
      Exalead         Inactive
2002  Inktomi         Acquired by Yahoo!
2003  Info.com        Active
2004  Yahoo! Search   Active, Launched own web search (see Yahoo! Directory, 1995)
      A9.com          Inactive
      Sogou           Active
2005  AOL Search      Active
      Ask.com         Active
      GoodSearch      Active
      SearchMe        Inactive
2006  wikiseek        Inactive
      Quaero          Active
      Ask.com         Active
      Live Search     Active as Bing, Launched as rebranded MSN Search
      ChaCha          Active
      Guruji.com      Active
2007  wikiseek        Inactive
      Sproose         Inactive
      Wikia Search    Inactive
      Blackle.com     Active
2008  Powerset        Inactive (redirects to Bing)
      Picollator      Inactive
      Viewzi          Inactive
      Boogami         Inactive
      LeapFish        Inactive
      Forestle        Inactive (redirects to Ecosia)
2009  Bing            Active, Launched as rebranded Live Search
      Yebol           Active
      Mugurdy         Inactive due to a lack of funding
      Goby            Active
2010  Blekko          Active
      Cuil            Inactive
      Yandex          Active, Launched global (English) search
      Yummly          Active
2011  Interred        Active as Interredu
      Yandex          Active, Launched Turkey search
2012  Volunia         Active
      Interredu       Active

During the early development of the web, there was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One historical snapshot from 1992 remains.[1] As more webservers went online the central list could not keep up. On the NCSA site new servers were announced under the title "What's New!"[2] The very first tool used for searching on the Internet was Archie.[3] The name stands for "archive" without the "v". It was created in 1990 by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites since the amount of data was so limited it could be readily searched manually. The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor. In the summer of 1993, no search engine existed yet for the web, though numerous specialized catalogues were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that would periodically mirror these pages and rewrite them into a standard format which formed the basis for W3Catalog, the

web's first primitive search engine, released on September 2, 1993.[4]

In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called 'Wandex'. The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine Aliweb appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format. JumpStation (released in December 1993[5]) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform on which it ran, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered.

One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it let users search for any word in any webpage, which has become the standard for all major search engines since. It was also the first one to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.

Soon after, many search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Yahoo! was among the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than full-text copies of web pages. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead Netscape struck a deal with five of the major search engines: for $5 million per year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.[6][7]

Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s.[8] Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001.

Around 2000, Google's search engine rose to prominence.[9] The company achieved better results for many searches with an innovation called PageRank. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link there, on the premise that good or desirable pages are linked to more than others. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal.

By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! used Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.

Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi.
In early 1999 the site began to display listings from Looksmart blended with results from Inktomi except for a short time in 1999 when results from AltaVista were used instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot). Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology.
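The iterative PageRank computation described above can be illustrated with a short sketch. The damping factor of 0.85 matches the value used in the original PageRank paper, but the `pagerank` function and the three-page link graph below are made-up illustrations, not Google's actual implementation or data.

```python
# A minimal power-iteration sketch of PageRank: a page's rank depends on
# the number and rank of the pages linking to it.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if outgoing:
                # A page divides its rank evenly among the pages it links to.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:
                # A dangling page spreads its rank evenly over all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "c.example" receives the most incoming rank
```

Because the total rank is redistributed rather than created, the scores always sum to 1, and repeated iteration converges to a stable ranking regardless of the starting values.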




How web search engines work


A search engine operates in the following order:
1. Web crawling
2. Indexing
3. Searching[10]

Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link on the site. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words can be extracted from the titles, page content, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query can be a single word. The purpose of an index is to allow information to be found as quickly as possible.[10]

Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text since it is the one that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it.[10] This problem might be considered to be a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage. This satisfies the principle of least astonishment, since the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.

When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text.
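The crawl-then-index pipeline can be sketched in a few lines. The pages, URLs and tokenizer below are hypothetical in-memory stand-ins (a real crawler fetches over HTTP, honours robots.txt exclusions, and parses HTML), but the control flow is the one described: follow every link from a starting page, then index the words extracted from each page.

```python
# Toy illustration of the crawl -> index stages of a search engine.
import re

FAKE_WEB = {  # url -> (page text, links found on the page); made-up data
    "http://a.example/": ("Search engines index the web", ["http://b.example/"]),
    "http://b.example/": ("Crawlers follow every link", ["http://a.example/"]),
}

def crawl(start_url):
    """Follow links breadth-first, returning {url: text} for visited pages."""
    seen, frontier, pages = set(), [start_url], {}
    while frontier:
        url = frontier.pop(0)
        if url in seen or url not in FAKE_WEB:
            continue  # skip already-visited or unreachable pages
        seen.add(url)
        text, links = FAKE_WEB[url]
        pages[url] = text
        frontier.extend(links)
    return pages

def build_index(pages):
    """Build an inverted index: word -> set of urls containing it."""
    index = {}
    for url, text in pages.items():
        for word in re.findall(r"\w+", text.lower()):
            index.setdefault(word, set()).add(url)
    return index

pages = crawl("http://a.example/")
index = build_index(pages)
print(sorted(index["link"]))  # pages containing the word "link"
```

Once the index exists, answering a query no longer touches the crawled pages at all: looking up a word is a single dictionary access, which is why the indexing step is what makes fast search possible.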
The index is built from the information stored with the data and the method by which the information is indexed.[10] Since as early as 2007, the Google.com search engine has allowed users to search by date by clicking 'Show search tools' in the leftmost column of the initial search results page, and then selecting the desired date range.

[Figure: High-level architecture of a standard Web crawler]

Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords.[10] There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases searched for. As well, natural language queries allow the user to type a question in the same form one would ask it to a human; ask.com is an example of such a site.

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another.[10] The methods also change over time as Internet usage changes and new techniques evolve.

There are two main types of search engine that have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively. The other is a system that generates an "inverted index" by analyzing texts it locates.
This second form relies much more heavily on the computer itself to do the bulk of the work.
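A minimal illustration of how the boolean operators described above map onto an inverted index: each word points to the set of documents containing it, and AND, OR and NOT become set intersection, union and difference. The index contents and document names below are made-up examples.

```python
# Evaluating boolean queries against an inverted index with set operations.

index = {  # word -> set of documents containing it (hypothetical data)
    "internet": {"doc1", "doc2", "doc3"},
    "search":   {"doc1", "doc3"},
    "history":  {"doc2"},
}

def lookup(word):
    """Return the set of documents containing the word (empty if unknown)."""
    return index.get(word, set())

# "internet AND search" -> intersection of the two posting sets
print(sorted(lookup("internet") & lookup("search")))   # ['doc1', 'doc3']
# "internet OR history" -> union
print(sorted(lookup("internet") | lookup("history")))  # ['doc1', 'doc2', 'doc3']
# "internet NOT search" -> difference
print(sorted(lookup("internet") - lookup("search")))   # ['doc2']
```

This is why boolean queries are cheap to evaluate: the engine never rescans page text at query time, it only combines precomputed posting sets.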

Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of allowing advertisers to pay money to have their listings ranked higher in search results. Those search engines which do not accept money for their search results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.


Market share
Search engine   Market share in May 2011   Market share in December 2010[11]
Google          82.80%                     84.65%
Yahoo!          6.42%                      6.69%
Baidu           4.89%                      3.39%
Bing            3.91%                      3.29%
Yandex          1.7%                       1.3%
Ask             0.52%                      0.56%
AOL             0.3%                       0.42%

Google's worldwide market share peaked at 86.3% in April 2010.[12] Yahoo!, Bing and other search engines are more popular in the US than in Europe. According to Hitwise, market share in the U.S. for October 2011 was Google 65.38%, Bing-powered (Bing and Yahoo!) 28.62%, and the remaining 66 search engines 6%. However, an Experian Hitwise report released in August 2011 gave the "success rate" of searches sampled in July: over 80 percent of Yahoo! and Bing searches resulted in the users visiting a web site, while Google's rate was just under 68 percent.[13][14] In the People's Republic of China, Baidu held a 61.6% market share for web search in July 2009.[15] In the Russian Federation, Yandex holds around 60% of the market share as of April 2012.[16]

Search engine bias


Although search engines are programmed to rank websites based on their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide.[17][18] These biases could be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results), and of political processes (e.g., the removal of search results in order to comply with local laws).[19] Google Bombing is one example of an attempt to manipulate search results for political, social or commercial reasons.

Customized results and filter bubbles


Many search engines such as Google and Bing provide customized results based on the user's activity history. This leads to an effect that has been called a filter bubble. The term describes a phenomenon in which websites use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information which agrees with the user's past viewpoint, effectively isolating the user in a bubble that tends to exclude contrary information. Prime examples are Google's personalized search results and Facebook's personalized news stream. According to Eli Pariser, who coined the term, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Pariser related an example in which one user searched Google for "BP" and got investment news about British Petroleum while another searcher got information about the Deepwater

Search Horizon oil spill and that the two search results pages were "strikingly different."[20][21][22] The bubble effect may have negative implications for civic discourse, according to Pariser.[23] Since this problem has been identified, competing search engines have emerged that seek to avoid this problem by not tracking[24] or "bubbling"[25] users.


References
[1] "World-Wide Web Servers" (http:/ / www. w3. org/ History/ 19921103-hypertext/ hypertext/ DataSources/ WWW/ Servers. html). W3.org. . Retrieved 2012-05-14. [2] "What's New! February 1994" (http:/ / home. mcom. com/ home/ whatsnew/ whats_new_0294. html). Home.mcom.com. . Retrieved 2012-05-14. [3] "Internet History - Search Engines" (from Search Engine Watch), Universiteit Leiden, Netherlands, September 2001, web: LeidenU-Archie (http:/ / www. internethistory. leidenuniv. nl/ index. php3?c=7). [4] Oscar Nierstrasz (2 September 1993). "Searchable Catalog of WWW Resources (experimental)" (http:/ / groups. google. com/ group/ comp. infosystems. www/ browse_thread/ thread/ 2176526a36dc8bd3/ 2718fd17812937ac?hl=en& lnk=gst& q=Oscar+ Nierstrasz#2718fd17812937ac). . [5] "Archive of NCSA what's new in December 1993 page" (http:/ / web. archive. org/ web/ 20010620073530/ http:/ / archive. ncsa. uiuc. edu/ SDG/ Software/ Mosaic/ Docs/ old-whats-new/ whats-new-1293. html). Web.archive.org. 2001-06-20. . Retrieved 2012-05-14. [6] "Yahoo! And Netscape Ink International Distribution Deal" (http:/ / files. shareholder. com/ downloads/ YHOO/ 701084386x0x27155/ 9a3b5ed8-9e84-4cba-a1e5-77a3dc606566/ YHOO_News_1997_7_8_General. pdf). [7] Browser Deals Push Netscape Stock Up 7.8% (http:/ / articles. latimes. com/ 1996-04-01/ business/ fi-53780_1_netscape-home). Los Angeles Times. 1 April 1996. [8] Gandal, Neil (2001). "The dynamics of competition in the internet search engine market". International Journal of Industrial Organization 19 (7): 11031117. doi:10.1016/S0167-7187(01)00065-0. [9] "Our History in debth" (http:/ / www. google. com/ about/ company/ history/ ). W3.org. . Retrieved 2012-10-31. [10] Jawadekar, Waman S (2011), "8. Knowledge Management: Tools and Technology" (http:/ / books. google. com/ books?id=XmGx4J9daUMC& pg=PA278& dq="search+ engine+ operates"& hl=en& sa=X& ei=a-muUJ6UC4aeiAfI24GYAw& sqi=2& ved=0CDgQ6AEwBA), Knowlege Management: Text & Cases (http:/ / books. 
google. com/ books?id=XmGx4J9daUMC& printsec=frontcover& dq=knowledge+ management:+ text& hl=en& sa=X& ei=ou6uUP-cNqWTiAe2oICoAw& sqi=2& ved=0CDIQ6AEwAA), New Delhi: Tata McGraw-Hill Education Private Ltd, p.278, ISBN978-0-07-07-0086-4, , retrieved November 23 2012 [11] "Net Marketshare - World" (http:/ / marketshare. hitslink. com/ search-engine-market-share. aspx?qprid=4). Marketshare.hitslink.com. . Retrieved 2012-05-14. [12] "Net Market share - Google" (http:/ / marketshare. hitslink. com/ report. aspx?qprid=5& qpcustom=Google - Global& qptimeframe=M& qpsp=120& qpnp=25). Marketshare.hitslink.com. . Retrieved 2012-05-14. [13] "Google Remains Ahead of Bing, But Relevance Drops" (http:/ / news. yahoo. com/ google-remains-ahead-bing-relevance-drops-210457139. html). August 12, 2011. . [14] Experian Hitwise reports Bing-powered share of searches at 29 percent in October 2011 (http:/ / www. hitwise. com/ us/ about-us/ press-center/ press-releases/ bing-powered-share-of-searches-at-29-percent), Experian Hitwise, November 16, 2011 [15] "Search Engine Market Share July 2009 | Rise to the Top Blog" (http:/ / risetothetop. techwyse. com/ internet-marketing/ search-engine-market-share-july-2009/ ). Risetothetop.techwyse.com. 2009-08-04. . Retrieved 2012-05-14. [16] Pavliva, Halia (2012-04-02). "Yandex Internet Search Share Gains, Google Steady: Liveinternet" (http:/ / www. bloomberg. com/ news/ 2012-04-02/ yandex-internet-search-share-gains-google-steady-liveinternet. html). Bloomberg.com. . Retrieved 2012-05-14. [17] Segev, Elad (2010). Google and the Digital Divide: The Biases of Online Knowledge, Oxford: Chandos Publishing. [18] Vaughan, L. & Thelwall, M. (2004). Search engine coverage bias: evidence and possible causes, Information Processing & Management, 40(4), 693-707. [19] Berkman Center for Internet & Society (2002), Replacement of Google with Alternative Search Systems in China: Documentation and Screen Shots (http:/ / cyber. law. harvard. 
edu/ filtering/ china/ google-replacements/ ), Harvard Law School. [20] Parramore, Lynn (10 October 2010). "The Filter Bubble" (http:/ / www. theatlantic. com/ daily-dish/ archive/ 2010/ 10/ the-filter-bubble/ 181427/ ). The Atlantic. . Retrieved 2011-04-20. "Since Dec. 4, 2009, Google has been personalized for everyone. So when I had two friends this spring Google "BP," one of them got a set of links that was about investment opportunities in BP. The other one got information about the oil spill...." [21] Weisberg, Jacob (10 June 2011). "Bubble Trouble: Is Web personalization turning us into solipsistic twits?" (http:/ / www. slate. com/ id/ 2296633/ ). Slate. . Retrieved 2011-08-15. [22] Gross, Doug (May 19, 2011). "What the Internet is hiding from you" (http:/ / articles. cnn. com/ 2011-05-19/ tech/ online. privacy. pariser_1_google-news-facebook-internet/ 2?_s=PM:TECH). CNN. . Retrieved 2011-08-15. "I had friends Google BP when the oil spill was happening. These are two women who were quite similar in a lot of ways. One got a lot of results about the environmental consequences of what was happening and the spill. The other one just got investment information and nothing about the spill at all."

[23] Zhang, Yuan Cao; Ó Séaghdha, Diarmuid; Quercia, Daniele; Jambor, Tamas (February 2012). "Auralist: Introducing Serendipity into Music Recommendation" (http:/ / www-typo3. cs. ucl. ac. uk/ fileadmin/ UCL-CS/ research/ Research_Notes/ RN_11_21. pdf). ACM WSDM. . [24] "donttrack.us" (http:/ / donttrack. us). . Retrieved 2012-04-29. [25] "dontbubble.us" (http:/ / dontbubble. us). . Retrieved 2012-04-29.


GBMW: Reports of 30-day punishment, re: Car maker BMW had its German website bmw.de delisted from Google, such as: Slashdot-BMW (http://slashdot.org/article.pl?sid=06/02/05/235218) (05-Feb-2006).
INSIZ: Maximum size of webpages indexed by MSN/Google/Yahoo! ("100-kb limit"): Max Page-size (http://www.sitepoint.com/article/indexing-limits-where-bots-stop) (28-Apr-2006).

Further reading
For a more detailed history of early search engines, see Search Engine Birthdays (http://searchenginewatch.com/showPage.html?page=3071951) (from Search Engine Watch), Chris Sherman, September 2003.
Steve Lawrence; C. Lee Giles (1999). "Accessibility of information on the web". Nature 400 (6740): 107-109. doi:10.1038/21987. PMID 10428673.
Bing Liu (2007), Web Data Mining: Exploring Hyperlinks, Contents and Usage Data (http://www.cs.uic.edu/~liub/WebMiningBook.html). Springer, ISBN 3-540-37881-2.
Bar-Ilan, J. (2004). The use of Web search engines in information science research. ARIST, 38, 231-288.
Levene, Mark (2005). An Introduction to Search Engines and Web Navigation. Pearson.
Hock, Randolph (2007). The Extreme Searcher's Handbook. ISBN 978-0-910965-76-7.
Javed Mostafa (February 2005). "Seeking Better Web Searches" (http://www.sciam.com/article.cfm?articleID=0006304A-37F4-11E8-B7F483414B7F0000). Scientific American Magazine.
Ross, Nancy; Wolfram, Dietmar (2000). "End user searching on the Internet: An analysis of term pair topics submitted to the Excite search engine". Journal of the American Society for Information Science 51 (10): 949-958. doi:10.1002/1097-4571(2000)51:10<949::AID-ASI70>3.0.CO;2-5.
Xie, M. et al. (1998). "Quality dimensions of Internet search engines". Journal of Information Science 24 (5): 365-372. doi:10.1177/016555159802400509.
Information Retrieval: Implementing and Evaluating Search Engines (http://www.ir.uwaterloo.ca/book/). MIT Press. 2010.

External links
Search Engines (http://www.dmoz.org/Computers/Internet/Searching/Search_Engines//) at the Open Directory Project

Blogging


A blog (a portmanteau of the term web log)[1] is a discussion or informational site published on the World Wide Web and consisting of discrete entries ("posts") typically displayed in reverse chronological order (the most recent post appears first). Until 2009 blogs were usually the work of a single individual, occasionally of a small group, and often were themed on a single subject. More recently "multi-author blogs" (MABs) have developed, with posts written by large numbers of authors and professionally edited. MABs from newspapers, other media outlets, universities, think tanks, interest groups and similar institutions account for an increasing quantity of blog traffic. The rise of Twitter and other "microblogging" systems helps integrate MABs and single-author blogs into societal newstreams. Blog can also be used as a verb, meaning to maintain or add content to a blog. The emergence and growth of blogs in the late 1990s coincided with the advent of web publishing tools that facilitated the posting of content by non-technical users. (Previously, a knowledge of such technologies as HTML and FTP had been required to publish content on the Web.) Although not a requirement, most good quality blogs are interactive, allowing visitors to leave comments and even message each other via GUI widgets on the blogs, and it is this interactivity that distinguishes them from other static websites.[2] In that sense, blogging can be seen as a form of social networking. Indeed, bloggers do not only produce content to post on their blogs, but also build social relations with their readers and other bloggers.[3] Many blogs provide commentary on a particular subject; others function as more personal online diaries; others function more as online brand advertising of a particular individual or company. A typical blog combines text, images, and links to other blogs, Web pages, and other media related to its topic. 
The ability of readers to leave comments in an interactive format is an important contribution to the popularity of many blogs. Most blogs are primarily textual, although some focus on art (art blogs), photographs (photoblogs), videos (video blogs or "vlogs"), music (MP3 blogs), and audio (podcasts). Microblogging is another type of blogging, featuring very short posts. In education, blogs can be used as instructional resources; these blogs are referred to as edublogs. As of 16 February 2011, there were over 156 million public blogs in existence.[4] On October 13, 2012, there were around 77 million Tumblr[5] and 56.6 million WordPress[6] blogs in existence worldwide.



History
The term "weblog" was coined by Jorn Barger[7] on 17 December 1997. The short form, "blog," was coined by Peter Merholz, who jokingly broke the word weblog into the phrase we blog in the sidebar of his blog Peterme.com in April or May 1999.[8][9][10] Shortly thereafter, Evan Williams at Pyra Labs used "blog" as both a noun and verb ("to blog," meaning "to edit one's weblog or to post to one's weblog") and devised the term "blogger" in connection with Pyra Labs' Blogger product, leading to the popularization of the terms.[11]

Origins
Before blogging became popular, digital communities took many forms, including Usenet, commercial online services such as GEnie, BiX and the early CompuServe, e-mail lists[12] and Bulletin Board Systems (BBS). In the 1990s, Internet forum software created running conversations with "threads". Threads are topical connections between messages on a virtual "corkboard". From June 14, 1993 Mosaic Communications Corporation maintained their "What's New" [13] list of new websites, updated daily and archived monthly. The page was accessible by a special "What's New" button in the Mosaic web browser.

[Image: Early example of a "diary" style blog consisting of text and images transmitted wirelessly in real time from a wearable computer with head-up display, February 22nd, 1995]

The modern blog evolved from the online diary, where people would keep a running account of their personal lives. Most such writers called themselves diarists, journalists, or journalers. Justin Hall, who began personal blogging in 1994 while a student at Swarthmore College, is generally recognized as one of the earlier bloggers,[14] as is Jerry Pournelle.[15] Dave Winer's Scripting News is also credited with being one of the older and longer running weblogs.[16][17] The Australian Netguide magazine maintained the Daily Net News [18] on their web site from 1996. Daily Net News ran links and daily reviews of new websites, mostly in Australia. Another early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site in 1994. This practice of semi-automated blogging with live video together with text was referred to as sousveillance, and such journals were also used as evidence in legal matters.

Early blogs were simply manually updated components of common Web sites.
However, the evolution of tools to facilitate the production and maintenance of Web articles posted in reverse chronological order made the publishing process feasible to a much larger, less technical, population. Ultimately, this resulted in the distinct class of online publishing that produces blogs we recognize today. For instance, the use of some sort of browser-based software is now a typical aspect of "blogging". Blogs can be hosted by dedicated blog hosting services, or they can be run using blog software, or on regular web hosting services. Some early bloggers, such as The Misanthropic Bitch, who began in 1997, actually referred to their online presence as a zine, before the term blog entered common usage.



Rise in popularity
After a slow start, blogging rapidly gained in popularity. Blog usage spread during 1999 and the years following, being further popularized by the near-simultaneous arrival of the first hosted blog tools: Bruce Ableson launched Open Diary in October 1998, which soon grew to thousands of online diaries. Open Diary innovated the reader comment, becoming the first blog community where readers could add comments to other writers' blog entries. Brad Fitzpatrick started LiveJournal in March 1999. Andrew Smales created Pitas.com in July 1999 as an easier alternative to maintaining a "news page" on a Web site, followed by Diaryland in September 1999, focusing more on a personal diary community.[19] Evan Williams and Meg Hourihan (Pyra Labs) launched Blogger.com in August 1999 (purchased by Google in February 2003).

Political impact
An early milestone in the rise in importance of blogs came in 2002, when many bloggers focused on comments by U.S. Senate Majority Leader Trent Lott.[20] Senator Lott, at a party honoring U.S. Senator Strom Thurmond, praised Senator Thurmond by suggesting that the United States would have been better off had Thurmond been elected president. Lott's critics saw these comments as a tacit approval of racial segregation, a policy advocated by Thurmond's 1948 presidential campaign. This view was reinforced by documents and recorded interviews dug up by bloggers. (See Josh Marshall's Talking Points Memo.) Though Lott's comments were made at a public event attended by the media, no major media organizations reported on his controversial comments until after blogs broke the story. Blogging helped to create a political crisis that forced Lott to step down as majority leader. Similarly, blogs were among the driving forces behind the "Rathergate" scandal. To wit: (television journalist) Dan Rather presented documents (on the CBS show 60 Minutes) that conflicted with accepted accounts of President Bush's military service record. Bloggers declared the documents to be forgeries and presented evidence and arguments in support of that view. Consequently, CBS apologized for what it said were inadequate reporting techniques (see Little Green Footballs). Many bloggers view this scandal as the advent of blogs' acceptance by the mass media, both as a news source and opinion and as means of applying political pressure. The impact of these stories gave greater credibility to blogs as a medium of news dissemination. Though often seen as partisan gossips, bloggers sometimes lead the way in bringing key information to public light, with mainstream media having to follow their lead. More often, however, news blogs tend to react to material already published by the mainstream media. Meanwhile, an increasing number of experts blogged, making blogs a source of in-depth analysis. 
In Russia, some political bloggers have started to challenge the dominance of official, overwhelmingly pro-government media. Bloggers such as Rustem Adagamov and Alexey Navalny have many followers, and the latter's nickname for the ruling United Russia party, the "party of crooks and thieves", has been adopted by anti-regime protesters.[21] This led the Wall Street Journal to call Navalny "the man Vladimir Putin fears most" in March 2012.[22]

Mainstream popularity
By 2004, the role of blogs became increasingly mainstream, as political consultants, news services, and candidates began using them as tools for outreach and opinion forming. Politicians and political candidates (see Howard Dean and Wesley Clark) took up blogging to express opinions on war and other issues, cementing blogs' role as a news source. Even politicians not actively campaigning, such as the UK Labour Party MP Tom Watson, began to blog to bond with constituents.

In January 2005, Fortune magazine listed eight bloggers whom business people "could not ignore": Peter Rojas, Xeni Jardin, Ben Trott, Mena Trott, Jonathan Schwartz, Jason Goldman, Robert Scoble, and Jason Calacanis.[23]

Israel was among the first national governments to set up an official blog.[24] Under David Saranga, the Israeli Ministry of Foreign Affairs became active in adopting Web 2.0 initiatives, including an official video blog[24] and a political blog.[25] The Foreign Ministry also held a microblogging press conference via Twitter about its war with Hamas, with Saranga answering questions from the public in common text-messaging abbreviations during a live worldwide press conference.[26] The questions and answers were later posted on IsraelPolitik, the country's official political blog.[27]

The impact of blogging upon the mainstream media has also been acknowledged by governments. By 2009, the American journalism industry had declined to the point that several newspaper corporations were filing for bankruptcy, resulting in less direct competition between newspapers within the same circulation area. Discussion emerged as to whether the newspaper industry would benefit from a stimulus package by the federal government. U.S. President Barack Obama acknowledged the emerging influence of blogging upon society by saying: "if the direction of the news is all blogosphere, all opinions, with no serious fact-checking, no serious attempts to put stories in context, then what you will end up getting is people shouting at each other across the void but not a lot of mutual understanding."[28]


Types
There are many different types of blogs, differing not only in the type of content, but also in the way that content is delivered or written.

Personal blogs
The personal blog, an ongoing diary or commentary by an individual, is the traditional and most common type of blog. Personal bloggers usually take pride in their blog posts, even if their blog is never read. Blogs often become more than a way to just communicate; they become a way to reflect on life, or works of art. Blogging can have a sentimental quality. Few personal blogs rise to fame and the mainstream, but some personal blogs quickly garner an extensive following. One type of personal blog, referred to as a microblog, is extremely detailed and seeks to capture a moment in time. Some sites, such as Twitter, allow bloggers to share thoughts and feelings instantaneously with friends and family, and are much faster than emailing or writing.

Microblogging
Microblogging is the practice of posting small pieces of digital content, which could be text, pictures, links, short videos, or other media, on the Internet. Microblogging offers a portable communication mode that feels organic and spontaneous to many and has captured the public imagination. Friends use it to keep in touch, business associates use it to coordinate meetings or share useful resources, and celebrities and politicians (or their publicists) microblog about concert dates, lectures, book releases, or tour schedules. A wide and growing range of add-on tools enables sophisticated updates and interaction with other applications, and the resulting profusion of functionality is helping to define new possibilities for this type of communication.[29]

Corporate and organizational blogs
A blog can be private, as in most cases, or it can be for business purposes. Blogs used internally to enhance the communication and culture in a corporation, or externally for marketing, branding or public relations purposes, are called corporate blogs.
Similar blogs for clubs and societies are called club blogs, group blogs, or by similar names; typical use is to inform members and other interested parties of club and member activities.

By genre
Some blogs focus on a particular subject, such as political blogs, health blogs, travel blogs (also known as travelogs), gardening blogs, house blogs,[30][31] fashion blogs, project blogs, education blogs, niche blogs, classical music blogs, quizzing blogs, legal blogs (often referred to as blawgs) or dreamlogs. Two common types of genre blogs are art blogs and music blogs. A blog featuring discussions especially about home and family is often called a mom blog; one made popular by Erica Diamond is Womenonthefence.com, which is syndicated to over two million readers monthly.[32][33][34][35][36][37] While not a legitimate type of blog, one used for the sole purpose of spamming is known as a splog.

By media type
A blog comprising videos is called a vlog, one comprising links is called a linklog, a site containing a portfolio of sketches is called a sketchblog, and one comprising photos is called a photoblog.[38] Blogs with shorter posts and mixed media types are called tumblelogs. Blogs that are written on typewriters and then scanned are called typecast blogs; see typecasting (blogging). A rare type of blog hosted on the Gopher protocol is known as a phlog.

By device
Blogs can also be defined by the type of device used to compose them. A blog written on a mobile device such as a mobile phone or PDA may be called a moblog.[39] One early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site. This practice of semi-automated blogging with live video together with text was referred to as sousveillance. Such journals have been used as evidence in legal matters.

Reverse blog
A reverse blog is composed by its users rather than a single blogger. This system has the characteristics of a blog combined with the writing of several authors. These can be written by several contributing authors on a topic, or opened up for anyone to write. There is typically some limit to the number of entries to keep it from operating like a web forum.


Community and cataloging


The Blogosphere
The collective community of all blogs is known as the blogosphere. Since all blogs are on the Internet by definition, they may be seen as interconnected and socially networked, through blogrolls, comments, linkbacks (refbacks, trackbacks or pingbacks) and backlinks. Discussions "in the blogosphere" are occasionally used by the media as a gauge of public opinion on various issues. Because new, untapped communities of bloggers and their readers can emerge in the space of a few years, Internet marketers pay close attention to "trends in the blogosphere".[40]

Blog search engines
Several blog search engines are used to search blog contents, such as Bloglines, BlogScope, and Technorati. Technorati, which is among the more popular blog search engines, provides current information on both popular searches and tags used to categorize blog postings.[41] The research community is working on going beyond simple keyword search by inventing new ways to navigate through the huge amounts of information present in the blogosphere, as demonstrated by projects like BlogScope.

Blogging communities and directories
Several online communities exist that connect people to blogs and bloggers to other bloggers, including BlogCatalog and MyBlogLog.[42] Interest-specific blogging platforms are also available. For instance, Blogster has a sizable community of political bloggers among its members. Global Voices aggregates international bloggers, "with emphasis on voices that are not ordinarily heard in international mainstream media."[43]

Blogging and advertising

It is common for blogs to feature advertisements, either to financially benefit the blogger or to promote the blogger's favorite causes. The popularity of blogs has also given rise to "fake blogs", in which a company creates a fictional blog as a marketing tool to promote a product.[44]


Popularity
Researchers have analyzed the dynamics of how blogs become popular. There are essentially two measures of this: popularity through citations and popularity through affiliation (i.e., blogroll). The basic conclusion from studies of the structure of blogs is that while it takes time for a blog to become popular through blogrolls, permalinks can boost popularity more quickly, and are perhaps more indicative of popularity and authority than blogrolls, since they denote that people are actually reading the blog's content and deem it valuable or noteworthy in specific cases.[45]

The blogdex project was launched by researchers in the MIT Media Lab to crawl the Web and gather data from thousands of blogs in order to investigate their social properties. It gathered this information for over four years, and autonomously tracked the most contagious information spreading in the blog community, ranking it by recency and popularity. It can therefore be considered the first instantiation of a memetracker. The project was replaced by tailrank.com, which in turn has been replaced by spinn3r.com.[46]

Blogs are ranked by Technorati based on the number of incoming links, and by Alexa Internet based on the Web hits of Alexa Toolbar users. In August 2006, Technorati found that the most linked-to blog on the Internet was that of Chinese actress Xu Jinglei.[47] The Chinese media outlet Xinhua reported that this blog received more than 50 million page views, claiming it to be the most popular blog in the world.[48] Technorati rated Boing Boing the most-read group-written blog.[47]
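The link-counting idea behind rankings of this kind can be illustrated with a minimal sketch. The blog names and link graph below are invented for illustration, and this is not Technorati's actual algorithm, only the general principle of ranking by incoming-link count:

```python
# Minimal sketch: rank blogs by number of incoming links from other blogs.
# Hypothetical data; illustrates the principle, not any real service's algorithm.
from collections import Counter

def rank_by_incoming_links(link_graph):
    """link_graph maps each blog to the blogs it links out to.
    Returns (blog, incoming_link_count) pairs, most-linked first."""
    incoming = Counter()
    for source, targets in link_graph.items():
        for target in targets:
            if target != source:  # ignore self-links
                incoming[target] += 1
    return incoming.most_common()

links = {
    "blog_a": ["blog_b", "blog_c"],
    "blog_b": ["blog_c"],
    "blog_d": ["blog_c", "blog_b"],
}
print(rank_by_incoming_links(links))  # blog_c (3 incoming links) ranks first
```

A real crawler would also have to weight links by recency and deduplicate them, which is where systems like blogdex and Technorati differed from this toy version.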

Blurring with the mass media


Many bloggers, particularly those engaged in participatory journalism, differentiate themselves from the mainstream media, while others are members of that media working through a different channel. Some institutions see blogging as a means of "getting around the filter" and pushing messages directly to the public. Some critics worry that bloggers respect neither copyright nor the role of the mass media in presenting society with credible news. Bloggers and other contributors to user-generated content were behind Time magazine naming "You" its 2006 Person of the Year.

Many mainstream journalists, meanwhile, write their own blogs: well over 300, according to CyberJournalist.net's J-blog list. The first known use of a blog on a news site was in August 1998, when Jonathan Dube of The Charlotte Observer published one chronicling Hurricane Bonnie.[49]

Some bloggers have moved over to other media. The following bloggers (and others) have appeared on radio and television: Duncan Black (known widely by his pseudonym, Atrios), Glenn Reynolds (Instapundit), Markos Moulitsas Zúniga (Daily Kos), Alex Steffen (Worldchanging), Ana Marie Cox (Wonkette), Nate Silver (FiveThirtyEight.com), and Ezra Klein (Ezra Klein blog in The American Prospect, now in The Washington Post). In counterpoint, Hugh Hewitt exemplifies a mass media personality who has moved in the other direction, adding to his reach in "old media" by being an influential blogger. Similarly, it was the Emergency Preparedness and Safety Tips On Air and Online blog articles that captured the attention of Surgeon General of the United States Richard Carmona and earned his kudos for the associated broadcasts by talk show host Lisa Tolliver and Westchester Emergency Volunteer Reserves-Medical Reserve Corps Director Marianne Partridge.[50][51][52][53]

Blogs have also had an influence on minority languages, bringing together scattered speakers and learners; this is particularly so with blogs in Gaelic languages.
Minority language publishing (which may lack economic feasibility) can find its audience through inexpensive blogging.

There are many examples of bloggers who have published books based on their blogs, e.g., Salam Pax, Ellen Simonetti, Jessica Cutler, ScrappleFace. Blog-based books have been given the name blook. A prize for the best blog-based book, the Lulu Blooker Prize,[55] was initiated in 2005.[54] However, success has been elusive offline, with many of these books not selling as well as their blogs; only blogger Tucker Max made the New York Times Bestseller List.[56] The book based on Julie Powell's blog "The Julie/Julia Project" was made into the film Julie & Julia, apparently the first film based on a blog.


Consumer-generated advertising in blogs


Consumer-generated advertising is a relatively new and controversial development, and it has created a new model of marketing communication from businesses to consumers. Among the various forms of advertising on blogs, the most controversial are sponsored posts.[57] These are blog entries or posts that may take the form of feedback, reviews, opinions, videos, etc., and usually contain a link back to the desired site using one or more keywords. Blogs have led to some disintermediation and a breakdown of the traditional advertising model, in which advertising agencies were previously the only interface with the customer: companies can now skip over the agencies and contact customers directly themselves. On the other hand, new companies specialising in blog advertising have been established to take advantage of this development. However, many people look negatively on it; some believe that any form of commercial activity on blogs will destroy the blogosphere's credibility.[58]

Legal and social consequences


Blogging can result in a range of legal liabilities and other unforeseen consequences.[59]

Defamation or liability
Several cases have been brought before national courts against bloggers concerning issues of defamation or liability. U.S. payouts related to blogging totaled $17.4 million by 2009; in some cases these have been covered by umbrella insurance.[60] The courts have returned mixed verdicts. Internet service providers (ISPs), in general, are immune from liability for information that originates with third parties (U.S. Communications Decency Act and the EU Directive 2000/31/EC).

In Doe v. Cahill, the Delaware Supreme Court held that stringent standards had to be met to unmask anonymous bloggers, and also took the unusual step of dismissing the libel case itself (as unfounded under American libel law) rather than referring it back to the trial court for reconsideration.[61] In a bizarre twist, the Cahills were able to obtain the identity of John Doe, who turned out to be the person they suspected: the town's mayor, Councilman Cahill's political rival. The Cahills amended their original complaint, and the mayor settled the case rather than going to trial.

In January 2007, two prominent Malaysian political bloggers, Jeff Ooi and Ahiruddin Attan, were sued by a pro-government newspaper, The New Straits Times Press (Malaysia) Berhad, and by Kalimullah bin Masheerul Hassan, Hishamuddin bin Aun and Brenden John a/l John Pereira, over an alleged defamation. The plaintiff was supported by the Malaysian government.[62] Following the suit, the Malaysian government proposed to "register" all bloggers in Malaysia in order to better control parties against their interest.[63] This was the first such legal case against bloggers in the country.
In the United States, blogger Aaron Wall was sued by Traffic Power for defamation and publication of trade secrets in 2005.[64] According to Wired Magazine, Traffic Power had been "banned from Google for allegedly rigging search engine results."[65] Wall and other "white hat" search engine optimization consultants had exposed Traffic Power in what they claim was an effort to protect the public. The case addressed the murky legal question of who is liable for comments posted on blogs.[66] The case was dismissed for lack of personal jurisdiction, and Traffic Power failed to appeal within the allowed time.[67]

In 2009, in a controversial and landmark decision, The Hon. Mr Justice Eady refused to grant an order to protect the anonymity of Richard Horton. Horton was a police officer in the United Kingdom who blogged about his job under the name "NightJack".[68]

Also in 2009, NDTV issued a legal notice to Indian blogger Kunte for a blog post criticizing their coverage of the Mumbai attacks.[69] The blogger unconditionally withdrew his post, which resulted in several Indian bloggers criticizing NDTV for trying to silence critics.[70]


Employment
Employees who blog about elements of their place of employment can begin to affect the brand recognition of their employer. In general, attempts by employee bloggers to protect themselves by maintaining anonymity have proved ineffective.[71]

Delta Air Lines fired flight attendant Ellen Simonetti because she posted photographs of herself in uniform on an airplane and because of comments posted on her blog "Queen of Sky: Diary of a Flight Attendant" which the employer deemed inappropriate.[72][73] This case highlighted the issue of personal blogging and freedom of expression versus employer rights and responsibilities, and so it received wide media attention. Simonetti took legal action against the airline for "wrongful termination, defamation of character and lost future wages".[74] The suit was postponed while Delta was in bankruptcy proceedings (court docket).[75]

In early 2006, Erik Ringmar, a tenured senior lecturer at the London School of Economics, was ordered by the convenor of his department to "take down and destroy" his blog in which he discussed the quality of education at the school.[76] Mark Cuban, owner of the Dallas Mavericks, was fined during the 2006 NBA playoffs for criticizing NBA officials on the court and in his blog.[77]

Mark Jen was terminated in 2005 after 10 days of employment as an Assistant Product Manager at Google for discussing corporate secrets on his personal blog, then called 99zeros and hosted on the Google-owned Blogger service.[78] He blogged about unreleased products and company finances a week before the company's earnings announcement. He was fired two days after he complied with his employer's request to remove the sensitive material from his blog.[79] In India, blogger Gaurav Sabnis resigned from IBM after his posts questioned the claims of the management school IIPM.[80]

Jessica Cutler, aka "The Washingtonienne",[81] blogged about her sex life while employed as a congressional assistant.
After the blog was discovered and she was fired,[82] she wrote a novel based on her experiences and blog: The Washingtonienne: A Novel. Cutler is being sued by one of her former lovers in a case that could establish the extent to which bloggers are obligated to protect the privacy of their real-life associates.[83]

Catherine Sanderson, a.k.a. Petite Anglaise, lost her job in Paris at a British accountancy firm because of her blogging.[84] Although the blog was written in a fairly anonymous manner, some of its descriptions of the firm and some of its people were less than flattering. Sanderson later won a compensation claim against the British firm, however.[85]

On the other hand, Penelope Trunk wrote an upbeat article in the Boston Globe in 2006, entitled "Blogs 'essential' to a good career".[86] She was one of the first journalists to point out that a large portion of bloggers are professionals and that a well-written blog can help attract employers.


Political dangers
Blogging can sometimes have unforeseen consequences in politically sensitive areas. Blogs are much harder to control than broadcast or even print media. As a result, totalitarian and authoritarian regimes often seek to suppress blogs and/or to punish those who maintain them.

In Singapore, two ethnic Chinese were imprisoned under the country's anti-sedition law for posting anti-Muslim remarks in their blogs.[87]

Egyptian blogger Kareem Amer was charged with insulting the Egyptian president Hosni Mubarak and an Islamic institution through his blog. It was the first time in the history of Egypt that a blogger was prosecuted. After a brief trial session that took place in Alexandria, the blogger was found guilty and sentenced to prison terms of three years for insulting Islam and inciting sedition, and one year for insulting Mubarak.[88] Egyptian blogger Abdel Monem Mahmoud was arrested in April 2007 for anti-government writings in his blog.[89] Monem is a member of the then-banned Muslim Brotherhood. After the 2011 Egyptian revolution, the Egyptian blogger Maikel Nabil Sanad was charged with insulting the military for an article he wrote on his personal blog and sentenced to three years.[90][91][92][93][94][95][96][97][98][99][100][101][102][103]

After expressing opinions in his personal blog about the state of the Sudanese armed forces, Jan Pronk, United Nations Special Representative for the Sudan, was given three days' notice to leave Sudan. The Sudanese army had demanded his deportation.[104][105][106]

In Myanmar, Nay Phone Latt, a blogger, was sentenced to 20 years in jail for posting a cartoon critical of head of state Than Shwe.[107]

Personal safety
One consequence of blogging is the possibility of attacks or threats against the blogger, sometimes without apparent reason. Kathy Sierra, author of the innocuous blog "Creating Passionate Users",[108] was the target of such vicious threats and misogynistic insults that she canceled her keynote speech at a technology conference in San Diego, fearing for her safety.[109] While a blogger's anonymity is often tenuous, Internet trolls who would attack a blogger with threats or insults can be emboldened by anonymity. Sierra and supporters initiated an online discussion aimed at countering abusive online behavior[110] and developed a blogger's code of conduct.

Behavior
The Blogger's Code of Conduct is a proposal by Tim O'Reilly for bloggers to enforce civility on their blogs by being civil themselves and moderating comments on their blogs. The code was proposed due to threats made to blogger Kathy Sierra.[111] The idea of the code was first reported by BBC News, which quoted O'Reilly saying, "I do think we need some code of conduct around what is acceptable behaviour, I would hope that it doesn't come through any kind of regulation it would come through self-regulation."[112] O'Reilly and others came up with a list of seven proposed ideas:[113][114][115][116][117]
1. Take responsibility not just for your own words, but for the comments you allow on your blog.
2. Label your tolerance level for abusive comments.
3. Consider eliminating anonymous comments.
4. Ignore the trolls.
5. Take the conversation offline, and talk directly, or find an intermediary who can do so.
6. If you know someone who is behaving badly, tell them so.
7. Don't say anything online that you wouldn't say in person.


References
[1] Blood, Rebecca (September 7, 2000). "Weblogs: A History And Perspective" (http://www.rebeccablood.net/essays/weblog_history.html).
[2] Mutum, Dilip; Wang, Qing (2010). "Consumer Generated Advertising in Blogs". In Neal M. Burns, Terry Daugherty, Matthew S. Eastin. Handbook of Research on Digital Media and Advertising: User Generated Content Consumption. 1. IGI Global. pp. 248–261.
[3] Gaudeul, Alexia and Peroni, Chiara (2010). "Reciprocal attention and norm of reciprocity in blogging networks" (http://ideas.repec.org/a/ebl/ecbull/eb-10-00198.html). Economics Bulletin 30 (3): 2230–2248.
[4] "BlogPulse" (http://www.blogpulse.com/). The Nielsen Company. February 16, 2011. Retrieved 2011-02-17.
[5] About Tumblr.com. Accessed October 13, 2012 (http://www.tumblr.com/about).
[6] WordPress.com Stats. Accessed October 13, 2012 (http://www.wordpress.com/stats).
[7] "After 10 Years of Blogs, the Future's Brighter Than Ever" (http://www.wired.com/entertainment/theweb/news/2007/12/blog_anniversary). Wired. 2007-12-17. Retrieved 2008-06-05.
[8] "It's the links, stupid" (http://www.economist.com/surveys/displaystory.cfm?story_id=6794172). The Economist. 2006-04-20. Retrieved 2008-06-05.
[9] Merholz, Peter (1999). "Peterme.com" (http://web.archive.org/web/19991013021124/http://peterme.com/index.html). The Internet Archive. Archived from the original (http://peterme.com/index.html) on 1999-10-13. Retrieved 2008-06-05.
[10] Kottke, Jason (2003-08-26). "kottke.org" (http://www.kottke.org/03/08/its-weblog-not-web-log). Retrieved 2008-06-05.
[11] Origins of "Blog" and "Blogger" (http://listserv.linguistlist.org/cgi-bin/wa?A2=ind0804C&L=ADS-L&P=R16795&I=-3), American Dialect Society Mailing List (Apr. 20, 2008).
[12] The term "e-log" has been used to describe journal entries sent out via e-mail since as early as March 1996. Norman, David (2005-07-13). "Users confused by blogs" (http://web.archive.org/web/20070607235110/http://lists.drupal.org/archives/development/2005-07/msg00208.html). Archived from the original (http://lists.drupal.org/archives/development/2005-07/msg00208.html) on 2007-06-07. Retrieved 2008-06-05. "Research staff and students welcome E-Log" (http://www.ucl.ac.uk/news-archive/archive/2003/december-2003/latest/newsitem.shtml?03120901). University College London. December 2003. Retrieved 2008-06-05.
[13] http://home.mcom.com/home/whats-new.html
[14] Harmanci, Reyhan (2005-02-20). "Time to get a life – pioneer blogger Justin Hall bows out at 31" (http://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/2005/02/20/MNGBKBEJO01.DTL). San Francisco Chronicle. Retrieved 2008-06-05.
[15] Pournelle, Jerry. "Chaos Manor in Perspective" (http://www.jerrypournelle.com/#whatabout). Jerry Pournelle's blog.
[16] Paul Festa (2003-02-25). "Newsmaker: Blogging comes to Harvard" (http://news.cnet.com/2008-1082-985714.html). CNET. Retrieved 2007-01-25.
[17] "...Dave Winer... whose Scripting News (scripting.com) is one of the oldest blogs." David F. Gallagher (2002-06-10). "Technology; A rift among bloggers" (http://query.nytimes.com/gst/fullpage.html?res=9E0DE3DE103DF933A25755C0A9649C8B63). New York Times.
[18] http://web.archive.org/web/19961112042649/http://netguide.aust.com/daily/index.html
[19] Jensen, Mallory. A Brief History of Weblogs (http://cjrarchives.org/issues/2003/5/blog-jensen.asp?printerfriendly=yes).
[20] Massing, Michael (2009-08-13). "The News About the Internet" (http://www.nybooks.com/articles/22960). New York Review of Books 56 (13): 29–32. Retrieved 2009-10-10.
[21] Daniel Sandford, BBC News: "Russians tire of corruption spectacle", http://www.bbc.co.uk/news/world-europe-15972326
[22] Matthew Kaminski (March 3, 2012). "The Man Vladimir Putin Fears Most (the weekend interview)" (http://online.wsj.com/article/SB10001424052970203986604577257321601811092.html). The Wall Street Journal.
[23] Fortune (January 2005). Fortune.com. http://www.fortune.com/fortune/technology/articles/0,15114,1011763-1,00.html
[24] Israel Video Blog aims to show the world 'the beautiful face of real Israel' (http://www.ynetnews.com/articles/0,7340,L-3220593,00.html), Ynet, February 24, 2008.
[25] Latest PR venture of Israel's diplomatic mission in New York attracts large Arab audience (http://www.ynetnews.com/articles/0,7340,L-3220593,00.html), Ynet, June 21, 2007.
[26] Battlefront Twitter (http://fr.jpost.com/servlet/Satellite?cid=1230456533492&pagename=JPost/JPArticle/ShowFull), Haviv Rettig Gur, The Jerusalem Post, December 30, 2008.
[27] The Toughest Qs Answered in the Briefest Tweets (http://www.nytimes.com/2009/01/04/weekinreview/04cohen.html), Noam Cohen, The New York Times, January 3, 2009. Retrieved January 5, 2009.
[28] Journalists deserve subsidies too (http://www.delawareonline.com/article/20091103/OPINION16/91102031/1004/OPINION/Journalists-deserve-subsidies-too), Robert W. McChesney and John Nichols, Delaware Online, November 3, 2009. Retrieved November 10, 2009.
[29] "7 Things You Should Know About Microblogging" (http://www.educause.edu/library/resources/7-things-you-should-know-about-microblogging). Educause.edu. 2009-07-07. Retrieved 2012-10-25.
[30] Stephan Metcalf, "Fixing a Hole", New York Times, March 2006.
[31] Jennifer Saranow, "Blogwatch: This Old House", Wall Street Journal, September 2007.

[32] Casserly, Meghan and Goudreau, Jenna. Top 100 Websites For Women 2011 (http://www.forbes.com/2011/06/23/100-best-web-sites-for-women-blogs-2011.html), Forbes, June 23, 2011.
[33] Paul, Pamela (2004-04-12). "The New Family Album" (http://www.time.com/time/magazine/article/0,9171,993832-3,00.html). TIME. Retrieved 2010-03-31.
[34] Carpenter, MacKenzie (2007-10-31). "More women are entering the blogosphere satirizing, sharing and reaching a key demographic" (http://www.post-gazette.com/pg/07304/829747-51.stm). Post-gazette.com. Retrieved 2010-03-31.
[35] Brown, Jonathan (2005-02-05). "The drooling minutiae of childhood revealed for all to see as 'Mommy blogs' come of age" (http://www.independent.co.uk/news/science/the-drooling-minutiae-of-childhood-revealed-for-all-to-see-as-mommy-blogs-come-of-age-485573.html). The Independent (London). Retrieved 2010-03-30.
[36] "Living" (http://www.omaha.com/index.php?u_page=1219&u_sid=10322842). Omaha.com. Retrieved 2010-03-31.
[37] Jesella, Kara (2008-07-27). "Blogging's Glass Ceiling" (http://www.nytimes.com/2008/07/27/fashion/27blogher.html). The New York Times. Retrieved 2010-03-26.
[38] "What is a photoblog" (http://wiki.photoblogs.org/wiki/What_is_a_Photoblog). Photoblogs.org Wiki. Retrieved 2006-06-25.
[39] "Blogging goes mobile" (http://news.bbc.co.uk/1/hi/technology/2783951.stm). BBC News. 2003-02-23. Retrieved 2008-06-05.
[40] See for instance: Mesure, Susie (2009-08-23). "Is it a diary? Is it an ad? It's a mummy blog" (http://www.independent.co.uk/life-style/gadgets-and-tech/news/is-it-a-diary-is-it-an-ad-its-a-mummy-blog-1776163.html). The Independent (London): p. 11. Retrieved 2009-10-10.
[41] "Welcome to Technorati" (http://web.archive.org/web/20080505011927/http://www.technorati.com/about). Archived from the original (http://technorati.com/about) on May 5, 2008. Retrieved 2008-06-25.
[42] "About MyBlogLog" (http://www.mybloglog.com/buzz/help/#a200502282152271). MyBlogLog. Retrieved 2007-06-29.
[43] "Global Voices: About" (http://globalvoicesonline.org/about/). GlobalVoices.org. Retrieved 2011-04-02.
[44] Gogoi, Pallavi (2006-10-09). "Wal-Mart's Jim and Laura: The Real Story" (http://www.businessweek.com/bwdaily/dnflash/content/oct2006/db20061009_579137.htm). BusinessWeek. Retrieved 2008-08-06.
[45] Marlow, C. Audience, structure and authority in the weblog community (http://alumni.media.mit.edu/~cameron/cv/pubs/04-01.pdf). Presented at the International Communication Association Conference, May 2004, New Orleans, LA.
[46] http://spinn3r.com
[47] Fickling, David. Internet killed the TV star (http://blogs.guardian.co.uk/news/archives/2006/08/15/internet_killed_the_tv_star.html), The Guardian NewsBlog, 15 August 2006.
[48] "Xu Jinglei most popular blogger in world" (http://www.chinadaily.com.cn/china/2006-08/24/content_672747.htm). China Daily. 2006-08-24. Retrieved 2008-06-05.
[49] "Blogging Bonnie" (http://www.poynter.org/column.asp?id=52&aid=48413/). Poynter.org. 2003-09-18.
[50] "National Safety Month" (http://www.nsc.org/nsc_events/Nat_Safe_Month/Pages/home.aspx). Nsc.org. Retrieved 2010-04-09.
[51] "Flavor Flav Celebrates National Safety Month" (http://blogcritics.org/archives/2006/06/21/173419.php). Blogcritics.
[52] "Lisa Tolliver show notes" (http://tolliveretips.blogspot.com/2006/06/lisa-tolliver-show-guests-flavor-flav.html). Emergency Preparedness and Safety Tips On Air and Online.
[53] "Lisa Tolliver's Show Notes" (http://lisatolliver.blogspot.com/2006/05/how-my-radio-shows-publications-and.html). Lisa Tolliver On Air and Online.
[54] "Blooker rewards books from blogs" (http://news.bbc.co.uk/2/hi/technology/4326908.stm). BBC News. 2005-10-11. Retrieved 2008-06-05.
[55] "Blooker prize honours best blogs" (http://news.bbc.co.uk/2/hi/technology/6446271.stm). BBC News. 2007-03-17. Retrieved 2008-06-05.
[56] St. John, Warren (2006-04-16). "Dude, here's my book" (http://www.nytimes.com/2006/04/16/fashion/sundaystyles/16CADS.html?ex=1302840000&en=778087aa367d0620&ei=5090&partner=rssuserland&emc=rss). Nytimes.com. Retrieved 2010-07-31.
[57] Mutum, Dilip and Wang, Qing (2010). "Consumer Generated Advertising in Blogs". In Neal M. Burns, Terry Daugherty, Matthew S. Eastin (Eds.), Handbook of Research on Digital Media and Advertising: User Generated Content Consumption (Vol. 1), IGI Global, 248–261.
[58] Kirkpatrick, M. (2006, June 30). PayPerPost.com offers to buy your soul (http://www.techcrunch.com/2006/06/30/payperpostcom-offers-to-buy-your-soul/). TechCrunch.com.
[59] "Article Window" (http://epaper.timesofindia.com/Default/Scripting/ArticleWin.asp?From=Search&Key=ETBG/2008/01/25/12/Ar01201.xml&CollName=ET_BANGALORE_ARCHIVE_2007&DOCID=108812&skin=pastissues2&AppName=2&ViewMode=HTML). Epaper.timesofindia.com. Retrieved 2012-10-25.
[60] McQueen, M. P. (2009). Bloggers, Beware: What You Write Can Get You Sued (http://online.wsj.com/article/SB124287328648142113.html). WSJ.
[61] Doe v. Cahill (http://caselaw.lp.findlaw.com/data2/delawarestatecases/266-2005.pdf), 884 A.2d 451 (Del. 2005).
[62] "New Straits Times staffers sue two bloggers" (http://web.archive.org/web/20080608220312/http://www.rsf.org/article.php3?id_article=20489). Reporters Without Borders. 2007-01-19. Archived from the original (http://www.rsf.org/article.php3?id_article=20489) on 2008-06-08. Retrieved 2008-06-05.

[63] "Government plans to force bloggers to register" (http://web.archive.org/web/20080611025330/http://www.rsf.org/article.php3?id_article=21606). Reporters Without Borders. 2007-04-06. Archived from the original (http://www.rsf.org/article.php3?id_article=21606) on 2008-06-11. Retrieved 2008-06-05.
[64] Kesmodel, David (2005-08-31). "Blogger Faces Lawsuit Over Comments Posted by Readers" (http://online.wsj.com/public/article/SB112541909221726743-_vX2YpePQV7AOIl2Jeebz4FAfS4_20060831.html?mod=blogs). Wall Street Journal Online. Retrieved 2008-06-05.
[65] "Legal Showdown in Search Fracas" (http://www.wired.com/culture/lifestyle/news/2005/09/68799). Wired Magazine, Sept 8, 2005.
[66] "Slashdot, Aug 31" (http://yro.slashdot.org/yro/05/08/31/1427228.shtml?tid=123). Yro.slashdot.org. 2005-08-31. Retrieved 2010-07-31.
[67] Sullivan, Danny (2006-04-13). "SearchEngineWatch" (http://blog.searchenginewatch.com/blog/060413-084431). Blog.searchenginewatch.com. Retrieved 2010-07-31.
[68] "Ruling on NightJack author Richard Horton kills blogger anonymity" (http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/article6509677.ece).
[69] "Barkha versus blogger" (http://www.thehoot.org/web/home/story.php?storyid=3629&mod=1&pg=1&sectionId=6&valid=true). The Hoot. Retrieved 2009-02-02.
[70] Indian bloggers criticizing NDTV (http://www.abhishekarora.com/2009/02/chyetanya-kunte-vs-burkha-dutt-ndtv.html).
[71] Sanderson, Cathrine (2007-04-02). "Blogger beware!" (http://commentisfree.guardian.co.uk/catherine_sanderson/2007/04/blogger_beware.html). London: Guardian Unlimited. Retrieved 2007-04-02.
[72] Twist, Jo (2004-11-03). "US Blogger Fired by her Airline" (http://news.bbc.co.uk/1/hi/technology/3974081.stm). BBC News. Retrieved 2008-06-05.
[73] "Delta employee fired for blogging sues airline" (http://www.usatoday.com/travel/news/2005-09-08-delta-blog_x.htm). USA Today. 2005-09-08. Retrieved 2008-06-05.
[74] "Queen of the Sky gets marching orders" (http://www.theregister.co.uk/2004/11/03/airline_blogger_sacked/). The Register. 2004-11-03. Retrieved 2008-06-05.
[75] Deltadocket.com (http://deltadocket.com/delta_downloads/delta_downloads_CourtFiledDocuments/Twelfth_OmnibusClaimsObjection.pdf).
[76] MacLeod, Donald (2006-05-03). "Lecturer's Blog Sparks Free Speech Row" (http://education.guardian.co.uk/higher/news/story/0,,1766663,00.html). London: The Guardian. Retrieved 2008-06-05. See also Forget the Footnotes (http://ringmar.net/forgethefootnotes/).
[77] "NBA fines Cuban $200K for antics on, off court" (http://sports.espn.go.com/nba/playoffs2006/news/story?id=2440355). ESPN. 2006-05-11. Retrieved 2008-06-05.
[78] Hansen, Evan (2005-02-08). "Google blogger has left the building" (http://news.cnet.com/Google-blogger-has-left-the-building/2100-1038_3-5567863.html). CNET News. Retrieved 2007-04-04.
[79] "Official Story, straight from the source" (http://blog.plaxoed.com/2005/02/11/the-official-story-straight-from-the-source/).
[80] "Express India" (http://cities.expressindia.com/fullstory.php?newsid=152721). Cities.expressindia.com. Retrieved 2011-01-30.
[81] Washingtoniennearchive.blogspot.com (http://washingtoniennearchive.blogspot.com/).
[82] "The Hill's Sex Diarist Reveals All (Well, Some)" (http://www.washingtonpost.com/wp-dyn/articles/A48909-2004May22.html). The Washington Post. 2004-05-23. Retrieved 2008-06-05.
[83] "Steamy D.C. Sex Blog Scandal Heads to Court" (http://www.msnbc.msn.com/id/16366256/). The Associated Press, MSNBC. 2006-12-27. Retrieved 2008-06-05.
[84] "Bridget Jones Blogger Fire Fury" (http://edition.cnn.com/2006/WORLD/europe/07/19/france.blog/index.html?section=cnn_tech). CNN. 2006-07-19. Retrieved 2008-06-05.
[85] "Sacked 'petite anglaise' blogger wins compensation claim" (http://findarticles.com/p/articles/mi_kmafp/is_200703/ai_n18772706). AFP. 2007-03-30. Archived from the original (http://news.yahoo.com/s/afp/20070330/tc_afp/lifestyleinternet_070330205230) on 2007-03-30. Retrieved 2008-06-05.
[86] Boston.com (http://www.boston.com/business/globe/articles/2006/04/16/blogs_essential_to_a_good_career/).
[87] Kierkegaard, Sylvia (2006). "Blogs, lies and the doocing: The next hotbed of litigation?". Computer Law & Security Report 22 (2): 127. doi:10.1016/j.clsr.2006.01.002.
[88] "Egypt blogger jailed for 'insult'" (http://news.bbc.co.uk/2/hi/middle_east/6385849.stm). BBC News. 2007-02-22. Retrieved 2008-06-05.
[89] Ana-ikhwan.blogspot.com (http://www.ana-ikhwan.blogspot.com/).
[90] VOA News: "Imprisoned Egyptian Blogger's Hunger Strike Fights Military Rule" (http://www.voanews.com/english/news/middle-east/Imprisoned-Egyptian-Bloggers-Hunger-Strike-Fights-Military-Rule-131793268.html) (13 October 2011).
[91] "EGYPT: Egyptian pacifist Maikel Nabil Sanad arrested for insulting the military | War Resisters' International" (http://www.wri-irg.org/node/12513). Wri-irg.org. Retrieved 2011-08-14.
[92] "Maikel Nabil Sanad: The Army and The People Were Never One Hand" (http://www.maikelnabil.com/2011/03/army-and-people-wasnt-ever-one-hand.html). Maikelnabil.com. 2011-03-08. Retrieved 2011-08-14.
[93] "EGYPT: Imprisoned pacifist blogger Maikel Nabil Sanad in solitary confinement | War Resisters' International" (http://www.wri-irg.org/node/13004). Wri-irg.org. Retrieved 2011-08-14.

[94] "Advocates: Egyptian blogger Nabil on hunger strike may only have days left" (http://english.ahram.org.eg/NewsContent/1/64/22687/Egypt/Politics-/Advocates-Egyptian-blogger-Nabil-on-hunger-strike-.aspx). Ahram Online. Retrieved 2012-10-25.
[95] "Maikel Nabil Sanad's two-year jail term 'insults spirit of Egyptian revolution'" (http://en.rsf.org/egypte-maikel-nabil-sanad-s-two-year-jail-14-12-2011,41301.html). Reporters Without Borders. Retrieved 2011-12-29.
[96] "Maikel Nabil Sanad, On Hunger Strike in Egypt, Is Dying" (http://www.huffingtonpost.com/2011/09/15/hunger-strike-egyptian-pr_n_963916.html). Huffington Post. Retrieved 2011-12-29.
[97] "Press Release: The Imminent Death of Blogger Maikel Nabil, Imprisoned by the Egyptian Military" (http://en.nomiltrials.com/2011/12/press-release-imminent-death-of-blogger.html). No Military Trials for Civilians. Retrieved 2011-12-29.
[98] "The Story of Maikel Nabil SCAF Crimes" (http://www.youtube.com/watch?v=vBHs1Xs2lF4). YouTube. Retrieved 2011-12-29.
[99] Shenker, Jack (22 January 2012). "Egypt pardons jailed blogger as generals brace for anniversary protests" (http://www.guardian.co.uk/world/2012/jan/22/egypt-pardons-blogger-anniversary-protests?newsfeed=true). The Guardian. Retrieved 23 January 2012.
[100] Amnesty International: "Egypt military court toying with life of jailed blogger" (https://www.amnesty.org/en/news-and-updates/egypt-military-court-toying-life-jailed-blogger-2011-10-13) (13 October 2011).
[101] Amnesty International: "Egypt blogger on hunger strike must be released as health fails" (https://www.amnesty.org/en/news-and-updates/egypt-blogger-hunger-strike-must-be-released-health-fails-2011-09-26) (26 September 2011).
[102] Reporters Without Borders: "The blogger and conscientious objector Maikel Nabil Sanad arrested" (http://en.rsf.org/+the-blogger-and-conscientious+.html) (28 March 2011).
[103] Reporters Without Borders: "Authorities urged to free blogger at military court appeal" (http://en.rsf.org/egypt-jailed-blogger-resumes-hunger-17-09-2011,41015.html) (3 October 2011).
[104] "Sudan expels U.N. envoy for blog" (http://www.cnn.com/2006/WORLD/africa/10/22/sudan.darfur.un/index.html). CNN. 2006-10-22. Retrieved 2007-03-14.
[105] "UN envoy leaves after Sudan row" (http://news.bbc.co.uk/2/hi/africa/6076022.stm). BBC News (BBC). 23 October 2006. Retrieved 2006-10-24.
[106] "Annan confirms Pronk will serve out his term as top envoy for Sudan" (http://www.un.org/apps/news/story.asp?NewsID=20396&Cr=sudan&Cr1=). UN News Centre (UN). 27 October 2006. Retrieved 2008-06-05.
[107] "Burma blogger jailed for 20 years" (http://news.bbc.co.uk/2/hi/asia-pacific/7721271.stm). BBC News. 2008-11-11. Retrieved 2010-03-26.
[108] Headrush.typepad.com (http://headrush.typepad.com/).
[109] Pham, Alex (2007-03-31). "Abuse, threats quiet bloggers' keyboards" (http://web.archive.org/web/20080625081401/http://www.imsafer.com/images/LAtimes_3_31_07.pdf). Los Angeles Times. Archived from the original (http://www.latimes.com/business/la-fi-internet31mar31,0,4064392.story?coll=la-home-headlines) on 2007-04-02. Retrieved 2008-06-05.
[110] "Blog death threats spark debate" (http://news.bbc.co.uk/2/hi/technology/6499095.stm). BBC News. 2007-03-27. Retrieved 2008-06-05.
[111] O'Reilly, Tim (2007-03-03). "Call for a Blogger's Code of Conduct" (http://radar.oreilly.com/archives/2007/03/call_for_a_blog_1.html). O'Reilly Radar. Retrieved 2007-04-14.
[112] "Call for blogging code of conduct" (http://news.bbc.co.uk/1/hi/technology/6502643.stm). BBC News. 2007-03-28. Retrieved 2007-04-14.
[113] "Draft Blogger's Code of Conduct" (http://radar.oreilly.com/archives/2007/04/draft_bloggers_1.html). Radar.oreilly.com. Retrieved 2011-01-30.
[114] "Blogger's Code of Conduct at Blogging Wikia" (http://blogging.wikia.com/wiki/Blogger's_Code_of_Conduct). Blogging.wikia.com. Retrieved 2011-01-30.
[115] "MilBlogs Rules of Engagement" (http://www.yankeesailor.us/?p=113). Yankeesailor.us. 2005-05-20. Retrieved 2011-01-30.
[116] O'Reilly, Tim. "Code of Conduct: Lessons Learned So Far" (http://radar.oreilly.com/archives/2007/04/code_of_conduct.html).
[117] "Blogger Content Policy" (http://www.blogger.com/content.g). Blogger.com. Retrieved 2011-01-30.


Further reading
Alavi, Nasrin. We Are Iran: The Persian Blogs, Soft Skull Press, New York, 2005. ISBN 1-933368-05-5.
Bruns, Axel, and Joanne Jacobs, eds. Uses of Blogs, Peter Lang, New York, 2006. ISBN 0-8204-8124-6.
Blood, Rebecca. "Weblogs: A History and Perspective" (http://www.rebeccablood.net/essays/weblog_history.html). Rebecca's Pocket.
Kline, David; Burstein, Dan. Blog!: How the Newest Media Revolution is Changing Politics, Business, and Culture, Squibnocket Partners, L.L.C., 2005. ISBN 1-59315-141-1.
Gorman, Michael. "Revenge of the Blog People!" (http://www.libraryjournal.com/article/CA502009.html). Library Journal.
Ringmar, Erik. A Blogger's Manifesto: Free Speech and Censorship in the Age of the Internet (http://www.archive.org/download/ABloggersManifestoFreeSpeechAndCensorshipInTheAgeOfTheInternet/ErikRingmarABloggersManifesto.pdf). London: Anthem Press, 2007.
Rosenberg, Scott. Say Everything: How Blogging Began, What It's Becoming, and Why It Matters (http://books.google.com/books?id=opmZQrBNPssC&printsec=frontcover). New York: Crown Publishers, 2009. ISBN 978-0-307-45136-1.

External links
Kierkegaard, Sylvia (2006). "Blogs, Lies and the Doocing". Computer Law and Security Report, Volume 22, Issue 2, pages 127-136 (http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VB3-4JH47F6-5&_user=10&_handle=V-WA-A-W-AB-MsSAYZW-UUA-U-AAVYYUUEZC-AAVZBYADZC-YBADCWEZW-AB-U&_fmt=summary&_coverDate=12/31/2006&_rdoc=5&_orig=browse&_srch=#toc#5915#2006#999779997#619171!&_cdi=5915&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=3a78d26b9ff73d0a9c37060c8bed6dbc).
Legal Guide for Bloggers (http://www.eff.org/bloggers/lg/) by the Electronic Frontier Foundation.


Microblogging
Microblogging is a broadcast medium that exists in the form of blogging. A microblog differs from a traditional blog in that its content is typically smaller, in both actual and aggregate file size. Microblogs "allow users to exchange small elements of content such as short sentences, individual images, or video links".[1] These small messages are sometimes called microposts.[2][3] As with traditional blogging, microbloggers post about topics ranging from the simple, such as "what I'm doing right now", to the thematic, such as "sports cars". Commercial microblogs also exist to promote websites, services, and products, and to foster collaboration within an organization. Some microblogging services offer features such as privacy settings, which allow users to control who can read their microblogs, or alternative ways of publishing entries besides the web-based interface, including text messaging, instant messaging, e-mail, digital audio, and digital video.

Services
The first microblogs were known as tumblelogs. The term was coined by why the lucky stiff in a blog post on April 12, 2005, while describing Christian Neukirchen's Anarchaia.[4]

Blogging has mutated into simpler forms (specifically, link- and mob- and aud- and vid- variant), but I don't think I've seen a blog like Chris Neukirchen's Anarchaia, which fudges together a bunch of disparate forms of citation (links, quotes, flickrings) into a very long and narrow and distracted tumblelog.

Jason Kottke described tumblelogs on October 19, 2005:[5]


A tumblelog is a quick and dirty stream of consciousness, a bit like a remaindered links style linklog but with more than just links. They remind me of an older style of blogging, back when people did sites by hand, before Movable Type made post titles all but mandatory, blog entries turned into short magazine articles, and posts belonged to a conversation distributed throughout the entire blogosphere. Robot Wisdom and Bifurcated Rivets are two older style weblogs that feel very much like these tumblelogs with minimal commentary, little cross-blog chatter, the barest whiff of a finished published work, almost pure editing...really just a way to quickly publish the "stuff" that you run across every day on the web

However, by 2006 and 2007 the term microblog had come into greater usage for such services, provided by Tumblr and Twitter among others. In May 2007, 111 microblogging sites were counted internationally. Among the most notable services are Twitter, Tumblr, Cif2.net, Plurk, Jaiku, and identi.ca. A variety of services and software with microblogging features have been developed. Plurk has a timeline view that integrates video and picture sharing. Flipter uses microblogging as a platform for people to post topics and gather audience opinions. Emote.in builds the sharing of emotions over microblogging, with a timeline. PingGadget is a location-based microblogging service. Pownce, developed by Digg founder Kevin Rose among others, integrated microblogging with file sharing and event invitations; it was merged into Six Apart in December 2008.[6] Other leading social networking websites, including Facebook, MySpace, LinkedIn, Diaspora*, JudgIt, Yahoo Pulse, Google Buzz, Google+, and XING, have their own microblogging feature, better known as "status updates". Services such as Lifestream and Profilactic aggregate microblogs from multiple social networks into a single list, while others, such as Ping.fm, send a user's microblog out to multiple social networks.

Internet users in China face a different situation. Foreign microblogging services such as Twitter, Facebook, Plurk, and Google+ are censored in China, so users turn to Chinese weibo services such as Sina Weibo and Tencent Weibo. Tailored to Chinese users, these weibos are hybrids of Twitter and Facebook: they implement the basic features of Twitter while also allowing users to comment on others' posts and to post with graphical emoticons or attach image, music, and video files.


Usage
Several studies, most notably by Harvard Business School and Sysomos, have tried to analyze the usage behavior of microblogging services.[7][8] Several of these studies show that for services such as Twitter, a small group of active users contributes most of the activity.[9] Sysomos' Inside Twitter survey,[8] based on more than 11 million users, shows that 10% of Twitter users account for 86% of all activity. Twitter, Facebook, and other microblogging services are also becoming platforms for marketing and public relations,[10] with a sharp growth in the number of social media marketers. The Sysomos study shows that this specific group of marketers on Twitter is much more active than the general user population: 15% of them follow more than 2,000 people, in sharp contrast to only 0.29% of overall Twitter users.[8] Microblogging services have also emerged as an important source of real-time news updates in crisis situations, such as the Mumbai terror attacks or the Iran protests.[11][12] The short nature of updates allows users to post news items quickly, reaching their audience in seconds. Microblogging services have changed the way information is consumed, empowering citizens to act as sensors or sources of data that can surface important information. People now share what they observe in their surroundings, information about events, and their opinions on topics such as government healthcare policies. Moreover, these services store various metadata from these posts, such as their location and time. Aggregate analysis of this data spans dimensions such as space, time, theme, sentiment, and network structure, and gives researchers an opportunity to understand social perceptions of events of interest.[13] Microblogging also promotes authorship.
On the microblogging platform Tumblr, for example, the reblogging feature links a post back to its original creator. The findings of a study by Emily Pronin of Princeton University and Harvard University's Daniel Wegner have been cited as a possible explanation for the rapid growth of microblogging. The study suggests a link between short bursts of activity and feelings of elation, power, and creativity.[14]
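The concentration of activity reported in the Sysomos survey (10% of users producing 86% of all activity) can be illustrated with a top-decile share calculation. The activity counts below are invented for illustration; only the method (sort users by post count, take the top 10%, sum their share) is general.

```python
def top_decile_share(posts_per_user):
    """Fraction of total posts contributed by the most active 10% of users."""
    counts = sorted(posts_per_user, reverse=True)
    top_n = max(1, len(counts) // 10)  # at least one user in the "top decile"
    total = sum(counts)
    return sum(counts[:top_n]) / total if total else 0.0

# Hypothetical counts: three prolific users, many near-silent ones.
activity = [500, 450, 300] + [2] * 27
print(round(top_decile_share(activity), 2))  # → 0.96
```

With this invented sample, the three most active users (the top 10% of 30) contribute about 96% of all posts, the same long-tailed pattern the studies describe.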

Microblogging for organizational usage


Users and organizations can set up their own microblogging service: free and open source software is available for this purpose.[15] Hosted microblogging platforms are also available for commercial and organizational use. Microblogging has the potential to become a new, informal communication medium, especially for collaborative work within organizations.[16][17] Over the last few years, communication patterns have shifted from primarily face-to-face to more online communication through email, instant messaging, text messaging, and other tools. However, some argue that email is now a slow and inefficient way to communicate.[18] For instance, time-consuming 'email chains' can develop, whereby two or more people exchange lengthy messages over simple matters, such as arranging a meeting.[19] The 'one-to-many' broadcasting offered by microblogs is thought to increase productivity by circumventing this. Remote collaboration also means fewer opportunities for face-to-face informal conversations; however, microblogging can support informal communication among coworkers. Many individuals like sharing their whereabouts and status updates through microblogging, which is therefore expected to improve the social and emotional welfare of the workforce as well as streamline the information flow within an organization.[16] It can increase opportunities to share information,[17][20] help realize and utilize expertise within the workforce,[17] and help build and maintain common ground between coworkers.[16] As its use continues to grow, microblogging is quickly becoming a core component of enterprise social software.
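The 'one-to-many' broadcasting described above can be sketched as a fan-out data structure: one post is delivered once to every follower, rather than being relayed through a chain of pairwise messages. This is a toy model, not the implementation of any real microblogging service; the class and user names are hypothetical.

```python
from collections import defaultdict

class MicroblogHub:
    """Toy one-to-many broadcast: a single post fans out to every follower's feed."""

    def __init__(self):
        self.followers = defaultdict(set)   # author -> set of followers
        self.feeds = defaultdict(list)      # user -> messages received

    def follow(self, follower, author):
        self.followers[author].add(follower)

    def post(self, author, message):
        # One write by the author reaches all followers at once.
        for user in self.followers[author]:
            self.feeds[user].append((author, message))

hub = MicroblogHub()
for member in ("bob", "carol", "dave"):
    hub.follow(member, "alice")
hub.post("alice", "Meeting moved to 3pm")  # one post, three feeds updated
print(sorted(hub.feeds))  # → ['bob', 'carol', 'dave']
```

The point of the sketch is the contrast with an email chain: arranging the meeting takes one broadcast rather than a separate message to (and possibly from) each coworker.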


Issues with microblogging


Some issues with microblogging are privacy, security, and integration.[16] Privacy is arguably a major issue, because users may broadcast sensitive personal information to anyone who views their public feed. Microblog platform providers can also cause privacy problems by altering or presetting users' privacy options in a way users feel compromises their personal information. An example is Google's Buzz platform, which incited controversy in 2010 by automatically publicizing users' email contacts as followers.[21] Google later amended these settings. Security concerns have been voiced within the business world, since there is potential for sensitive work information to be publicized on microblogging sites such as Twitter.[22][23] This includes information which may be subject to a superinjunction.[24] Integration could be the hardest issue to overcome, since it can be argued that corporate culture must change to accommodate microblogging.

Related concepts
Live blogging is a derivative of microblogging that generates a continuous feed on a specific web page.
Instant messaging systems display status, but generally only one of a few choices, such as available, off-line, or away.
Away messages (messages displayed when the user is away) form a kind of microblogging.
In the Finger protocol, the .project and .plan files are sometimes used for status updates similar to microblogging.
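The Finger mechanism mentioned above is a very simple TCP protocol (RFC 1288): a client connects to port 79 on the target host, sends a username followed by CRLF, and reads back whatever the server returns, typically including the user's .plan file. Below is a minimal client sketch; the `finger` call requires a reachable host actually running a Finger daemon, which is rare today, so the host and output are purely illustrative.

```python
import socket

FINGER_PORT = 79  # well-known Finger port per RFC 1288

def finger_request(user: str) -> bytes:
    """Wire format of a simple Finger query: the username followed by CRLF."""
    return user.encode("ascii") + b"\r\n"

def finger(user: str, host: str, timeout: float = 5.0) -> str:
    """Send a Finger query and return the server's full response as text."""
    with socket.create_connection((host, FINGER_PORT), timeout=timeout) as s:
        s.sendall(finger_request(user))
        chunks = []
        while data := s.recv(4096):  # server closes the connection when done
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

print(finger_request("alice"))  # → b'alice\r\n'
```

An empty request line (just CRLF) asks the server to list all logged-in users, which is the other query form the protocol defines.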

References
[1] Kaplan, Andreas M.; Haenlein, Michael (2011). "The early bird catches the news: Nine things you should know about micro-blogging". Business Horizons 54 (2).
[2] http://research.hypios.com/msm2011/
[3] S. Lohmann et al. (2012). "Visual Analysis of Microblog Content Using Time-Varying Co-occurrence Highlighting in Tag Clouds" (http://www.vis.uni-stuttgart.de/~lohmansn/publications/MicroblogAnalyzer.pdf). AVI 2012 Conference.
[4] http://redhanded.hobix.com/inspect/tumbleloggingAssortedLarvae.html
[5] "Tumblelogs" (http://www.kottke.org/05/10/tumblelogs). kottke.org.
[6] Pownce website (http://www.pownce.com/).
[7] "New Twitter Research: Men Follow Men and Nobody Tweets" (http://blogs.harvardbusiness.org/cs/2009/06/new_twitter_research_men_follo.html). Harvard Business School. 2009-06-01. Retrieved 2009-06-23.
[8] "Inside Twitter: An In-depth Look Inside the Twitter World" (http://www.sysomos.com/insidetwitter/). Sysomos. 2009-06-10. Retrieved 2009-06-23.
[9] "The More Followers You Have, The More You Tweet. Or Is It The Other Way Around?" (http://www.techcrunch.com/2009/06/10/the-more-followers-you-have-the-more-you-tweet-or-is-it-the-other-way-around/). TechCrunch. 2009-06-10. Retrieved 2010-06-23.
[10] Jin, Liyun (2009-06-21). "Businesses using Twitter, Facebook to market goods" (http://www.post-gazette.com/pg/09172/978727-96.stm). Pittsburgh Post-Gazette. Retrieved 2009-06-23.
[11] "First Hand Accounts Of Terrorist Attacks In India On Twitter, Flickr" (http://www.techcrunch.com/2008/11/26/first-hand-accounts-of-terrorist-attacks-in-india-on-twitter/). TechCrunch. 2008-11-26. Retrieved 2009-06-23.
[12] "Twitter on Iran: A Go-to Source or Almost Useless?" (http://www.findingdulcinea.com/news/technology/2009/June/Twitter-on-Iran-a-Go-to-Source-or-Almost-Useless.html). 2009-06-22. Retrieved 2009-06-23.
[13] M. Nagarajan et al. "Spatio-Temporal-Thematic Analysis of Citizen-Sensor Data: Challenges and Experiences" (http://knoesis.wright.edu/library/resource.php?id=00559). WISE 2009 Conference.
[14] "Could this be a factor in the allure of microblogs?" (http://thelaughingbuddha.wordpress.com/2009/04/19/could-this-be-a-factor-in-the-allure-of-microblogs/).
[15] "StatusNet Open Source microblogging service" (http://status.net/). Retrieved 2010-01-05.
[16] Zhao, Dejin; Rosson, Mary Beth (May 2009). "How and why people Twitter: the role that micro-blogging plays in informal communication at work" (http://portal.acm.org/citation.cfm?id=1531710). ACM GROUP 2009 Conference.
[17] D. Zhao et al. (May 2011). "Microblogging's impact on collaboration awareness: A field study of microblogging within and between project teams" (http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5928662&url=http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5928662). IEEE CTS 2011 Conference.

[18] Mayfield, Ross (October 15, 2008). "Email hell" (http://www.forbes.com/2008/10/15/cio-email-manage-tech-cio-cx_rm_1015email.html). Forbes. Retrieved March 25, 2010.
[19] "Delicious Productivity Improvements For This Flavor Partner" (http://www.socialtext.com/customers/casestudy_fona.php). Socialtext.com. Retrieved March 25, 2010.
[20] Jackson, Joab (November 20, 2009). "NASA program proves the benefits of social networking" (http://gcn.com/Articles/2009/11/30/A-Space-side-NASA-social-networking.aspx?Page=1). Government Computer News. Retrieved March 25, 2010.
[21] "Google Buzz redesigned after privacy complaints" (http://www.telegraph.co.uk/technology/google/7240754/Google-Buzz-redesigned-after-privacy-complaints.html). The Telegraph (London). February 15, 2010. Retrieved March 25, 2010.
[22] Barnett, Emma (March 20, 2010). "Have business networking sites finally come of age?" (http://www.telegraph.co.uk/technology/7482116/Have-business-networking-sites-finally-come-of-age.html). The Telegraph (London). Retrieved March 25, 2010.
[23] "A world of connections" (http://www.economist.com/specialreports/displaystory.cfm?story_id=15351002). The Economist. Jan 28, 2010. Retrieved March 25, 2010.
[24] "Twitter outings undermine 'super injunctions'" (http://uk.reuters.com/article/2011/05/09/uk-britain-superinjunctions-twitter-idUKTRE7481YV20110509). Reuters. 2011-05-09.


Social networking
A social networking service is an online service, platform, or site that focuses on facilitating the building of social networks or social relations among people who, for example, share interests, activities, backgrounds, or real-life connections. A social network service consists of a representation of each user (often a profile), his or her social links, and a variety of additional services. Most social network services are web-based and provide means for users to interact over the Internet, such as e-mail and instant messaging. Online community services are sometimes considered a social network service, though in a broader sense a social network service is usually an individual-centered service whereas online community services are group-centered. Social networking sites allow users to share ideas, activities, events, and interests within their individual networks.

The main types of social networking services are those that contain category places (such as former school year or classmates), means to connect with friends (usually with self-description pages), and a recommendation system linked to trust. Popular methods now combine many of these, with the American-based services Facebook, Google+, and Twitter widely used worldwide; Nexopia in Canada;[1] Badoo,[2] Bebo,[3] VKontakte, Draugiem.lv (mostly in Latvia), Hi5, Hyves (mostly in the Netherlands), iWiW (mostly in Hungary), Nasza-Klasa (mostly in Poland), Skyrock, The Sphere, StudiVZ (mostly in Germany), Tagged, Tuenti (mostly in Spain), and XING[4] in parts of Europe;[5] Hi5 and Orkut in South America and Central America;[6] LAGbook in Africa;[7] and Cyworld, Mixi, Orkut, Renren, Weibo, and Wretch in Asia and the Pacific Islands. There have been attempts to standardize these services to avoid the need to duplicate entries of friends and interests (see the FOAF standard and the Open Source Initiative). A 2011 survey found that 47% of American adults use a social networking service.[8]

History
The potential for computer networking to facilitate newly improved forms of computer-mediated social interaction was suggested early on.[9] Efforts to support social networks via computer-mediated communication were made in many early online services, including Usenet,[10] ARPANET, LISTSERV, and bulletin board services (BBS). Many prototypical features of social networking sites were also present in online services such as America Online, Prodigy, CompuServe, ChatNet, and The WELL.[11] Early social networking on the World Wide Web began in the form of generalized online communities such as Theglobe.com (1995),[12] GeoCities (1994), and Tripod.com (1995). Many of these early communities focused on bringing people together to interact with each other through chat rooms, and encouraged users to share personal information and ideas via personal webpages by providing easy-to-use publishing tools and free or inexpensive webspace. Some communities, such as Classmates.com, took a different approach by simply having people link to each other via email addresses. In the late 1990s, user profiles became a central feature

of social networking sites, allowing users to compile lists of "friends" and search for other users with similar interests. New social networking methods were developed by the end of the 1990s, and many sites began to develop more advanced features for users to find and manage friends.[13] This newer generation of social networking sites began to flourish with the emergence of SixDegrees.com in 1997,[14] followed by Makeoutclub in 2000,[15][16] and Hub Culture and Friendster in 2002,[17] and soon became part of the Internet mainstream. Friendster was followed a year later by MySpace and LinkedIn, and eventually Bebo. Attesting to social networking sites' rapidly increasing popularity, by 2005 MySpace was reportedly getting more page views than Google. Facebook,[18] launched in 2004, became the largest social networking site in the world[19] in early 2009.[20]


Social impact
Web-based social networking services make it possible to connect people who share interests and activities across political, economic, and geographic borders.[21] Through e-mail and instant messaging, online communities are created where a gift economy and reciprocal altruism are encouraged through cooperation. Information is particularly suited to a gift economy, as information is a nonrival good and can be gifted at practically no cost.[22][23] Facebook and other social networking tools are increasingly the object of scholarly research. Scholars in many fields have begun to investigate the impact of social networking sites, examining how such sites may play into issues of identity, privacy,[24] social capital, youth culture, and education.[25] Several websites are beginning to tap into the power of the social networking model for philanthropy. Such models provide a means of connecting otherwise fragmented industries and small organizations that lack the resources to reach a broader audience of interested users.[26] Social networks are providing a different way for individuals to communicate digitally: these online communities allow for the sharing of information and ideas, an old concept placed in a digital environment. In 2011, HCL Technologies conducted research showing that 50% of British employers had banned the use of social networking sites/services during office hours.[27][28]

Features
Typical features
According to Boyd and Ellison's (2007) article "Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life", social networking sites (SNSs) share a variety of technical features that allow individuals to construct a public or semi-public profile, articulate a list of other users with whom they share a connection, and view their list of connections within the system (p. 6). The most basic of these are visible profiles with a list of "friends" who are also users of the site. In an article entitled "Social Network Sites: Definition, History, and Scholarship", Boyd and Ellison adopt Sunden's (2003) description of profiles as unique pages where one can "type oneself into being."[29]

A profile is generated from answers to questions such as age, location, and interests. Some sites allow users to upload pictures, add multimedia content, or modify the look and feel of the profile. Others, e.g., Facebook, allow users to enhance their profile by adding modules or "Applications."[29] Many sites allow users to post blog entries, search for others with similar interests, and compile and share lists of contacts. User profiles often have a section dedicated to comments from friends and other users. To protect user privacy, social networks typically have controls that allow users to choose who can view their profile, contact them, add them to their list of contacts, and so on.


Additional features
Some social networks have additional features, such as the ability to create groups that share common interests or affiliations, upload or stream live videos, and hold discussions in forums. Geosocial networking co-opts Internet mapping services to organize user participation around geographic features and their attributes. There is a trend towards more interoperability between social networks led by technologies such as OpenID and OpenSocial. In most mobile communities, mobile phone users can now create their own profiles, make friends, participate in chat rooms, create chat rooms, hold private conversations, share photos and videos, and share blogs by using their mobile phone. Some companies provide wireless services that allow their customers to build their own mobile community and brand it; one of the most popular wireless services for social networking in North America is Facebook Mobile.

Emerging trends

"The things you share are things that make you look good, things which you are happy to tie into your identity."

Hilary Mason, chief data scientist, bitly (VentureBeat, 2012)[30]

As the popularity of social networking continues to rise,[31] new uses for the technology are constantly being observed. At the forefront of emerging trends in social networking sites are the concepts of the "real-time web" and "location-based" services. Real-time allows users to contribute content which is then broadcast as it is being uploaded; the concept is analogous to live radio and television broadcasts. Twitter set the trend for "real-time" services, wherein users can broadcast to the world what they are doing, or what is on their minds, within a 140-character limit. Facebook followed suit with its "Live Feed", where users' activities are streamed as soon as they happen. While Twitter focuses on words, Clixtr, another real-time service, focuses on group photo sharing, wherein users can update their photo streams with photos while at an event. Facebook, however, remains the largest photo sharing site; Facebook application and photo aggregator Pixable estimated that Facebook would have 100 billion photos by summer 2011.[32] By April 2012, the image-based social media network Pinterest had become the third largest social network in the United States.[33]

Companies have begun to merge business technologies and solutions, such as cloud computing, with social networking concepts. Instead of connecting individuals based on social interest, companies are developing interactive communities that connect individuals based on shared business needs or experiences. Many provide specialized networking tools and applications that can be accessed via their websites, such as LinkedIn. Other companies, such as Monster.com, have been steadily developing a more "socialized" feel to their career center sites to harness some of the power of social networking sites.
These more business-related sites often have their own nomenclature; the most common naming conventions are "Vocational Networking Sites" or "Vocational Media Networks", with the former more closely tied to individual networking relationships based on social networking principles. Foursquare gained popularity by allowing users to "check in" to the places they are visiting at that moment. Gowalla is another such service, functioning in much the same way as Foursquare by leveraging the GPS in phones to create a location-based user experience. Clixtr, though in the real-time space, is also a location-based social networking site, since events created by users are automatically geotagged, and users can view events occurring nearby through the Clixtr iPhone app. Recently, Yelp announced its entrance into the location-based social networking space through check-ins with its mobile app; whether this proves detrimental to Foursquare or Gowalla remains to be seen, as location-based networking is still considered a new space in the Internet technology industry.[34]
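The "view events occurring nearby" behavior described above reduces to filtering geotagged records by great-circle distance from the phone's GPS position. A minimal sketch in Python, using the standard haversine formula; the event data, field names, and 1 km radius are illustrative assumptions, not any particular service's API, and real services use spatial indexes rather than a linear scan:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby_events(events, user_lat, user_lon, radius_km=1.0):
    """Return the geotagged events within radius_km of the user's position."""
    return [e for e in events
            if haversine_km(user_lat, user_lon, e["lat"], e["lon"]) <= radius_km]

# Hypothetical geotagged events around central Paris
events = [
    {"name": "gallery opening", "lat": 48.8606, "lon": 2.3376},
    {"name": "street fair",     "lat": 48.8738, "lon": 2.2950},
]
print([e["name"] for e in nearby_events(events, 48.8600, 2.3400)])
# → ['gallery opening'] (the street fair is several kilometres away)
```

A production service would typically delegate this filtering to a geospatial database query rather than computing distances client-side.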

One popular use for this new technology is social networking between businesses. Companies have found that social networking sites such as Facebook and Twitter are effective ways to build their brand image. According to Jody Nimetz, author of Marketing Jive,[35] there are five major uses of social media for businesses: to create brand awareness, as an online reputation management tool, for recruiting, to learn about new technologies and competitors, and as a lead generation tool to intercept potential prospects.[35] These companies are able to drive traffic to their own online sites while encouraging their consumers and clients to discuss how to improve or change products or services.


Social networks and science


One other use being discussed is the role of social networks in science communities. Julia Porter Liebeskind et al. have published a study of how new biotechnology firms use social networking sites to exchange scientific knowledge.[36] They state that by sharing information and knowledge with one another, firms are able to "increase both their learning and their flexibility in ways that would not be possible within a self-contained hierarchical organization." Social networking is allowing scientific groups to expand their knowledge base and share ideas; without these new means of communication, their theories might become "isolated and irrelevant".

Social networks and education


The advent of social networking platforms may also be impacting the way(s) in which learners engage with technology in general. For a number of years, Prensky's (2001) dichotomy between Digital Natives and Digital Immigrants was considered a relatively accurate representation of the ease with which people of a certain age range (in particular, those born before and after 1980) use technology. Prensky's theory has been largely disproved, however, not least on account of the burgeoning popularity of social networking sites, and other metaphors, such as White and Le Cornu's "Visitors" and "Residents" (2011), have gained greater currency.

The European Southern Observatory uses social networks to engage people in astronomical observations.[37]

The use of online social networks by school libraries is also increasingly prevalent; they are being used to communicate with potential library users and to extend the services provided by individual school libraries.

Social networks and their educational uses are of interest to many researchers. According to Livingstone and Brake (2010), "Social networking sites, like much else on the internet, represent a moving target for researchers and policy makers."[38] Recent trends indicate that 47% of American adults use a social network.[39] A national survey in 2009 found that 73% of online teenagers use SNSs, an increase from 55% three years earlier (Lenhart, Purcell, Smith, & Zickuhr, 2010).[40] Recent studies have shown that social network services provide opportunities within professional education, curriculum education, and learning, although there are also constraints in this area.

Professional uses within education
Professional use of social networking services refers to the employment of a network site to connect with other professionals within a given field of interest. LinkedIn, for example, is a social networking website geared towards companies and industry professionals looking to make new business contacts or keep in touch with previous co-workers, affiliates, and clients. Other network sites are now being used in this manner: Twitter has become "[a] mainstay for professional development as well as promotion",[41] and online SNSs support both the maintenance of existing social ties and the formation of new connections. Much of the early research on online communities assumed that individuals using these systems would be connecting with others outside their preexisting social group or location, liberating them to form communities around shared interests, as opposed to shared geography.[42] Other researchers have suggested that the professional use of network sites produces social capital. For individuals, social capital allows a person to draw on resources from other members of the networks to which he or she belongs. These resources can take the form of useful information, personal relationships, or the capacity to organize groups. Networks within these services can also be established or built by joining special-interest groups that others have made, or by creating one and asking others to join.[43]

Curriculum uses within education
According to Doering, Beach and O'Brien, "a future English curriculum needs to recognize a major shift in how adolescents are communicating with each other".[44] Curriculum uses of social networking services can also include sharing curriculum-related resources. Educators tap into user-generated content to find and discuss curriculum-related content for students. Responding to the popularity of social networking services among many students, teachers are increasingly using social networks to supplement teaching and learning in traditional classroom environments, as they can provide new opportunities for enriching the existing curriculum through creative, authentic and/or flexible, non-linear learning experiences.[45] Some social networks, such as English, baby! and LiveMocha, are explicitly education-focused and couple instructional content with an educational peer environment.[46] The Web 2.0 technologies built into most social networking services promote conferencing, interaction, creation, and research on a global scale, enabling educators to share, remix, and repurpose curriculum resources.
In short, social networking services can become research networks as well as learning networks.[47]

Learning uses within education
Educators and advocates of new digital literacies are confident that social networking encourages the development of transferable, technical, and social skills of value in formal and informal learning.[48] In a formal learning environment, goals or objectives are determined by an outside department or agency. Tweeting, instant messaging, and blogging enhance student involvement: students who would not normally participate in class are more apt to take part through social network services, and networking allows participants opportunities for just-in-time learning and higher levels of engagement.[49] The use of SNSs allows educators to enhance the prescribed curriculum; when learning experiences are infused into a website students use every day for fun, students realize that learning can and should be a part of everyday life and does not have to be separate and unattached.[50] Informal learning consists of the learner setting the goals and objectives. It has been claimed that media "no longer just influence our culture. They are our culture."[51] With such a high number of users between the ages of 13 and 18, a number of skills are developed. Participants hone technical skills in choosing to navigate through social networking services, including elementary tasks such as sending an instant message or updating a status. The development of new media skills is paramount in helping youth navigate the digital world with confidence. Social networking services foster learning through what Jenkins (2006) describes as a "Participatory Culture":[52] a space that allows engagement, sharing, mentoring, and an opportunity for social interaction, of which participants in social network services avail themselves.
Informal learning, in the form of participatory and social learning online, can be an excellent tool for teachers to bring in material and ideas that students identify with; in a secondary manner, students then learn skills that would normally be taught in a formal setting, in the more interesting and engaging environment of social learning.[53] Sites like Twitter provide students with the opportunity to converse and collaborate with others in real time. Social networking services provide a virtual space for learners. James Gee (2004) suggests that affinity spaces instantiate participation, collaboration, distribution and dispersion of expertise, and relatedness.[54] Registered users share and search for knowledge, which contributes to informal learning.


Constraints of social networking services in education
In the past, social networking services were viewed as a distraction offering no educational benefit, and blocking them was seen as a way to protect students from wasted time, bullying, and privacy violations. In an education setting, Facebook is seen by many instructors and educators as a frivolous, time-wasting distraction from schoolwork, and it is not uncommon for Facebook to be banned in junior high or high school computer labs.[50] Cyberbullying has become an issue of concern with social networking services; the UK Children Go Online survey of 9-19 year olds found that a third had received bullying comments online.[55] To avoid this problem, many school districts and boards have blocked access to social networking services such as Facebook, MySpace, and Twitter within the school environment. Social networking services often include a great deal of personal information posted publicly, and many believe that sharing personal information is a window into privacy theft; schools have taken action to protect students from this. It is believed that this outpouring of identifiable information, and the easy communication vehicle that social networking services provide, opens the door to sexual predators, cyberbullying, and cyberstalking.[56] However, there is evidence to contradict this: 69% of social-media-using teens and 85% of adults believe that people are mostly kind to one another on social network sites.[40] Recent research suggests that there has been a shift in blocking the use of social networking services. In many cases, the oppo