This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.
The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.
In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.
Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.
University Microfilms International A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA 313/761-4700 800/521-0600
Order Number 9503730
From ARPANET to Internet: A history of ARPA-sponsored computer networks, 1966-1988
Abbate, Janet Ellen, Ph.D.
University of Pennsylvania, 1994
FROM ARPANET TO INTERNET:
A HISTORY OF ARPA-SPONSORED COMPUTER NETWORKS, 1966-1988
Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
Supervisor of Dissertation
Graduate Group Chairperson
To my parents, Anne and Mario Abbate, for years of patient support; to Matthew and Beth, for providing supper and sanity;
and to Sonya, who was always there
I gratefully acknowledge the intellectual sustenance and moral support of the students and faculty, particularly Karin Calvert and Murray G. Murphey, of the University of Pennsylvania Department of American Civilization. Financial support and access to invaluable materials on the history of ARPA's networking programs were provided by the Charles Babbage Institute at the University of Minnesota; I wish to thank Arthur Norberg and Judy O'Neill for their assistance. The MIT Program in Science, Technology, and Society offered a collegial welcome and a chance to participate in many stimulating discussions during my residence in Cambridge. Thomas P. Hughes of the University of Pennsylvania has been a source of theoretical insight, a mentor, and a friend; his unfailing advice and encouragement helped make the long and often difficult process of creating this work a rewarding one as well.
FROM ARPANET TO INTERNET:
A HISTORY OF ARPA-SPONSORED COMPUTER NETWORKS, 1966-1988
JANET ABBATE
THOMAS P. HUGHES
The ARPANET and Internet were pioneering computer networks that established the technical groundwork and social expectations for wide-area networking in the United States today. Created by the U.S. Defense Advanced Research Projects Agency (ARPA), the ARPANET was a testing ground for innovative concepts such as packet switching, distributed topology and routing, and the connection of heterogeneous computer systems. ARPA dealt with the complexities of this project using a management style that fostered collegial interaction and a technical strategy known as "layering" that allowed network components to be developed independently. The highly visible success of the ARPANET brought its techniques into the computer science mainstream and made it an influential model for subsequent research and commercial networks. ARPA followed the ARPANET with experimental packet radio and satellite networks; the need to connect these diverse systems led ARPA to begin its Internet Program, which developed techniques for interconnecting networks. These techniques were used to connect other research networks to the ARPANET, forming the basis for today's Internet, a worldwide "network of networks."
The ARPANET and Internet were socially constructed artifacts whose design was shaped by the interests and world views of their creators. Different networking techniques had different implications for the performance, economics, and social dynamics of the resulting system, so that technical choices
can be understood as trade-offs between competing values. Analysis of the ARPANET design decisions reveals how the network was shaped by social considerations such as a preference for decentralized organization and a concern for military "survivability." Network users were also instrumental in constructing the ARPANET's identity: their unexpected enthusiasm for electronic mail turned a system intended primarily for remote computing into a medium for communication between people. The social values embodied in the ARPANET and Internet are further illuminated by contrast with alternative networking systems representing different social aims and interests that were introduced in the 1970s by international standards organizations.
Table of Contents
Chapter 1: Introduction 1
Chapter 2: The Drive to Build Networks 11
Chapter 3: Building the ARPANET: Technical and Organizational Strategies 38
Chapter 4: From ARPANET to Internet 80
Chapter 5: The Internet in the International Arena 112
Chapter 6: Conclusion 144
List of Illustrations
Figure 2.1. Time-sharing 18
Figure 2.2. Terminal networks 22
Figure 2.3. Network topologies 25
Figure 2.4. Switching systems 28
Figure 3.1A. Main IPTO research centers at time of ARPANET creation 39
Figure 3.1B. The 15-node ARPANET, 1971 39
Figure 3.2. Network model showing communications subnet 44
Figure 3.3. Two-layer model of the ARPANET 46
Figure 3.4. ARPANET organization 49
Figure 3.5. Three-layer model of the ARPANET 59
Figure 4.1. Packets from several users sharing a random access channel 98
Figure 4.2. Map of PRNET, 1977 101
Figure 4.3. SATNET 103
Figure 5.1. Timeline of selected networking events 116
Figure 5.2. ARPA internetworking scheme 121
Figure 5.3. CCITT internetworking scheme 121
Figure 5.4. Internetworking with X.25 vs. X.75 133
Figure 5.5. OSI protocols 137
Chapter 1 Introduction
In October 1972, over a thousand people who had traveled to Washington, D.C., for the First International Conference on Computer Communications witnessed a remarkable technological feat. From a demonstration area containing dozens of computer terminals, conference attendees were able to access computers located hundreds or thousands of miles across the United States; there was even a temporary link to Paris. Participants could use interactive software programs including meteorological models, an air traffic simulator, conferencing systems, MIT's MACSYMA mathematics system, experimental data bases, a system for displaying Chinese characters, even a computerized chess player (Roberts and Kahn, 1972). The combined variety of terminals, computers, and programs, all operating successfully and responsively from across the country, made a powerful impression. One observer described visiting engineers as "just as excited as little kids, because all these neat things were going on," while another recalled, "There was more than one person exclaiming, 'Wow! What is this thing?'" (Cerf, 1990, 25; Lynch and Rose, 1993, 10).
The technology showcased at the ICCC conference was the brand-new ARPANET, the world's first wide-area computer-to-computer network. A creation of the U.S. Department of Defense, the ARPANET represented a turning point in computer communications. Most of the computing experts at the conference had never attempted to combine computers from different manufacturers in a single network or to maintain continuous long-distance data connections. The ARPANET did both, and accomplished this using complex and
little understood techniques whose feasibility had been doubted by many in the communications industry. The array of concrete benefits the ARPANET made available to researchers in their daily work raised public and professional awareness of what networks could do. The trade journal Electronics reported that "with the great interest in computer networks indicated by ... the crowds in the Arpanet demonstration room, networks clearly are the wave of the future" (Electronics, 1972, 36).
Computer networks surround us today in many guises. They may be local, regional, or wide-area, and make use of such varying transmission media as cables, telephone lines, microwave transmitters, radios, and satellites. Specialized networks exist for different purposes, including research networks, automation networks that control factory machines, and remote transaction networks linking central offices with bank machines, reservation systems, and point-of-sale terminals. We take it for granted that data can travel long distances instantaneously. But computer communication before networking was analogous to human communication before the telegraph: it was difficult to share information without moving a human being or a physical storage device, such as a reel of magnetic tape or a stack of punch cards, from one location to another. Incompatibilities between machines compounded the problems of distance. The ARPANET was the first to break these barriers and allow routine long-distance data transfers between different types of computers.
Throughout America's history the development of communications and information technologies has been a critical factor in social and economic change, with the twentieth century often characterized as the era of a communications or information "revolution." Computer networks, a product of both "revolutions," have played an important role in America as a highly visible information
resource, with the potential to become as essential to American life as the highway or telephone systems to which they are often compared. In the past decade there have been several important social studies of computers and of computer networks. Most of these focus on the ways computers are used and their ability to reinforce or challenge social relationships and power; examples include Turkle (1984), Kling (1991), Marvin and Winther (1983), Zuboff (1988), Kramarae (1988), Sproull and Kiesler (1991), and Taylor et al. (1993). These works demonstrate the myriad competing ways in which the technology has been imagined and used. Most, however, take the technology itself as a given, rarely exploring the technical and non-technical factors that shape the production of networks. But without knowing how computer networks came to be as they are, our ability to understand current technologies and guide their future development is limited. The cultural history of the ARPANET and Internet that follows is a first step toward filling this gap.
Since the history of networks is not well known, a brief overview may be useful. The ARPANET was the brainchild of the U.S. Defense Department's Advanced Research Projects Agency (ARPA), a research arm of the Department of Defense.1 Founded in 1958 in response to the Soviet Sputnik launch, ARPA's mission is to keep the U.S. ahead of its military rivals by pursuing research that promises a significant advance in defense-related fields. ARPA is a small agency with no laboratories of its own; ARPA staff initiate and manage projects, but the actual research and development is done by academic and industry contractors. Successful technologies are turned over to the armed services for operational use. The private sector is also encouraged to exploit ARPA's research results,
1 In 1972 ARPA was given the status of a separate Defense agency and its name was changed to DARPA (Defense Advanced Research Projects Agency). The name was changed back to ARPA in 1993, apparently to signal a renewed commitment to research that would benefit civilian as well as defense industries. For consistency I will use ARPA throughout this work except in direct quotations that use DARPA.
most of which are unclassified; this serves the military indirectly by making advanced technologies commercially available to it.
ARPA has several project offices that fund research in different areas, such as behavioral sciences, materials sciences, and ballistic missile defense; these offices are created or disbanded as defense priorities change. When ARPA established its Information Processing Techniques Office (IPTO) in 1962 it became a major funder of computer science in the United States. IPTO's record in conceiving and managing computing research projects is remarkable: the office has been the driving force behind several important areas of computing research in the U.S., including graphics, artificial intelligence, time-sharing operating systems, and networking (Norberg and O'Neill, 1992, 96). The agency is recognized even by critics for its good management and rapid development of new technologies (Pollack, 1989, 8). IPTO's expertise at managing research and development would be crucial to the success of the ARPANET.
In 1966 IPTO director Robert Taylor began planning a network to connect the computing centers of ARPA contractors. The proposed ARPANET would have two main goals: to save computing costs by allowing contractors across the country to share computer resources, and to advance the state of the art in the transfer of information between machines and over distances, known as data communications. ARPA also hoped the network would encourage collaboration between researchers in different locations. Taylor's successor, Lawrence Roberts, managed the project, beginning development in 1968. The first four ARPANET sites were installed by the end of 1969, and from 1970 to 1972 the ARPANET team expanded, tested, and modified the network hardware and software. The public demonstration at the International Conference on Computer
Communications marked the network's transition from an experiment to an operational system.
IPTO built on the success of the ARPANET with several new projects. First it adapted ARPANET technologies for use with radio and satellite transmission. Then in 1973 IPTO began its Internet program to develop technologies to connect or "internetwork" different computer networks. Program managers Vinton Cerf and Robert Kahn developed a set of networking protocols (rules guiding the interaction between computers) called TCP/IP, which allowed ARPA to connect its radio and satellite networks to the ARPANET. The set of connected networks that communicated using TCP/IP became known as the Internet. The TCP/IP protocols were widely adopted by commercial and research network builders, and during the 1980s the protocols became a de facto standard in the United States. When the National Science Foundation decided in 1985 to build the NSFNET to connect its supercomputers, it chose to use the ARPA internet protocols.
The ARPANET proved extremely popular with its users, and traffic increased steadily until by the late 1980s it had outgrown the network's capacity to provide fast and reliable service. ARPA decided the network had become obsolete, and began decommissioning the original ARPANET hardware in 1988. To provide continued support for researchers, NSF arranged to have the NSFNET take over as the backbone of the Internet, which had grown into a world-wide network of networks linking millions of computer users. Though the original ARPANET no longer exists, the Internet perpetuates both its technology and its role in bringing together computer users.
The ARPANET reshaped standards and expectations in the computing profession. By its very existence the ARPANET proved the validity of novel
techniques such as packet switching and distributed communications (described in Chapter 2), as well as the feasibility of linking heterogeneous groups of computers. Project manager Roberts had claimed that the ARPANET would advance by an order of magnitude the nation's experience and expertise in computer networking, and this expectation was met (Roberts, 1967, 1). Many of those who had worked on the ARPANET went on to provide commercial networking systems or consulting services; the universities involved became centers of networking research; and by making technical specifications and implementations freely available, ARPA significantly lowered the learning curve for those who followed. At the same time, the ARPANET's influence incorporated military priorities into civilian technologies, and established norms for computer communications that favored independent, technically sophisticated computer owners and users.
The ARPANET changed the scope and nature of computing for the users it served. Before, people had typically shared computer resources within a single campus; with the ARPANET and Internet they could access resources across the country or around the world. Incompatibilities between manufacturers' standards no longer determined the limits of connectivity. Collaboration became more feasible both within the U.S. and across national borders. Network services
such as electronic mail, news, and bulletin boards allowed people to create "virtual communities"-geographically separated communities of interest linked via the network-that spread from academic and business uses to the personal realm. By the late 1980s networking had become a household word and ordinary people were struggling with the implications for personal rights and responsibilities, government regulation, business opportunities, questions of funding, control, and access of public networks, and their own hopes and fears
for the kinds of human interactions made possible by computers. The evolution of the ARPANET in the period from 1966 to 1988, as it first pioneered a new technology and then ushered in the era of internetworking, established the technical groundwork and social expectations for wide-area networking in the United States today.
My theoretical approach draws on two currents that have been prominent in recent scholarship in the history of technology: systems theory and social construction. Systems, as described by Hughes (1983), join people, material things, and ideas in the service of a goal set by the "system builder." Systems follow a life cycle from invention through development into maturation and finally obsolescence. I use systems theory as a framework for identifying and organizing the disparate elements that have gone into building the ARPANET and its successors, and for describing the evolution of the system from invention to obsolescence. The theory of social construction, introduced by Latour (1979) and applied to technological systems by Bijker et al. (1987), posits that technical choices are based not on an unmediated understanding of natural facts, but rather on a "construction" of the technical situation that is shaped by the designers' training and goals as well as the influence of competing social actors who have a stake in the system. Social construction provides a vocabulary for discussing how computer networks are shaped by social conditions and discourses. Taking the idea of social construction beyond the initial creation of the system, I also advance the argument that in interactive systems such as computer networks, part of the design process takes place in the act of using them.
The most important sources for this work have been first-hand accounts by ARPANET participants, especially those gathered for the Charles Babbage Institute's DARPA oral history project, and the technical literature on computer networking. Key technical journals include the Proceedings of the Institute of Electrical and Electronics Engineers (IEEE), Communications of the Association for Computing Machinery (ACM), ACM Computer Communication Review, Computer Networking, Proceedings of American Federation of Information Processing Societies (AFIPS) Conferences, International Conference on Computer Communication (ICCC) Proceedings, as well as minutes from International Telecommunication Union (ITU) meetings. I have made extensive use of the Internet itself to retrieve official ARPANET documents, correspond electronically with informants, and exchange information with on-line technical and historical communities. Besides being an invaluable resource, my contact with the network has provided a constant reminder of its growing influence.
Throughout this dissertation I emphasize how social and technical factors are interwoven in computer networking, while individual chapters explore several specific ways in which the ARPANET is, in Lévi-Strauss's term, "good to think with." Chapter 2 surveys the inventions underlying the ARPANET and explores some of the social implications of different data communications practices. It describes the issues that drove computer users to experiment with networking, the different goals they pursued, the problems they encountered, and a variety of approaches they employed to solve them. With many networking techniques available and none universally recognized as "best," network builders often made choices based on their interpretation of how the available alternatives would serve particular social interests.
Chapter 3 focuses on the development of the ARPANET. The central problem faced by ARPA's network builders was the complexity of the system. Their response was twofold: nurturing a social environment that encouraged cooperative problem solving, and adopting a technical strategy of viewing the network as a set of distinct but interacting layers of hardware and software. The resulting ARPANET system was in many ways a physical analog of the social network within which it was created.
Chapter 4 charts the transition from the ARPANET to the Internet. During this period ARPANET technology was transferred in several directions: to the military, to commercial systems, and to new ARPA networking projects. As ARPA researchers experimented with different network media, including radio and communications satellites, ARPA saw a need to connect these diverse systems, and began its Internet Program. Just as the ARPANET had entailed a new way of thinking about computers, so the Internet required a new way of thinking about networks: in both cases, ARPA had to answer the question of how to connect heterogeneous systems, and how to manage the resulting "meta-system."
ARPA's approach to internetworking reflected the technical constraints and social organization of ARPA's own network systems. During the 1970s data networks embodying a different set of values were being built in other countries, especially Canada, the countries of western Europe, and Japan. Unlike the ARPANET, systems in these countries were usually controlled by and modeled after the national telecommunications monopolies. Chapter 5 describes how networking issues became internationalized, focusing on a series of debates over network protocol standards that brought the networking paradigms of ARPA and the telecommunications carriers into conflict. The contrast highlights
the cultural assumptions that underlay the U.S. system, as well as political and economic factors in the international standards arena.
Network design decisions have never been purely technical-or purely social.
System builders choose techniques on the basis of their perceptions of technical and economic constraints, as well as their own tacit or explicit social goals. Users further shape the system by choosing certain applications over others. Understanding the history behind the networks we use today can help us evaluate and participate in the choices that must be made in building networks for the future.
Chapter 2 The Drive to Build Networks
Like distant islands sundered by the sea, we had no sense of one community.
We lived and worked apart and rarely knew
that others searched with us for knowledge, too ....
But, could these new resources not be shared? Let links be built; machines and men be paired! Let distance be no barrier! They set
that goal: design and build the ARPANET!
Vint Cerf, "Requiem for the ARPANET"
The drive to build networks was part of a general movement to make computers more accessible. Networking needed no inventor; for computer owners of the 1960s, linking their "distant islands" was a common goal that was brought a step closer by each advance in computing or communications equipment and by each awkward attempt at connecting computers. But networking did not have a single fixed meaning, technique, or purpose. Many choices existed for building networks, based on different technical paradigms and representing different aims and interests.
The significance of network design decisions can be understood in terms of the social construction of technology. This theoretical approach, presented in Bijker et al. (1987) and deriving from Latour's work on the social construction of science (Latour, 1979), holds that, just as scientific theories do not necessarily prevail because they are the "best" description of reality, so technological artifacts or processes are not necessarily adopted because they are the best (or easiest or most obvious) solution to a problem. Pinch and Bijker (1987) propose that new technologies begin in a state of "interpretive flexibility" in which their
form and cultural function are still uncertain (40-42). Multiple variations of an artifact compete for acceptance by "relevant social groups"-aggregates of producers, users, or interested observers for whom the technology has a shared meaning (30-34). When a critical mass of these groups adopts a particular design as being consistent with their goals, they construct a stable form for the artifact.
The concept of social construction is useful in explaining the significance of design variations in computer networking. Early network designs were indeed characterized by interpretive flexibility. Computer owners such as corporations or universities who wanted to build networks had differing goals for their systems and competing views on appropriate methods for achieving a given end or avoiding a particular problem. Many of the technical options were not well known or understood even by computer experts; thus network builders evaluated these techniques based not only on their goals but also on their technical training and experience and their attitudes toward adopting unproven techniques. In addition, many design features that were desirable in themselves interacted adversely with other parts of the total network system, forcing designers to weigh trade-offs between different objectives (such as low cost and high capacity, or simplicity and robustness). Thus the design of any network was the result of a series of choices that reflected the resources and goals of the relevant social groups, which might include computer owners, computer manufacturers, or telecommunications providers.
The state of computing in mid-1960s America
Computing in the mid-1960s was a rapidly expanding field. The number of computers in use worldwide rose from a few hundred in 1955 to several thousand in 1959; it quadrupled between 1960 and 1965 and again between 1965
and 1970 (Phister, 1979, 42). General-purpose computers were becoming more common and better adapted to the needs of their users. In 1964 IBM introduced its extremely successful System/360 series, the first "modular" line of computers: each computer in the series could use the same software, consoles, tape drives, and printers, so that owners could switch models without worrying about compatibility. The integrated circuit, patented in 1961, ushered in the era of minicomputers, smaller, cheaper machines that did not have to be confined to special machine rooms. Introduced commercially in the late 1960s, minicomputers gave many smaller enterprises their first taste of the power of computers. Government and business were also developing special-purpose "supercomputers" with increased speed and power to perform massive calculations.
In software, high-level languages such as FORTRAN (FORmula TRANslator, for scientific applications) and COBOL (COmmon Business-Oriented Language) had been developed in the 1950s and were now in general use. Computer science was also gaining recognition as a distinct discipline. It began to be taught in universities, and several professional organizations published journals and organized conferences, the most prominent being the Association for Computing Machinery (ACM), the American Federation of Information Processing Societies (AFIPS), and the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE).
Despite these rapid advances, there were many obstacles to using computers easily and efficiently. The hardware and software used to interact with the computer (the user interface) were awkward to work with; people were rarely able to type commands directly to the computer, and "user-friendly" interfaces featuring graphical icons to represent data or input devices such as the mouse
were unheard of (see e.g. Licklider, 1960, 9-11). Often users who wanted to transfer data from their local computer to another at a different site found that their only option was to transfer the data to magnetic tapes or punch cards and carry or mail it to the destination. In addition, users were generally confined to exchanging data between computers of the same design. No standards existed for communications between computers of different manufacturers, so transferring data or software required human intervention and often extensive re-programming (Norberg and O'Neill, 1992, 25-27; Marill and Roberts, 1966, 425-426). In response to these problems, a number of computing researchers turned their attention to improving user interfaces and data communications.
One of the most influential proponents of improved human interfaces for computers was J. C. R. Licklider. Licklider had been on the faculty of Harvard and at MIT's Lincoln Laboratory before joining Bolt Beranek and Newman, Inc. as vice president in charge of psycho-acoustics, engineering psychology, and information systems. In 1960 Licklider wrote a paper entitled "Man-Computer Symbiosis" that became a manifesto for reorienting computer science and technology to serve the needs and aspirations of the human user, rather than the reverse. Licklider argued that computers could aid scientists and managers by taking over the routine aspects of intellectual work, such as looking up information or plotting data points, leaving the human user free to focus on important ideas:
The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.... Those years should be intellectually the most creative and exciting in the history of mankind (4-5).
He identified specific changes in the practice of computing that were needed to bring about "symbiosis," including computers designed for continuous
interaction with the user, more intuitive methods for retrieving data, higher-level programming languages, better input and output devices, and networks.
Licklider's work was widely read and cited, his belief in the importance of developing human-centered technologies striking a chord with many people working with computers. Among them was Robert Taylor, who had been a systems engineer in the aerospace industry and at NASA (Taylor, 1989, 1-2). Inspired by Licklider's vision of interactive computing, Taylor would join ARPA and initiate the ARPANET project. Taylor regarded the development of networks as part of a necessary evolution in computers away from raw calculation and toward interaction with people. Later, looking back at the state of computing in the early 1960s, Taylor recalled that a new paradigm had been needed:
The arithmetic engine view established priorities which focused attention on the internals of the computer itself, at the expense of the human users.... If we were to have computer systems which would correct this imbalance, we would have to design them with humans in mind: humans should be considered as part of the system (Taylor, 1975, 844).
For those who adopted interactivity and ease of use as design criteria, technical choices took on a new meaning. Instead of only looking for ways to maximize the efficiency of the machine, they started to ask how they could increase the productivity of the people using it.
The main obstacle to "symbiosis" between users and machines was that computers were expensive and in high demand. Economic pressure to use them as efficiently as possible had led to practices that maximized the potential of the computer at the expense of its human users. Chief among these was batch processing. In a batch processing system, a number of computer programs1 would be collected on magnetic tapes and loaded into the computer together to
1 A computer program is a series of instructions meant to produce a specific result, such as the answer to a mathematical calculation, when executed by the computer.
be executed in succession. Running programs in batches was efficient because it minimized the time the computer spent idly waiting for data to be loaded or unloaded.
The disadvantage of batch processing was that it did not allow users direct interaction with or an immediate response from the computer. In the typical programming cycle, the user would write out a program on paper, then the user or a keypunch operator would punch holes in a set of computer cards to represent the written instructions. The user brought the deck of punched cards to the computer center, where an operator fed it into a punched card reader and transferred it to tape. When the computer became available, the tape would be loaded and its batch of programs run. If the program had errors, the user had to laboriously rewrite it, repunch the cards, and submit them again, perhaps waiting hours for a chance to rerun the program and collect the results. Often users had to repeat this cycle numerous times before a program worked correctly. Batch processing was frustrating and inefficient for the programmer, but programs could not be edited or debugged in direct interaction with the machine because a large computer was much too valuable to devote to the exclusive use of a single person (Main, 1967; Norberg and O'Neill, 1992, 29-31; Moreau, 1984, 120-121).
To make efficient use of the machine but also allow users to program interactively, computer scientists began investigating a new type of operating system. (An operating system is a program such as UNIX or DOS that provides an interface between the user and the other software on the computer.) This new type of system was called time sharing. Instead of running a single program at a time from start to finish, a time-sharing operating system would serve a number of users, cycling between them in rapid succession. Because they accommodate
many people at once, time-sharing computers are referred to as host computers or hosts. People would interact with the computer through devices called terminals2 that provided a keyboard for input and a display (printer or screen) for output. Each person at a terminal would have the computer's attention for a tiny interval and would then wait for all the others to be served before gaining access again (Figure 2.1). Because of the computer's great processing speed, the wait between cycles would be too short to be noticeable. Each user would have the impression of continuous interaction with the computer, just as moviegoers have the impression of seeing continuous motion on the screen, rather than a rapid succession of still frames. Time sharing made it economically feasible to use computers interactively, because a user could log in for an extended session without monopolizing the machine's resources.
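The cycling scheme described here can be sketched, anachronistically, in modern code. The following round-robin scheduler is purely illustrative (the user names, job lengths, and one-operation time slice are invented for the example, not drawn from any historical system):

```python
from collections import deque

def time_share(jobs, slice_ops=1):
    """Round-robin scheduler: each job receives a short slice of the
    processor in turn, so every user appears to have continuous access."""
    queue = deque(jobs.items())          # (user, operations remaining)
    schedule = []                        # order in which users are served
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)            # this user gets one time slice
        remaining -= slice_ops
        if remaining > 0:
            queue.append((user, remaining))  # rejoin the end of the cycle
    return schedule

# Four users with jobs of different lengths, as in Figure 2.1:
print(time_share({"User 1": 2, "User 2": 2, "User 3": 1, "User 4": 1}))
# → ['User 1', 'User 2', 'User 3', 'User 4', 'User 1', 'User 2']
```

Note that the schedule reproduces the sequence of Figure 2.1: users 1 through 4 are served in successive time slices, and the cycle then returns to the users whose jobs remain unfinished.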
Several people independently presented early proposals for time-sharing operating systems. In 1959 Christopher Strachey of the National Research Development Corporation in London gave the first public paper on time sharing, while John McCarthy of MIT circulated similar ideas among his colleagues. MIT began developing its Compatible Time-Sharing System in 1960; RAND began a time-sharing project the same year, and Bolt Beranek and Newman created an experimental time-sharing system in 1962. In 1962 Dartmouth College started a time-sharing project and developed the easy-to-use BASIC language to provide a hospitable programming environment for students (Norberg and O'Neill, 1992, 45, 191-200).
Two years after writing "Man-Computer Symbiosis," J. C. R. Licklider became the first director of ARPA's Information Processing Techniques Office (IPTO),
2 The name "terminal" refers to the fact that the device forms the terminus or endpoint of a communication link with the computer. The general name for devices attached to a computer, such as terminals, printers, or magnetic disk or tape drives, is peripherals. Both names emphasize the centrality of the CPU (central processing unit), where the actual computing is done.
and the agency began funding time sharing on a large scale. IPTO funded time-sharing projects at the System Development Corporation in Santa Monica and Carnegie Mellon University in Pittsburgh, both of which had operational
systems by 1963, and at the University of California at Berkeley, which had produced a system by 1965 (Norberg and O'Neill, 1992, 48-50). IPTO channeled $3 million a year into MIT's Project MAC (the name stood for Machine-Aided Cognition), which produced a time-sharing system called MULTICS in 1969. ARPA also financed purchases of time-sharing equipment at many of its research sites.
Figure 2.1. Time sharing. (Time slices 1 through 4 are allotted to Users 1 through 4 in turn; slices 5 and 6 return to Users 1 and 2, and so on.)
Later in the decade time-sharing systems began to enter the market. In 1965 General Electric, which had provided equipment for Dartmouth, and IBM, which had provided equipment for Project MAC, each announced their intention to provide commercial time-sharing systems (Main, 1967). Researchers at AT&T's Bell Labs who had been involved in the MULTICS effort at MIT began developing a time-sharing system for their own use in 1969. The new system was designed to run on minicomputers, which Bell Labs was beginning to use extensively, and was named UNIX to indicate that it was more streamlined than the rather cumbersome MULTICS (Holbrook and Brown, 1982, 17). AT&T made the UNIX system cheaply available to universities, and the system's simple, flexible design and ability to run on different machines soon made it a de facto standard for research computing. By 1976 over 80 academic sites were using UNIX (Holbrook and Brown, 1982, 18; Norberg and O'Neill, 1992, 225).
Time sharing was not without its detractors. If a computer facility was overcrowded, users might have to wait hours to get access to a time-sharing terminal; and when the computer was overloaded, the response time for terminal users could become aggravatingly slow (Norberg and O'Neill, 1992, 217). Adapting a computer to switch back and forth between different programs was not easy, and early systems tended to make inefficient use of the computer's processor and memory (Moreau, 1984, 123-124). With its benefits open to interpretation, time sharing did not simply replace batch processing: both systems remained in use, with batch processing dominating the commercial computer services market until the 1970s (Norberg and O'Neill, 1992, 216).
Yet, while time sharing was not an unqualified success, the enthusiasm for time sharing in the research community and the technical experience gained paved the way for networking in several respects. Supporters of time sharing constructed a new identity for computing that focused on interactivity, which became the norm even for computers that were not shared (Norberg and O'Neill, 1992, 228). Once people could use computers interactively, they began to want terminals in locations convenient to them, rather than in a central facility; this spurred the development of communications hardware (Norberg and O'Neill, 1992, 219; Frisch and Frank, 1975, 109-110). Designers of time-sharing computers had been obliged to create mechanisms to allocate and account for resources among many users; having these administrative mechanisms in place made it easier to accommodate outside users without giving up local control of resources (Carr et al., 1970, 79). And many computer users became enthusiastic about networking precisely because they saw it as an extension of time sharing: while time sharing spread the resources of a single computer among many users, networking would allow users to share the resources of many computers. Time sharing and networking were alike in encouraging communication and sharing of resources. As the managers of the ARPANET project would point out, "Within a local community, time-sharing systems already permit the sharing of software resources. An effective network would eliminate the size and distance limitations on such communities" (Roberts and Wessler, 1970, 543; also Carr et al., 1970, 589). ARPA itself became involved in networking from an early date, beginning preliminary experiments at UCLA and Berkeley in the mid-1960s (Norberg and O'Neill, 1992, 55-56).
Network Design Issues
Computer networks today can make use of a variety of layouts and transmission schemes, and well-tested and accepted techniques are available. In the early 1960s, however, computer owners had to face several fundamental design issues for the first time. One general category of design problems
concerned communications, or how to move data between computers. Discussions of communications issues tended to be expressed in terms of economics. A second class of issues dealt with how to create and make use of the data that would be transmitted, and focused on the problem of incompatibility between machines.
Communications systems for computers evolved through several stages. At first, terminals were attached with cables and located in the immediate area of the computer, but large organizations soon began using special cables or leased lines to expand the range of their terminals to cover an entire multi-building site. Terminals could also connect to computers using modems. A modem is a device that converts digital data to analog form for transmission over analog phone lines and back to digital form at the far end, a process known as
modulation/demodulation (hence the name). Modems were originally developed for the Defense Department's SAGE project in the early 1950s; since 1958 the FCC had allowed computer users to transmit data over public telephone lines using modems supplied by AT&T (Moreau, 1984, 128-130; Mathison and Walker, 1970). The ability to bring terminals to the people who needed to use them, rather than vice versa, proved very popular. AT&T's sales of modems, starting at about 1,000 in 1958, grew by 50% each year. By 1968 AT&T was selling 85,000 modems annually; that same year the FCC opened the market to third-party suppliers, and within two years 100 firms had entered the modem market (von Alven, 1974).
Modems made it possible to create "networks" of terminals connected to a computer center. But heavy use of remote terminals drove up communications costs until they might amount to half the cost of the computer itself (Frisch and Frank, 1975, 110). Network builders could try to cut costs by using concentrators,
devices that collect the input from many terminals and send it over a single link to the computer, and front-ends, small computers devoted to handling the data traffic between large computers and their terminals (Frisch and Frank, 1975, 110). Figure 2.2 illustrates these alternatives for terminal networks. Computer users could also take advantage of telecommunications deregulation: with the FCC's 1959 decision to allow private microwave telephone systems, new providers such as MCI began offering data transmission for less than AT&T. The increased competition prompted AT&T to respond in 1961 with Telpak, a bulk-rate tariff for leased lines.
Figure 2.2. Terminal networks. A. Basic terminal network. B. Use of front end and terminal concentrator.
Even with these improvements, relying on the telephone system for transmission remained problematic for computer users. Long-distance dial-up telephone connections were expensive. An ARPA-funded study found that in 1968 the telephone charges for an hour-long connection to a time-sharing machine in Boston would cost the local Boston user only 60¢, but a user in New York would pay $12, in Washington $18, and in Los Angeles $30. Since an hour's
use of the computer itself would cost only $10-$20, these communications costs were disproportionate (Gold and Selwyn, 1968, 1474). High costs put pressure on users to work quickly, sacrificing the user-friendly quality that was supposed to be the hallmark of a time-sharing system (Gold and Selwyn, 1968, 1475). Thus the adoption of time sharing altered the way network builders evaluated the economics of computer communications.
One reason telephone connections between computers were expensive was that the phone network had been optimized for the transmission of voices rather than data. Human conversations feature a continuous flow of information between the two parties, so that the line is in use at a constant rate. Computer messages, however, tend to come in large bursts with long pauses in between, so computer users paid for telephone connections that were idle much of the time. Another problem was that setting up or disconnecting a dial-up connection could take 20 seconds or more, which slowed down interactive use, especially since the data transmission itself might only require a fraction of a second (Roberts, 1967, 3). One option for computer users was leasing a line between a pair of computer sites. This avoided the long wait for a connection and decreased the cost per minute, and since the connection used a fixed route the phone company could arrange to "condition" the lines so that they had a lower error rate (James and Muench, 1972, 1346). However, overall cost savings could be realized only if the connection was heavily used, making leased lines efficient only for pairs of sites that regularly exchanged large amounts of data.
Experiments with topology
The drawbacks of using telephone service "as is" prompted some computer scientists to reconstruct the role of the telephone network: rather than treating it as an independent system, they conceived it as a subordinate part of a larger
system that would provide less expensive and more reliable data communications. One way to adapt the phone system for data communications was to experiment with network topology, or the physical layout of communications lines between computers. In discussions of topology, the computers connected by lines are referred to as nodes, a term borrowed from geometry. The most obvious topology was to connect each pair of nodes directly, using dial-up or leased lines. A fully connected network had the advantage of simplicity, but, as noted above, was expensive. An alternative was to connect only selected pairs of nodes; nodes not connected directly to each other would have to communicate through an intermediate node, which would switch the traffic from one line to another. Switched systems promised to be more efficient, since the same communications link could serve as part of several different routes.
Figure 2.3A illustrates a simple type of switched system: a star network, which has a central node through which all connections are routed. To connect each of the six nodes in this example with every other would require fifteen telephone links; the star topology requires only five links. Star networks are easy to design, and routing messages is simple, since there is only one possible path between any pair of computers. Stars had the additional appeal of mimicking the centralized structure typical of many organizations. (A 1971 computer textbook observed, "The most interesting aspect of this network is that it has a general hierarchical structure and is like other hierarchical organizations" (Bell and Newell, 1971, 5).)
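The economy of the star layout follows from simple counting: a fully connected network of n nodes needs a link for every pair, n(n-1)/2 in all, while a star needs only one link per non-central node. A short illustrative calculation (the node counts are arbitrary examples):

```python
def fully_connected_links(n):
    """Links needed when every pair of n nodes has its own line."""
    return n * (n - 1) // 2

def star_links(n):
    """Links needed when each of n nodes connects only to a central hub."""
    return n - 1

# For the six-node example in the text: 15 links versus 5.
print(fully_connected_links(6), star_links(6))
# The gap widens rapidly as networks grow:
print(fully_connected_links(50), star_links(50))
```

The quadratic growth of the fully connected case explains why even modest networks could not afford direct lines between every pair of sites.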
There are many early examples of star networks. In 1956 a small network had been built to connect terminals in three New Jersey banks with a central computer. In the early 1960s American Airlines and IBM created the SABRE on-line reservation system, which connected 2,000 terminals across the United States to a central computer. The Lawrence Radiation Laboratory in California started a network in 1964 which connected five large computers and a number of terminals to a single switching node; this allowed the system's 1,000 users to access any of the computers from any terminal (Bell and Newell, 1971, 507).
Figure 2.3. Network topologies. A. Star network. B. Distributed network.
Dartmouth College had developed a network to connect student terminals to its central time-sharing computer in 1964; with the help of an NSF grant it expanded the network in 1967 to connect schools in Massachusetts and New Jersey (Frisch and Frank, 1975, 112). In 1965 General Electric's Information Systems division, which had provided equipment for Dartmouth, used a version of the Dartmouth system to build a network to support its own commercial
time-sharing service. A terminal network for stock quotations, the National Association of Securities Dealers Automated Quotation System (NASDAQ), was built in 1970 and started operation in 1971; by 1975 it had about 1,700 terminals attached (Frisch and Frank, 1975, 110). Some of the larger networks, such as NASDAQ, had multiple "stars" connected together, but the basic topology of a central point with links radiating outward was common to all.
Despite its popularity, the star topology had several drawbacks. The network is vulnerable to failure: a single line failure will disconnect one of the hosts, and a problem in the central switch can disrupt the entire network. Star networks also do not scale up well: the central switch can accommodate only a finite number of physical connections to hosts; and, since all traffic passes through the center, it tends to become congested.
An alternative to the star was described in a 1964 report by Paul Baran of the RAND Corporation called On Distributed Communications, one of the founding texts on wide-area networking (Baran, 1964). Written during the heart of the Cold War (Baran's report was published the same year as the cinematic black comedy Dr. Strangelove), the report was RAND's response to a request by the U.S. Air Force to come up with a communications system that could survive a Soviet nuclear attack. Given this mandate, Baran was less concerned with simplicity or cost-cutting than with high performance and "survivability," the ability to sustain an attack and still function.
Instead of the conventional star, Baran proposed creating a web of connections with at least two links attached to each node; he called this a distributed network (Figure 2.3B). Distributing the links throughout the network would even out the traffic load. The redundant links also ensured that there were multiple routes between pairs of hosts, so that if one were disabled an alternate
route could be used. For example, in the star network in Figure 2.3A there is only one path from node A to node B: ADB. A single failure in node D or in the line to A or B would destroy this path. In the distributed version of the network in Figure 2.3B, there are five distinct paths between A and B: AB, ADFB, ACEB, ACDFB, and ADCEB. At least three failures must occur in nodes or lines before all these paths are destroyed. Baran also specified that control of networking functions would be spread throughout the system. While a network with a central management station could be disabled with a single strike, a distributed network could, in theory, endure one or more attacks and still function.
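The redundancy argument can be checked mechanically by counting loop-free paths. In the sketch below, the star's adjacency follows Figure 2.3A (every node wired only to the hub D), while the distributed network's links are inferred from the five paths the text lists for Figure 2.3B; the adjacency tables are therefore a reconstruction, not a reproduction of Baran's diagrams:

```python
def simple_paths(graph, src, dst, visited=()):
    """Count the distinct loop-free paths from src to dst."""
    if src == dst:
        return 1
    visited = visited + (src,)
    return sum(simple_paths(graph, nxt, dst, visited)
               for nxt in graph[src] if nxt not in visited)

# Star of Figure 2.3A: each node connects only to the central hub D.
star = {"A": "D", "B": "D", "C": "D", "E": "D", "F": "D", "D": "ABCEF"}
# Distributed network of Figure 2.3B, with links inferred from the
# paths named in the text (AB, ADFB, ACEB, ACDFB, ADCEB).
dist = {"A": "BCD", "B": "AEF", "C": "ADE", "D": "ACF", "E": "BC", "F": "BD"}

print(simple_paths(star, "A", "B"))   # → 1 (only ADB)
print(simple_paths(dist, "A", "B"))   # → 5, as enumerated in the text
```

The count confirms the contrast drawn above: one failure severs the star's single path, while the distributed layout offers five independent alternatives between the same pair of nodes.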
The Air Force was enthusiastic about Baran's plan and proposed building a network based on it. This project did not come to fruition, but Baran's ideas quickly entered networking discourse and practice.3 Baran discussed his ideas with many computing and communications experts and his report was widely read by others. Network designers would thus have been aware of distributed networking as an option. Choosing it would require a willingness to risk a complex and untried technique for the possibility of greater efficiency and survivability; it would also imply an organizational structure that could accommodate a decentralized system.
New approaches to switching
Both star and distributed topologies required a system for establishing a path from the source through the intermediate node or nodes to the destination. One option, used in the telephone system, was circuit switching. The telephone
3 The Air Force network fell victim to institutional politics. The Department of Defense was trying to centralize administrative services under its own agencies instead of having each of the armed services provide its own. The Defense Department insisted that its Defense Communications Agency, rather than the Air Force, be in charge of building the proposed network. The Air Force did not agree with this arrangement and the network was never built (Baran, 1990).
network established a connection between two speakers by determining a path between them and reserving some of the bandwidth (communications capacity) of each link along the path for the exclusive use of that connection (Figure 2.4A). The path, called a circuit, would be set up before the connection was made, and could not be changed without breaking the connection and setting up a new circuit. Circuit-switched computer networks created their own communications paths using leased lines and switches. The network would reserve a circuit for the exclusive use of a single pair of nodes for the duration of their connection. Thus circuit-switched computer networks were most efficient for transmitting an even flow of data.
Figure 2.4. Switching systems. A. Circuit switching: data is sent along a single fixed path or circuit (indicated by heavy lines). B. Packet switching: the message is sent as a series of packets; packets 1-3 and 4-6 have been sent via different routes, and packets may arrive out of order.
An early example of a circuit-switched data network is TYMNET, built in 1969 by Tymshare, Inc., a commercial time-sharing service. TYMNET made it possible for Tymshare to concentrate its 26 host computers in a single computing center and still make them accessible to customers in several major U.S. cities (Rinde, 1976). TYMNET was able to even out the flow of data on its network by pooling traffic from many customers, each of whom would generate only a small amount of data at a time. But TYMNET handled only terminal-computer traffic; circuit switching would be less efficient for a network that connected pairs of computers, which would require transmitting large, intermittent data bursts. Circuit switching could also be considered vulnerable to failure: if any of the lines or switches along the path failed, the connection would be lost and would have to be reinitiated; it could not be rerouted in mid-stream. If survivability were an important consideration, circuit switching might present a problem.
Once again, Baran's On Distributed Communications suggested an alternative, a technique called packet switching that promised more efficient and flexible use of computer links. Packet switching was a variation on message switching, in which, instead of reserving an entire circuit for a data transmission, the user's data (the message) is forwarded from point to point, with each switch determining the next link of the journey based on the message's destination address. The postal system is an example of message switching. The packet-switching system described by Baran would refine this technique by dividing messages into discrete segments or "packets." Transmitting small packets instead of large messages would make it easier to smooth out the flow of network traffic; packets from different messages could be interleaved on the same connection, increasing the efficiency of line usage and decreasing the transmission cost per
unit of data. Packet switching could also be more robust (resistant to failure) than message switching. If an error occurred in one packet, only that packet would need to be retransmitted, not the entire message. If a node or line failed in the middle of a connection, subsequent packets could take a different route and the transmission would not be interrupted (Figure 2.4B).
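The essential operations of packetizing and reassembly can be sketched in a few lines of modern code. This is an illustration of the principle only; the packet size and sample message are invented, and real systems would carry addresses, routing data, and error checks in each packet as well:

```python
import random

def packetize(message, size):
    """Split a message into numbered packets of at most `size` characters."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Restore packet order by sequence number and rejoin the message."""
    return "".join(data for _, data in sorted(packets))

message = "ALL MESSAGES ARE DIVIDED INTO PACKETS"
packets = packetize(message, size=8)
random.shuffle(packets)       # packets may arrive out of order (Figure 2.4B)
print(reassemble(packets))    # the original message, restored
```

The sequence numbers are what allow packets sent over different routes to arrive in any order and still be reassembled correctly, and they are also what the critics quoted below feared would demand excessive memory at the destination.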
Packet switching was greeted with much skepticism. Some of this was due to the different technical training that informed engineers' responses to it. Telephone engineers, well versed in the limits of existing analog technologies but often unfamiliar with digital techniques, assured Baran that packet switching could never work. A few years later, when ARPA's Lawrence Roberts spoke publicly about his plans to use packet switching in the ARPANET, he
encountered similar reactions:
Communications professionals reacted with considerable anger and hostility, usually saying I did not know what I was talking about since I did not know all their jargon vocabulary.... I learned a major lesson from that experience: People hate to change the basic postulates upon which considerable knowledge has been built (Roberts, 1988, 150).
As experts in communications systems, telephone engineers were familiar with the difficulty of routing messages individually through a network, and it was clear that breaking messages into packets would only add to the complexity of the system. From their perspective, the activities required of packet-switching nodes seemed too complex to be done quickly, reliably, and automatically (Baran, 1990, 19-21; Roberts, 1978, 1307). Other critics predicted that having to reorder and reassemble packets at their destination would require excessive amounts of computer memory, or that having multiple, constantly changing routes could result in packets looping endlessly through the network (Rinde, 1976, 271; Roberts, 1978, 1308). TYMNET personnel compared packet switching unfavorably with their own circuit-switched system, arguing that the need to
transmit packet addresses and routing information would create extra traffic, undoing any efficiency gained. They also pointed out that reliability could be increased within a circuit-switched system by leasing multiple lines between nodes (Rinde, 1976). Packet switching was in no sense a straightforward replacement of circuit switching; the two alternatives coexisted and their relative merits remained a subject of dispute.
While communications engineers expressed doubts, computer scientists and engineers were more willing to take a risk with packet switching, trusting in their own abilities to deal with its complexity. They were also more attuned to recent advances in computer hardware, and some believed that with the advent of smaller, comparatively low-cost minicomputers it had become feasible to build a fast, affordable packet-switching node. This was an important point, since computer-intensive techniques only made economic sense if the cost of computing did not exceed the savings on communications (Roberts, 1978). Network builders' evaluations of packet switching thus depended in part on their interpretation of contemporary economic trends.
One early application of packet switching was a terminal network designed by the Société Internationale de Télécommunications Aéronautiques (SITA). SITA is a cooperative of 175 airlines that was founded in 1949 to share teletype message facilities (Hirsch, 1974). In 1965 the airlines began planning a more advanced data communications system, and chose packet switching for its efficiency, ability to use alternate routes, and decentralized control. In the SITA network, nine switching centers, in Amsterdam, Brussels, Frankfurt, Hong Kong, London, Madrid, New York, Paris, and Rome, were connected by leased telephone lines. Terminals used by airline agents concentrated their traffic and transmitted it to the nearest switching center, where the messages were
forwarded from switch to switch until they reached their destination. SITA was considered a great success by its users, who pointed to lower communications costs and higher traffic volumes; by 1973 the volume of SITA traffic exceeded all international telegraph traffic (Hirsch, 1974, 61).
A more ambitious packet-switching project was initiated by Donald Davies at England's National Physical Laboratory (NPL). Davies had conceived the idea of packet switching independently, based on his work on time sharing at NPL. In 1965 he had come to MIT to give a seminar on time sharing. He was well aware of the problems with using conventional telephone connections to transmit computer data, and discussions with colleagues at MIT convinced Davies that if time sharing were to fulfill its potential, better data communications techniques would be needed (Davies, 1986, 6-7; Roberts, 1988, 144). Shortly after this experience Davies began his work on packet switching; later he also read and drew on Baran's work.
Davies's team at NPL planned to build a network to connect research sites across England. Their objectives included speed, to support interactive computing; generality, to allow a variety of uses from interactive computing to computer-controlled machine operation; reliability; and affordability (Davies, 1967). In the proposed system, a "high-level" network of packet-switching nodes would be connected in a distributed layout by long-distance digital telephone links.4 Computers and terminals at each site would be attached to a single "interface computer," which would form the center of a local star network; the interface computer would also be attached to a node of the high-level network and would pass data between the high-level network and the local network. In
4 The digital telephone lines Davies proposed using were not yet available either in the U.K. or in the U.S. The technology existed, however, and it was widely anticipated in the data communications field that the carriers would offer digital services as soon as they seemed economically viable (e.g., Davies, 1967; Stuehrk, 1974). In the U.S., digital transmission services became commercially available in the mid-1970s (von Alven, 1974).
1970 NPL began operating an experimental system called the Mark I, which consisted of the local portion of the proposed network. NPL was unable to obtain the funds to build the full system including the high-level network. However, Davies's work was seen as a feasible implementation of packet switching, and was influential on later projects including the ARPANET.
All of the issues described thus far involve the communications portion of the network-the layout of lines and choice of switching system. Creating and using the messages themselves raised another whole class of issues. The difficulty of processing messages at the endpoints of the network depended on the type of machines that would be connected and the purposes for which they would be used.
The simplest networks, and the first to appear, connected terminals to computers. Such networks could help researchers access time-sharing machines at a university, or allow business transactions to be carried on in a decentralized way. Terminal networks, though they greatly increased the range and efficiency of user access, were an extension of ordinary terminal-to-computer connections. Designers of terminal networks did not have to interface different computer systems.
Connecting computers with each other in such a way that they could exchange data or share processing tasks, such as responding to user commands, running programs, or displaying or printing output, was a far more complex problem. Since different types of computers tended to be incompatible, the earliest computer-to-computer networks used homogeneous machines. These networks were generally used for load sharing, in which programming jobs submitted to the network were sent to the least busy machine, thereby balancing
the load throughout the system and increasing the efficiency of machine usage. A typical load-sharing project was the 1966 Triangle Universities Computation Center Network, a cooperative project of Duke University, North Carolina State University, and the University of North Carolina. This network connected IBM 360 computers at the three computing centers using leased lines, allowing users to transfer data between computers at the three sites (Farber, 1972, 39). ARPA funded a similar network connecting IBM 360 computers at Carnegie-Mellon and Princeton (Carr et al., 1970, 77; Norberg and O'Neill, 1992, 57). In addition to load sharing, corporations saw homogeneous computer networks as a way to improve efficiency by sharing programs and data; pooling the efforts of skilled personnel to develop and support software jointly; and promoting standard practices for programming and organizing data (Peck, 1971).
Load sharing among homogeneous computer systems increased the sheer computing power available to users. But if a network connected heterogeneous systems, it could offer users access to a wider range of hardware and software, a benefit known as resource sharing. A resource-sharing network could benefit users who needed access to specialized machines, such as supercomputers, that could not be provided at every site. In addition, it would make it easier for people to use software that was incompatible with their local computer. Despite the introduction of high-level languages that were theoretically machine-independent, most software was designed to run on a particular type of hardware. Users wanting access to programs or data at other sites often had to reprogram the software or reformat the data because the software formats of the other computer were incompatible with those of their local machine. As ARPA Director Eberhardt Rechtin told Congress in 1969, "When one user wants to take advantage of another's developments, he presently has little recourse
except to buy an appropriate machine or to convert all of the original software to his own machines" (U.S. Congress, 1969b, 809). Incompatibility wasted time and programming resources, and remained an obstacle to collaborative work.
Not all of the relevant interest groups agreed on the desirability of heterogeneous networking. Large computer manufacturers saw incompatibility between computer systems as a way to discourage their customers from switching to other manufacturers' products. If it were easy to connect heterogeneous computers, the dominant manufacturers would lose this advantage. IBM in particular used its dominance of the market to set its own standards, and during the 1960s opposed the adoption of nonproprietary standards such as the COBOL programming language and the ASCII character code (Brock, 1975, 88). Manufacturers were beginning to provide compatibility among their own families of computers (such as the IBM System/360 series), but not with outside models. While using identical or compatible hardware from a single manufacturer was one option for achieving compatibility, this obviously reduced user choice and was difficult to enforce even within a single institution, let alone across research or business environments. For an organization with existing investments in different types of computers, a heterogeneous network could be an attractive alternative.
Because of the incompatibilities between different machines, heterogeneous networks were the most challenging to design and the last to appear. One of the first attempts at connecting different types of computers was made in 1966 by Lawrence Roberts of MIT's Lincoln Labs, working with Thomas Marill of the
5 ARPA was not the only military organization that faced incompatibility problems. For instance, a 1969 analysis of the Defense Department's World Wide Military Command and Control System found that its 109 data processing centers used 30 different computer languages, and that three quarters of its programs were in nontransferable machine-specific codes. Not surprisingly, most data processing in this system was performed locally, with little interaction between computers (Peck, 1971, 562).
Computer Corporation of America. Roberts had received his Ph.D. in Electrical Engineering from MIT in 1959. He first became interested in the problem of networking computers during discussions with J. C. R. Licklider and others at a 1964 computer science conference, and when Donald Davies visited MIT in 1965, Roberts, Davies, and Licklider exchanged ideas about the need for improvement in data communications (Roberts, 1988, 143-144). The following year Roberts began his own experiments in connecting computers.
Using a line leased from Western Union and their own software, Roberts and Marill connected a TX-2 computer at Lincoln Labs in Cambridge and a Q-32 computer at System Development Corporation in Santa Monica. In the process of describing their experiment, Roberts and Marill articulated several important concepts. In their view, the "elementary approach" to attaching two computers was for each computer to treat the other as a terminal. Such a connection required little modification of the computers, but was slow (since terminals operate at much lower data rates than computers) and did not provide a general-purpose way to access a remote system (since each user's program, and not the shared operating system, was responsible for managing the connection). But if one were willing to modify the computer operating systems, one could add a higher-speed computer-to-computer interface instead of relying on the ordinary terminal-to-computer interface, and could implement general-purpose networking commands that could be used by all users. Roberts and Marill called the set of rules for handling a network connection the "message protocol" (Marill and Roberts, 1966, 428). The message protocol would specify formats (how long a message could be; which special codes would be used to indicate the beginning or end of a message or to send commands to the other computer) as well as actions to be taken upon receiving different types of messages. The
willingness to modify the operating system and the use of a standard protocol to mediate between different systems would both prove essential to creating a heterogeneous network.
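The idea of a message protocol as a fixed format plus agreed-upon codes can be sketched in a few lines. This is an illustrative toy, not Roberts and Marill's 1966 protocol; the start-of-message code, type codes, and field layout are all invented for the example.

```python
# Toy "message protocol": a fixed header (start code, message type, payload
# length) lets either computer parse what arrives. All codes are hypothetical.
import struct

SOM = 0x01                   # invented "start of message" code
DATA, COMMAND = 0x10, 0x20   # invented message-type codes

def frame(msg_type: int, payload: bytes) -> bytes:
    """Prepend the agreed header: start code, type, and payload length."""
    return struct.pack(">BBH", SOM, msg_type, len(payload)) + payload

def parse(frame_bytes: bytes):
    """Recover (type, payload); reject anything not starting with SOM."""
    som, msg_type, length = struct.unpack(">BBH", frame_bytes[:4])
    if som != SOM:
        raise ValueError("not a message")
    return msg_type, frame_bytes[4:4 + length]

msg = frame(DATA, b"hello")
assert parse(msg) == (DATA, b"hello")
```

The point of the exercise is that once both machines agree on the header layout and the meaning of each code, neither needs to know anything else about the other's internals, which is exactly the role the message protocol played between heterogeneous systems.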
Even after projects like the ARPANET had begun to show the feasibility of heterogeneous networking, not all groups agreed that the advantages were worth the effort. In 1971 the MITRE Corporation, a systems consulting firm that had grown out of MIT's Lincoln Labs, advised corporations with multiple data processing sites to stick with homogeneous networks. Acknowledging the success of some heterogeneous networks, the author pointed out that "these networks are research-oriented, not profit-oriented" and asserted that "homogeneous networks are best able to satisfy corporate networking requirements because a minimum expenditure of resources and time is required to implement the network" (Peck, 1971, 564). Making the effort to connect heterogeneous systems made more sense if the system builder valued diversity among computers as an end in itself. Thus, in choosing whether to build a heterogeneous system, ARPA had to weigh the difficulty of implementing such a network against the agency's investment in incompatible computer systems and its desire to provide users with a variety of computing resources.
Building the ARPANET: Technical and Organizational Strategies
By the mid-1960s ARPA's Information Processing Techniques Office had supported several efforts to improve computer communications. IPTO's involvement in networking intensified under Robert Taylor, who became IPTO director in 1965. Taylor brought with him the conviction, inspired by J. C. R. Licklider, that computers should be made more accessible to people (Norberg and O'Neill, 1992, 57). In 1966 Taylor began discussing with ARPA director Charles Herzfeld plans to link ARPA contract sites with an experimental computer network, dubbed the ARPA Network or ARPANET; in 1967 Herzfeld allocated $500,000 for preliminary work on the idea (Norberg and O'Neill, 1992).
Taylor's goal for the network was resource sharing. IPTO was funding computing research centers around the country to work on projects such as time sharing, artificial intelligence, and graphics, and paid for expensive computer hardware at each site (Norberg and O'Neill, 1992, 105). IPTO research sites featured a variety of computers made by IBM, DEC, GE, SDS, and Univac, as well as one-of-a-kind machines like the ILLIAC supercomputer (Dickson, 1968, 132). Even this was not enough, however, because a single IPTO contractor might need access to several types of computers. Computer hardware and operating systems tended to be optimized for particular uses, such as interactive time sharing or high-powered "number crunching"; computers also had specialized input/output devices such as graphics terminals. Contractors who wanted to combine different modes of computing had to either travel to another site or buy another machine, with the result that IPTO was besieged by requests from its contractors for ever more computers.
Figure 3.1A. Main IPTO research centers at time of ARPANET creation
Figure 3.1B. The 15-node ARPANET, 1971
Taylor believed that if ARPA's geographically distant computers could be connected through a network, computer resources could be pooled among contractors rather than wastefully duplicated. Figure 3.1A shows the geography of IPTO's main computer science research activities; Figure 3.1B depicts the network that would eventually connect them. The topology of the ARPANET would reproduce in material form the social organization of the IPTO research community, both in the location of sites and (as will become apparent) in the distribution of control throughout the system.
In 1966 Taylor recruited Lawrence Roberts, a researcher at MIT's Lincoln Labs, to oversee the ARPANET development as Assistant Director of IPTO. Roberts had just published the results of his and Marill's networking experiments, and Taylor considered him best qualified to manage the ARPANET project. Roberts was initially reluctant to leave his research position at Lincoln Labs, but once at IPTO he discovered that, rather than putting an end to his involvement in experimental work, the position would allow him to carry it out on a much larger scale. Roberts became Director of IPTO when Taylor left the agency in March 1969.
As manager of the ARPANET project, Roberts had to make basic decisions about the network's topology and switching system. Initially Roberts planned to have pairs of computers connect to each other directly using ordinary telephone calls, the same method he had employed in his earlier experiments at Lincoln Labs. But the high cost of long-distance telephone connections seemed prohibitive, and he worried that ordinary phone service would be unacceptably prone to transmission errors and line failures. Roberts was aware of packet switching as an option for data networks, but was not sure how to implement it in a large network. With these issues still unresolved, Roberts attended a
computing symposium in Gatlinburg, Tennessee, in October 1967 to present ARPA's tentative networking plans.
At Gatlinburg Roberts heard for the first time about Donald Davies's packet switching experiments at NPL and talked with Davies's colleague Roger Scantlebury. Scantlebury advocated packet switching as a solution to ARPA's concerns about line efficiency, and referred Roberts to Baran's work. Soon after his return from Tennessee Roberts read Baran's On Distributed Communications, which he would later describe as a kind of revelation: "suddenly I learned how to route packets" (Norberg and O'Neill, 1992, 242). His encounters with the work of Davies and Baran convinced Roberts that packet switching and a distributed topology would be feasible for the ARPANET, and Taylor strongly urged him to adopt these new techniques. Instead of connecting each pair of sites directly, ARPA would lease lines between selected pairs and forward packets from one node to another as necessary.
Roberts' design decisions were based on the twin goals of providing a high-performance data communications system for the military and pursuing innovative research that would advance the state of the art in computing. A distributed, packet-switched network would reduce transmission costs while increasing reliability, which might save ARPA money and would further the military aim of developing survivable communications systems. Since the network would connect a diverse collection of computers, Roberts and his team would also address the problem of communicating between heterogeneous machines. As Roberts pointed out, "Almost every conceivable item of computer hardware and software will be in the network.... This is the greatest challenge of the system, as well as its greatest ultimate value" (Dickson, 1968, 131). Creating a heterogeneous, packet-switched, wide-area computer-to-computer
network would be a significant technical achievement. The challenge would be keeping these same technical features from leading the project into chaos.
Designing the ARPANET: challenges and strategies
The technical and managerial difficulties of the ARPANET project became apparent when Taylor and Roberts presented the network concept at IPTO's annual meeting of Principal Investigators (the scientists and engineers who head research projects) in April 1967. Most PIs reacted with indifference or even hostility to the idea of connecting their computer centers to the network. Some PIs suspected, correctly, that ARPA saw the network as an alternative to buying them more computers (Kleinrock, 1990, 30; Taylor, 1989, 42). Many did not want to lose control of "their" resources to people at other sites, and saw the network as an intrusion. Others agreed on the general advantages of developing computer networks, but had practical objections to implementing the ambitious system proposed by Roberts. These PIs were unwilling to undertake the massive effort that seemed to be required, or were convinced that the project would fail altogether (McKenzie, 1990, 8).
The negative reactions at the PI meeting made it clear that the complexity of the network design would be an issue. It was also evident that the PIs were more concerned with continuing their own local projects than with collaborating over a network; there was not at this time a sense of common purpose among IPTO researchers. In order to succeed, ARPA had to counter the technical complexity of the project as well as gain the cooperation of prospective network members.
While ARPA employed many problem-solving strategies in the process of building the ARPANET, two aspects of ARPA's approach epitomize the agency's managerial style. One was a technical and managerial strategy that came to be known as layering, which involved dividing complex networking tasks into modular building blocks. This approach simplified the design and operation of network components, and eased some of the tension between centralized and local control of network resources. The second characteristic of ARPA's system-building approach was a reliance on informal and decentralized management mechanisms. ARPA skillfully built on existing institutional practices of its contractors that encouraged independent creative work, while introducing its own management mechanisms to reinforce collaboration. Layering and a decentralized, collegial management style came to be seen as essential characteristics of the ARPANET, and were held up as models for system development (see e.g. Padlipsky, 1983; Crocker, 1993).
The PIs' main concern was that the network would require too much effort on their part to create the necessary software for the host computers. The different hosts at IPTO research sites used a wide variety of time-sharing operating systems. If hosts were responsible for packet switching, someone would have to program each different type of host computer to perform the various packet switching tasks, and then re-program them whenever the software needed modification. Moreover, the packet-switching software would have to make allowances for the limitations and idiosyncrasies of each different computer. In view of these difficulties, even PIs who were sympathetic to the project's goals had reason to be skeptical about its technical feasibility.
One of the Principal Investigators, Wesley Clark of Washington University in St. Louis, saw an easier alternative. Following the PI meeting, Clark proposed to Roberts that they attach each of the host computers to a minicomputer that would act as the host's interface to the network. In Clark's plan the minicomputers would connect with each other by telephone lines to form the nodes of an inner network, called the subnet, that would handle the packet
switching operations. Since minicomputers were relatively inexpensive, it was feasible to dedicate several of them to running the network. Taylor endorsed the subnet scheme, and Roberts incorporated it into the ARPANET design, calling the minicomputers "interface message processors" or IMPs (Norberg and
O'Neill, 1992, 240). Figure 3.2 illustrates the subnet idea.
Figure 3.2. Network model showing communications subnet (shaded area); IMPs connected by leased telephone lines
The subnet design created a division of labor between the switching nodes (IMPs), whose task was to move packets efficiently and reliably from one part of the network to another, and the hosts, which were responsible for the content of those packets. Packet switching programs could be written for a single type of IMP computer, rather than many different hosts. Host administrators could treat the entire subnet as a "black box," a device that provided a service without requiring them to know how it worked, and could focus on providing computing resources.
With the decision to divide the network conceptually into two parts, the ARPANET designers embarked on a strategy that would guide the network's development: layering. A layered system is organized into a set of discrete, interacting components, each of which performs a specific function. The idea of layering seems to have occurred independently to many people working on networks; it drew on ideas of modularity and the division of systems by function that were current in 1960s computer science.1 In a well-layered system, the opportunities for interaction between layers (their interfaces) are limited and follow set rules. Limiting the interactions between different layers (and eliminating any unplanned interactions) reduces the complexity of the system, making it easier to test and debug. The designer of a given layer needs to know how that layer is expected to interact with adjacent layers, but he or she does not need to know anything about the internal workings of the other layers. Since the layers are independent, they can be created and modified separately, as long as everyone working on the system agrees to use the same interfaces between layers. Thus layering is a strategy with both technical and social implications: it reduces the technical complexity of the system, and it allows the system to be built and managed in a decentralized way.
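The layering principle described above can be made concrete with a deliberately tiny sketch. Nothing here is ARPANET code; the class and method names are invented. The point is only that the upper layer uses the lower layer through one fixed interface and never touches its internals.

```python
# Toy illustration of layering: each layer exposes a small fixed interface
# and hides everything else, so either layer can be rewritten independently.

class CommunicationsLayer:
    """Moves raw bytes; knows nothing about what they mean."""
    def send(self, data: bytes) -> bytes:
        # A real communications layer would packetize, route, and reassemble;
        # here we simply deliver the bytes unchanged.
        return data

class HostLayer:
    """Talks to users; relies only on the send() interface of the layer below."""
    def __init__(self, below: CommunicationsLayer):
        self.below = below

    def send_text(self, text: str) -> str:
        # The host layer handles user-level concerns (here, text encoding)
        # and delegates transport entirely to the layer below.
        return self.below.send(text.encode()).decode()

host = HostLayer(CommunicationsLayer())
assert host.send_text("login") == "login"
```

Because `HostLayer` depends only on `send()`, the communications layer could be replaced by a genuine packet-switching implementation without changing a line of host code, which is the decentralized-development property the text describes.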
The ARPANET builders did not start out with a specific plan for how functions would be divided up among layers or how the interfaces would work. Rather, a layered model evolved over the course of ARPANET development. With the decision to create the subnet, the ARPANET designers initially divided the system into two layers: a communications layer, consisting of packet-switching IMPs connected by leased telephone lines, and a host layer, which would coordinate end-to-end communication between hosts and provide user services (Heart et al., 1970, 551). This model is shown in Figure 3.3:

1 Programming languages provided one example of a system that used layering to accommodate the different needs of people and machines. "High-level" languages such as FORTRAN, COBOL, or LISP allowed people to write programs using words and symbols; to run the programs the computer had to translate them into a "low-level" numeric code. The programs themselves were usually written in modular sections, each of which performed a particular function (some programming languages refer to these modules as "functions").
Host            Handles user interface; initiates connections between pairs of hosts and provides end-to-end control

Communications  Moves data through subnet using packet switching; ensures reliable transmission on host-IMP and IMP-IMP connections

Figure 3.3. Two-layer model of the ARPANET
Before the ARPANET was finished the model would be expanded to three layers; in later years still more layers would be added to keep pace with new capabilities and new ideas about how to organize networks.
Assembling the ARPANET team
Designing the ARPANET was a collaborative process. Though reactions at the April 1967 meeting had been mixed, a small group of PIs did express interest in pursuing the project, and Roberts began meeting with them informally to discuss network design problems and work out possible solutions. Roberts recruited Paul Baran to advise the networking group about distributed communications and packet switching, and chose a small team with representatives from RAND, the University of California at Santa Barbara, Stanford Research Institute (SRI), the University of Utah, and the University of California at Los Angeles to formulate specifications for network performance and suggest a schedule for building the system. IMPs placed at four of these
sites (UCLA, SRI, Santa Barbara, and Utah) would form the first, experimental version of the subnet. In June 1968 Roberts submitted the plan his group had worked out to ARPA director Charles Herzfeld. Herzfeld gave Roberts his approval, and a budget of $2.2 million, in July, and Roberts began seeking network contractors (Norberg and O'Neill, 1992, 241-243; U.S. Congress, 1968, 2348).
For the contract to build and operate the IMPs Roberts solicited competitive bids. This was unusual for IPTO, which tended to award contracts through an informal process, funding individuals or organizations with known expertise in a particular area. However, there were no established experts in packet switching, and the IMP contract was too important not to entertain the widest range of proposals. The basic hardware of the network would consist of time-sharing hosts, packet-switching IMPs, and leased telephone lines (as in Figure 3.2); since the hosts and lines were already in place, the IMPs represented the crucial missing piece needed to complete the network infrastructure. In early 1969, after considering bids from a dozen companies of all sizes, Roberts awarded the contract to Bolt Beranek and Newman Corporation of Cambridge (BBN), a relatively small company specializing in acoustics and computing systems.
Though not a giant in the computer business, BBN had several advantages behind its bid. The company had previous ties with IPTO: J. C. R. Licklider had worked at BBN before becoming the first director of IPTO; BBN had participated in IPTO's earlier time sharing efforts; and one member of the BBN IMP team, theorist Robert Kahn, had discussed networking ideas with Roberts during the early ARPANET planning stages. Another strength was that Frank Heart, the head of BBN's IMP team, and members Severo Ornstein, David Walden, and William Crowther had previously worked at Lincoln Labs, where they had
acquired valuable experience with real-time computer systems. Real-time systems process data as fast as it comes in, which means the programmers must make their software compact and efficient; such skills would be needed to provide responsive network service with small IMP computers (Norberg and O'Neill, 1992, 247). Finally, BBN had valuable ties to the Honeywell corporation, whose H-516 minicomputer proved a strong candidate for the IMP: fast, economical, well tested by use, easily programmable, and possessing good input/output capabilities (Heart et al., 1970, 557). BBN and Honeywell were located conveniently near each other in the Boston area, and had agreed to work together on customizing the H-516 for use in the network if BBN's bid were accepted (Heart, 1990, 14).
Other ARPANET contracts were awarded less formally, with the institutions that had been active in designing the network taking on prominent roles in developing it. Leonard Kleinrock at UCLA was awarded a contract to create theoretical models of the network and analyze actual network performance; UCLA would also have the first IMP installed so that analysis could begin as soon as possible. SRI received a contract to create an on-line database called the Network Information Center. The NIC would maintain a directory of the network personnel at each site, create an on-line archive of documents relating to the network, and provide information on resources available through the network (Ornstein et al., 1972, 253). Roberts also reestablished his informal networking group, now named the Network Working Group (NWG), to develop software for host computers and to provide a forum for discussing early experiences and experiments with the network (Norberg and O'Neill, 1992, 244).
A final contract was to help plan the network topology. Once the initial four-node network was functioning smoothly, Roberts planned to extend the
ARPANET to fifteen computer science sites funded by IPTO (as in Figure 3.1B), then to ARPA research centers using computers to do work in other fields such as behavioral science, climate dynamics, and seismology (Roberts and Wessler, 1970, 548). To help decide where the connections should be made between all these sites, Roberts turned to Howard Frank at the Network Analysis Corporation, whose background included solving topological problems such as optimizing the layout of oil pipelines. NAC would use recently developed computer techniques to evaluate the cost and performance of different network topologies. Figure 3.4 summarizes how ARPANET development was organized.
ARPA/IPTO: project management
Bolt Beranek and Newman (BBN): IMP hardware & software, network operations
Honeywell (under BBN): IMP hardware
University of California at Los Angeles (UCLA): analysis
Network Analysis Corporation (NAC): topology
Stanford Research Institute (SRI): Network Information Center
Network Working Group (NWG): host protocols

Figure 3.4. ARPANET organization
The ARPANET contracts followed the layered division of the network itself:
BBN, UCLA, and NAC worked on the communications layer; the NWG provided the host layer; and the NWG and SRI supplied applications. The strategy of having most network tasks performed within a single layer
minimized the coordination needed between layers, making it easier to distribute the development work among several different groups.
Building the communications layer: IMP design issues
The heart of the communications layer was the IMP, which acted as both a packet switch and an interface between the hosts and the network. BBN had to equip the IMP for a number of tasks, some simple and others complex. As an interface, the IMPs had to make host data conform to the packet format used in the subnet. IMPs received data from hosts in the form of messages, which contained up to 8,096 bits (the binary digits, ones and zeros, that make up a digital computer's basic code). The IMP had to break up these messages into packets of up to 1,000 bits, then add to each packet a standard header that contained the packet's source and destination addresses and some control information that could be used to check for transmission errors. When the packets reached the IMP at their destination, this IMP stripped off the packet headers and reassembled the packets into a complete message before sending the data to the destination host. The IMPs were also responsible for controlling the flow of traffic over the network to prevent congestion, which BBN tried to accomplish by having the IMPs restrict the number of packets any host could send into the network at one time (Frank et al., 1972, 257).
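The IMP's fragmentation and reassembly role can be sketched schematically. The packet-size limit comes from the text; the header fields and their representation are invented for illustration and do not reproduce the actual IMP packet format.

```python
# Simplified sketch of the IMP's interface role: break a message (here a
# bit-string) into packets of at most 1,000 bits, each with a small header,
# then strip the headers and reassemble at the destination. Header fields
# are hypothetical, not the real IMP format.

MAX_PACKET_BITS = 1000

def packetize(message: str, src: int, dst: int):
    """Split a message into headered packets of up to MAX_PACKET_BITS."""
    chunks = [message[i:i + MAX_PACKET_BITS]
              for i in range(0, len(message), MAX_PACKET_BITS)]
    return [{"src": src, "dst": dst, "seq": n,
             "last": n == len(chunks) - 1, "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """The destination IMP's job: order by sequence number, drop headers."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)

message = "01" * 2000                      # a 4,000-bit message
packets = packetize(message, src=1, dst=4)
assert len(packets) == 4                   # 4,000 bits -> four 1,000-bit packets
assert reassemble(packets) == message
```

Because packets may arrive out of order after independent routing, the sequence number in the header is what lets the destination restore the original message.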
As a packet switch, the IMP had to ensure that data was transmitted reliably along each link between a pair of IMPs or an IMP and a host. One mechanism for increasing reliability was the acknowledgment. Whenever an IMP or host sent a piece of data across a link, it expected the recipient to send back a standard message indicating that the data had been received intact. If the
acknowledgment did not arrive within a given period of time, the sender assumed the data had been lost or corrupted, and retransmitted it. Before
acknowledging receipt of a packet, the IMP or host used a mechanism called a checksum to verify that the data had not been corrupted in transmission.2 Acknowledgments and checksums used the computing power of the IMPs to provide the kind of reliability that telephone engineers, based on their experience with analog systems, had thought impossible.
Perhaps the most difficult packet-switching task for the IMPs was routing, the process of deciding which link the packet should be sent out on so as to reach its destination in the shortest time. ARPANET routing was dynamic: IMPs would determine each packet's route as the packet came along, rather than in advance. It was also distributed: rather than have a central routing mechanism, each successive IMP decided the next leg of the route. Each IMP had a table with an entry for each host on the network, showing how long it would take a packet sent from the IMP to reach that host and which of the IMP's links led to that host by the fastest route. When a packet came in, the IMP would look up the destination host in the table and forward the packet via the specified link. The routing system was also adaptive, responding to changes in network configuration or traffic. Every 2/3 second the IMP would make a new estimate of how long it would take to reach the various host destinations, and it would send these routing estimates to each of its neighbors. The IMP used incoming information from its neighbors to update its own routing table, matching each host destination with the link that had reported the fastest travel time to that host.
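The table-update step just described is, in modern terms, a distance-vector computation, and a schematic version is easy to write down. This sketch captures only the general shape of the update; the actual BBN algorithm and its data structures differed in detail.

```python
# Schematic distance-vector update, in the spirit of the adaptive routing
# described above (not the actual BBN algorithm). Each IMP combines the
# delay on its direct link to a neighbor with that neighbor's reported
# delay to each destination, and keeps the minimum.

def update_routing_table(link_delays, neighbor_reports):
    """
    link_delays:      {neighbor: delay on the direct link to that neighbor}
    neighbor_reports: {neighbor: {destination: neighbor's estimated delay}}
    Returns {destination: (best_total_delay, next_hop_neighbor)}.
    """
    table = {}
    for neighbor, report in neighbor_reports.items():
        for dest, their_delay in report.items():
            total = link_delays[neighbor] + their_delay
            if dest not in table or total < table[dest][0]:
                table[dest] = (total, neighbor)
    return table

# An IMP with two neighbors, A and B; delays in arbitrary time units.
table = update_routing_table(
    link_delays={"A": 2, "B": 5},
    neighbor_reports={"A": {"UCLA": 7, "SRI": 1},
                      "B": {"UCLA": 3, "SRI": 9}},
)
assert table["UCLA"] == (8, "B")   # 5 + 3 beats 2 + 7
assert table["SRI"] == (3, "A")    # 2 + 1 beats 5 + 9
```

Running this update at every IMP, every fraction of a second, is what made the routing both distributed and adaptive; it is also why the system could exhibit the hard-to-predict interactions the next paragraph describes.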
2 The checksum technique takes advantage of the fact that all computer data consists of a series of digits. The computer sending the message divides it into segments of a certain size, interprets the series of digits in each segment as a single number, and adds up the numbers for all the segments. This sum (the checksum) is then appended to the message. The receiver likewise divides the message up, sums up the segments, and compares the result with the number appended. If the two checksums are different, the receiver knows that the message has been corrupted, and can ask for a retransmission. An important point is that the checksum is much shorter than the message itself; thus sending a checksum is a more efficient way to check a message than the obvious alternative-sending a duplicate of the message for comparison.
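The arithmetic in the footnote can be written out directly. The segment size and the modulus used to keep the sum segment-sized are illustrative choices; the real IMP checksum details differed.

```python
# The footnote's checksum arithmetic, sketched: split the message into
# fixed-size segments, read each segment as a number, and sum them.
# Segment size and modulus are illustrative, not the IMP's actual values.

SEGMENT_BITS = 16

def checksum(bits: str) -> int:
    """Sum the 16-bit segments of a bit-string, mod 2**16 so the
    checksum itself stays one segment long."""
    total = 0
    for i in range(0, len(bits), SEGMENT_BITS):
        segment = bits[i:i + SEGMENT_BITS].ljust(SEGMENT_BITS, "0")
        total += int(segment, 2)
    return total % 2 ** SEGMENT_BITS

message = "1010101010101010" * 4
assert checksum(message) == checksum(message)       # intact copy matches
corrupted = "0" + message[1:]                       # flip the first bit
assert checksum(corrupted) != checksum(message)     # mismatch is detected
```

As the footnote notes, the efficiency gain is that the 16-bit checksum travels with the message in place of a full duplicate: the receiver recomputes the sum over what arrived and compares it with the value appended by the sender.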
The ARPANET approach to routing reflects the designers' interest in exploring new techniques and creating a high-performance system, as well as their willingness to accept a system that was, at times, difficult to understand and control. Distributed routing, like distributed topology, could make the network robust by minimizing the dependence on any one component. Adaptive routing could allow IMPs to improve speed and reliability by avoiding congested routes and node or line failures. The price of relying on so many independent, constantly changing routing decisions, however, was a complex system prone to unexpected interactions. BBN had to revise its routing algorithm several times as experience or simulation revealed weaknesses (Ornstein et al., 1972, 244).
To meet ARPA's technical specifications, BBN had to ensure that the IMP could perform the interface and packet switching functions just described. But the design of the IMP also reveals that BBN staff had strong beliefs about how the IMP should perform in relation to the rest of the network. The IMP was shaped by the BBN team's desire to enforce the distinctions between network layers. The team at BBN felt that the packet-switching subnet should be isolated from any potential interference from the host computers; in other words, that the communications and host layers must be separate:
A layering of functions, a hierarchy of control, is essential in a complex network environment. For efficiency, nodes [IMPs] must control subnetwork resources, and Hosts must control Host resources. For reliability, the basic subnetwork environment must be under the effective control of the node program .... For maintainability, the fundamental message processing program should be node software, which can be changed under central control and much more simply than all Host programs (McQuillan and Walden, 1977, 282).
In the BBN vision, the IMP subnet was to be autonomous. IMPs would not depend on hosts for any computing resources or information; the functioning of an IMP would not be impaired if its local host went down; and hosts would not
be able to interfere with IMP operation in any way. Conversely, hosts would be isolated from failures in the subnet (unless their own local IMP were disabled). This arrangement not only made the network robust but also eased the technical task of the BBN team and allowed them to maintain control over their own domain, the IMP design and operation.
BBN took several steps to make the IMPs independent of the hosts, of other IMPs, and of human operators (Heart et al., 1970). To guard against failures elsewhere in the network, IMPs checked for lost or duplicate packets, and each IMP tested periodically for dead lines, failures in neighboring IMPs, nonfunctioning hosts, or destinations made unreachable by intermediate IMP or line failures. The need for human intervention in the subnet was minimized by "ruggedizing" the IMP hardware-a procedure familiar to suppliers of military computers that involved protecting the machine against temperature changes, vibration, radio interference, and power surges. BBN built into the IMPs capabilities for remote monitoring and control that allowed human operators to run diagnostics or reload software on the IMPs without making field visits. The IMP was also programmed to recover from its own failures. An IMP that went down due to a power failure would restart automatically when power returned. The IMP checked periodically to see if its basic operating program had been damaged; if so, the IMP would request a neighboring IMP to send a copy of the program to replace the corrupted version. If the IMP was unable to reload the new copy, it would automatically shut itself down to protect the network from any problems the damaged software might cause.
As work on the communications layer proceeded, Roberts made new demands on the IMPs based on his evolving view of how the network would be used. The original plan had been to connect each IMP to a single computer, but it
soon became apparent that some sites would want to connect multiple computers. BBN responded by modifying the IMP design to let a single IMP connect to multiple hosts and to increase the distance allowed between a host and its IMP (Ornstein et al., 1972, 752, 244; Norberg and O'Neill, 1992, 247). Then, two years into the project, Roberts decided it would be desirable to make the network accessible to users whose sites did not have an ARPANET host. He directed BBN to create a new version of the IMP, called a Terminal IMP or TIP, that could interface directly to terminals (rather than hosts). TIPs would be placed at sites that had many potential network users but lacked an ARPANET host. People at a TIP site could connect terminals directly to the TIP; people outside the site could dial up the TIP using a modem. Once connected to the TIP, the user could access any host on the network. The TIP gave ARPA research centers a new option: instead of providing a host computer on-site, they could contract for some or all of their computing services from other sites on the network. Some computer centers found it easier or cheaper to give up their own time-sharing machines and rely exclusively on TIPs for access to time-sharing resources. Within two years half of the sites using the ARPANET were accessing the network through TIPs (Roberts, 1973, 1-22).
While BBN had the largest role in building the communications layer, UCLA and the Network Analysis Corporation made important theoretical contributions to the project. At UCLA's Network Measurement Center, Leonard Kleinrock and his associates developed models and simulations to describe and predict network behavior. Kleinrock made use of a mathematical tool called queuing theory that analyzes the behavior of entities that must wait in a queue before being processed (e.g., packets at a switching node) (Kleinrock, 1970). Queuing theory had previously been used to analyze telephone systems; by
adapting queuing theory for the distinctly different conditions of a packet-switched network, Kleinrock made a major contribution to the field, his work becoming the basic reference in queuing theory for computer networks (Frank et al., 1972, 265; Tanenbaum, 1989).
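The simplest result of queuing theory illustrates the kind of quantity such models predict; this M/M/1 formula is a textbook illustration, not Kleinrock's network model itself. With packets arriving at rate lambda and a line serving them at rate mu, the average time a packet spends at a node is T = 1/(mu - lambda):

```python
def mm1_delay(arrival_rate: float, service_rate: float) -> float:
    """Average time in system (waiting plus transmission) at a single
    switching node modeled as an M/M/1 queue: T = 1 / (mu - lambda).
    Defined only while arrivals stay below the service rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrivals outpace service")
    return 1.0 / (service_rate - arrival_rate)

# Delay grows sharply as a line nears saturation; one reason adaptive
# routing tried to steer packets away from congested links.
assert mm1_delay(50, 100) == 0.02   # 50% utilization: 20 ms per packet
assert mm1_delay(90, 100) == 0.1    # 90% utilization: 100 ms per packet
```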
The Network Analysis Corporation explored ways of laying out the network so as to maximize performance and minimize cost. NAC's analytical tool was a heuristic computer program, one that provides an approximate solution or "rule of thumb" to a problem whose exact solution would require too much computing time. NAC programmers would specify, as input to the program, a topology that satisfied the performance constraints set by ARPA: a maximum delay of 0.2 seconds for message delivery (for responsiveness), a minimum of two links to each IMP (for reliability), and easy expandability. The program then systematically varied small portions of this topology, rejecting changes that raised costs or violated constraints and adopting changes that lowered costs without violating constraints. By running the program thousands of times using different starting topologies and constraints, NAC was able to provide a range of solutions that satisfied ARPA's performance criteria while saving the agency thousands of dollars in communication costs (Frank et al., 1970, 581).
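The structure of such a heuristic can be sketched abstractly; the cost model, constraint check, and perturbation step below are stand-ins, since the source does not preserve NAC's actual program:

```python
import random

def optimize(topology, cost, satisfies_constraints, perturb, rounds=1000):
    """Local-search heuristic in the style described above: repeatedly
    try a small change to the topology, keeping it only if it lowers
    cost without violating the constraints (e.g. the 0.2 s maximum
    delay and two links per IMP that ARPA specified)."""
    best, best_cost = topology, cost(topology)
    for _ in range(rounds):
        candidate = perturb(best)
        if satisfies_constraints(candidate) and cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)
    return best

# Toy stand-in problem: links priced individually; the "constraint" is
# keeping at least two links, and a perturbation drops one link.
random.seed(0)
links = [5, 4, 3, 2, 1]
def drop_one(t):
    t = list(t)
    t.pop(random.randrange(len(t)))
    return t

result = optimize(links, sum, lambda t: len(t) >= 2, drop_one, rounds=50)
assert len(result) == 2 and sum(result) < sum(links)
```

Running such a search from many different starting topologies, as NAC did, yields a family of low-cost solutions rather than a single guaranteed optimum; that trade-off is what "heuristic" buys and costs.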
Operating the network
In September 1969 the BBN team installed the first IMP at UCLA, and all four initial sites were linked by the end of 1969, less than a year after the IMP contract had been awarded. The contract included responsibility for keeping the subnet running, and BBN soon found that operating a distributed network posed its own challenges. BBN set up a Network Control Center in 1970 when the company's own ARPANET node (the network's fifth) was installed. At first the
Network Control Center simply monitored the IMPs and was manned "on a rather casual basis" by the IMP designers (McKenzie, 1976, 6-5). But reliability soon became an issue, forcing BBN to take network operations more seriously. IMP failures were more common than BBN had anticipated; IMPs were down 2% of the time on average, largely due to hardware problems. Line problems caused a similar amount of down time, and BBN received little assistance from telecommunications carriers such as AT&T, who resisted becoming technically integrated into the project (Ornstein et al., 1972, 252).3 Since the effects of a problem in one location tended to propagate across the network, identifying the source of the problem posed a challenge, especially as the different components of the network were provided by different organizations. As BBN team leader Frank Heart explained, when a network user encountered trouble,
you had the problem of trying to figure out where in the country that trouble was, whether it was a distant host, or whether it was the host connection, or whether it was an IMP at the far end, or whether it was in a phone line .... people certainly did not anticipate at the beginning the amount of energy that was going to have to be spent on debugging and network analysis and trying to monitor the networks (Heart, 1990, 35-36).
By late 1970 ARPA was also pressuring sites to make more use of the year-old network, and these sites began turning to BBN with their questions (McKenzie,
When Alex McKenzie took charge of BBN's Network Control Center in 1971, he responded to these demands by expanding and redefining the NCC's role.
3 The team at BBN would have preferred closer cooperation with the telephone companies. They felt, for instance, that the carriers should design the hardware interfaces to telephone lines (known as "circuit terminal equipment") with computer users in mind:
From the outset, we viewed the ARPA Network as a systems engineering problem, including the portion of the system supplied by the common carriers. Although we found the carriers to be properly concerned about circuit performance ... we found it difficult to work with the carriers cooperatively on the technical details, packaging, and implementation of the communication circuit terminal equipment .... In the long
run, ... circuit terminal equipment probably should be integrated more closely with computer input/output equipment (Heart et al., 1970, 565).
The BBN argument reflects a central tenet of layering: that system components can be provided by separate organizations, but only if all parties agree on the interfaces between components.
McKenzie promoted a vision of the ARPANET as a "computer utility," believing that BBN should provide "the reliability of the power company or the phone company" (McKenzie, 1990, 13). Under his direction, the NCC soon assumed responsibility for fixing all operational problems-whether or not BBN's equipment was at fault-and for coordinating upgrades of IMP hardware and software. The center also acquired a full-time dedicated staff.
The Network Control Center monitored the ARPANET constantly, recording when each IMP, line, and host went up or down; NCC staff also took trouble reports directly from users. When problems were detected, the NCC used the diagnostic features built into the IMPs to identify their cause. For instance, if an IMP seemed to be sending garbled data, an operator could have the IMP feed its output back into its input and check the validity of the result; if the data was correct, the operator could assume that the IMP was functioning and the line was at fault. Problems in remote IMPs could often be fixed from the NCC via the network, using the control functions that BBN had built into the IMPs. The NCC also used its monitoring data to detect line outages, which were reported to the appropriate phone company. BBN developed such expertise in diagnosing network problems that the NCC was often able to inform the carriers of line problems before they themselves had detected them-much to the carriers' surprise and initial skepticism (Heart, 1990, 34; Ornstein et al., 1972, 253). Unknown to the carriers, ARPA had, with the introduction of its computerized switches, created a new identity for the telephone network as the raw material of a more powerful and reliable system.
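The logic of the loopback diagnosis just described can be sketched as follows; the diagnostic interface and messages here are invented for illustration, since the source does not describe the IMP's actual commands:

```python
def loopback_test(send_through_imp) -> str:
    """Have a suspect IMP feed its output back into its input and check
    the result, as NCC operators could do remotely over the network.
    send_through_imp(data) stands in for the round trip through the IMP."""
    probe = b"\x55\xaa" * 16  # alternating bit pattern (arbitrary choice)
    echoed = send_through_imp(probe)
    if echoed == probe:
        # The IMP reproduces data faithfully, so garbling seen on the
        # network points to the telephone line rather than the IMP.
        return "IMP ok; suspect the line"
    return "IMP at fault"

assert loopback_test(lambda data: data) == "IMP ok; suspect the line"
assert loopback_test(lambda data: b"garbled") == "IMP at fault"
```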
By 1976 the NCC was, according to McKenzie, "the only accessible, responsive, continuously staffed organization in existence which was generally concerned with network performance as perceived by the user" (McKenzie, 1976,
6-5; my italics). The Network Control Center had become a managerial reinforcement of ARPA's layering scheme. Just as the division between subnet and hosts had insulated host sites from many design complexities, so the NCC allowed users to ignore much of the operational complexity of the network. The NCC presented the entire communications layer to the user as a black box. By providing a single human interface for network users, the NCC made layering work in practice as well as in concept.
Host and applications layers
The Network Working Group had the arduous task of developing the protocols that would guide the interaction of host computers over the ARPANET. NWG members realized that most sites were unwilling to make major changes in their hosts, so any protocols that host sites were required to implement had to be as simple as possible. Roberts, in his earlier work with Marill, had even argued against requiring a network-wide host protocol: "Since the motivation for the network is to overcome the problems of computer incompatibility without enforcing standardization, it would not do to require adherence to a standard protocol as a prerequisite of membership in the network" (Marill and Roberts, 1966, 428). But Roberts's earlier experiment had linked only two computers; a network with dozens of different hosts would clearly need some level of standardization to avoid chaos.
NWG member Alex McKenzie recalled that there were many arguments over layering: "We had a concept that layering had to be done, but exactly what the right way to do it was [was] all pretty unclear to us" (McKenzie, 1990, 8). The NWG's initial plan was to create two protocols. One would allow users to work interactively on a remote host computer (a process known as remote login); another would transfer files between machines. Both protocols occupied the
same layer, an intermediate position between the user and the communications layer.
Roberts saw that the remote login and file transfer protocols would both have to begin their work by setting up a connection between two hosts, and he saw this as a needless duplication. Meeting with the NWG in December 1969, Roberts suggested separating the host functions into two layers. The first, called the host layer, would feature a general-purpose protocol to set up communications between a pair of hosts. The applications layer protocols would build on the host protocol services to provide specific network applications such as electronic mail or file transfer (Lynch and Rose, 1993, 8-9; Karp, 1973, 270-271). The ARPANET model now had three layers, as shown in Figure 3.5. The advantage of having separate host and applications layers was that it eliminated the need for each application to duplicate the work of setting up a host-to-host connection. Applications would be easier to program, thereby encouraging people to add to the pool of user services.
Applications     Handles user activities on the hosts: remote login, file transfer
Host             Sets up connections between pairs of host processes
Communications   Moves data through subnet using packet switching; ensures reliable transmission on host-IMP and IMP-IMP connections

Figure 3.5. Three-layer model of the ARPANET
The host-layer protocol, named the Network Control Program or NCP, was designed as a general method of setting up connections between hosts that would be the same regardless of whether the hosts were engaged in file transfer, remote login, or some as-yet-undreamed-of service. The NCP program on each host would collect data from the application protocols, package the data into messages, and send the messages to the local IMP. Incoming messages from the IMP would be collected by NCP and passed on to the designated application. NCP would make sure the two hosts agreed on the data format, and that the destination host set aside enough space to store the incoming messages.
NCP was shaped by assumptions about social relations in the networking community. Each host site would be responsible for implementing NCP on its hosts (that is, providing a program for each host that carried out the actions specified by NCP). Since the host sites were somewhat reluctant partners in the development effort, NCP was designed to be simple so as to minimize the burden of creating this host software (Carr et al., 1970, 591). NWG members were also aware that the ARPANET system was being superimposed on existing patterns of computer use at the various research sites, where host computers were already being used for local projects. The NCP designers were careful to preserve local control over the hosts: network users were subject to the same mechanisms for access control, accounting, and allocation of resources as were local users (Carr et al., 1970, 591). The NWG also accepted the independence of its users-many of them computer experts like themselves-as a given, and tried to accommodate it. In explaining their design decisions for the host protocol, NWG member Stephen Carr noted: "Restrictions concerning character sets, programming languages, etc., would not be tolerated and we avoided such restrictions" (Carr et al., 1970, 79).
The Network Working Group outlined a design for NCP in early 1970, and by August 1971 NCP was running at all fifteen ARPANET sites. With NCP in place, the NWG could focus on providing applications. The services originally envisioned by ARPA, and the first to be put in place, were remote interactive login and file transfer. Early in 1970 some NWG members had devised an experimental program called TELNET (for "telecommunications network") that allowed users at the University of Utah to log into computers at SRI (Crocker et al., 1972, 273; Carr et al., 1970, 594). In February 1971 NWG members produced a general version of TELNET, which allowed users to log into any remote machine through the network just as if it were directly connected to their terminal. TELNET was the dominant application on the network and formed the basis for other experimental services. Before long NWG members had improvised services for remote job entry (sending a programming job to be performed on a remote computer) using TELNET to establish a connection, and some users began to experiment with using TELNET for file transfer (Crocker et al., 1972, 277). This expedient proved awkward, however, and NWG members decided to create a separate File Transfer Protocol (FTP). TELNET, FTP, and other experimental applications went through a continual process of revision as NWG members used the protocols and suggested improvements. One key to the NWG's success was that, because its members used these protocols in their own work, they had the incentive and the hands-on experience to create and improve new services.
ARPANET applications protocols relied on a strategy analogous to layering: the use of standard formats to mediate between dissimilar hardware or software. For instance, the TELNET protocol had to display the user's typed instructions and the computer's responses on the user's terminal. The main
obstacle in developing TELNET was the number of different terminals in use, ranging from simple teletypewriters to sophisticated graphics displays; it was impractical to try to equip TELNET with display instructions for every type of terminal. Instead, TELNET issues instructions for a standard "virtual terminal." In computing, the term virtual is used to refer to the simulation by a computer of a physical state. The virtual terminal did not exist in any physical form; rather, it represented the minimal set of display capabilities that most terminals would be expected to have. TELNET would issue display instructions for the virtual terminal, and the hosts would translate these into instructions for their own particular terminals. The virtual terminal provided an intermediate step between the general functions performed by the TELNET application and the specific commands that the host used to control its own hardware. By using a simplified abstraction to provide a common language for terminal commands, the virtual terminal scheme masked the complexity of hardware incompatibility. The file transfer protocol used a similar approach: to avoid the need to translate between incompatible file formats, FTP used standard network file formats that could be recognized by all hosts. In the process of working out procedures that could run on different hosts, NWG members addressed long-standing compatibility problems. The common formats they developed for computers to represent files and terminals became general-purpose tools that aided users of both networked and non-networked computers (Crocker et al., 1972, 275).
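The virtual-terminal translation can be sketched as a pair of mapping tables; the command names and control sequences below are invented for illustration, as the real TELNET virtual terminal defined its own specific command set:

```python
# Virtual-terminal commands a TELNET-like protocol might emit,
# independent of any particular hardware (hypothetical names):
VIRTUAL_COMMANDS = {"newline", "clear_line", "ring_bell"}

# Each host translates the standard commands into its own terminal's
# control sequences (both tables here are hypothetical examples):
TELETYPE = {"newline": b"\r\n", "clear_line": b"\r", "ring_bell": b"\x07"}
DISPLAY = {"newline": b"\n", "clear_line": b"\x1b[2K", "ring_bell": b"\x07"}

def render(commands, terminal_table):
    """Map a stream of virtual-terminal commands onto one real terminal."""
    out = b""
    for cmd in commands:
        if cmd not in VIRTUAL_COMMANDS:
            raise ValueError(f"not a virtual-terminal command: {cmd}")
        out += terminal_table[cmd]
    return out

# The same protocol output drives very different hardware:
assert render(["ring_bell", "newline"], TELETYPE) == b"\x07\r\n"
assert render(["ring_bell", "newline"], DISPLAY) == b"\x07\n"
```

Only the translation tables differ from terminal to terminal; the protocol itself stays hardware-independent, which is precisely the masking of incompatibility the passage describes.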
Completing the system
The ARPANET development was fairly rapid: the project was approved in 1968; the initial communications subnet was in place by the end of 1969; host and applications protocols were developed in 1970-1971. By the end of 1971 the
network had achieved modest success. It had expanded somewhat beyond the ARPA community to serve researchers at the Air Force and the National Bureau of Standards, and Roberts claimed that private companies outside the project were already beginning to express interest in paying to join the network (Brinton, 1971, 64, 65). Yet most sites were only minimally involved in resource sharing: the ARPANET had not brought about the radical jump in productivity that ARPA had anticipated. While ARPA had proved some technical points with the IMP subnet, the network could hardly be considered a success if no one used it. Because ARPA was introducing a radically new technology, it had to put an entire sociotechnical system in place, not only supplying the network service but creating a demand for it by making it possible for users to channel their existing demands toward network applications.
Part of the problem was social, a reluctance on the part of computing centers to share their resources. MIT's information processing director said bluntly, "There is some question as to who should be served first, an unknown user or our local researchers" (Dickson, 1968, 134). The computer staff at UCLA complained, "Computer operations managers at other nodes may feel that incoming traffic is disruptive, less important than their own needs, or that UCLA's use of the net should be shunted to slack hours" (Brinton, 1971, 65). But even those who wanted to share resources found the network inadequate because of a lack of network-oriented software. According to BBN's Kahn, "The reality was that the machines that were connected to the net couldn't use it. I mean, you could move packets from one end to the other ... but none of the host machines that were plugged in were yet configured to actually use the net" (Norberg and O'Neill, 1992, 252).
By early 1972 Roberts and Kahn decided that something dramatic was needed to increase participation and to prove to the outside world that the ARPANET was a success. They arranged to demonstrate the network's capabilities at the International Conference on Computer Communication (ICCC) to be held that October. Kahn, who took charge of organizing preparations for the demonstration, felt it would galvanize the network community into creating useful applications. He mobilized members of the NWG and other software experts to make application programs that were already popular with users accessible over the network, and to create new services (Roberts and Kahn, 1972). In the spring of 1972 the ARPANET team at BBN reported "considerable enthusiasm" from participants, as well as increasing traffic (Ornstein et al., 1972). By the time the ICCC conference took place in October enough programs were ready to capture the attention of the crowds. The ICCC demonstration marked a turning point in participation: total packet traffic on the network, which had been growing a few percent per month, jumped by 67% in the month of the conference and maintained high growth rates afterward (Schelonka, 1976, 5-21). The number of hosts on the network also grew steadily (Roberts, 1988, 151-152).
ARPA's concern over getting people to use the network illustrates how difficult it can be to determine when a system is "completed" or "functional." The physical components of the ARPANET were all in place by mid-1971, but a workable system had to be socially constructed-that is, the relevant groups had to agree that the system was an attractive means for achieving their ends. Pinch and Bijker suggest that system builders may attempt "rhetorical closure" by simply declaring that the system has achieved its purpose (Bijker et al., 1987, 44). The ICCC demonstration could be considered a variation on rhetorical closure: by proposing the demonstration Roberts and Kahn had proclaimed to the
outside world that the system was finished and successful; they then used the public scrutiny they had invited as a way to motivate the user community to provide the missing services that would make this claim a reality.
The ARPANET culture
While the layering approach stressed divisions between system elements, ARPA's management style tried to foster the cooperation required to integrate those elements into a complete system. The organizational culture surrounding the ARPANET was decentralized, collegial, and informal. Coordination between contractors relied largely on informal collaborative arrangements rather than contractual obligations; technical decisions were usually made by consensus; and the network itself came to be used as a meeting place for the computer science community. The ARPANET culture enhanced ARPA's ability to enlist the support of its contractors and respond to the technical challenges posed by the project.
Though unusual for a Defense R&D project, the ARPANET management style was in fact typical for ARPA. ARPA in general, and the Information Processing Techniques Office in particular, had always had an informal and collegial organizational culture. ARPA recruited most of its IPTO directors and project managers from university and industrial researchers. IPTO directors came to the agency with expertise in computer science, and kept in touch with colleagues by touring research centers to evaluate the progress of programs, learn about new research ideas, and recruit promising researchers, often including their own successors (Norberg and O'Neill, 1992, 97-98). IPTO personnel were not professional managers; they tended to stay only a few years at ARPA and then return to academia or private business (in part because of the modest salaries at the agency). Though ARPA had authority over its contractors by virtue of its financial power, the actual people managing IPTO projects were
largely drawn from this same group of contractors. As Howard Frank of the Network Analysis Corporation observed, "It's easy to say 'the government,' or ARPA, or something like that, but they are individuals that you deal with" (Frank, 1990, 30).
ARPA's corporate contractors also believed in keeping research groups small and informal. Even at BBN-whose contract was the largest in the project-the main IMP team had only five members, and the IMP software was designed, programmed, and debugged by three programmers. Though aware that much larger groups were the norm in computer system projects, the BBN group and its leader, Frank Heart, believed that coordination and efficiency were easier in a small, tightly knit team of engineers, each of whom had a grasp of the entire system (Heart et al., 1970, 566). ARPA's corporate contractors kept close ties with academia and offered their R&D staff considerable freedom, a fact that employees strongly praised. Robert Kahn recalled, "BBN was a kind of hybrid version of Harvard and MIT in the sense that most of the people there were either faculty or former faculty of either Harvard or MIT .... It was sort of the cognac of the research business, very distilled" (Kahn, 1990, 11). Paul Baran made similar observations about the RAND Corporation, which had participated in the early ARPANET design: "RAND was a most unusual institution in those days. If you were able to earn a level of credibility from your colleagues you could, to a major degree, work on what you wanted to" (Baran, 1990, 9).
Part of the collegial atmosphere at ARPA was undoubtedly due to the many social ties among its contractors. Personal contacts played an important role in bringing people into the ARPANET project. For instance, IPTO's Lawrence Roberts, UCLA's Leonard Kleinrock, and BBN's Frank Heart and Robert Kahn had all earned degrees at MIT; in addition, Roberts, Kleinrock, and most of the
BBN team had all worked and become acquainted at MIT's Lincoln Labs. Howard Frank had met Kleinrock when they were both lecturing at Berkeley, and again when both were working on a project for the Executive Office of the President. Kleinrock introduced Frank to Roberts, who later awarded Frank's newly formed Network Analysis Corporation its ARPANET contract.
IPTO itself was responsible for other ties. Robert Taylor made a special point of funding graduate students, and held special meetings and working groups for them (Taylor, 1989, 19). Graduates of the IPTO-funded programs at MIT, Stanford, CMU, and elsewhere became a major source of computer science faculty at American universities, extending ARPA's social network into the next generation (Norberg and O'Neill, 1992, 140-141).4
Another reason researchers may have felt at home with the ARPANET was that ARPA did not emphasize the military character of the project. Frank Heart of BBN recalled that the few military sites that showed early interest in joining the network "wanted to be part of the academic research community rather than our serving the military community" (Heart, 1990, 28). Participants apparently perceived little or no conflict between military and civilian values, perhaps because the network technology itself was not inherently destructive. Also, ARPA did not hamper researchers with security restrictions; in fact, the agency funded its contractors to present network results at conferences and encouraged
them to publish.
4 In its reliance on interpersonal networks the ARPANET fit with larger patterns of military-industrial-academic interaction. Contemporary defense research projects tended to be concentrated in a small number of institutions with tightly knit social networks. In 1965 half the money spent by all government agencies on university science went to 20 institutions (Johnson, 1972, 335). A 1965 study, noting that the defense R&D industry was highly concentrated in New England and Los Angeles, found that in New England nearly two-thirds of the people working in defense R&D had gone to school in the same area, while in Los Angeles the figure was 21%; half of the engineers and scientists surveyed said they had sought their current jobs because they had a personal acquaintance in the company (Shapero et al., 1965, 30, 50-51).
In managing the project, IPTO directors Taylor and Roberts regarded building a sense of community among researchers as both a means and an end. ARPA's financial leverage over its contractors was considerable, especially given the expense of computing machinery at that time, and ARPA managers occasionally used this power to pressure contractors into participating in the project.5 But both Taylor and Roberts felt that research could best be coordinated through informal cooperation. Conversely, Roberts and Taylor saw the ARPANET itself as a means of bringing researchers together. Taylor felt that each research site "had its own sense of community and was digitally isolated from the other one;" he saw the network as a way "to build metacommunities out of these by connecting them" (Taylor, 1989, 46, 35). Likewise, Roberts stressed early on that "a network would foster the 'community' use of computers. Cooperative programming would be stimulated, and in particular fields or disciplines it will be possible to achieve a 'critical mass' of talent by allowing geographically separated people to work effectively in interaction with a system" (Roberts, 1967, 2).
Taylor and Roberts coordinated the ARPANET project through a variety of
informal mechanisms aimed at creating and reinforcing common values and
goals. They maintained personal contact between themselves and their contractors through frequent site visits. NAC's Howard Frank described these informal meetings with Roberts as a chance to discuss the exciting possibilities of the new technology, "the 'Gee whiz' kinds of things" that kept him and his staff excited about the project (Frank, 1990, 17). ARPANET participants could also
5 In fact, when Roberts was initially reluctant to join the project in 1967, Taylor had used ARPA's financial backing of Lincoln Labs to pressure them into sending Roberts to Washington. Years later, when the ARPANET had become widely regarded as a success, Taylor confessed, "I blackmailed Larry Roberts into fame!" (Taylor, 1989, 43-44; also Roberts, 1988, 145fn).
meet at IPTO's annual retreats for Principal Investigators. Praised by BBN's
Heart as "among the most interesting, useful meetings that ever took place in the technical community," PI meetings gave contractors a chance to focus their attention and expertise on common problems (Heart, 1990, 40). PI meetings
were small (generally fewer than 50 people), allowing both "the formal transfer of research results and methods and camaraderie that aided informal exchange among the participants" (Norberg and O'Neill, 1992, 99). For IPTO directors, the meetings provided a chance to assess the progress of their various programs through PI presentations and critiques by contractors (Norberg and O'Neill, 1992, 99). Roberts's assistant director, Barry Wessler, ran similar meetings for the graduate students working on the ARPANET (Norberg and O'Neill, 1992, 100; Taylor, 1989). By bringing researchers from around the country together to work on pressing problems of mutual interest, PI and graduate student meetings encouraged existing social networks to become national rather than local in scope.
IPTO's first nationwide computing project would put to the test whether a decentralized confederation of independent researchers could actually build a coherent system. One of the most important mechanisms for pooling efforts and building consensus among the scattered sites was the Network Working Group. In its attempt to design host protocols, the NWG brought together representatives (mostly graduate students) from all the network sites every few months to discuss software problems and work out solutions. "We were all feeling our way because there wasn't any body of current expertise or knowledge or anything," one member recalled (McKenzie, 1990, 8). The lack of established authorities and the newness of the field meant NWG participants had to formulate technical problems and propose solutions on their own. As Vinton
Cerf, then a graduate student at UCLA, described it, "We were just rank amateurs, and we were expecting that some authority would finally come along and say, 'Here's how we are going to do it.' And nobody ever came along" (Cerf, 1990, 11). At one point, disappointed with the slow progress of the NWG, Roberts did consider turning over the host protocols to a professional research team; but in the end he stuck with the NWG, in part because he was aware that the group increased the contractors' sense of involvement in and commitment to
the network (Norberg and O'Neill, 1992, 245-6).
The NWG developed its own social mechanisms to ease the technical challenges it faced. One leading member, Stephen Crocker of UCLA, suggested that technical proposals and minutes of meetings should be distributed as a series of documents called Requests for Comments (RFCs); another UCLA student, Jon Postel, edited these documents. The RFCs were consciously designed to promote informal communication, as the NWG's "Documentation Conventions" made clear:
The content of a NWG note may be any thought, suggestion, etc. related to the HOST software or other aspect of the network. ... Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable ....
These standards (or lack of them) are stated explicitly for two reasons. First, there is a tendency to view a written statement as ipso facto authoritative, and we hope to promote the exchange and discussion of considerably less than authoritative ideas. Second, there is a natural hesitancy to publish something unpolished, and we hope to ease this inhibition (Crocker, 1969).
The RFCs were kept on-line at SRI and were accessed through the ARPANET itself. In the course of many RFCs a consensus would emerge on protocols and policies, which eventually became official ARPA policy. Thus RFCs helped make it possible for the NWG to evolve formal standards through an informal process.
As Carr, Crocker, and Cerf reported to a 1970 AFIPS Computer Conference, the NWG had provided a unique collaborative experience:
We have found that, in the process of connecting machines and operating systems together, a great deal of rapport has been established between personnel at the various network node sites. The resulting mixture of ideas, discussions, disagreements, and resolutions has been highly refreshing and beneficial to all involved, and we regard the human interaction as a valuable by-product of the main effort (Carr et al., 1970, 589-590).
The ARPANET itself, once it was operational, provided one more vehicle for IPTO managers to keep in touch with contractors. Roberts himself was one of the first to experiment with electronic mail on the network (Dern, 1989, 12). ARPA Director Stephen Lukasik embraced email and made a point of encouraging his office directors, program managers, and contractors to communicate electronically (Kleinrock, 1990, 31; Heart, 1990, 20; Licklider and Vezza, 1978, 1331).
The importance of collaboration
ARPA needed its network contractors to be able to work together as peers, since different tasks required different combinations of skills and no one contractor had the overall expertise or authority to direct the others as subordinates. Roberts often needed to combine the software engineering done at BBN with the theoretical and analytical work at NAC and UCLA. BBN, through its monitoring of the network, provided an important source of data for UCLA and NAC. This role was significant because there was no other source for data on the performance characteristics of a large distributed network, and because the theoretical tools of network analysis and simulation were in their infancy and needed to be checked against operational data (Heart et al., 1970).
In other cases, analysis and simulation could aid BBN's engineering work by predicting problems with the IMPs before they appeared in the network. For instance, when Robert Kahn of BBN and Vinton Cerf of UCLA worked together on network simulations, they were able to demonstrate that IMPs would be prone to congestion under heavy loads. One predicted problem was
"reassembly lockup." An IMP divides each message it receives from a host into a series of packets; when an IMP receives packets from the network, it must reassemble them into a complete message before it can pass them on to the host. The IMP has a limited amount of memory space in which to reassemble messages. Simulation showed that if this space filled up with half-completed messages, the IMP would become deadlocked: in order to pass on any of the messages it would have to assemble all of the packets for that message, but it would not be able to accept any more packets until it freed up space by delivering some of the messages. To avoid this problem, BBN revised the system to make sure IMPs reserved sufficient space for reassembling long messages. Another potential problem, "store and forward lockup," was also detected through collaborative experiments (McQuillan et al., 1972, 742).
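The deadlock just described can be illustrated with a toy simulation. This is a hypothetical sketch for exposition only (the function, message sizes, and buffer-slot model are illustrative inventions, not BBN's actual IMP code):

```python
# Toy illustration of "reassembly lockup": an IMP holds arriving packets in
# a fixed number of reassembly buffer slots and can only deliver a message
# to its host once ALL of that message's packets are present. If every slot
# fills with packets of still-incomplete messages, nothing can ever be
# delivered to free a slot: deadlock.

def simulate(buffer_slots, msg_sizes, arrivals):
    """msg_sizes: packets per message id; arrivals: (msg_id, pkt_no) in
    arrival order. Returns (list of delivered message ids, locked_up?)."""
    buffer, delivered = [], []
    for msg_id, pkt in arrivals:
        if len(buffer) >= buffer_slots:
            # Buffer full of half-completed messages: deadlock.
            return delivered, True
        buffer.append((msg_id, pkt))
        # Deliver the message if all of its packets have now arrived.
        if sum(1 for m, _ in buffer if m == msg_id) == msg_sizes[msg_id]:
            buffer = [(m, p) for m, p in buffer if m != msg_id]
            delivered.append(msg_id)
    return delivered, False

# Three two-packet messages, interleaved so that each message's first
# packet arrives before any second packet. With only 3 slots the buffer
# fills with half-completed messages and locks up; with a fourth slot
# held in reserve (the spirit of BBN's fix), every message completes.
sizes = {0: 2, 1: 2, 2: 2}
order = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
print(simulate(3, sizes, order))  # ([], True)
print(simulate(4, sizes, order))  # ([0, 1, 2], False)
```

The sketch shows why the problem only appears under heavy interleaved load, and why simply reserving enough space for a worst-case message was sufficient to break the deadlock.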
Occasionally the ARPANET depended on the active assistance of its users to overcome technical difficulties. For instance, the first version of BBN's IMP software did not effectively control the flow of data traffic, so that it was possible for the network to become overloaded and break down. This technical problem was "fixed" in the short term by users agreeing not to send data into the network too fast; their voluntary restraint allowed the network to function while BBN came up with a software solution (Norberg and O'Neill, 1992, 250). Since the ARPANET had no precedents in large-scale packet-switched networking to build
on, the flexibility that allowed users to help build and operate the system may have meant the difference between success and failure in those early years.
Relying on informal collaboration between peer groups did have its drawbacks. When the complexity of the system made responsibility for problems difficult to pin down, contractors tended to trust their own work and find fault with others. Frank described NAC as having an "adversarial relationship" with BBN over the issue of congestion, which BBN attributed to shortcomings of NAC's topology and NAC blamed on BBN's routing scheme (Frank, 1990, 22). Contractors might also be reluctant to share control over an aspect of the project. UCLA and NAC were jointly responsible for analyzing network performance, a relationship Kleinrock described ambiguously as "competitive but cooperative" (Kleinrock, 1990, 24). A more obvious conflict arose when BBN attempted to keep the source code for its IMP software from the other contractors, evidently hoping to preserve the company's advantage in producing a commercial version.6 ARPA eventually intervened and established that BBN had no legal right to withhold the source code and had to make it freely available (Kleinrock, 1990, 12; Schelonka, 1976, 5-19).
Contractors also had conflicting priorities for allocating their time and effort.
When analysis predicted that the subnet would have problems under future heavy loads, UCLA urged BBN to revise the IMP software right away, whereas BBN's priority was to get the basic system installed and functioning before making improvements. Kleinrock described UCLA's relationship with BBN as one of "guarded respect," adding that "BBN was not very happy with us showing up their faults and telling them to fix them" (Kleinrock, 1990, 26, 25).
6 Source code is the high-level, human-readable form of a computer program. BBN wanted to distribute only the low-level binary or executable code, which would run on the IMPs but could not be understood or modified by human users.
Within BBN there were conflicts between McKenzie's group at the Network Control Center, whose priority was to keep the system up and running reliably, and the IMP developers under Heart, who wanted to understand what was behind network problems so as to prevent future recurrences. When an IMP failed, the development team would often keep the machine out of commission for several hours while they debugged it, rather than immediately restoring it to service. Heart commented that the IMP developers came under increasing pressure as the network expanded and became more popular: "People began to depend upon it. And that was a problem, because that meant when you changed it, or it had problems, they all got mad. So that was a two-edged sword" (Heart, 1990). The IMP team eventually resolved this conflict by developing new software tools that would allow them to diagnose IMP problems without keeping the machines down (Ornstein et al., 1972, 252).
It is testimony to the cohesiveness of the ARPANET group that despite conflicts of interest between contractors, the dominant paradigm was still one of collaboration. Representatives of the three main contractors, Howard Frank of NAC, Robert Kahn of BBN, and Leonard Kleinrock of UCLA, described in a 1972 conference paper how the ARPANET had provided a rare opportunity for collaboration across disciplines. They perceived their joint effort as something unique in computer science, "the first attempt to bridge the gap among theory, simulation, and engineering" (Kahn, 1990, 19). They noted, "Our approaches and philosophies have often differed radically and, as a result, this has not been an easy or undisturbing process. On the other hand, we have found our collaboration to be extremely rewarding" (Frank et al., 1972, 255). While they differed in their preferences for analysis, simulation, computerized optimization, or engineering experiment, after two years of experience they were willing to
concede that "all of these methods are useful while none are all powerful. The most valuable approach has been the simultaneous use of several of these tools" (Frank et al., 1972, 267). If a lack of understanding between R&D specialties was the norm, ARPA's attempt to bring them together was all the more remarkable. Cultivating existing social networks, creating new management mechanisms to promote system-wide ties, and insisting on collaboration among groups all aided ARPA's social and technical integration of the system.
The political environment
One reason IPTO managers were able to create a research-oriented environment for their contractors was that ARPA's upper management buffered them from the political scrutiny ARPA faced as a government agency. ARPA had been shaped by political events from the start. The agency had been founded in 1958 in the aftermath of the Soviet Sputnik launch (its first mission was to manage the U.S. space program before the creation of NASA). The administrative structure of ARPA reflected perceived inter-service rivalries in research and development: the agency was based in the Office of the Secretary of Defense, rather than in one of the armed services, to give it a neutral position to coordinate research of general applicability to defense. The Director of Defense Research and Engineering, who reported to the Secretary of Defense, was responsible for assigning general research goals to ARPA, while ARPA directors had the authority to conceive and execute programs in line with those goals. This structure put relatively few bureaucratic or political constraints on ARPA.
In the late 1960s ARPA benefited from the Johnson administration's support for defense-funded research projects. In a September 1965 memo to his cabinet, President Johnson advocated the use of government agency funds to support basic research in universities. Noting that funding by various federal agencies
made up about two-thirds of total university research spending, he said that this money should be used to establish "creative centers of excellence" throughout the nation (Johnson, 1972, 335). He urged each government agency engaged in research to take "all practical measures ... to strengthen the institutions where research now goes on, and to help additional institutions to become more effective centers for teaching and research" (Johnson, 1972, 336). Johnson specifically did not want to limit research at these centers to mission-oriented projects: "Under this policy more support will be provided under terms which give the university and the investigator wider scope for inquiry, as contrasted with highly specific, narrowly defined projects" (Johnson, 1972, 335).
A few months later the Department of Defense (DoD) responded to Johnson's call with a plan to create "centers of excellence" in defense-related research. The programs would be locally managed by universities, but the DoD expected that they would contribute to its own research priorities (Department of Defense, 1972, 337). IPTO embraced this trend, establishing several "centers of excellence" in computing around the nation. It was the existence of these dispersed computing facilities that prompted ARPA to build a nationwide computer network.
As a publicly funded agency, ARPA had to operate in the context of national politics, which sometimes conflicted with the agency's own priorities. In particular, there were some in Congress who believed that defense money should not be spent on general research. ARPA's upper management became adept at buffering the agency's researchers from Congressional scrutiny and the need to provide explicit military justifications of their work. Thus, while ARPA played an important role in advancing basic research, the agency was careful to present Congress with pragmatic military or economic reasons for all of its
projects. For instance, ARPA Director Eberhardt Rechtin promised Congress in 1969 that the ARPANET "could make a factor of 10 to 100 difference in effective
computer capacity per dollar among the users" (U.S. Congress, 1969b, 809). Two years later the new ARPA Director, Stephen Lukasik, cited "logistics data bases, force levels, and various sorts of personnel files" as resources that would benefit from ARPANET access (U.S. Congress, 1971b, 652). Roberts himself always stressed potential cost savings in his public statements about the project (Roberts and Wessler, 1970; Roberts, 1973).
ARPA's concern for defense applications and cost savings was genuine, but the agency's disavowal of basic research was more rhetorical than real. Dr. John S. Foster, who as Director of Defense Research and Engineering oversaw ARPA during the creation of the ARPANET, assured the Senate in the 1968 budget hearings that
The research done in the Department of Defense is not done for the sake of research. Research is done to provide a technological base, the knowledge and trained people, and the weapons needed for national security. No one in DoD does research just for the sake of doing research (U.S. Congress, 1968, 2308).
If taken at face value this statement might have surprised IPTO's academic contractors, since the agency was at the same time assuring them of its support for basic research. Many of IPTO's computer science projects were proposed by the researchers themselves, or allowed researchers to continue work in areas
they had explored independently. Even though the resulting technologies often became part of the military command and control system, the defense rationale may have come after the fact. Leonard Kleinrock acknowledged, "Every time I wrote a proposal [for ARPA] I had to show the relevance to the military's applications," but claimed "it was not at all imposed on us": he and his colleagues
would come up with their own ideas and then propose military applications for the research (Kleinrock, 1990, 34).7 Taylor stated that at IPTO,
We were not constrained to fund something only because of its military relevance .... When I convinced Charlie Herzfeld, who was head of ARPA at the time, that I wanted to start the ARPANET, and he had to take money away from some other part of ARPA to get this thing off the ground, he didn't specifically ask me for a defense rationale (Taylor, 1989, 10-11).
Such freedom, combined with generous contracts, appealed to researchers. ARPA's skill at constructing an acceptable image of the ARPANET for Congress ensured liberal funding for the project and minimized outside scrutiny. In this way ARPA was able to generate support from both its political and research constituencies.
Implications of ARPANET strategies
The strategies of layering and collaborative organization were crucial to the success of the ARPANET. The layered model helped ARPA manage the complexity of the emerging network by allowing the system to be divided into modular parts, so that implementation tasks could be divided among different contractors. ARPA's informal, collegial management style provided a context in which network builders could share skills and insights without giving up their autonomy.
These strategies had lasting implications. ARPA did not invent the idea of layering, but the ARPANET's success popularized layering as a networking
7 Arguably the need to be able to identify a military application represents an "imposition" whether or not the researchers themselves recognized it as such. One of ARPA's academic contractors commented that although PIs at universities acted as buffers between their graduate students and DoD (allowing students to focus on the basic research aspects of the projects without necessarily having to confront their military implications), this only disguised and did not negate the fact that military imperatives drove the research (Cerf, 1990, 38). My point is not that ARPA contractors had absolute intellectual freedom but that they perceived ARPA as providing research funding with few strings attached, and that this perception made them more willing to participate in ARPA projects.
technique. The ARPANET's particular version of layering became one prominent model for network builders, and even when network experts differed on specific techniques they usually implicitly accepted the concept. Computer networking textbooks provide evidence of the popularity of layering as a conceptual scheme, with each chapter typically devoted to the functions of a different layer (Black, 1991; McConnell, 1988; Stallings, 1991; Tanenbaum, 1989).
The community formed around the ARPANET ensured that its techniques would be discussed in professional forums, taught in computer science departments, and implemented in commercial systems. Detailed accounts of the ARPANET published in the professional journals provided technical information and also legitimated packet switching as a reliable and economic alternative for data communications (Roberts, 1988, 149).8 ARPA encouraged its contractors to turn their ARPANET experience to commercial uses. The ARPANET trained a generation of engineers to understand and advocate these new techniques. ARPA's funding of PIs, careful cultivation of graduate students, and insistence that all contractors participate in the network project ensured that all the major academic computing centers had people committed to and experienced with the ARPANET technology.
8 In addition to publishing many individual articles on the ARPANET, computer journals and conference proceedings periodically highlighted ARPA's contributions to networking by featuring several ARPANET papers in a single issue. See especially the AFIPS Spring Joint Computer Conference, 1970 and 1972; AFIPS National Computer Conference, 1975; and Proceedings of the IEEE, 66(11), 1978.
From ARPANET to Internet
The years following the 1972 ARPANET demonstration saw a networking boom in the United States. During this period ARPANET technology was transferred directly to a number of new commercial and research networks and provided the inspiration for others. At the same time, ARPA initiated new network projects that applied ARPANET techniques to different transmission media. In the course of these experiments, ARPA managers became convinced it would be necessary to have networks that were technically different or administratively separate, yet could be linked to exchange data. Internetworking would require ARPA to develop new techniques as well as new structures for managing an ever larger and more complex system.
ARPANET's evolving role within the defense community
The ICCC demonstration had showcased the ARPANET's technical
achievements for an audience of computer experts. For its Defense sponsors, ARPA also had to provide evidence of the ARPANET's financial advantages and benefits for users. ARPA managers had always emphasized the potential cost savings of resource sharing, and they took pains to confirm and publicize the financial benefits of the network, especially to the agency's financial sponsor, the U.S. Congress. ARPA commissioned a study to determine the extent to which resources were used via the network during the year 1973: it found that the cost of providing these services was $2 million, while the cost of having sites purchase equivalent machines or services locally would have been $6 million. Since the cost of network equipment and operations averaged out to $3.5 million per year, IPTO Director Lawrence Roberts argued that the network was saving
ARPA half a million dollars annually (Roberts, 1974, 46). Roberts pointed out that several sites, such as the University of Illinois, were able to dispense with local time-sharing machines altogether and contract for basic computing services from remote sites (Roberts, 1974, 47). Sites that supplied computer services benefited as well: by 1973 several computer centers had become large suppliers of services, with some sites receiving as much as a quarter of their revenue from remote users (Roberts, 1974; Greenberger et al., 1973, 30).1 ARPA Director Stephen Lukasik, a staunch supporter of the network, invited the Air Force to make a six-month trial of the ARPANET, after which he reported to Congress that the Air Force had found the ARPANET "twelve times faster and cheaper than other alternatives for logistic message traffic" (Lukasik, 1973, 10).
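Roberts's savings figure follows directly from the three numbers in the 1973 study. A minimal check of the arithmetic (the variable names are illustrative, not from the original):

```python
# Back-of-the-envelope check of Roberts's 1973 cost argument,
# using the figures reported in the study (in millions of dollars).
service_cost_via_network = 2.0   # cost of resources actually used over the net
local_equivalent_cost = 6.0      # buying the same machines/services locally
network_operations_cost = 3.5    # annual cost of network equipment and operations

savings = local_equivalent_cost - (service_cost_via_network + network_operations_cost)
print(savings)  # 0.5 -> the "half a million dollars annually" Roberts cited
```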
The ARPANET gave researchers new opportunities, as every computer attached to the network became potentially available. The ARPANET also gave small research groups better access to ARPA funding, according to Lukasik:
Before the network we were in many cases forced to select research groups at institutions with large computing resources or which were large enough to justify their own dedicated computer system. Now it is feasible to contract with small talented groups anywhere in the country and supply them, rapidly and economically, with the computer resources they require via the network (U.S. Congress, 1972, 822).
The network made it easier for people to use the computer that best fit their needs, rather than the one that was closest. Freed from institutional boundaries, some professors and students chose to work with distant colleagues rather than those at their own university. Users initiated collaborative projects, such as the
1 Roberts later pointed out that such savings were possible because at the time ARPA researchers were using mainframe computers. Since mainframes provide computing power in large fixed amounts, it is difficult to match the amount of computing power to the needs of the local users, and therefore it may be more cost-effective to obtain computer resources (or to sell excess resources) over a network. Roberts concluded that, once a site could afford to give each person his or her own computing resources in the form of a microcomputer, using a network to provide access to computing power no longer offered an economic advantage (Roberts, 1988, 158). Presumably there would still be an advantage in gaining access to information.
development of the Common LISP programming language, that would not have been possible without a means for extensive ongoing communication between many geographically separated groups (Sproull and Kiesler, 1991, II, 32). One of the most heavily used services was the Network Information Center at SRI, which provided advanced text editing and bulletin board systems that were used by special interest groups to prepare, distribute and store their ongoing communications (Karp, 1973, 272).
Yet the popularity of the ARPANET masked the fact that the network did not fill the role its creators had planned. People did use the network to share machines and data, but, to the surprise of all, the network's most popular and influential service was one that had been added as an afterthought: electronic mail. Roberts had not included electronic mail in the original plan for the network, stating in 1967 that the ability to send messages between users was "not an important motivation for a network of scientific computers" (Roberts, 1967, 1). When it came time to create the host software, the Network Working Group had focused on protocols for remote login and file transfer, not electronic mail. ARPANET email had an inconspicuous beginning in July 1971 when two BBN programmers came up with an experimental "Mailbox" program. BBN's Frank Heart recalled, "When the mail was being developed, nobody thought at the beginning it was going to be the smash hit that it was. People liked it, they thought it was nice, but nobody imagined that it was going to be the explosion of excitement and interest that it became" (Heart, 1990, 32). Before long electronic mail eclipsed the interactive use of remote computers (supposedly the main purpose of the network) in the volume of traffic it generated.
The idea of electronic mail was not new; at least one time-sharing system had a mail service to send messages between users on the same machine (Norberg
and O'Neill, 1992, 254). Email had many obvious advantages: it was faster than postal mail, cheaper than long-distance telephone, and offered ARPANET users access to an impressive segment of the computing research community. Why then was its popularity such a surprise? Prior to the ARPANET, electronic mail had not been available on very many computer systems or for very many years, so there was no precedent for the widespread use of email. In addition, the rationale for building the ARPANET focused on providing access to computer resources, not people. There was a clear demand for computers, but no one at ARPA seemed to view the cost, timeliness, or convenience of interpersonal communication as problematic, so they were not looking to the network for a solution. Roberts made analyses comparing the cost of using the network with the costs of sending computer data by various media, but never compared the cost of using the network for messages with the cost of other means of human communications. Finally, the ARPANET builders do not seem to have considered how using a system of networked computers would be different from using the local computer center.2 They were focusing on computing activities such as interactive login sessions or file transfers; these processes gave users the same result regardless of where the computers were located. But in the world surrounding the network, distance was still important: it was still much easier to communicate with a local person than with someone across the country. Therefore, if the computer offered a means for sending messages to other users, it would make a difference where the computers were located. From the computing point of view, the fact that the network spanned the country was incidental; from the communications perspective it was crucial.
2 Other than noting that time-zone differences could be used to spread out peak usage loads.
BBN's experimental mail service was instantly popular, and new protocols were developed for network-wide use. ARPANET users came to rely on email in their day-to-day activities, and the availability of email attracted new users to the network (Frank, 1990, 31). ARPA Director Stephen Lukasik adopted email for communication within ARPA and between ARPA and its contractors. Roberts
recalled, "Steve Lukasik decided it was a great thing, and he made everybody in ARPA use it. So all these managers of ballistic missile technology, who didn't know what a computer was, had to start using electronic mail" (Roberts, 1988).
Through their wholehearted embrace of email, ARPANET users shaped the way in which the ARPANET and networks in general were viewed. By the time other groups began to build large packet-switched networks, they were
envisioning a communications medium; in fact, the first commercial networks registered with the FCC as common carriers. Email provides an example of how people can construct meanings for systems in the process of using them. Whatever its creator's intentions, an artifact or a system may be used for unexpected purposes that people find more valuable. This kind of "interpretive flexibility" varies with the technology; specialized artifacts may be more difficult for users to turn to their own purposes than generic ones. The fact that the ARPANET's popularity was so tied up with the unanticipated demand for electronic mail shows that the system builders owed their success less to an accurate prediction of usage than to their determination to build a flexible, general-purpose system that offered users the opportunity to add new services.
3 The adoption of a new communications medium might be expected to affect ARPA's management style (see, e.g., Yates, 1989). However, any such change during this period is difficult to detect given the frequent turnover of ARPA managers, the rapid growth of IPTO, and the agency's move away from "basic" research under pressure from Congress.
Attempts at technology transfer
ARPA's normal practice was to try to transfer successful technologies, either directly to the armed forces or to civilian businesses that would incorporate the new technologies into products that could be used by the military. Transferring the ARPANET would be problematic: the network was not merely a prototype (something that could be disposed of once it had demonstrated the technology) but had become a vital communications tool for researchers; yet providing routine communications services was not part of ARPA's mission. Thus ARPA needed both to transfer its networking techniques to the military (and its civilian contractors) and to find an operator for the ARPANET itself.
To share its technical expertise, ARPA established a Distributed Information Systems Project, which provided networking advice to the Air Force logistics and intelligence agencies, the Air Weather Service, the National Science Foundation, the Federal Reserve Board, and NASA (U.S. Congress, 1972, 821). ARPA gave three IMPs to the Defense Communications Agency, whose function was to supply long-distance communications to DoD users, so that DCA could start an experimental network connecting the Worldwide Military Command Control System, a collection of military computers and databases around the world (U.S. Congress, 1972, 822). ARPA also encouraged commercial exploitation of its networking techniques.
Finding a home for the ARPANET itself was less simple, underlining how closely the ARPANET system was adapted to its original organizational environment. In 1972 ARPA and BBN (which operated the network) began considering options for transferring the ARPANET to a government agency or a commercial carrier, so that the network could grow into a national public service (Ornstein et al., 1972, 253; McQuillan et al., 1972, 752). After discussing the matter
with the Federal Communications Commission and other agencies, ARPA decided the network should be turned over to a commercial operator that would buy the network hardware from ARPA, receive an FCC license as a specialized common carrier, and supply communications services to the government and other customers (U.S. Congress, 1972, 822). ARPA commissioned Paul Baran to study the situation, and his 1974 report concurred that government packet switching needs (except for experimental work) should be met by competitive commercial suppliers, rather than a DoD-run network. Baran noted that if DoD used commercial networks it would stimulate the U.S. networking industry and make it easier for DoD users to communicate with non-DoD sites that did not qualify for ARPANET access (Kuo, 1975, 13). But despite these intentions, ARPA was unable to find a commercial operator for the network. AT&T, as the nation's largest carrier, seemed the most likely candidate, but showed no interest, even though Roberts and network designer Howard Frank met with AT&T managers to explain how the network could be scaled up for commercial use (Roberts, 1978, 49; Frank, 1990, 26-27; Kleinrock, 1990, 36).
In 1975 ARPA director George Heilmeier decided "after long discussion" on an alternative plan. Operational responsibility for the ARPANET would be transferred to the Defense Communications Agency, which would operate the network for three years (ACM, 1975, 32). ARPA would continue to provide funding and technical direction, and access would be open to DoD users and any others approved by DCA (Kuo, 1975, 13). The agreement left the fate of the network after three years unresolved, suggesting that ARPA still hoped to find a home for the ARPANET outside the government.
DCA officially assumed control on July 1, 1975, taking on a system that had grown to 60 IMPs and over 100 hosts. With DCA as manager, the network's
identity shifted away from its experimental origins and toward routine military operations (ACM, 1975, 32). Under ARPA's administration users had not been billed for use of the network, though various hosts had their own accounting systems; DCA instituted a new accounting system for ARPANET services so that users would pay for their use of the network, with no loss or profit for DCA (Kuo, 1975, 13; ACM, 1975, 32). ARPA had struggled in the early years to get people to use the network, so limiting network access had not been a concern. DCA, by contrast, tried, in theory at least, to limit civilian access to those users who had no commercial alternative available, warning that "the ARPANET is an operational DoD network and is not intended to compete with comparable commercial service" (ACM, 1975, 33).4
Military users made increasing use of the network now that they could go through DCA; as Robert Kahn, then a program manager at IPTO, observed, "it was their normal way-they didn't have to deal with a research agency" (Kahn, 1990, 40). Army Colonel David Russell, who became director of IPTO a few months after the DCA takeover, helped accelerate contacts with the military (Kahn, 1990, 39). Russell made a point of using the ARPANET as a "testbed" for new command and control systems. Hardware and software under consideration by the services were made available through the network, allowing military users to test new products in a complex, distributed environment before making purchasing decisions (Kahn, 1990, 38; Klass, 1976, 63). George Heilmeier, who became Director of ARPA in 1975, saw the testbed as a way for "the users, not the engineers" to evaluate new systems "in real command and control scenarios while injecting the all-important human factors"
4 Anecdotal evidence suggests this rule was rarely enforced, possibly because no commercial service could be "comparable" in the sense of offering access to so many research sites.
(Heilmeier, 1976, 6). By 1976 both the Navy and the Army were making plans to use the testbed.
The technical components of the ARPANET system were soon adopted by military users; its cultural aspects were not so easily transferred. As time went on, the ARPANET became the focus of cultural tensions between ARPA researchers and the mainstream of DoD. Military users wanted a reliable, stable, and secure network service, while researchers wanted the freedom to modify network protocols and perform disruptive experiments, such as artificially disabling some nodes to observe the effect on network performance (Harris et al., 1982, 78). In addition, military users were concerned about possible security hazards posed by civilian access. DCA reported the increasing threat of "intrusion by unauthorized, possibly malicious, users ... as the availability of inexpensive computers and modems have made the network fair game for countless computer hobbyists" (Harris et al., 1982, 78). As a BBN manager put it, "The research people like open access because it promotes the sharing of
ideas .... But the down side is that somebody can also launch an attack" (Broad, 1983, 13).
DCA and ARPA took a technical approach to resolving these social tensions.
They decided to split the ARPANET into two distinct networks. One of these, called MILNET, would handle the ARPANET's military traffic; the other would retain the name ARPANET (it was briefly known as "R&DNET") and would continue to be used by ARPA and its contractors for research and development. To minimize disruptions for military users, new networking technologies would be created and tested on ARPANET before being transferred to MILNET. The decision to split the network was announced on October 4, 1982, and carried out over the following year. Each host and IMP was assigned to either MILNET or
the new ARPANET, and telephone links were rearranged so that only IMPs from the same network would be interconnected. A few hosts were attached to both
networks in order to pass mail between the two. Thus MILNET represented a complete transfer of a network system to the military, while ARPANET continued to function as ARPA's own research network, though nominally administered by DCA.
ARPANET's national influence
The 1972 demonstration of the ARPANET had a dramatic impact on the world of computing beyond DoD. In engineering terms, the demonstration provided an "existence proof" of the feasibility of packet switching-that is, the existence of the functioning network proved the validity of the technique far more compellingly than any equations on paper. Though ARPANET participants had reported on the status of the developing network at various professional conferences, it is evident from the response to the demonstration that their colleagues had been skeptical until they saw the network in action. "It was the watershed event that made people suddenly realize that packet switching was a real technology," recalled BBN's Robert Kahn (Kahn, 1990, 3). Network Working Group member Vinton Cerf noted "a major change in attitude" among "diehard circuit switching people from the telephone industry," the same experts who had scoffed at Baran when he first suggested using packet switching (Cerf, 1990, 25-26). The sheer complexity of the system, Roberts believed, was enough to make engineers skeptical until they saw it for themselves:
It was difficult for many experienced professionals at that time to accept the fact that a collection of computers, wide-band circuits, and minicomputer switching nodes-pieces of equipment totaling well over a hundred-could all function together reliably, but the ARPANET