
Department: Anecdotes
Title: Origins of the Domain Name System
Authors: Oscar M. Bonastre, University Miguel Hernández; Andreu Veà, Internet Society Spain (former executive President)
Editor: David Walden, dave@walden-family.com

We became better acquainted with Paul Mockapetris1 and more interested in the history of the Domain Name System (DNS) when we were in
a conference panel session with him and Mike Roberts (the first chief executive of ICANN) in 2009 (see Figure 1). In 2012 we promoted Paul
for an honorary degree (Honoris Causa) from Spain’s Miguel Hernández University in recognition of his leading contribution to the invention
of DNS. With our heightened awareness of DNS, in 2013 we began to consider submitting a compendium paper on the history of DNS to the
IEEE Annals of the History of Computing; this idea was encouraged by then editor-in-chief Lars Heide. With help from Paul, we decided to
seek input on the history of DNS from Paul himself, Vint Cerf, Steve Crocker, and Tan Tin Wee. However, for personal reasons, there was a
several-year delay before we were able to complete a finished draft of this paper in 2018. Curiously, not finishing our paper until 2018 seemed
apropos, as 2018 was the 35th anniversary of the creation and 1983 announcement of DNS.2,3 It was also the 20th anniversary year of the
founding of ICANN (which now administers DNS unique identifiers).

Figure 1. Deliberating about new Internet trends at iSummit 2009 conference; from left to right, O. M. Bonastre, M. Roberts, P.
Mockapetris and A. Veà (picture courtesy of Univ. Técnica Particular de Loja, Ecuador).

In some sense, the Domain Name System needs no introduction. Any Internet user gets a glimpse of it every time they see a URL. What is
perhaps less realized is how much the creation of the DNS enabled development and evolution of the Internet and Internet applications such as
the World Wide Web (Web) ― something that has become manifestly clear to us through our several decades of research on the origins of the
Internet.

The following contribution to the history of DNS consists of descriptions by the aforementioned individuals:
 Vint Cerf, who is well known for his leading contribution to TCP/IP and is a former chair of ICANN, describes the history of DNS
and its governance, including the expansion of the top level domain space, the battle over intellectual property
protection, and the struggle for control over content found in the Web as indexed through URLs structured by the DNS.
 DNS inventor Paul Mockapetris describes the technical conception of DNS up to the time when the computers attached to the Internet
began to rely on DNS as a production system and then discusses what happened next.
 Steve Crocker, well known for creating the philosophy of the Request for Comments series and another former chair of ICANN,
describes the initial concern about DNS security and summarizes how the decade following the discovery of DNS cache poisoning was occupied with
specifying how what came to be known as DNSSEC (DNS Security Extensions) might work.
 Tan Tin Wee, who created multilingual DNS, discusses the origins of the DNS extensions to handle multilingual queries.

History and Governance (Vint Cerf)

Here I outline how I experienced the origins of DNS governance and the expansion of the top level domain space. Originally developed in response to
a clear need to scale the name-to-address mapping system of the Internet, the DNS has become a complex and multi-faceted phenomenon ever
since a charging and registration system for domain names was introduced around 1992.

In the late 1960s, the US Defense Advanced Research Projects Agency (DARPA) sponsored the development of the ARPANET.

This packet-switching system allowed host computers of all kinds to interact through a homogeneous network of packet switches called
Interface Message Processors, which were developed by the Bolt Beranek and Newman company. The protocol used to connect the hosts to
each other over the ARPANET was called the Network Control Protocol (NCP), and it had the property that it used numerical addresses to
refer to the hosts in the network. For convenience, text names were adopted for each destination host and mapped through a simple table from
name to address.

Host names such as UCLA or MIT or USC-ISI were used in reference to the computers that were connected to the ARPANET. The table
containing the mapping was a simple text file called “hosts.txt” and it was assembled and distributed regularly by the SRI International
Network Information Center (SRI-NIC) team in accordance with instructions from Jonathan Postel, the so-called “numbers czar” of the
ARPANET.
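To make the flat-table arrangement concrete, here is a minimal sketch (in Python) of a hosts.txt-style lookup; the file contents, addresses, and helper function are invented for illustration and are not the historical SRI-NIC tooling.

```python
# Minimal sketch of a flat hosts.txt-style lookup. The table contents and
# addresses below are invented for illustration, not the historical file.
HOSTS_TXT = """\
10.0.0.1   UCLA
10.1.0.2   MIT
10.2.0.3   USC-ISI
"""

def parse_hosts(text):
    """Build a name -> address table from the flat file, one entry per line."""
    table = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            address, name = parts[0], parts[1]
            table[name.upper()] = address
    return table

hosts = parse_hosts(HOSTS_TXT)
print(hosts["USC-ISI"])  # -> 10.2.0.3
```

The whole table had to be redistributed to every host whenever anything changed, which is precisely the scaling problem described below.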

Very soon after the ARPANET was made operational in 1969, networked electronic mail was demonstrated by Ray Tomlinson and email
addresses took the form “cerf@usc-isi” for example. Concurrent with the expansion of the ARPANET, DARPA also sponsored research into
packet network intercommunication that led to the design of the Internet in 1973. Over a period of ten years, the Internet experiments
continued until the Internet was formally launched on January 1, 1983, using the TCP/IP protocols designed to allow many kinds of packet
networks to be interconnected.

Postel continued to manage address and naming assignments, delegating the administrative effort to the SRI-NIC. It became clear in the early
1980s that the Internet phenomenon was taking off and expanding at such a pace that the system of name to address mapping would have to
change to operate at a vastly larger scale and accommodate a vastly increased rate of change. The old “hosts.txt” table could not be kept up to
date in a timely way.

With support from DARPA, the DNS was developed by Paul Mockapetris in collaboration with Postel. The hierarchical nature of the DNS
allowed for a highly distributed method of management. Top level domains such as “com,” “net,” “org,” “mil,” “gov,” “edu,” and “int”
formed the initial “root zone” of the DNS. The managers of each of these top level domain names could then delegate management of second
level domain names such as “example.com” or “usc-isi.edu” or “nsf.gov” to other parties. Third level domain names could be managed by yet
other delegations. The older host names such as UCLA or SRI became “UCLA.EDU” and “SRI.COM.” During this period, Postel became
known as the Internet Assigned Numbers Authority (IANA). The “IANA functions” included top-level domain name management (including
changes to the root zone), allocation and assignment of Internet Protocol address space (IP addresses), and recordation of parameter names and
values needed for the TCP/IP protocol suite.
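The delegation just described can be pictured with a toy in-memory tree; the zone contents and addresses below are invented for illustration, and real name servers hold separately administered zones rather than one shared structure.

```python
# Toy sketch of hierarchical delegation: each level of the tree is managed by
# a different party. Names and addresses are invented for illustration.
ROOT = {
    "edu": {                              # managed by the .edu administrator
        "ucla": {"A": "192.0.2.10"},      # delegated to UCLA
        "usc-isi": {"A": "192.0.2.11"},
    },
    "com": {
        "example": {"A": "192.0.2.20"},   # delegated to the example.com holder
    },
}

def resolve(name, tree=ROOT):
    """Walk from the root one label at a time, right to left."""
    node = tree
    for label in reversed(name.lower().rstrip(".").split(".")):
        node = node[label]                # each step crosses a delegation boundary
    return node["A"]

print(resolve("UCLA.EDU"))               # -> 192.0.2.10
```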

A collection of thirteen organizations were invited by Postel to become “Root Servers” for the Domain Name System.
In effect, the root zone, specified by IANA and generated by SRI International, would be replicated and housed in thirteen servers. Later,
technical developments permitted replication of hundreds of these servers around the world.

During the mid-late 1980s the responsibility for Internet policy moved from DARPA alone to include a group of US agencies that included the
National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA) and the Department of Energy (DOE), all of whom
were involved in implementing networks connected into the Internet. NSF inherited responsibility for managing the SRI-NIC activity that
included managing the contents of the DNS Root Zone.

By 1991, the ARPANET had been retired and NSF re-competed the Network Information Center functions to create something it called the
INTERNIC and the responsibility for managing the root zone and three of the top level domains of the DNS (.com, .net, .org) fell to a
company called Network Solutions. In 1992 NSF concluded that spending valuable research dollars on the administrative function of domain
registration was not the best use of its resources and agreed to allow Network Solutions to charge users for registration of domain names. The
initial charging structure was $50/year for two years for each second level domain name (such as “foo.com”).

Effects of Monetization

In the experimental period of Internet’s evolution, all costs were borne either by US Government agencies or by the participating institutions
that installed local infrastructure to connect to the regional and national (and eventually international) elements of the Internet’s network of
networks. Domain names were free of charge as were allocations of IP address space. The shift to charging for domain names in 1992,
followed by the arrival of the World Wide Web triggered explosive growth in the number of domain names registered and in the speculative
market in domain names that emerged. Domain Name management was becoming a big business as was speculative registration of names
thought to be of potential resale value to others.

“Domaining” became a popular activity among a small number of speculators. Some “domainers” held hundreds of thousands or more domain
names and evolved sophisticated software to analyze activity associated with their domains and also to watch for the expiration of registrations
and to re-register those that might be of interest for themselves or for clients. There were many scaling side effects to this aspect of the domain
name business and a variety of efforts were undertaken to manage these. One serious side-effect arose from the potential expiration of a
domain name. If a digital object in the World Wide Web had been identified with a Uniform Resource Locator (URL), such as
www.example.com/example-filename, and if the registration of that domain name expired and was not renewed in a timely way, the domain
name would no longer resolve to an IP address and the file, “example-filename” would no longer be reachable using the URL. The artificial
“death” of URLs has the side-effect of losing information or, perhaps worse, potentially resolving to an unexpected location and/or digital
object if a new party obtains the domain name and happens to associate a new object with the same file name as before. Old references to the
object might either not resolve at all or might resolve to something different than was originally intended. Such registration and expiration
processes have an impact on the longevity of digital objects created and placed in the World Wide Web. Around 1995, Science Applications
International Corporation (SAIC) acquired Network Solutions.


By this time, domain name management had become a big and growing business. The so-called “dot-boom” commenced with the initial public
offering of Netscape Communications in August of that year. SAIC took advantage of that moment and sold Network Solutions to VeriSign in
March 2000 for $19.3B!

Institutionalization of the IANA Function

As the Internet began to take off and domain name management was becoming a big business, Postel and the legal department at USC
concluded that the risk factors for disputes were growing and that the function that had been performed under government contract by Postel
and his team should be spun off and institutionalized. There ensued a period of turmoil and debate in the Internet community over how this might
be accomplished. An International Ad Hoc Committee (IAHC) of interested parties formed around the question of institutionalization, driven
in part by the increasing value being assigned to domain names that had become signposts (and trademarks) in the World Wide Web that was
driving the growth of the Internet after the appearance of the MOSAIC browser. Internet and the World Wide Web had become highly visible
phenomena. When it appeared that the IAHC was proposing to locate a domain management institution in Switzerland, the American
Congress took note and the ensuing rampage led to the Clinton White House where Ira Magaziner was charged with dealing with the problem.
After widespread consultation, Magaziner produced a “Green Paper” that drew yet more comment, followed by the “White Paper” that
offered a path towards institutionalization of domain name management but under the oversight of the US Government in the form of the
National Telecommunications and Information Administration (NTIA) of the US Department of Commerce (DOC).

During 1998, NTIA managed a process for accepting proposals to operate the IANA function under contract. The winning proposal came from
a group calling itself the Internet Corporation for Assigned Names and Numbers (ICANN). Tragically, Jon Postel, who had been expected to
become the Chief Technology Officer of ICANN, passed away in October 1998, just a couple of weeks before ICANN was formally launched.

The newly formed ICANN Board, chaired by Esther Dyson, selected Michael Roberts as its first CEO.

The IANA functions included root zone management and ICANN provided to Verisign information needed to update the root zone file. A
cooperative agreement was also created between NTIA and Verisign for the creation and distribution of the master root zone file. Verisign also
operated one (subsequently, two, owing to an acquisition) of the thirteen root servers. ICANN’s proposal responded to a set of desiderata
specified in the White Paper, and the governance implications of these desiderata will receive attention in the subsections that follow.

General View of Governance

“Governance” is a potentially vast topic and its application to the Domain Name System specifically does not reduce its scope by very much.
There have been and will continue to be arguments over what is meant by governance: who is affected? What rules apply? How are they
enforced? Who makes the rules? How are disputes over rules or their violation resolved? How is the transnational nature of the Internet and its
use accommodated? I am sure the readers of this essay can make up a much longer list of questions, some of which have not found answers
despite a 40+ year history of Internet evolution. One thing is clear; “governance” is not the same as government. Government may be involved
in governance but it need not necessarily be. In an attempt to get at the ways in which governance might apply in the Domain Name context, I
have adopted this pragmatic characterization of governance:


“Governance” expresses what is “permitted, forbidden, required and/or accepted” with regard to practices in some context. A full rendering of
governance would have to describe not only the individuals or entities (including institutions) that are governed, but also by whom and by
what means. It would also have to include some explanation of enforcement and also the means by which the governing rules are created,
amended and adopted.

Systems of rules may be adopted by entities other than governments to constrain and define the practices that are allowed in some context.
Organizations may be formed to provide governance of an activity. The rules of golf are governed in the United States by the US Golf
Association [www.usga.org]. The technical rules defining the functional operation of the Internet and the World Wide Web are defined, inter
alia, by the Internet Engineering Task Force and the World Wide Web Consortium. What is permitted in a residential neighborhood may be
governed, in part, by the Homeowners’ Association, which spells out, among other things, rules for the appearance of the homes and gardens
making up the neighborhood.

In some systems of governance, the governed parties are uniform in nature. The citizens of a country are generally treated as a uniform set of
individuals, governed by the common laws of the land. In the Internet, widely diverse actors are drawn together to create, operate and use the
network of networks and the devices they interconnect. These actors have varying structure, scale and interests and range from governments
and corporations to institutions and individuals. Attempts to define the taxonomy of the potential stakeholders with an interest in some aspects
of the Internet yield results ranging from vastly oversimplified to impossibly detailed. The extreme case is that every entity or individual with
an interest in the Internet is in his or her own unique stakeholder category.

Examples of multiple governance regimes are readily available. A company that offers Internet access may find itself subject to a wide range
of governance rules. As a corporation, it may be subject to international, national, regional, provincial or even local rules for incorporation and
operation. It may be subject to taxation rules regarding, inter alia, its profits, its property, and its assets. It will be subject to technical
obligations for interoperability’s sake. It may be subject to rules regarding pollution, management of human resources and environmentally
sound practices. It may be subject to telecommunications regulation in some jurisdictions, depending on the exact nature of its offerings. If it
also provides applications (e.g. email, cloud computing, software-as-a-service, mobile apps, etc.), it may be subject to various transparency
requirements regarding user privacy, enforcement requirements regarding copyright or trademark protection or restrictions on the export of
certain kinds of information.

Many distinct entities may be involved in applying and enforcing these hypothesized restrictions and it is even possible that there will be
inconsistencies and conflicts among the rules put forth by distinct governance agents. The processes by which governance rules are created
and applied may also vary from regime to regime.

Multi-stakeholder Governance

In rough terms, “multi-stakeholder governance” describes a practice in which all interested parties have a say in the development of
governance policy and its implementation. As always, the Devil is in the details. Answers must be found to such questions as: “Who or what is
a stakeholder?”, “What authorities and obligations do stakeholders have?”, “What categories of stakeholders are there?”, “How does an entity
qualify as a stakeholder?” In this essay, it is assumed that the term “stakeholder” refers to a party or entity with an interest in some aspect of
the Domain Name System.

To make matters more complex, many for-profit and non-profit organizations span a wide range of functionalities and interests, making
categorization and membership in only one stakeholder category problematic.

Technological Governance

In principle, the technical standards associated with the Internet are voluntary in nature ― they are not forced on anyone. Operators or service
providers use the protocols that are needed for services they wish to offer. The Internet Engineering Task Force (IETF) and its associated
Internet Architecture Board (IAB) are the primary arbiters of the Internet architecture and the standards associated with it. Both of these
unincorporated organizations rely on the Internet Society (ISOC) to house and support their work. The World Wide Web Consortium (W3C)
is another voluntary standards entity that caters to the protocol layers and interface standards that are needed to operate the World Wide Web
atop the Internet.

There is coordination between the IETF and W3C, sometimes through joint working groups and sometimes simply as a result of highly
overlapping membership in working groups. Perhaps the best way to characterize “technical governance” is to suggest that multiple standards
development organizations and cooperating industry consortia develop standards and conventions. These are voluntarily adopted, in general,
by providers of equipment, software and services. Some government jurisdictions may reference standards or conventions and make them
mandatory, as might be the case with procurements for products or services. There is a great deal of flexibility and loose-coupling in such a
system. Interoperability by way of voluntary standards accommodates a wide range of incentives for the cooperating parties to interwork and
this has been the case for the Internet ecosystem as well.

As a case in point, the Internet Corporation for Assigned Names and Numbers (ICANN) provides some useful examples of the effects of
technical governance.

Multi-stakeholder Policy Making

What is unusual about the multi-stakeholder approach to policy making is the intention that all parties at interest have a fair and egalitarian
opportunity to influence policy development. Given the dynamic range of interested parties and their capacities to engage in policy debates,
one can expect, at best, to achieve an approximation of the ideal outcome.

ICANN was created incorporating the concept of multiple stakeholder supporting organizations that included the Protocol Supporting
Organization (PSO) representing the technical standards community, the Domain Name Supporting Organization (DNSO) representing
domain name registries and registrars, and the Address Supporting Organization (ASO) representing the Regional Internet Registries. These
supporting organizations were charged with populating the Board of Directors.

In addition, provision was made for the election of Board members by direct vote of the global general public. The root server operators and
national governments constituted advisory bodies without franchise except for the appointment of liaisons to the Board. All of these
supporting organizations and advisory bodies would participate in policy-making discussions. It was hoped that each supporting organization
could self-organize. Early experiments with this design led to breaking up some of the supporting organizations and to creating substructure
for some of them. The Domain Name Supporting Organization (DNSO) was broken into a Generic Domain Name Supporting Organization
(GNSO) and a Country Code Names Supporting Organization (ccNSO). A nominating committee, populated by various parts of
the ICANN constituencies, was formed in place of general public elections. The At-Large Advisory Committee was charged with developing a
global and regional supporting organization structure with the authority to fill one seat on the Board of Directors, which, as of this writing, it
has accomplished.

The GNSO has evolved substructure for various stakeholder subgroups (Registries, Registrars, Commercial and Non-Commercial) but this is
becoming complex and a challenge for organizations with overlapping interests in the various supporting organizations and advisory groups. A
Stability and Security Advisory Committee (SSAC) has been formed to provide expert advice regarding technical risks associated with
proposed policy choices. The PSO has been replaced by a Technical Liaison Group and the IETF, each of which appoints a non-voting liaison
to the Board, as does the Root Server System Advisory Committee.

Policy development at ICANN has evolved over time, and while the organization has made a significant effort to accommodate the many and
diverse interests of parties involved in the making, operating and using of the Internet, it continues to be a complicated process that
presents the Board with sometimes conflicting input on matters for which decisions must be made.

It may be timely to explore somewhat less rigid policy-making processes that organize around issues rather than around roles. There is a
regular process incorporated into the ICANN system that requires review and analysis of the operation of the organization and its manifold
parts that would allow for alternative consideration and perhaps even experiments to explore a new mechanism for a particular matter. Any
changes to the current practices, however, would need to take into account questions about the means by which the Board is formed, the
parties with any kind of franchise, determination of consensus on policy in lieu of the present methods of weighted voting in the GNSO and
other decision mechanisms in other supporting organizations and advisory committees.

Expansion of the Top Level Domain Space

From its creation, ICANN has had an objective of increasing competition in the domain name space. After nearly 15 years of operation,
ICANN has begun the process of expanding the generic Top Level Domain space by a significant amount. On the order of 2000 top level
domain names were proposed to ICANN and some of these have been released for implementation. New domain names create new risks.
Users registered in foo.com may find themselves feeling that they should protect their interests by also registering “foo” in all new top-level
domains. While this seems understandable, some users see this as forcing them to register at the second level in all or many of the new Top Level
Domains, or at least to police registrations in many or all of them. Trademark holders are particularly concerned about the implications of such
registrations and the cost of policing them or registering defensively. The dynamics of domain name usage may well change. Some people
worry that confusion may result but new technical means such as the Domain Name System Security Extensions (DNSSEC) and digitally
signed certificates may allow users to confirm they have reached the correct destination.

Search engines may be helpful to users looking for particular correspondents, information, products or services, despite the potential
duplication of second or third level domain labels since more than just a domain name becomes part of the specification of the desired
destination.


Recourse

Policies and policy-making can have negative effects on the users of the Internet. In all policy arenas, the question of recourse arises when a
policy has a side effect, intended or not, that produces a result considered harmful by the parties affected. In democratic
societies, there are many mechanisms intended to act as safety valves for negative effects. ICANN has incorporated a number of them in its
processes, including an ombudsman, a Reconsideration process, an Independent Review Panel process, and various dispute resolution
processes associated in particular with domain name assignment. As the Internet increases in daily importance and its many policy-making
and enforcing actors try to deal with problems arising in the use of the Internet, it seems inescapable that there will be a need for a wider range
of mechanisms for dispute resolution and law enforcement. Inasmuch as the Internet’s infrastructure and users are found in many different
jurisdictions, there is a natural need to find efficient means for coping with these problems on a global scale.

In some cases, problems may be resolvable using local means. Where jurisdictional boundaries lie between disputing actors, alternative
dispute resolution methods such as arbitration may prove more effective and less costly than litigation. This conclusion appears to be
substantiated by the successful use of the Uniform Dispute Resolution Policy (UDRP) developed by the World Intellectual Property
Organization (WIPO) in connection with domain name disputes. The more recent development of a Uniform Rapid Suspension (URS) system for the
most serious trademark infringement issues is another example.

It seems inescapable that more development will be needed not only for matters specific to domain names, but more generally to deal with
abuses that occur in the use of the Internet. Users of the Internet will be less likely to use it if they feel they have little or no recourse for harms
they experience or believe might occur.

It is a vital challenge to the Internet Community, writ large, to develop means to protect the interests of the users of the Internet, to protect
them from harm and to provide recourse when protections fail.

To conclude, the evolution of the Internet and the DNS continues apace. There are other potential ways to associate identifiers with IP addresses;
the Handle System is one example. Specialized identifiers found in the context of Facebook, Google+, and Twitter are all
eventually resolved to IP addresses associated with Web-based servers. The Internet of Things will create yet additional opportunities for
creating, managing and resolving new identifier spaces. All of this suggests that the Internet will continue to evolve and its environment will
continue to adapt to new possibilities, implying that policy and policy-making will need to evolve along with these and other changes.

Technical Conception and Evolution (Paul Mockapetris)

The DNS is a fundamental building block of today’s Internet. Here I sketch its origin and evolution. Origin refers to its history up to the time
when the first machines came to rely on it as a production system, roughly 1986. The evolution phase takes us to the current decade.

Origin

There are varying accounts of how the DNS came to pass, but the start for me was when Jon Postel suggested that I take a look at several
proposals for a next-generation system to replace the existing HOSTS.TXT central directory.4 I was to try to figure out a compromise,
combination, or champion. One of the most common questions I get is, “Why you?”, and the answer is simple. At the time it seemed like a
nice little problem for a new PhD — certainly not as significant as other problems of the time, such as how fixed-length records (e.g., card
images) would be sent over TCP. This phenomenon was not unique to me. In their Turing lecture, Vint Cerf and Bob Kahn acknowledged
that they could never have done their TCP/IP work had its future significance been fully appreciated. In any case, I proposed a clean-slate design
rather than a compromise.

Over the next few years, I shepherded the protocol through two informal versions, and two RFC generations; Jon Postel did the adoption plans
and somehow convinced everyone that the idea should go forward, along with managing all of the other plans for transitioning the rest of
the Internet off the ARPAnet base.

My approach was very much based on some lessons I had learned at the MIT Architecture Machine Group (now the Media Lab) in the late
1960s, as well as other lessons from the Distributed Computing System Project at UC Irvine in the late 1970s. At MIT, I had been
designing a distributed file system shared between several minicomputers, all of which lacked any memory protection. This taught me the
discipline of designing distributed algorithms assuming that something was always broken, and it left me with a dislike of hierarchies with a
fixed number of levels. At Irvine I built messaging systems that used names instead of addresses to route messages built on top of hardware
that supported the name-based messaging — all on another set of fault tolerant (and unreliable) minicomputers. Those experiences shaped my
decision to propose a DNS protocol which assumed redundant servers and a lightweight transport over UDP.5 The hope here was to keep the
protocol as fast as possible, so that it could be used for more and more lightweight activities.
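A present-day sketch of that redundant-server, UDP-first style of lookup is shown below; it assumes the third-party dnspython package and uses public resolver addresses purely as placeholders, so it illustrates the design idea rather than the original implementation.

```python
# Sketch of a UDP query tried against redundant servers with short timeouts.
# Assumes the third-party dnspython package; the server addresses are
# placeholder public resolvers, not the historical DNS servers.
import dns.message
import dns.query
import dns.exception

SERVERS = ["8.8.8.8", "1.1.1.1"]          # try each in turn

def lookup_a(name):
    query = dns.message.make_query(name, "A")
    for server in SERVERS:
        try:
            # One small datagram out, one back -- the lightweight UDP path.
            response = dns.query.udp(query, server, timeout=2.0)
            return [rdata.to_text() for rrset in response.answer for rdata in rrset]
        except dns.exception.Timeout:
            continue                       # assume something is always broken
    raise RuntimeError("no server answered")

print(lookup_a("example.com"))
```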

The protocol design had two parts: a simple model of the data carried over the DNS,2 and a separate set of implementation ideas.3 These were
accompanied by Jon’s adoption plan.6 The idea was that DNS customers might only look at the former, while actual implementers would look
at the latter — all in the hope that the distribution system would make a distributed database look like a single coherent database. This turned
out to be problematic, since users, and particularly the applications they wrote, had to deal with the case in which a DNS request was pending
on a machine that had lost network connectivity or where all of a domain’s servers were down; waiting forever or retrying forever weren’t
acceptable. As a result, the DNS application programming interface (API) had to be somewhat different than the HOSTS.TXT API. The
original goal of a seamless replacement proved a mirage.

DNS adoption was encouraged by a plan that automatically included all HOSTS.TXT names in the DNS database — thus the DNS became the
system that was universal with HOSTS.TXT always slipping behind. After much argument, the Top Level Domains (TLDs) allowed both
country codes and a few generic domains. It seems obvious now, but at the time many “experts” were convinced that the naming hierarchy
had to follow network geometry (MCI, ATT, etc., as TLDs); or only country codes could be used (OSI was going to replace it all, after all); or
that “.COM” was a bad idea since the Internet wouldn’t have significant commercial usage.

I’m sure these issues were debated endlessly in the upper reaches of the Internet management and research community, while I stuck to the
position that the system was flawed if it imposed any technical limitations on the name structure that could be implemented.

On the implementation front, the original DNS server code on DEC mainframes provided stable root service to the Internet. The competition
for a UNIX implementation between a Stanford program called Druid and Berkeley’s BIND was probably decided by the distribution
dominance of the BSD system. While not nearly as stable, BIND spread. This was one of the first of many instances where choices went not
to the most elegant or politically correct implementation but to the “Darwinian” metric of market share. Darwin was also at work in the battle
between DNS and X.500/LDAP.

Based on experimentation, the protocols evolved into the RFC 1034 and RFC 1035 standards.7,8

What were the advantages offered by DNS? The most important was that it allowed organizations to fully manage their own network once
they had a domain allocated. The typical DNS tutorial says that the first function of the DNS is to map host names to addresses, but that’s not
really true ― the first function is to locate the right set of servers and make the distribution transparent. The second advantage was that while
the original DNS specifications were seen as extremely heavyweight for the times, they really had a lot of room to grow. Another way to judge
the origin phase is to ask, “What were the avenues for growth?”

A simple way to judge this is to look at the DNS packet format from the RFCs as shown in Figure 2.

Figure 2. Early design of DNS packet format: The ID field which was intended to allow servers to match requests and responses
was criticized as being too large at the time (since no server would have more than 256 outstanding requests at a time). In more
recent times, when used as a mechanism to prevent spoofing of responses, it proved inadequate.

Most interesting was the use of 16-bit integers for not only the number of items in the query section of the packet, but also for the three
different parts of the response. In today’s DNS one query item is pretty much the standard, and the different parts of the response could easily
be accommodated in 8-bit fields.
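For reference, the fixed DNS header from the RFCs is six 16-bit fields (12 bytes); the sketch below simply packs those widths to make the point about the count fields and is not taken from the article.

```python
# The DNS header: six 16-bit fields, 12 bytes in all. The four count fields
# are each 16 bits wide, even though one question per query became the norm.
import struct

def pack_header(msg_id, flags, qdcount, ancount, nscount, arcount):
    # ">HHHHHH" = six big-endian unsigned 16-bit integers
    return struct.pack(">HHHHHH", msg_id, flags,
                       qdcount, ancount, nscount, arcount)

header = pack_header(msg_id=0x1234, flags=0x0100,   # flags: recursion desired
                     qdcount=1, ancount=0, nscount=0, arcount=0)
assert len(header) == 12
print(header.hex())
```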

Given that DNS packets were carried in UDP packets with a maximum guaranteed payload of 512 bytes, why the 16-bit fields? One reason
was the aspiration to manage mailing lists and the like in the DNS.

While a primitive scheme was proposed in the 88x series of RFCs,2,3,6 it was never really adopted and was replaced by the mail exchanger
(MX) datatype. Another reason was that it was simply unbelievable to me that we wouldn’t develop a transaction protocol or simply increase
TCP’s maximum transmission unit (MTU) over time. Sadly, we are still suffering from MTUs that haven’t scaled with the multiple orders of
magnitude of increased speed.


Critics are quick to point out the “flaws” in the original protocol: the lack of security, facilities for online update of the information, techniques
for sending only updates rather than whole zones, etc. Others claim the system was inadequate for production due to these limitations. They
miss the fact that these omissions were intentional. The network security establishment of the time was always claiming that they would soon
reveal the “correct” security architecture. Similarly, the DNS always had as a core idea that zone transfers could be accomplished by any
method, and the scheme described in the RFCs was simply a bootstrapping measure. Incremental transfers and dynamic updates were always
foreseen as nice little problems for the next generation of grad students. I made an evaluation of the system’s progress toward the end of this
period, but many of its observations are dated as the system has continued to evolve.9

Evolution

The evolution of the DNS was primarily driven by the evolution of the Internet community from 1986 to the present ― a small community of
network researchers changed into the general research community, then into the commercial world, and then into a worldwide consumer
product. Marketers, lawyers and politicians all helped out. The Web is a conspicuous success that was enabled by the DNS.

One of the successes is the use of DNS for blacklist information. Following on the heels of the MX system for routing mail, real-time
blackhole lists (RBLs) evolved as the first necessary, if not sufficient, countermeasure. But we can probably learn even more from the failures.
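As a hedged illustration of the RBL technique mentioned above (the zone name is a placeholder, not a real blocklist), a lookup reverses the IPv4 octets and queries the result under the list's zone:

```python
# Sketch of a DNS-based blocklist (RBL) lookup: reverse the IPv4 octets and
# ask for an A record under the list's zone. The zone name is a placeholder.
import socket

RBL_ZONE = "rbl.example.org"               # hypothetical blocklist zone

def is_listed(ipv4):
    reversed_ip = ".".join(reversed(ipv4.split(".")))
    query_name = f"{reversed_ip}.{RBL_ZONE}"   # e.g. 1.2.0.192.rbl.example.org
    try:
        socket.gethostbyname(query_name)       # any answer means "listed"
        return True
    except socket.gaierror:
        return False

print(is_listed("192.0.2.1"))
```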

ENUM was a standard to route phone calls via DNS lookups. It failed for several reasons. First, it ignored the need to route calls by more
criteria than the destination phone number. Second, it took value out of the proprietary hardware startups were selling to carriers. Third, it
didn’t really deliver value to anyone that would incent pushback on the proprietary hardware vendors. It still exists in some small contexts,
and for DNS cognoscenti interested in the name authority pointer (NAPTR) idea of localizing DNS responses at the user.
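For context, the public ENUM convention maps an E.164 number into the DNS by reversing its digits under e164.arpa and then querying NAPTR records; the short sketch below shows only that name construction, with a made-up phone number.

```python
# Sketch of the ENUM name mapping: reverse the digits of an E.164 number and
# append e164.arpa; a NAPTR query would then be sent for the resulting name.
def enum_name(e164_number):
    digits = [c for c in e164_number if c.isdigit()]   # drop "+", spaces, dashes
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_name("+1-202-555-0100"))
# -> 0.0.1.0.5.5.5.2.0.2.1.e164.arpa
```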

The object name service (ONS), the radio frequency identifier (RFID) directory in the DNS, also suffered from commercial pressures. The
first version of ONS, developed by the MIT Auto-ID Center, used DNS to associate variable information with any RFID tag. The idea was that
one type of tag, for pharmaceuticals, might want a large serial number and a small product number, whereas another, for books, might want a large
product number and a small version number. A complete implementation was created. But when change control was handed over to the
EPCGlobal organization, the flexibility was removed in the standards process.

Arguments were even made that the subnetting process used every day in every router in the Internet wouldn’t work for ONS. After some
politics, what worked in practice wouldn’t work in a standard.

The lesson is that displacing an existing technology usually involves displacing an existing business model, and designers must do all their
homework and then anticipate commercial opposition.

Over the last few years, it has become fashionable in the research world to declare the whole Internet architecture as “ossified” and start with a
“clean slate”. One of the big trends, if not the biggest, is to specify desired content by name rather than location, and then opportunistically
cache it. Content Centric Networking, Information Centric Networking, and Name Based Networking are a few examples. What
separates these from the DNS? The primary differences are universal digital signatures and more powerful data structuring and query
mechanisms. Secondary differences include packet routing by name, flow control, and identical paths for a query and its corresponding
response (rather than possibly asymmetric paths by conventional IP routing).

So the natural question is whether the DNS can or should be upgraded to encompass these ideas or possibly serve as a building block for a
“dirty slate” approach. In the day-to-day world of the DNS, the challenge is twofold: getting protocol changes adopted, and getting some
consistency in the DNS capabilities available to the user. The needs and opportunities go all the way from the user interface to the core data
structures of the DNS, as touched on below.

User interface: Today’s user is encouraged to type search terms or even voice commands into their devices instead of domain names or URLs.
While one suspects that the applicants for the approximately 1300 new TLDs will want to keep domain names popular, interfaces with more
and more intelligence are inevitable. At the same time, some consistency in the interpretation of domain names would reduce user confusion
and security risk. A recent experiment by Geoff Huston illustrates the problem: typing “Geoff.Huston” into various browsers led to completely
different searches through the DNS! In today’s network, where clicking on the wrong link can compromise security, this is a danger.

API: Programs are faced by a similar lack of consistency when they access names, and since there is no user to check the results, any
misdirection may go unnoticed. At the same time, existing DNS APIs don’t do a good job of handling asynchronous requests or DNS security.
There’s some work going on here, but more may well be needed.

Query mechanism: It’s time to expand DNS queries beyond the name, type, class exact match paradigm. By doing so, we could enable content
streaming and the like in a manner similar to CCN. We also need to remove size limitations. Some think that the only way to get this is over
HTTPS, which seems unfortunate, but is better than stagnation. DNSSEC needs to be end to end, and we are temporizing.
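As an aside on the "over HTTPS" point, here is a minimal DNS-over-HTTPS lookup; the endpoint URL and JSON field names are assumptions about Google's public resolver service and are not drawn from the article.

```python
# Minimal sketch of a DNS-over-HTTPS (DoH) lookup against a public JSON API.
# The endpoint and response fields are assumptions about Google's resolver.
import json
import urllib.request

def doh_lookup(name, rtype="A"):
    url = f"https://dns.google/resolve?name={name}&type={rtype}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.loads(resp.read())
    return [answer.get("data") for answer in data.get("Answer", [])]

print(doh_lookup("example.com"))
```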

Replication: It’s time to create new ways to distribute signed copies of zones ― this doesn’t require any change to the DNS protocol since it
always allowed other methods for replication. We do need to think of replication as a way of enhancing reliability and security, particularly
with the root zone. We also need to think about automating coordination between zones, whether DNSSEC keys, glue, forward and reverse
zones, or whatever.

Internal Data Structure: It has proved much harder to create new data types than was hoped in the original DNS specifications. For decades we
have talked about defining resource record types (RRtypes) via metadata stored in the DNS. More recently, implementations have been adding
terminal labels which are equivalent to types and then storing data in text resource records. Both have merit, but removing ambiguity, one way
or another, is the goal.

Protocol design isn’t the only issue; there’s also our innovation process.

While there are many DNS design activities in the IETF, it isn’t obvious that there is any architectural coordination or oversight. One
colleague of mine is intentionally using a deceptive Internet draft title in order to avoid process. There’s a working group designing
“mechanisms” instead of “protocols” since mechanisms involve less IETF process; and lastly the DNS extensions working group is itself shut
down. Any coordination would create more drag.

DNS may be universal, but that creates a huge legacy base. Worse yet, the legacy base is by no means uniform, particularly when one looks at the
DNS implementations in WiFi access points, DSL modems, etc.

The ISC once attempted to make the underscore character illegal; Microsoft made it necessary for people using Active Directory.

In sum, the DNS is constrained in multiple ways that can’t be solved by a narrow focus. We will make progress with the existing focused
efforts, but I think we could do much more if the various I* organizations and industry could articulate a vision and a plan for a DNS renaissance.

Security (Steve Crocker)

Sometime in the early 1990s, probably 1992, Vint Cerf called me saying, “Steve, we have a problem.” Tsutomu Shimomura, working at the
San Diego Supercomputer Center, had demonstrated how to poison the DNS cache of a targeted machine and then break into the machine.

Shimomura had developed a small set of tools to make it quick and easy to do. His boss, Sid Karin, not only ran the Center, he also was well
connected to the senior levels of science policy within the U.S. government. In short order the President’s Science Advisor was brought into
the discussion and the exploit began to gather attention. Vint Cerf was chair of the Internet Architecture Board (IAB). Vint and I were
long-standing colleagues and had worked on the original suite of protocols for the Arpanet, the forerunner of the Internet. I was a vice president at
Trusted Information Systems, a small R&D company working in several areas of improving security architectures on the Internet. As it turned
out, Tsutomu Shimomura wasn’t the only one who knew about the problem. Steve Bellovin had discovered and documented the vulnerability
a bit earlier, but he withheld publication10 because he didn’t want to facilitate malicious break-ins. It was immediately obvious in the first
conversation with Vint that digital signatures would provide strong protection against any form of spoofing. But while the broad concept was
easy to see, it was going to take some work and time to get a full-scale design, test it, gain acceptance throughout the community, and get it
fully deployed. It has now been more than 20 years, and we had no idea how much harder it would turn out to be than we expected.

The decade following the discovery of DNS cache poisoning was occupied with specifying how what came to be known as DNSSEC might
work. We approached DARPA for funding to work on this and got rapid approval. My initial thinking covered both authentication of the
relationship between a domain name and an address and also whether the address was legitimately allocated to the holder of the domain name.

These two ideas were separated early in the process, and the latter is only now being pursued under the Resource Public Key Infrastructure
(RPKI) effort.

The basic idea for DNSSEC is thus quite straightforward. The DNS consists of a hierarchy of records that match the naming structure. If a
user’s system needs to know the IP address associated with, say, www.example.com, it sends a DNS query to a DNS name server. It
ultimately receives back an answer that says, in effect, the address of www.example.com is 1.2.3.4. There is a lot of important structure and
action behind the scenes, all well documented in RFCs 103311 and 10358 and their successors, but this is the essence of it.

We wanted to add digital signatures to this system. The digital signature would provide the end system a way of verifying that the association
between www.example.com and 1.2.3.4 was authentic, i.e., the association was created by the party authorized to make that association. In the
case of the domain name system, the party authorized to make such an association is whoever controls the contents of the example.com
domain, i.e., the parent of www.example.com. Hence, we needed a digital signature to cover the association between www.example.com and
1.2.3.4, and that signature had to come from example.com. In order for the end system to check the validity of the signature, it would have to
know the public key associated with example.com. End systems would not, in general, have the public keys for every parent of every domain
name, so the end system would have to fetch the public key and verify that it’s the correct public key.

The solution is to use the same system, except that instead of associating www.example.com with an address (1.2.3.4), the parent name
example.com is associated with a public key. That association in turn is authenticated by a signature associated with its parent, .com, and the
.com signature is authenticated by the signature of the root.

Hence, we needed digital signatures at each level of the DNS tree all the way to the root, and then each end system needs a copy of the root’s
public key. Conceptually this is easy. In practice, there were ― and are — multiple hurdles. Over the decades since, we have learned from
those early experiences, and DNSSEC remains of high interest for the continued development of the DNS.
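To make the chain-of-trust idea concrete, the toy sketch below stands in for real DNSSEC public-key signatures with HMAC tags; the keys, names, and record layout are invented for illustration and do not follow the actual DNSSEC record formats.

```python
# Toy chain-of-trust sketch (NOT real DNSSEC): each zone "signs" its child's
# key material, and the validator only needs the root key in advance.
import hmac, hashlib

def sign(key, message):
    return hmac.new(key, message, hashlib.sha256).digest()

# Invented keys, standing in for the zones' public keys.
root_key, com_key, example_key = b"root-secret", b"com-secret", b"example-secret"

# Each parent signs its child's key; example.com signs the address record.
chain = [
    (b"com-key:" + com_key,         sign(root_key, b"com-key:" + com_key)),
    (b"example-key:" + example_key, sign(com_key,  b"example-key:" + example_key)),
    (b"www.example.com A 1.2.3.4",  sign(example_key, b"www.example.com A 1.2.3.4")),
]

def validate(chain, trusted_root_key):
    """Walk down from the root: each verified link yields the key for the next."""
    key = trusted_root_key
    for message, signature in chain:
        if not hmac.compare_digest(sign(key, message), signature):
            return False
        if b"-key:" in message:              # this link carries the next zone's key
            key = message.split(b"-key:", 1)[1]
    return True

print(validate(chain, root_key))             # -> True
```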

Internationalization (Tan Tin Wee)

The internationalization of the Domain Name System created a substantial movement to take the DNS to the next level. When I first
started to tackle the multilingualization of the DNS, my intention was that, since Asia has benefited tremendously from the Internet (and will
continue to do so), Asian researchers and engineers must play a rightful role in contributing to the architecture of the Internet.

The Internet was built and architected on the premises and assumptions of a monolingual mindset. We in Asia can help rectify that deficiency
through active participation in agencies such as the IETF, to build a global information infrastructure (as it was called in the late 90s) that
supports its global growth without everyone having to learn English first. My concern at that time was that the non-English masses would face
tremendous entry barriers in using the Internet. With hindsight, the delays in the international rollout of the Multilingual Domain Name
System (iDNS) forced many people to learn English (which was good for them) in order to be fully effective on the Internet. (Only in 2012
did the international rollout happen, more than a decade after we invented a fully functional IDN system.) In 1998, to work around making
changes at the Root, we implemented a fully operational proxy-based IDN system which intercepted multilingual DNS queries, detected
their encoding, and converted them into ASCII labels which could be resolved to IP addresses using conventional DNS servers such as BIND.
Shortly after, we discovered that Martin Duerst, then with the W3C in Japan, had shown (despite doubts of colleagues) that DNS could be
reengineered using his invention of UTF-5. Thus, I paid him a visit in Japan and got his consent to use UTF-5 as our ASCII-compatible
encoding backend instead of reinventing the wheel. With a working system, I started to set up an Asia Pacific testbed in late 1998 to
demonstrate (to the incredulity of my Asian colleagues) that a multilingual DNS system could actually work.
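For a present-day flavor of the ASCII-compatible encoding idea (today's IDNA standard uses Punycode rather than the UTF-5 scheme mentioned above), Python's built-in codec converts a Unicode label into a form a conventional DNS server can store:

```python
# Modern ASCII-compatible encoding of an internationalized label.
# Today's IDNA uses Punycode ("xn--..." labels), not the UTF-5 scheme above.
label = "münchen"
ascii_label = label.encode("idna")     # standard-library codec for basic IDNA
print(ascii_label)                     # -> b'xn--mnchen-3ya'
print(ascii_label.decode("idna"))      # round-trips back to 'münchen'
```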

During those years we set the stage for the Multilingual Internet Names Consortium (MINC) as an international organization to push this
forward into the new millennium. MINC was able to get groups covering specific languages (including Arabic, Chinese, Cyrillic, Indian
languages, etc.) to self-organize; and groups like the Chinese Domain Name Consortium (CDNC) and the International Forum for IT in Tamil
(INFITT) continue to exist today in 2018.

As we entered the new millennium, everything looked promising for an early implementation of IDN domain names. However, this was not to
be. The standardization process dragged into 2003 with a compromise that the Root would remain unchanged. When ICANN started to
stabilize in the mid-2000s, the IDN initiative had an almost complete makeover. It became one of ICANN's flagship projects and its gift to the
global community, helping drive ICANN’s legitimacy as a global organization. Companies desiring more diversity in domain names also
embraced IDNs and applied great pressure on ICANN to open up the Root to IDN TLDs. It was only in 2010 that the first IDN ccTLDs finally
made it into the Root, more than a decade after our first proof of concept and fully working IDN system. Throughout this process we learnt
that acceptance of innovation by the establishment is something that can take a long time; so much for the legend of speedy innovation within
the Internet.

A Final Note

The DNS absolutely changed the development of the Internet, and its evolution continues. As its inventor Mockapetris has said about the
origin of DNS, “I built the first floor and maybe the second floor and then people came along and added about 20 more floors”.

Acknowledgements

The authors particularly want to express our great gratitude to Paul Mockapetris. Thanks also to Vint Cerf, Steve Crocker, and Tan Tin Wee
for their contributions in preparing this work. Thank you also to the editors of the IEEE Annals of the History of Computing, especially former
editor-in-chief Lars Heide.

About the authors

Dr. Oscar M-Bonastre (IEEE Senior Member) received his Ph.D. degree in telecom engineering at the Polytechnic University of Valencia,
Spain, and a master's degree in computer engineering at the University of Alicante, Spain. He has more than 20 years of background in academia and
industry. Prof. Bonastre is with the Department of Computing, Maths and Statistics at University Miguel Hernandez, Spain. He is also a
contributor to the Operations Research Institute at the same university. His research interests include the development of the Internet and advanced
distributed systems. Contact him at ombonastre@ieee.org

Dr. Andreu Veà, Ph.D., is a Telecom Engineer (’91) and Electronic Engineer (’93). Dr. Veà has an MBA in IT Management and a Ph.D. in
computer networks ('02). He is the cofounder and former President of the Spanish and Catalan chapters of the Internet Society (ISOC-ES &
ISOC-CAT) and is the only European to be selected to serve on the advisory board of the Internet Hall of Fame. After his doctoral dissertation
on the technology, history, and social structure of the Internet, he was invited by Vint Cerf to continue his original research at Stanford
University (California, USA). He has served as Digital Champion for Spain at the European Commission. Nowadays he serves as Digital
Transformation and Optimization Lead at Barcelona Hospital Vall Hebron Campus. Contact him at Andreu@Vea.cat

References
1. One of us had previously met Paul.
2. RFC 882, P. Mockapetris, “Domain Names – Concepts and Facilities,” Nov. 1983, tools.ietf.org/html/rfc882
3. RFC 883, P. Mockapetris, “Domain Names – Implementation and Specification,” Nov. 1983, tools.ietf.org/html/rfc883
4. en.wikipedia.org/wiki/Hosts_(file)
5. en.wikipedia.org/wiki/User_Datagram_Protocol
6. RFC 881, J. Postel, “The Domain Names Plan and Schedule,” Nov. 1983, tools.ietf.org/html/rfc881
7. RFC 1034, P. Mockapetris, “Domain Names – Concepts and Facilities,” Nov. 1987, tools.ietf.org/html/rfc1034
8. RFC 1035, P. Mockapetris, “Domain Names – Implementation and Specification,” Nov. 1987, tools.ietf.org/html/rfc1035
9. P. Mockapetris and K. Dunlap, “Development of the Domain Name System,” Proceedings of SIGCOMM '88, 1988, pp. 123-133.
10. S. Bellovin, “Using the domain name system for system break-ins,” Proceedings of the 5th Usenix Unix Security Symposium, 1995, pp. 199-208.
11. RFC 1033, M. Lottor, “Domain Administrators Operations Guide,” Nov. 1987, tools.ietf.org/html/rfc1033
