DNS: A short history and a short future
First Monday




Abstract
This paper examines some of the basic premises that typify the rhetoric of the DNS debates. It challenges the assumption that these problems are new; instead, they reiterate the text–to–number mapping problems that plagued the national and international integration of telephony systems. The later telephonic transitions were marked by an interplay between the phasing–out of telephonic addressing systems and marketing innovations. DNS is faced with an analogous problem: DNS policies inflexibly founded on past conditions have conspired with marketing forces to create an illusory scarcity of domain names. The easy correspondence between domain names and names used in other spheres of life, a correspondence that business interests demand, is untenable; and reforms geared toward facilitating such a situation are undesirable. Newer techniques that offer even higher levels of abstraction than DNS are paving the way for practical solutions: domain names will cease to be a primary interface for navigation, and in ways that will facilitate the exploitation of the entire name space.


In the debates that have erupted over domain–name system (DNS) policy, two main proposals have come to the fore: a conservative option to add a handful of new generic top–level domains (gTLDs: “.nom” for names, “.firm” for firms, etc.) administered by a minimal number of registrars, and a more radical proposal to level the hierarchical structure of domain names altogether by permitting openly constructed names (“whatever.i.want”) administered by an open number of registrars.

The supposed causes of these debates orbit around perceived limitations of the system — monopolization of registration by NSI (in the United States) and a scarcity of available names; as such, the debates gravitate toward modernizing the system and preparing it for the future. What little attention has been paid to the past has focused on the immediate past, namely, the institutional origins of the present situation.

Little or no attention has been paid to the prehistory of the basic problem at hand: how we map the “humanized” names of DNS to the “mechanical” numbers of the underlying IP address system. In fact, this isn’t the first time that questions about how telecom infrastructures should handle text–to–number mappings have arisen. And it won’t be the last time, either; on the contrary, the current debates are just a phase in a pas de deux between engineers and marketers that has spanned most of this century.

A bit of history: From the 1920s through the mid 1950s, the U.S. telephone system relied on local–exchange telephone numbers of between two and five digits. As these exchanges were interconnected locally, they came to be differentiated by an “exchange name” based on their location. These names, two–letter location designations, made use of the lettering on telephone keypads: thus an 86x– exchange, for example, might be “TOwnsend,” “UNion,” “UNiversity,” or “VOlunteer.” Phone numbers such as “Union 567” were the norm; “86567” — the same thing — would have seemed confusing, in much the same way that foreign dialing conventions can be. There wasn’t a precedent for a purely numerical public addressing system, and, with perfectly good name–and–number models like street addresses in use for centuries, no one saw any reason to invent one.

However, as exchanges became interconnected across the nation, AT&T/Bell found a number of problems — among them, that switchboard operators sometimes had difficulty with accents and peculiar local names. As a result, the national carriers began to recommend standardized exchange names, according to a curious combination of specific and generic criteria: they chose words that resisted regional inflection but were common enough to peg to “local” landmarks. The numbers 5, 7, and 9 were reserved because those keys have no vowels, making it (so the theory goes) more difficult to form words from them; hence artifacts like the fictional prefix 555–, so common in old movies, which later became the national standard prefix for fact, in the form of directory assistance.

By the late 1950s, when direct long–distance dialing became possible, then popular, the variable length of phone numbers became a problem for the national carriers, which demanded yet more standardization — seven–digit phone numbers in a “two–letter five–number” (2L5N) format. And while it wasn’t an immediate problem, the prospect of international telephonic integration — with countries that used different letter–to–number schemes or even none at all — drove yet another push for standardization, this time for an “all–number calling” (ANC) system. Amazingly, the transition to ANC in the U.S. took almost thirty years, up to around 1980 depending on the region. (Just as certain telecom–underserved areas are now installing pure digital infrastructures while heavily developed urban areas face complex digital–analog integration problems, phone–saturated urban areas such as New York were among the last to complete the conversion to ANC.)

Direct long–distance dialing wasn’t merely a way for friends and family to keep in touch: it allowed businesses to deal in “real time” with distant markets. And the convention of spelling out numbers, only partially suppressed and hence fresh in the minds of many, became an opportunity. Businesses began to play with the physical legacy of lettered keypads and cultural habits by using number–to–letter conversions as a marketing tool — by advertising mnemonic phone numbers such as “TOOLBOX.” And as long–distance calling became a more normal way for people to communicate, tolls began to fall, in a vicious — or virtuous, if you prefer — circle, thereby lowering the cost of transaction for businesses and spurring their interest in broader markets.

However, direct long–distance dialing presented a new problem, namely the cost of long–distance calls, which became the next marketing issue — and toll–free direct long–distance dialing was introduced. The marketing game replayed itself, first for the 800– exchange (and again more recently for the 888– exchange). As these number spaces became saturated with mnemonic name–numbers, businesses began to promote spelled–out phone numbers that were longer than the functional seven digits (1–800–MATTRESS) — because the excess digits had no effect. The game has played itself out in other ways and at other levels — for example, when PBX system manufacturers adopted keypad lettering as an interface for interactive directories that use the first two or three “letters” of an employee’s name.
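The letter–to–digit game is easy to make concrete. A minimal sketch in Python, assuming the familiar modern North American keypad layout (the historical layout omitted Q and Z, but the principle is the same):

```python
# Letter-to-digit conversion on the North American telephone keypad.
# Assumption: the modern 2=ABC ... 9=WXYZ layout; the historical
# layout omitted Q and Z.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {
    letter: digit for digit, letters in KEYPAD.items() for letter in letters
}

def to_digits(mnemonic: str) -> str:
    """Return the digits actually dialed for a spelled-out 'number'."""
    dialed = []
    for ch in mnemonic.upper():
        if ch.isdigit():
            dialed.append(ch)
        elif ch in LETTER_TO_DIGIT:
            dialed.append(LETTER_TO_DIGIT[ch])
        # spaces and punctuation are simply not dialed
    return "".join(dialed)

# "UNion 567": only the first two letters of the exchange name were dialed.
print(to_digits("UN") + "567")       # 86567
# "1-800-MATTRESS" yields twelve digits; the switch ignores the surplus
# beyond the functional seven, so the final "S" costs the caller nothing.
print(to_digits("1-800-MATTRESS"))   # 180062887377
```

The second example illustrates the point about excess digits: the mnemonic is one letter longer than the number it encodes, and the extra keypress is silently discarded.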

Obviously, this capsule history isn’t a literal allegory for the way DNS has developed — that’s not the point at all. There are “parallels,” if you like: questions of localized and systematic naming conventions, of national/international integration, of arbitrarily reserved “spaces,” of integrating new telecom systems with installed infrastructures, and of technical standards coopted by marketing techniques. But implicit in the idea of a “parallel” is the assumption that the periods in question are separate or distinct; instead, one could — and should, I think — see them as continuous or cumulative phases in an evolving effort to define viable standards for the interfaces between mechanical numerical addressing systems and human linguistic systems. Either way, though, DNS — like the previous efforts — won’t be the last, regardless of how it is or isn’t modified in the next few years.

This isn’t to dismiss the current DNS policy debates. On the contrary: they bear on very basic questions that should be addressed precisely because their implications aren’t clear — questions about national/international jurisdiction and cooperation, centralized and distributed authorities, and the (il)legitimacy of de facto monopolies.

Ultimately, though, these questions are endemic to distributed–network communications and are not unique to DNS issues. What is unique to DNS isn’t any peculiar quality but, rather, its historical position as the first “universal” addressing system — that is, a naming convention called upon (by conflicting interests) to integrate not just geographical references at every scale (from the nation to the apartment building) but also commercial language of every type (company names, trademarks, jingles, acronyms, services, commodities), proper names (groups, individuals), historical references (famous battles, movements, books, songs), hobbies and interests, categories and standards (concepts, specifications, proposals) ... the list goes on.

The present DNS debates center mostly around the question of whether and how DNS should be adapted to the ways we handle language in these other spheres, in particular, “intellectual property.” Given the sorry state of that field — which is dominated by massive corporate efforts to extend proprietary claims indefinitely, to severely penalize infractions against those claims, and to weaken “consumer” protection by transforming commodities purchases into revocable and heavily qualified use–licenses — it’s fair to ask whether it’s wise to conform such an allegedly important system as DNS to that morass.

What’s remarkable is how quickly this has evolved, from a system almost fanatically insistent on shared resources and collaborative ethics to a speculative, exclusionary free–for–all. A little more history: With the erratic transformation of the “acceptable use policies” (AUPs) of the Internet’s various institutional and backbone supporters in the first half of this decade, commercial use of the Net expanded from a strictly limited regime (for example, NSFNet’s June 1992 “general principle” allowed “research arms of for–profit firms when engaged in open scholarly communication and research”) to an almost–anything–goes policy left to private Internet providers to articulate and enforce (along with questions over spam, Usenet forgeries, and other troubles). The result was that any entity that couldn’t establish educational, governmental, or military credentials was categorized as “commercial” by default. The “.com” gTLD quickly became the dumping ground for just about everything: not just business names and acronyms, but product and service names (tide.com, help.com), people’s names (lindatripp.com), ideas and categories (rationality.com, diarrhea.com), parodies and jokes (whitehouse.com, tragic.com), and everything else (iloveyou.com, godhatesfags.com). (This essay omits discussion of the more nebulous “.net” and “.org” gTLDs — which are vaguely defined and became popular only after the domain–name debates — as well as of state [“.ny”] and national [“.uk”, “.jp”] TLDs.) Thus, the “commercialization” of the Net took place on two levels: in the legendary rush of business to exploit the Net, obviously, but also in the administrative bias against non–institutional use of the Net.

There were practical reasons for that trend: individual or “retail” access was initiated by commercial Internet providers, which doled out many more dial–up user accounts than domains; and there were technical issues, ranging from telecom pricing schedules to software for consumer–level computers, that discouraged the casual use of domains. But the trend also had an ideological aspect: the entities that governed DNS preferred the status quo to basic reforms — and, in doing so, relegated the Net’s fast diversification to a single gTLD that became less coherent even as it became the predominant force.

One can’t fault the administrators for failing to foresee the explosion of the Net; and their responses are, if not justified, at least understandable. DNS was built around the structurally conservative assumptions of a particular social stratum: government agencies, the military, universities, and their hybrid organizations — in other words, hierarchical institutions subject to little or no competition. These assumptions were built into DNS in theory, and they guide domain–name policy in practice to this day — even though the commercialization of the Net has turned many if not most of these assumptions upside down. Not only are the newer “commercial” players prolific by nature, but most of their basic assumptions and methods are very much at odds with the idealized cooperative norms that supposedly marked governmental and educational institutions: they come and go like mayflies, they operate under the assumption that they’ll be besieged by competitors at any moment, they thrive on imitation, and they succeed (or at least try) by abstracting everything and laying exclusionary claim to everything abstract — procedures, mechanisms, names, and ideas. The various systems and fields we call “the market” worked this way before the Net came along; small wonder that they should work this way when presented with a “new world.”

If no one anticipated the speed with which business would take to this new medium, even less could anyone have predicted how it would exploit and overturn the parsimonious principles that dominated the Net. Newer domain users quickly broke with the convention of subdividing a single domain into descriptively named sub– and sub–sub–domains that mirrored their institution’s structure (e.g., function.dept.school.edu). Instead, commercial players started to strip–mine name space with the same comical insistence that led them to label every incremental change to a commodity “revolutionary.” The efficient logic of multiple users within one domain was replaced with a speculative logic in which a few users became the masters of as many domains as they could justify spending the money to register. In some cases, these were companies trying to extort attention — and money — out of “consumers” (business’s preferred name for “person”); in other cases, they were “domain–name prospectors” hoping to extort money out of business; in many more cases, though, they were simply “early adopters” experimenting with the fringes of a new field. In effect, the potentially complex topology of a multilevel name space was reduced — mostly through myopic greed and distorted rhetoric — to a flatland as superficial as the printed pages and TV screens through which the business world surveys its prey. The minds that collectively composed “mindshare,” it was assumed, couldn’t possibly handle something as complicated as a host name.

So, for example, when Procter and Gamble decided to apply “brand management” advertising theories to the Net, it registered diarrhea.com rather than simply incorporating diarrhea.pg.com into its network addressing. And so did the ubiquitous competition, including the prospectors who set about registering every commercial domain they could cook up. The follies of this failed logic are everywhere evident on the Net: thousands of default “under–construction” pages for domain names whose “owners” wait in vain for someone to buy their swampland: graveyard.com, casual.com, newsbrief.com, cathedral.com, and lipgloss.com.
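The administrative difference between the two choices is worth spelling out: diarrhea.pg.com would have been a delegation inside the already–registered pg.com zone — created in–house, with no registrar involved — while diarrhea.com required a fresh entry in the “.com” registry. A toy sketch of that distinction (the function and the `registered` set are illustrative, not any real registry interface):

```python
# Sketch: which host names require a new registry entry, and which are
# mere delegations inside a domain the holder already controls.
# The names come from the text; the logic is purely illustrative.
registered = {"pg.com"}  # domains the organization already holds

def needs_new_registration(name: str) -> bool:
    """True if no suffix of the name matches an existing registration."""
    labels = name.lower().split(".")
    suffixes = {".".join(labels[i:]) for i in range(len(labels))}
    return not (suffixes & registered)

print(needs_new_registration("diarrhea.pg.com"))  # False: in-house delegation
print(needs_new_registration("diarrhea.com"))     # True: a fresh ".com" entry
```

The asymmetry is the essay’s point: subdomains under an existing registration are effectively free and unlimited, yet the speculative logic of the period favored the costly flat alternative.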

Under the circumstances — that is, thousands of registered domain names waiting to be bought out — claims that existing gTLD policies have resulted in a scarcity of domain names are doubtful. In fact, within the “.com” gTLD alone, the number of domain names registered to date is a barely expressible fraction of possible domain names, such as “6gj–ud8kl.com”: ~2.99 × 10^34 possible domain names within “.com” alone, or ~4.99 × 10^24 domains for every person on the planet; if these were used efficiently — that is, elaborated with subdomains and hostnames such as “6b3-udh.6gj–ud8kl.com” — the number becomes effectively infinite.
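The arithmetic is easy to check. A back–of–the–envelope sketch, assuming the 37–character label alphabet (a–z, 0–9, hyphen), no leading or trailing hyphens, and labels of up to 22 characters — DNS actually allows labels up to 63 characters, which makes the space vastly larger still:

```python
# Back-of-the-envelope count of possible second-level names under ".com".
# Assumptions: 37-character alphabet (a-z, 0-9, hyphen), no leading or
# trailing hyphen, labels up to 22 characters (the real per-label limit
# is 63; 22 keeps the figure near the estimate cited above).
def possible_labels(max_len: int = 22) -> int:
    total = 36  # one-character labels: a letter or digit, no hyphen
    for n in range(2, max_len + 1):
        # 36 choices at each end (no hyphen), 37 for each interior position
        total += 36 * 36 * 37 ** (n - 2)
    return total

count = possible_labels()
per_person = count / 6e9  # world population, roughly, at the time of writing
print(f"{count:.2e} possible '.com' names")  # on the order of 10^34
print(f"{per_person:.2e} names per person")  # on the order of 10^24
```

Different assumptions about label length and hyphen placement shift the constant, but the order of magnitude — tens of decillions of names in “.com” alone — is robust.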

Obviously, then, the “scarcity” of domain names is not a function of domain–name architecture or administration at all. It stems, rather, from the commercial desire to match domain names with names used in everyday life — in particular, names used for marketing purposes. To be sure, “6gj–ud8kl.com” isn’t an especially convenient domain name; but, then again, was “Union 567” or “+1–212–674–9850” a convenient phone number, “187 Lafayette St #5B New York NY 10013” a convenient address, or “280–74–513x” a convenient Social Security number?

But if DNS is in fact such an important issue, does it really make sense to articulate its logic according to the “needs” of marketers? After all, business has managed to survive the tragic hardship of arbitrary telephone numbers for decades and arbitrary street addresses for centuries. Surely, if the Net really will revolutionize commerce, to the point of “threatening the nation–state” as some like to claim, the inconvenience of arbitrary domain names will hardly stop the revolution.

There are territorial squabbles over claims to names and phrases and some people and organizations profit from the situation. But we don’t generally erect a stadium in areas where gang fights break out; so one really has to ask whether it’s a good idea to restructure gTLD architecture — supposedly the system that will determine the future of the Net, hence a great deal of human communication — to cater to a kind of business dispute that’s in no way limited to DNS.

Ultimately, it doesn’t really matter which proposed gTLD policy reform prevails, because the gains will be mostly symbolic, not practical — except, of course, for the would–be registrars, for whom these new territories could be quite profitable. At minimum, adding new gTLDs such as “.firm”, “.nom”, and “.store” will bring about a few openings — and, more to the point, a new round of territorial expansions, complete with redundant registrations, intellectual property lawsuits, etc. At maximum, an open domain–name space that allows domains such as “whatever.i.want” will precipitate a domain–grabbing free–for–all that will make navigating domains as unpredictable as navigating file structures.

Moreover — and much worse — where commercial litigation is now limited to registered domain names, an open namespace would invite attacks on the use of terms anywhere in an address. Put simply: where apple.material.net and sun.material.net are now invulnerable to litigation, in an open namespace Apple Computers and Sun Microsystems could easily challenge “you.are.the.apple.of.my.eye” and “who.loves.the.sun”.

Neither proposed reform necessarily serves anything resembling a common good. But both proposed reforms will provide businesses with more grist for their intellectual property mills and provide users with the benefits of, basically, vanity license plates. The net result will be one more step in the gradual conversion of language — a common resource by definition — into a condominium colonized by businesses driven by dreams of renting, leasing, and licensing it to “users.”

It doesn’t, however, follow that the status quo makes sense — it doesn’t. It’s rife with conceptual flaws and plagued by practical issues affecting almost every aspect of DNS governance — in particular, who is qualified to do it, how their operations can be distributed, and how democratized jurisdictions can be integrated without drifting into or being absorbed by the swelling ranks of global bureaucracies. The present administration’s caution in approaching gTLD policy is an instinctive argument made by people happy to exploit, however informally, the superabundance of domain–name registrations.

Without doubt, the main instabilities any moderate gTLD policy reform introduced would be felt in the administrative institutions’ funding patterns and revenues. More radical reforms involving more registrars would presumably have more radical consequences — among them, a need to certify registrars and DNS records, from which organizations with strong links to security and intelligence agencies (Network Associates, VeriSign, and SAIC) will surely benefit. The current administration insists that an open name space would introduce dangerous instabilities into the operations of the Net. But whether those effects would be more extreme than the cumulative impact of everyday problems — wayward backhoes, network instabilities, lazy “netiquette” enforcement, and human error — is doubtful.

There is one point on which the status quo and its critics agree: the assumption that DNS will remain a fundamental navigational interface of the Net. But it need not and will not: already, with organizations (ml.org, pobox.com), proprietary protocols (Hotline), client and proxy–server networks (distributed.net), and search–engine portal advances (RealNames, bounce.to), we’re beginning to see the first signs of name–based navigational systems that complement or circumvent domain names.

And they’re doing it in ways that address not the bogeys that appear in the nightmares of rapacious businessmen but the real problems and possibilities that many, many more users are beginning to face: maintaining stable e–mail addresses in unstable access markets, maintaining recognizable zine–like servers in the changing conditions of dynamic IP subnets, cooperating under unpredictable load conditions, and, of course, finding relevant info — not offering it, from a business perspective, but finding it from a user’s perspective.

DNS, as noted, was built around the assumptions of a specific social stratum. Prior to the commercialization of the Net, most users were if not computer professionals then at least technically proficient; and the materials they produced were by and large stored in logical places which were systematically organized and maintained. In short, the Net was a small and elite town, of sorts, whose denizens — “netizens” — were at least passingly familiar with the principles and practices of functional design. In that context, just as multiple users on a single host was a sensible norm, so were notions of standardized file structures, naming conventions, procedures, and formats. But just as the model of multiple users on a single host has become less certain, so has the rest.

The Net has become a non–systematic distributed repository used by more and more technically incompetent users for whom wider bandwidth is the solution to dysfunctional design and proliferating competitive formats and standards. Finding salient “information” (the very idea of which has changed as dramatically as anything else) has become a completely different process than it once was.

This turn of events should come as no surprise. As commercial domains multiplied, and as users multiplied on these domains, the quantities of material their efforts and interactions produced grew ferociously — but with none of the clarity typical of the “old” institutional Net. In the past, the information generated around or available through a domain (or through the subdomains and hostnames assigned to a department in a university or military contractor) was often “coherent” or interrelated. But that can’t be said of the material proliferating in the Net’s fastest–growing segments: commercial Internet access providers, institutions that automatically assign Internet access to everyone, diversified companies, and any other domain–holding entities that permit discretionary traffic.

Instead, what one finds within these domains is mostly random both in orientation and in scale: family snapshots side by side with meticulously maintained databases, amateur erotic writings next to source–code repositories, hypertext archives from chatty mailing lists beside methodical treatises. In such an environment, a domain name functions more and more as an arbitrary marker, less and less as a meaningful or descriptive rubric.

This isn’t to say that domain names will somehow “go away”; on the contrary, it’s hard to imagine how the Net could continue to function without this essential service. But the fact that it will persist doesn’t mean that it will serve as a primary interface for navigating networked resources; after all, other aspects of network addressing have become all but invisible to most users (IP addresses and port numbers to name the most obvious).

The benefit that DNS offers is its “higher level of abstraction” — a stable addressing layer that permits more reliable communications across networks where IP numbers change and heterogeneous hardware/software configurations are the norm. But “higher” is a relative term: as the substance of the Net changes — as what’s communicated is transformed both in kind and in degree, and as the technical proficiency of its users drops while their number explodes — DNS’s level of abstraction is sinking relative to its surroundings.

 

About the author

Ted Byfield works as a freelance book editor and occasional writer. He lives in New York City.
E–mail: tbyfield [at] panix [dot] com

 

Note

This essay was first published on Rewired during the week of 28 September 1998 under the title “A Higher Level of Abstraction”; it was edited and redistributed on nettime–l.

 


Copyright © 1999, First Monday.
Copyright © 1999, Ted Byfield.

DNS: A short history and a short future
by Ted Byfield
First Monday, Volume 4, Number 3 - 1 March 1999
http://firstmonday.org/ojs/index.php/fm/article/view/654/569






© First Monday, 1995-2017. ISSN 1396-0466.