Gullibility is the principal cause of bubbles. Investors and the general public get snared by a “beautiful illusion” and throw caution to the wind. Attempts to identify and control bubbles are complicated by the fact that the authorities who might naturally be expected to take action have often (especially in recent years) been among the most gullible, and were cheerleaders for the exuberant behavior. Hence what is needed is an objective measure of gullibility.
This paper argues that it should be possible to develop such a measure. Examples demonstrate, contrary to the efficient market dogma, that in some manias, even top business and technology leaders fall prey to collective hallucinations and become irrational in objective terms. During the Internet bubble, for example, large classes of them first became unable to comprehend compound interest, and then lost even the ability to do simple arithmetic, to the point of not being able to distinguish 2 from 10. This phenomenon, together with advances in analysis of social networks and related areas, points to possible ways to develop objective and quantitative tools for measuring gullibility and other aspects of human behavior implicated in bubbles. Such a measure cannot be expected to infallibly detect all destructive bubbles, and may trigger false alarms, but it ought to alert observers to periods where collective investment behavior is becoming irrational.
The proposed gullibility index might help in developing realistic economic models. It should also assist in illuminating and guiding decision–making.
3. The telecom bubble and “Internet traffic doubling every 100 days”
4. The preposterous O’Dell and Sidgmore lectures
5. Capacity versus traffic
6. Divergent growth projections, or 4 versus 10
7. Underwater cables, and 2 versus 10
8. George Gilder’s pioneering discovery that 2 is not 10
9. Information viscosity
10. Ubiquitous and deep gullibility
11. Detecting bubbles
12. Information viscosity (Part 2)
13. Human nature, positive aspects of the Madoff fraud, and the virtues of gullibility
14. The growth of gullibility
Current investigations of the great financial crash of 2008 concentrate on issues that are ancillary, such as banker bonuses and where derivative trading should take place. Strangely (but conveniently for many of those involved) these investigations do not delve into the question of whether this crash could have been foreseen and prevented, nor into the fundamental cause of the crash, namely the extensive gullibility that led to the extreme bubble we have experienced. What characterizes bubbles is a rise in valuations of some class of assets, and it is the collapse of those valuations that leads to subsequent pain. The incentives and institutions involved in bubbles have varied over the centuries, and will surely vary in the future. But bubbles we have had for ages, in a variety of settings.
Bubbles (interpreted here in the popular sense, to mean investment manias that lead to rising prices of some assets and then to a collapse) are not always easy to identify. However, as was shown by numerous hedge fund managers who earned fortunes, this was possible in the bubble that crashed in 2008. This will be discussed in more detail in Section 11 — Detecting bubbles. Some other bubbles could be identified a priori much more easily. Yet claims continue to be made by powerful and respected authorities, in particular by Alan Greenspan (2010), that bubbles are impossible to spot. Such behavior can be regarded as just one example of the gullibility that facilitates the rise of investment manias.
The gullibility responsible for the real estate and financial industry bubble that collapsed in 2008 was widespread and deep. It involved minimum–wage earners buying expensive houses using mortgages they did not understand, as well as bankers issuing such mortgages. And of course there were all the people who trusted Bernie Madoff with large sums of money, including many financially sophisticated professionals acting as trustees for charitable institutions. (For more examples and a discussion, see Section 10. Ubiquitous and deep gullibility.) The general opinion is that the level of gullibility was much higher during this bubble (as well as during other bubbles) than is normal in ordinary times. Unfortunately this is a subjective judgment, and the proposal of this paper is to develop a quantitative gullibility index, a formal measure of this phenomenon.
An objective measure of gullibility is especially important because we face a variant of the old “Quis custodiet ipsos custodes?” question of Juvenal: “Who will watch the watchmen?” The economic policy–makers and regulators in almost all countries over the last couple of decades were enthusiastic practitioners of the “see no bubble, hear no bubble, speak no bubble” philosophy. Consider just the three most prominent professional economists among recent powerful economic policy–makers in the U.S., Alan Greenspan, Ben Bernanke, and Larry Summers. As is sketched in Section 10, they not only contributed to the financial debacle through their decisions, but in addition they discouraged investigations by others into investment manias. They proclaimed that bubbles cannot be identified. What they then proved conclusively (in the two recent bubbles, the Internet one and the real estate and finance debacle) was that they are unable to identify bubbles, even giant ones. Further, they still proclaim, in the face of overwhelming evidence, that those bubbles could not have been identified. Yet, they (or, to be more precise, two of the three) are entrusted with guarding our financial system from future disasters of this type. Were Samuel Johnson to come alive, he might be tempted to come up with a phrase more expressive than “the triumph of hope over experience” to describe the situation.
On the other hand, Samuel Johnson was an astute man, and after observing our society he might conclude that the selection of the most gullible for top economic policy decisions is not an accident. There is an apocryphal story about Otto von Bismarck. Somebody once supposedly asked him, “How can you tolerate that Baron X in charge of the Ministry of Y? He is so stupid!” To which the Iron Chancellor replied, “But can you find somebody more stupid to take his place?” While the value of cleverness to success in life may be overrated, as in the Bismarck anecdote, incisive contrarian thinking may actually be inimical to economic progress, especially when coupled with a skeptical turn of mind. A case can be made that gullibility is essential to progress. (See Section 13. Human nature, positive aspects of the Madoff fraud, and the virtues of gullibility.) It is even possible to find a positive aspect to the Madoff fraud, not in the fraud itself, but in what it says about the high level of gullibility and trust in our society.
It does appear that gullibility has grown over the last couple of centuries, in parallel with the flowering of the Industrial Revolution and later transformations of society. However, it would take the development of a verified gullibility index, moreover one that could be applied retrospectively, to make sure of that. But the value of naiveté was appreciated early on. For example, the caption of a cartoon in Punch in late 1845, at the height of the British Railway Mania, noted 
“Where ignorance is bliss, ’tis folly to be wise!”
It is thus possible that the present course of society, towards increasing gullibility, is the optimal one. However, while Mae West famously said that “too much of a good thing is wonderful,” this may not always be true in economics. Growing gullibility will inevitably lead to more and bigger bubbles. And how many more financial crashes such as that of 2008 can we afford?
Even if increasing gullibility is the optimal course for society, and the resulting bubbles and crashes an inevitable side effect that we have to learn to tolerate, a quantitative index of this characteristic might be useful, for example in directing policy–making towards raising its level. What is needed is an index of gullibility of society, and also a gullibility quotient for individuals. (The latter might be used in selecting future policy–makers.) Aside from the social utility of such measures, there is the basic intellectual challenge of understanding how society works. Conventional economic models have again proved themselves utterly useless during the crisis of 2008. Perhaps by incorporating a gullibility index into them, as well as other measures, some to be mentioned later, these models and the associated economics literature could be made relevant.
A gullibility index would also be useful for individuals in their investment decisions. Recent financial market crashes have demonstrated conclusively that the public cannot rely on regulators, the press, or the economics profession to provide useful warnings of impending disasters. Even if this is caused by our society’s need to encourage exuberant “animal spirits,” individuals might be reluctant to have their careers and fortunes sacrificed for the common good. They might therefore appreciate having an objective measure of the state of the collective investment mood to help them make their own decisions.
The goal of this work is to point out some approaches towards constructing measures of gullibility. The main focus is on demonstrating that during manias, even foremost business and technology leaders are subject to collective delusions that lead them towards behavior that is not only irrational, but irrational in objectively measurable ways. Examples are drawn primarily from the Internet bubble, since the absurdities of that episode were the most striking, and the irrational behavior of decision makers easiest to demonstrate. These examples suggest some measurements that could be incorporated into a gullibility index.
During the Internet bubble, the two biggest real investment disasters (i.e., involving actual outlays by companies, as opposed to changes in stock market valuations) were the construction of new long–haul fiber optic networks in the U.S. at a direct cost of approximately US$100 billion, and the European 3G spectrum auction in which participating wireless service providers paid approximately €100 billion. The Economist wrote of them (Standage, 2006) that “[b]oth of these episodes are now regarded as embarrassing collective hallucinations over which the industry prefers to draw a veil.” This veil, combined with the studious efforts by economists (with Messrs. Bernanke, Greenspan, and Summers in the lead) and other observers not to notice even the existence of the veil, has allowed Alan Greenspan, for example, to claim with a straight face that bubbles cannot be identified before they burst (Greenspan, 2010).
Collective hallucinations in investment settings have not attracted much serious scholarly attention in recent years. Mackay’s Extraordinary popular delusions and the madness of crowds continues to be popular, but it is based on unreliable secondary sources, and so cannot be taken seriously. On the other hand, there has been an explosion of research in a variety of fields, such as anthropology, psychology, sociology, political science, neuroeconomics, and behavioral economics, which demonstrates how frequently human behavior departs from the Homo economicus utility–maximizing assumptions of classical economics. In the interests of brevity, these results will not be discussed here, but a good collection of references to some of this work can be found in Baddeley (2010). The current paper argues that many of the tools and insights from these areas should be applied to study collective hallucinations. The main thrust of this work, though, is on showing, in simple terms, without resorting to any technical results or language, that sometimes even the foremost business and technology leaders are subject to mass delusions to an extent that drastically skews their investment decisions. Moreover, at least occasionally this effect can be quantified in objective terms.
There is considerable anecdotal evidence for the important influence of mass delusions on modern decision–making. Managers and financial analysts accused of improper behavior during the Internet bubble often tried to argue, in defending themselves during the class action lawsuits filed afterwards, that normal standards should not apply to decisions made in the frenzied atmosphere at the height of a mania. Those were obviously self–serving claims, but they are supported by testimony by other, less interested participants in that historical episode. Some are cited in Haacke’s book Frenzy (2004). Even more support for this view can be obtained from other sources. For example, Neil Barton, the London–based technology analyst for Merrill Lynch during the 1980s and 1990s, retired in 1999. He recalls (private communication) that he was suspicious of the developing market and economic situation and began to doubt even his own judgment. He was uncomfortable with the very fast–growing ‘buy–side,’ where both new and established investment managers reacted irrationally to even mildly negative reports.
Still, even Barton’s evidence is anecdotal, and what appears to be needed is an objective measure of the degree of irrationality that develops during bubbles. The aim of this paper is to point out ways that such a measure might be developed. This investigation also leads to new ways of looking at economic activity, and suggests that bubbles may be an inevitable by–product of the processes that produce economic growth.
The failure of conventional macroeconomics, demonstrated in the recent financial crash of 2008, has stimulated interest in considering other approaches. Lo and Mueller (2010) argue that economics has been unrealistic in trying to achieve physics-like standards of predictability. Akerlof and Shiller, in their recent book Animal spirits (2010), argue even more strongly that human psychology is the driving force behind economic behavior. They base much of their argument on, and take their title from, John Maynard Keynes. They go beyond Keynes by also emphasizing the importance of stories (what I will call tales) that spread through society and inspire economic activity.
The core of this paper involves a whale of a tale, the myth of “Internet traffic doubling every 100 days.” It was a key inspiration for the entire Internet bubble. The Madoff Ponzi scheme appears to have involved somewhere between US$20 and US$65 billion. About US$20 billion is what the gullible investors put into it, drawn by the beguiling tales of Madoff’s magical trading system. And US$65 billion was about the sum of the largely imaginary balances that Madoff created. The “Internet traffic doubling every 100 days” tale led to real direct investments that easily exceeded US$100 billion, and likely was several times that, and to stock market valuations that were in the trillions of dollars. Thus this tale was far more influential than the Madoff one. It was also far more transparently nonsensical than the Madoff Ponzi scheme, and involved important business and technology leaders not only willingly suspending their disbelief, but losing the ability to do simple arithmetic.
This paper is based on material from a book in progress, tentatively entitled Beautiful illusions and credulous simplicity: Technology manias from railroads to the Internet and beyond, which is devoted to comparing the Internet bubble to the British Railway Mania of the 1840s and making projections about future technomanias. The Railway Mania was the largest technology mania in history, when measured in real investments as a fraction of the economy. It pulled in as investors such famous personalities as Charles Darwin, John Stuart Mill, and the Brontë sisters. Several manuscripts from that project, concerned primarily with early railroads, are available on my home page (Odlyzko 2010; manuscript). Much of the material about the telecom debacle during the Internet mania is based on previously published papers (Odlyzko, 2003; Coffman and Odlyzko, 1998).
The first half of the nineteenth century may seem remote to us. But it deserves special attention, as that was the formative period of modern corporate capitalism, when many of our most important laws, regulations, and institutions, and our entire institutional culture developed. Interestingly enough, one finds among some observers of that time an understanding of “animal spirits” deeper than that of Keynes. Much of it is even deeper than that of Akerlof and Shiller, with their discussion of the importance of tales. It involved an appreciation of the role of the promoters who concoct, embellish, and propagate such tales. After any crash, they are often referred to scornfully as Pied Pipers and “snake oil salesmen,” but that is a very one–sided view. They included people such as Benjamin Disraeli during the British investment bubble of the mid–1820s, see Chapter 5 of Odlyzko (manuscript). Their contributions towards stimulating economic activity through creation and propagation of “beautiful illusions” should not be underestimated, just as the importance of technical and managerial competence and of honesty should not be overestimated. Further, many of them cannot be placed easily on the “fool to rogue” scale. Often, as was the case with Disraeli, they believe enough of their tales (“drink their own Kool–Aid” in a modern phrase) to lose their fortunes in the bubbles they help inflate. In general, the mental states of various participants at times of mania peaks are indeed different from normal ones (and deserving of more careful studies using modern tools, such as those of neuroeconomics). This was appreciated by some of the more perceptive of the early Victorian observers. And so was the value of credulous simplicity among the public and among decision–makers.
The next section has a brief discussion of innumeracy, which is intimately connected to gullibility, and offers one of the most promising approaches to a quantitative measure of this characteristic. Section 3 introduces the telecom bubble and the key “Internet traffic doubling every 100 days” tale. There is a brief discussion of the significance of this tale, and what reactions to it signified about listeners’ ability to understand compound interest. Then sections 4 through 8 delve in more detail into this “whale of a tale” and demonstrate that most of the accomplished and supposedly sophisticated professionals involved with the telecom sector as investors, employees, regulators, or reporters, were acting in a mental haze. They were unable to tell the difference between growth rates of 100 percent (2x) per year and 1,000 percent (11x) per year, even when those affected the most important measure of their industry’s health. It is rather intriguing that the only person at the time who left any public record of recognizing that the difference existed, and that it mattered, was George Gilder. Exalted as the foremost prophet of the Internet revolution while share prices were rising, he was called, after the crash, the Pied Piper (or worse) of that episode. Yet, although he was not trained in any quantitative discipline, he seems to have been the only one to retain enough common sense, and ability to comprehend the power of compound interest, to realize something was wrong. Unfortunately for him, and a myriad of his followers, he drew the wrong conclusion from his observation, and helped inflate the telecom bubble even further.
The demonstration of the objectively irrational behavior presented in sections 4 through 8 leads naturally to Section 9, which briefly introduces the concept of “information viscosity.” This refers to the phenomenon that important information often does not spread efficiently, and so does not get incorporated properly into market prices. This is contrary to the basic assumption of theories of efficient markets.
After the discussion of the irrationalities and defective information distribution in the telecom bubble in sections 3 through 12, the rest of the paper considers briefly some related observations, and their implications. Section 10 considers the general role of gullibility in our society. Section 11 deals with bubbles and their detectability and controllability. Section 12 produces more examples of “information viscosity.” Section 13 relates gullibility to various human traits, and presents the Madoff fraud as a positive indicator of the level of trust in our society, the type of trust that is essential for the functioning and progress of modern economies. Section 14 contains a brief discussion of how increasing gullibility is cultivated in our society. Finally, Section 15 has the conclusions, with some specific suggestions on how a gullibility index might be developed.
Gullibility is strongly correlated with innumeracy, the inability to reason with numbers and other mathematical concepts. Innumeracy is almost universal, and can be seen every day. As just one example, the Wall Street Journal, one of the most prestigious business publications, managed recently to use US$6 million in one place, and US$6 billion for the same quantity a couple of lines later (Wall Street Journal, 2010a). Another example comes from a white paper from IBM, one of the most eminent and most successful technology companies. It claims (IBM, 2006, p. 2) that by 2010, “the world’s information base will be doubling in size every 11 hours,” implying that within a year there would be more bits of information than the number of elementary particles in the universe! More examples are presented later, scattered throughout the text.
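The absurdity of the IBM claim can be checked in a few lines of arithmetic (a quick sketch; the figure of roughly 10^80 elementary particles in the observable universe is the commonly cited estimate):

```python
import math

# IBM's claim: the world's information base doubles every 11 hours.
doublings_per_year = 365 * 24 / 11                        # about 796 doublings in a year
orders_of_magnitude = doublings_per_year * math.log10(2)  # growth factor, in powers of 10

# Even a single starting bit, doubled ~796 times, grows by some 240 orders
# of magnitude -- vastly exceeding the roughly 10^80 elementary particles
# commonly estimated for the observable universe.
print(round(doublings_per_year), round(orders_of_magnitude))
```

A year of such doubling multiplies the starting quantity by about 10^240, which is why no starting information base, however small, survives the claim.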
Innumeracy is especially dangerous in situations such as the telecom bubble, where the quantities under discussion are huge, with prefixes such as tera–, peta–, and exa–, and refer to photons and electrons, objects that are not very tangible. (That may be one reason the Internet bubble fooled people so much more than the Railway Mania of the 1840s, which dealt with far more tangible passenger transport.)
However, innumeracy appears to be only a part of the story. As we will see, many people who otherwise exhibited substantial degrees of numeracy in their careers, to the point of getting PhDs from MIT and MBAs from Harvard, still fell for the most preposterously impossible quantitative stories. And John Allen Paulos, the mathematician who wrote the famous, illuminating, and for many readers frightening book Innumeracy (1988), was also a victim of the telecom bubble (Paulos, 2003).
Still, innumeracy offers one of the most promising methods to approach the study of gullibility in a quantitative way. It appears (as will be shown later with numerous examples from the Internet bubble) that as a mania advances, innumeracy grows, and grows particularly dramatically among business and technology leaders, the people with PhDs and MBAs who are normally competent with basic arithmetic. Hence a quantitative measure of innumeracy, which ought to be feasible, since it involves explicit numbers in reference to specific objects or services, could become a key part of the gullibility index.
Discussions of the Internet bubble tend to concentrate on the dot–coms. They are usually held up as examples of extreme irrationality. Yet they were by some measures the most sober part of the Internet bubble, and were a great success. In retrospect, WebVan and eToys were silly wastes of investor funds. However, they did not consume much funding. If we consider just the real investments of the dot–com boom, the money spent on writing software, buying servers, as well as the legal, marketing, and other expenses, the total appears to be well under US$20 billion. Any single one of the (few) great success stories of that mania, namely Google, Yahoo!, eBay, and Amazon, created more value (from the perspective of today’s market valuations) than the sum total of all the misbegotten ventures. Thus from the standpoint of society as a whole, the dot–com frenzy was a great success. A potentially different picture emerges when one looks at the prices paid for dot–com shares at their IPOs or later. But that represents just a transfer of money from one class of investors to another, and not real economic activity. On the other hand, the real investments of the telecom bubble (and the associated more general information and communication technologies bubble) were far larger.
The dot–coms were also hard to discount beforehand, at least with any solid arguments. Yes, “eyeballs” and “mindshare” were vague concepts, and it was not easy to see how to convert them into revenues, but then it was hard to prove such conversion would not work either. (It did work for Google, let’s not forget.) On the other hand, what the Economist called “embarrassing collective hallucinations” of the telecom bubble were embarrassingly easy to show as destined to crash, largely because they were justified on the basis of quantitative claims that were transparently false when describing the situation of that time, or absurd when describing the promised future. A passage from the Times (of London) from the time of the Railway Mania is singularly appropriate:
“The salient points of absurdity in … statistics were so numerous that we found it impossible to notice more than a portion. It was tantalizing indeed to see so luxuriant a crop, and feel that human hands could not grasp more than a handful.” 
A fuller treatment of the patently false projections and claims will be presented in Odlyzko (in preparation). In particular, that work will consider the financial aspects of that bubble, including the WorldCom accounting frauds. (The famous ones, the ones that resulted in Bernie Ebbers going to jail, were probably not the most damaging ones.) Right now, in the interests of brevity, and since “human hands cannot grasp more than a handful” of the absurd numbers and arguments that were used, I will concentrate on just one key issue, that of Internet traffic growth. The preposterous nature of the claims made there is easiest to demonstrate, and had the most visible (and destructive) effect on actual investments. The issue also demonstrates most clearly the irrational behavior of business and technology leaders, as well as of the press.
The key mantra of the telecom bubble was that of “Internet traffic doubling every 100 days.” (Sometimes the 100 days was replaced by three months, or four months.) This mantra was widely held, and was crucial in justifying construction of the new fiber networks. It was also a key inspiration for the rest of the Internet bubble, including the dot–coms. As an example, a story in the New York Times in March 2000, at the peak of the tech market, just as NASDAQ shares were beginning their 80 percent decline, quoted Matthew Johnson, “chief trader of Nasdaq stocks for Lehman Brothers,” on why shares were not overvalued (Hershey, 2000):
“The hardest part for a lot of us on Wall Street who have taken traditional valuation measures, if you try to apply them today you’d probably never buy a stock. I’m a believer in the new paradigm. Traffic on the Internet is doubling every 100 days; if you think about that, you begin to understand the magnitude of this technology revolution and you can understand investors’ willingness to take the risks they’re taking.”
And indeed, had Internet traffic grown at that rate, not only would the telecom industry have flourished, but so probably would have many of the dot–coms. The rapid growth of traffic would surely have arisen from rapid adoption of new technologies and new online business models, all progressing at the proverbial (but in fact horribly misleading) “Internet time.” That would likely have produced bountiful revenues and profits for many of the new ventures.
The claim that Internet traffic was doubling every 100 days was completely uncontroversial, as far as the mainstream press was concerned, all the way to the end of 2000. During that year, the New York Times alone had at least five stories that cited it, in all cases as something that everyone knew was true. The 27 November 2000 issue of Fortune even had two stories, by two different reporters, that cited this myth. And there were many authoritative–sounding parties that helped propagate it. For example, in the conference calls with financial analysts to discuss the third quarter 2000 results, the heads of AT&T, Global Crossing, and Level 3 (and quite possibly others) claimed that Internet traffic was growing that fast.
As seems typical, government officials were among the most gullible. Reed Hundt, the Chairman of the FCC (Federal Communications Commission) from 1993 to 1997, claimed in his 2000 book You say you want a revolution that “[i]n 1999, data traffic was doubling every 90 days … .” An official March 2000 FCC document, submitted to Congress by the then–Chairman of the FCC, William Kennard (2000), claimed:
“Internet traffic is doubling every 100 days. The FCC’s ‘hands–off’ policy towards the Internet has helped fuel this tremendous growth.”
This language is eerily reminiscent (preminiscent?) of that used by financial regulators a decade later, who hailed their ‘hands–off’ policy as leading to the flowering of ‘financial innovation,’ in particular to the astronomical growth in volume of derivatives, which was supposedly leading to a new era of prosperity and stability. The temptation to carry out a deeper comparison of the underlying philosophies, non–actions, and eventual consequences of these two episodes, separated by a decade, is almost irresistible. But in the interests of brevity, let us resist it for now.
We do have to observe, however, that the consequences of FCC’s hands–off policy were catastrophic for the telecom industry. Note that a simple Internet traffic volume reporting requirement, similar to the one that had been in force for decades for voice traffic, would have sufficed to disprove the myth and squash the huge spending on new fiber. It might also possibly have dampened the “animal spirits” behind the dot–com craze.
The FCC document (Kennard, 2000) is a reflection of FCC’s gullibility and ignorance, but by itself it does not seem to have had any significant impact. On the other hand, a document from the U.S. Department of Commerce, the 1998 white paper The Emerging Digital Economy (U.S. Department of Commerce, 1998), was tremendously influential in stimulating the dot–com bubble. It was cited by innumerable news stories and start–up business plans as supporting an extremely optimistic outlook for new Internet–related ventures. This report in effect argued (combining the words in the report to go slightly beyond what it explicitly said) that “dramatic improvements in computing power and communication and information technology” would “create a ‘long boom’ which [would] take the economy to new heights over the next quarter century.” Among the seven points cited on p. 2 of The Emerging Digital Economy as “showing the growth of the Internet and electronic commerce this past year,” number three was the claim that “[t]raffic on the Internet has been doubling every 100 days.” This claim, as well as a similar, but more extensive one on p. 8 of that report, cited a 1997 white paper from Inktomi. The Inktomi report in turn quoted Mike O’Dell, the Chief Scientist of UUNet, as saying that “[t]he capacity crunch is real and will continue for quite some time,” and stated
“UUNet estimates that network traffic is doubling every 100 days as graphics, audio, and video become more common. This rate means that bandwidth demands are increasing at five times the rate of Moore’s Law. The ability to build network capacity cannot keep pace. O’Dell concludes, “Demand will far outstrip supply for the foreseeable future.”
“UUNet is engaged in a million–dollar–a–day capital expenditure plan to add the bandwidth needed to handle more users and increasing traffic loads. In three years, its network will be a 1,000 percent bigger than its current network.”
The Inktomi white paper also provided a chart of Internet traffic growth, taken from the Gilder Technology Report (which will be discussed at some length in Section 8).
There were several noteworthy features of The Emerging Digital Economy that appear not to have been noted by the media nor by the numerous venture capitalists and entrepreneurs who cited it. One is that the hard Internet traffic data from the Gilder Technology Report only went through the end of 1996. When questioned on this issue, in an e–mail pointing out that there was evidence of a dramatic slowdown in Internet traffic growth in 1997, one of the authors of the U.S. Department of Commerce report responded that “the Internet is evolving so rapidly, that information quickly becomes out of date.” That response exemplified a paradox common to many Internet enthusiasts of the time. Even though they were almost uniformly believers in the mantra of “Internet time,” the concept that everything was changing many times faster than in the physical world, they would fixate on a statistic that may have been true at some point, and keep repeating it as a settled fact for years, without checking. (The “doubling of traffic every 100 days” was approximately true in 1995 and 1996.) Another interesting feature of the Internet growth claims in The Emerging Digital Economy was that they were based on the Inktomi report, which was marred by a basic innumeracy. The claim of a doubling every 100 days corresponds to growth of over 1,000 percent per year (more precisely, 1,155 percent, but that level of precision is irrelevant), and not the growth over three years of 1,000 percent cited in the Inktomi white paper. (Almost certainly, as we will see in the next section, O’Dell had told Inktomi that the UUNet traffic was going to grow 1,000–fold in three years, meaning approximately 100,000 percent.)
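The compound interest arithmetic that the Inktomi authors mangled takes only a few lines to check. The following sketch (purely illustrative; it appears in none of the cited documents) compares the three growth figures that were being conflated:

```python
# "Doubling every 100 days" expressed as an annual growth factor:
annual = 2 ** (365 / 100)
print(annual)            # about 12.55x, i.e., roughly 1,155 percent growth per year

# What the Inktomi white paper actually wrote: 1,000 percent growth
# ("1,000 percent bigger") over three years, i.e., 11x total:
print(11 ** (1 / 3))     # about 2.22x per year -- nowhere near a doubling every 100 days

# What O'Dell most likely meant: 1,000-fold growth in three years:
print(1000 ** (1 / 3))   # 10x per year, roughly 100,000 percent over three years
```

The gap between 2.22x and 12.55x per year is exactly the kind of discrepancy that any reader fluent in compound interest should have caught immediately.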
Tales of fast growth of Internet traffic were the key element behind the telecom bubble. All the dark fiber that is lying around, still dark, was put down as a result of exaggerated expectations of Internet traffic growth. Towards the end of the telecom bubble, in 2001 and 2002, as the previous business plans were becoming less and less credible, telecom leaders often talked of how the fiber was just a way to deliver futuristic (but unnamed) services. Even some complaisant financial analysts had difficulty swallowing that story. Another explanation that was propagated was that the cost of lighting the fiber was many times the cost of laying the fiber, and so it was no wonder so much of the glass was idle. This was true, but made nonsense of the financial plans. Since “[t]he salient points of absurdity” in the telecom bubble form “so luxuriant a crop … that human hands [cannot] grasp more than a handful,” let us not explore financial aspects of the telecom bubble here, and concentrate on the basic traffic growth story. If we examine documents from the late 1990s, it is clear that this was by far the most important justification for the new telecom ventures, in fact, almost the only one. See, for example, the presentation from 1998 by Tom Soja (1998), an important consultant to underwater cable companies. It was clear that growth rates of traffic were the only thing that mattered for Global Crossing. Or consider 360networks. At the end of 1999, it lured Gregory Maffei, the CFO of Microsoft, as its CEO, inducing him to give up US$64 million in Microsoft options (in return for a possible US$886 million in 360networks shares, were the new venture to succeed). Further (Grice, 2000; Willoughby, 2000), it had “a technical advisory board consisting of the Who’s Who of the New Economy, including Michael Dell; Nathan Myhrvold, chief technology officer for Microsoft; and Terence Matthews, founder of Newbridge Networks”.
The point is that it was to be a “carrier’s carrier,” providing basic pipes to other service providers, expecting to “capitalize on the ‘amazing growth in Internet and data traffic.’” Thus in spite of the spin that was attempted afterwards, the driving force behind the greenfield fiber deployments was the perception that Internet traffic was exploding, and would continue to do so, and it would be the provision of commodity bandwidth that was the road to riches.
Given “so luxuriant a crop” of absurdities, I do not deal with the many ways in which financial projections for the telecom industry should have been seen as fatally flawed. Some more detail on that topic will be presented in Odlyzko (in preparation). But for completeness, it might be worth mentioning that it would have taken something like a “doubling of Internet traffic every 100 days” for the first half a dozen years of the third millennium for the greenfield fiber players to make money. On the other hand, equipment providers and established telcos would likely have done extremely well even with 4x annual growth. What killed the entire industry was Internet traffic growing about 2x annually, which only served to offset the technological and architectural improvements in communications, and led to stagnant service revenues.
The truth was that while Internet traffic did double about once every 100 days in 1995 and 1996, by 1997 the rate of growth had declined to doubling once a year, the rate that had prevailed rather regularly in the early 1990s. The FCC, the Department of Commerce, the press, and the VCs and entrepreneurs and investors who believed in the myth were all like Wile E. Coyote, happily running effortlessly through the air, unaware they were in a freefall, very far from the hard ground beneath. Their oblivious attitude was enabled by a lot of hot air, emanating principally from WorldCom and its UUNet branch.
The full story of the telecom bubble is complicated. Even the more detailed version in Odlyzko (in preparation) will only be able to present a simplified picture. But it should be said that in addition to the nonsense from WorldCom/UUNet about growth rates, misinformation that was often amplified by others, there was much confusion and pressure to make decisions quickly. Two participants in those events told me separately that “there was no time to think.” Still, Internet traffic was known to be key to the health of the telecom sector, and people obviously did think about it, since they frequently cited the “doubling every 100 days” story. Those who wanted to believe in it, could even occasionally find snippets of solid information from outside WorldCom/UUNet that supported it. For example, the respected reports from the Dell’Oro market research firm showed that the maximal total capacity of all routers shipped in 1999 with ports of at least 2.5 Gbps increased by a factor of seven over the corresponding number of 1998. Similarly, the optical component markets sometimes provided examples of large jumps in demand. In reality, most of those router ports took several years to be populated, many of the optical component orders represented multiple orders placed in anticipation of shortages, etc. And, of course, quite a bit of the demand was coming from the new networks that were being built in anticipation of booming demand, networks that later were often abandoned, when the anticipated traffic did not materialize. As was revealed a few years later, actual Internet traffic growth rates varied, from 70–80 percent per year at UUNet, to about 100 percent at Genuity, to about 300 percent at AT&T. Most estimates agree that growth in the year 2000 overall was around 100 percent, 2x per year.
In any event, the 10x annual growth rate claims should have aroused intense curiosity and spurred serious efforts to validate them. Andy Grove in his famous book Only the paranoid survive wrote about the importance of 10X change; whenever some important parameter changes by a factor of 10, one needs to fundamentally rethink business plans. The appeal of the “doubling every 100 days” tale of course was that it implied the world was now on a path of revolutionary upheaval. WorldCom talked of how their “system architecture has to be redesigned — not just boosted — every year. Every year UUNet redesigns the basic network architecture.” (At the same time, WorldCom was also telling investors that the era of big capital outlays was over. With the network buildout complete, they were about to enjoy the bountiful profits their money had earned. The discrepancy between this story and the one about annual redesign of the network architecture was not noted at the time. I will not dwell on it here, since there is “so luxuriant a crop” of absurdities in the WorldCom story, and I am leaving the financials aside.) This was then extrapolated to the rest of the world by enthusiastic listeners. But the story was about 10X growth year after year. Those who retained the ability to think quantitatively, and had any sense for technology, were able to recognize such rates could not be sustained for long. But there seemed to be few such people. It was shown explicitly in Odlyzko (2000b) that such growth rates would have implied that by the end of 2000, the average traffic per Internet user would have been over 1.5 million bits per second around the clock. At that time, most people had at best 28 thousand bits per second modems they used for perhaps one hour per day, at a fraction of the capacity.
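The size of that gap can be made vivid with a rough sketch. The figures are those quoted above; the assumption of a standard 28.8 kbps modem used one hour per day at full capacity is a deliberately generous one:

```python
# Implied per-user average traffic under the 10x/year growth claims:
implied_avg_bps = 1.5e6                           # bits per second, around the clock

# What a typical user could actually generate: a 28.8 kbps dial-up modem,
# used perhaps one hour per day, even if run flat out for that hour:
modem_bps = 28_800
hours_per_day = 1
actual_avg_bps = modem_bps * hours_per_day / 24   # 1,200 bps around the clock

print(implied_avg_bps / actual_avg_bps)           # a gap of 1,250x
```

Since real modem sessions ran well below full capacity, the actual gap was even larger than 1,250x.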
Many readers, who had apparently managed to resist all the solid evidence in (Coffman and Odlyzko, 1998; Coffman and Odlyzko, 2002; Sevcik, 1999) that Internet traffic was only doubling once a year, appeared then to finally grasp that their “doubling every 100 days” belief was a delusion. But why was it necessary to present that simple compound interest calculation explicitly? It should have been obvious, and it had indeed been obvious all along to many. The most convincing explanation is that bubble participants lost the ability to comprehend compound interest.
The only seemingly credible direct source for the “doubling every 100 days” story was the UUNet division of WorldCom. Unfortunately this source had great credibility, as it was reputed to have the largest Internet backbone in the world, and at times claimed to be carrying half the world’s Internet traffic. The WorldCom acquisition of MCI and the later aborted acquisition of Sprint were carefully scrutinized by competition authorities in the U.S. and Europe largely because of concerns the combinations would have monopoly power in Internet services.
However seemingly credible the source, the claims should have aroused suspicion early on. Not only were they the most extreme in the industry, but there were lots of indicators that something was fishy about them. These indicators form “so luxuriant a crop” that “human hands could not grasp more than a handful,” so I will concentrate on the most prominent and most extreme case, the public presentations by Mike O’Dell and John Sidgmore of WorldCom/UUNet. But there were plenty of others.
The most famous disseminator of the “Internet doubling every 100 days” tale was John Sidgmore. At the end of the 1990s he was the Vice Chairman of MCI WorldCom, and after Ebbers was forced to resign in 2002, was briefly CEO of that company. His obituary in the New York Times noted that he had been “one of the industry’s most well–liked executives” (Feder, 2003). The obituary, printed after WorldCom’s bankruptcy and the revelation of numerous accounting irregularities at WorldCom, also noted that he “was never charged with any misconduct.” On the other hand, it is also known that his estate paid a multi–million dollar settlement in the class–action lawsuits that followed. As usual, the details are unknown, but it seems safe to assume that good evidence of negligence or worse on Sidgmore’s part had been found. This may not have been connected at all to the tale of “Internet doubling every 100 days.” However destructive this tale was, and however false, and however fraudulently concocted, it is not clear that it broke any laws. Lying is not a crime, and this may have been just a case of the “inactionable puffery” that our society tolerates (and, some would say, encourages). All the WorldCom insiders I have spoken to, ones who had been aware of the falsity of the “doubling every 100 days” tale and opposed to it, spoke very highly of Sidgmore. They thought that he had been simply fed that line to push, and had been sufficiently far away from operations not to recognize it as false.
Sidgmore seemed to be ubiquitous on the high–tech circuit, making presentations, often keynotes, at various conferences, often run by investment banks or other organizations for investors and investment managers. Most of the news stories that did cite sources for the astronomical growth myth mentioned Sidgmore. Unfortunately so far no recordings of his lectures have turned up, although we do have available his paper from the Vortex 98 conference (Sidgmore, 1998). On the other hand, for many years we had available a video recording of a similar presentation by Mike O’Dell, Chief Scientist and Vice President at UUNet (O’Dell, 2000). The context of this talk is almost as important as the talk itself, and will play an important role in the discussion in Section 6. It was one of a dozen talks at a symposium on “Optical Internet: The next generation,” held at Stanford University on 16 May 2000. (I.e., this was shortly after the dot–com collapse started, but while the telecom industry was still going strong. The flow of investment money was beginning to slow down, and prices were drooping, but overall, spirits were still high.) It was associated with the inauguration of the Stanford Networking Research Center, a cooperative venture of Stanford and (in the words of the official Web site) “leading information technology corporations and Silicon Valley industries.” Attendance exceeded expectations, with many guests forced to watch videocasts of the lectures in an overflow room. Participants apparently included numerous networking industry players, as well as venture capitalists, and of course students and faculty at Stanford. Thus this was about as sophisticated an audience as one could hope for. And yet they all fell for the transparent nonsense of the O’Dell talk.
It has to be admitted that it was a wonderful talk, catnip for all the technology enthusiasts who made up the audience. So it is a pity that the video of it has vanished from the Stanford site. (Hopefully a copy will be found someplace.) Still, we can get a taste, although an inadequate taste, from the transcript at http://www.dtc.umn.edu/~odlyzko/isources/. The talk had some nice lines, such as the similarity of the Internet revolution to the disintegration of central planning, “the difference between an eccentric and a madman is that the eccentric has a checkbook,” and of course the final line, occupying the last slide, the line that had apparently been used by O’Dell and Sidgmore many times before, and was often repeated by others:
“If you aren’t scared, you don’t understand.”
This was definitely (although inadvertently, since the intent was to convince the audience to accelerate research, development, and deployment of networking technologies) true. It appears nobody in the audience understood that they were being fooled, and that the industry had already fallen off a cliff and was about to crash. So they were enthused, not scared.
Why should the audience have been suspicious? Well, for one thing, O’Dell was projecting growth by a factor of between 10^6 and 10^7 over the next five years, for annual growth by a factor of 16 to 25. And he was projecting petabit trunks, multiple ones. The audience should have recognized that this was totally impossible, at O’Dell’s time scale, with any conceivable technologies. But they swallowed the line, which was made tastier with the usual techniques of warning people ahead of time it might seem impossible, etc. (cf. University of Exeter, 2009; Stajano and Wilson, 2009). And there were other obviously questionable aspects of the presentation. For example, O’Dell was describing how the UUNet buildout, to be completed by the end of that year, was going to use four times the world’s annual production of OC–192 lasers. Where were they going to come from (especially since few had been produced in previous years)? Were they going to be imported from Mars?
However, predicting the future is hard, and miracles or almost miracles do happen. Still, O’Dell also made some supposedly factual statements about the past that should have set alarm bells ringing. The second through fourth of his slides were the famous UUNet Global Network charts, which apparently also featured in all, or most, of the Sidgmore presentations. These three slides (available at http://www.dtc.umn.edu/~odlyzko/isources/) purported to depict the UUNet network in mid–1997, mid–1998, and mid–1999. Tom Stluka, who was involved in preparing these slides in early 1998, reports (private communication) that the network capacity shown for mid–1997 was correct, that for mid–1998 probably came close to being installed on schedule, but the one depicted for mid–1999 was likely not achieved until a year or more later. But they were all being presented in mid–2000 as historical facts! The audience had no way of knowing that, however. Still, by comparing the figures for “U.S. Domestic Backbone” capacity shown in the lower left of those slides, they could have noticed that the jump from mid–1997 to mid–1998 was by a factor of 7.3, and in the following year by a factor of seven, significantly less than the factor of 10 that O’Dell was claiming for each of the preceding six years. Even ignoring that, if one took the mid–1999 figure of 268,794 OC–12 miles, and combined it with the O’Dell claim of 10^6 growth over the previous six years, one discovered that his slides implied that UUNet had, in mid–1993, a U.S. network of about 0.27 OC–12 miles, which is equivalent to a single voice line across the continent. That was impossible, as UUNet was already a large ISP back in 1993, with an extensive network of T1 (24 voice lines each) or faster lines across the country.
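The back–extrapolation is easy to reproduce. The sketch below uses the figure from the slides plus two standard rates not stated in the text: OC–12 runs at 622.08 Mbps, and a telephone voice channel is 64 kbps:

```python
# O'Dell's mid-1999 "U.S. Domestic Backbone" figure, in OC-12 miles:
mid_1999 = 268_794

# His claim of 10x growth per year over the preceding six years implies:
mid_1993 = mid_1999 / 10 ** 6
print(mid_1993)                             # about 0.27 OC-12 miles

# Standard rates: OC-12 = 622.08 Mbps; one voice channel = 64 kbps,
# so one OC-12 carries roughly 9,720 voice channels.
voice_channels_per_oc12 = 622.08e6 / 64e3
print(mid_1993 * voice_channels_per_oc12)   # about 2,600 voice-line miles --
                                            # roughly one voice circuit coast to coast
```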
With a little digging for information one could also show that the O’Dell slides contradicted official UUNet filings. The S–1 form that UUNet filed with the Securities and Exchange Commission on 10 April 1995, while it was still an independent company, stated that its network then included a 45 Mbps ATM backbone. Well, even a single transcontinental 45 Mbps link in mid–1995, combined with the stated capacity of 5,281 OC–12 miles in mid–1997 shows that the growth over those two years could not have been higher than a factor of 30, far short of the 100 claimed by O’Dell in 2000.
Still, as with the Sidgmore presentations, the audience for the O’Dell lecture reacted neither with derision nor with indignation, but with applause. A recent passage about art fraud appears to describe the situation:
“Forgers usually succeed not because they are so talented but, rather, because they provide, at a moment in time, exactly what others desperately want to see. Conjurers as much as copyists, they fulfill a wish or a fantasy. And so the inconsistencies — crooked signatures, uncharacteristic brushstrokes — are ignored or explained away.” 
In Section 3 I discussed the role of the “Internet traffic doubling every 100 days” myth in the telecom bubble. However, the reader may have noticed that the section above discussed network capacity, as measured in OC–12 miles, and that this is what is presented in the O’Dell slides and in the transcript of his talk. This leads to another aspect of the story, one that provides more data on gullibility.
In the 1998 to 2001 period, O’Dell and Sidgmore appeared to always carefully talk of network capacity, not traffic. However, practically all press reports talked of traffic. It appears that people heard capacity, but took it to mean traffic. A few, such as Jon Healey of the Los Angeles Times, knew the difference, and were sensitive to what O’Dell and Sidgmore said, but the general assumption was that the two were the same. (And, on the backbones of the Internet, they are indeed not much different, certainly not when one has even 2x annual growth rates.) It is a mystery how this transformation took place so universally.
Furthermore, O’Dell and Sidgmore appeared to always talk just of UUNet capacity growing rapidly. (In fact, in his Vortex 98 presentation, Sidgmore (1998) asserted with some conviction that UUNet was growing faster than the rest of the industry.) Hence another mystery is how an assertion about UUNet growth rates was transformed into one about the Internet as a whole.
On a few occasions, WorldCom personnel appeared to talk of rapid growth in traffic. A few examples are cited in Odlyzko (2003). Another is the story (Heinzl, 2000) from May 2000, the same month that the O’Dell Stanford lecture was delivered, in which “Joe Cook, WorldCom Inc.’s vice president of network systems engineering” is quoted as saying that “[n]etwork traffic volume for [his] company is growing eightfold each year.” However, those seemed to be exceptions, and it is hard to tell whether those WorldCom/UUNet staff forgot to toe the party line, or whether the reporters translated claims about capacity into claims about traffic. As it turns out, UUNet traffic was about doubling each year during that period. This was implicit in a posting by O’Dell to a mailing list at the end of 2000, a posting to be discussed below. It was also confirmed officially by the company in mid–2002, after its bankruptcy filing, when it stated that its Internet traffic growth rate had been in the 70–80 percent per year range in the preceding years (Rendleman, 2002).
Back in the mid–1990s, UUNet had made a number of claims about different types of growth. The purpose, according to various insiders, was to induce the telecom supplier sector to develop new technologies faster, and also to induce the incumbent phone companies to provide local access lines faster. The official story line then settled on capacity growth because there was evidence to back it up. In 1997 and 1998, UUNet was racing to solve a capacity crunch, and their network was indeed growing rapidly. The jump from mid–1997 to mid–1998 shown in the O’Dell presentation was real, even if the one from mid–1998 to mid–1999 was a fantasy presented as fact.
As evidence mounted that traffic was not doubling every 100 days, O’Dell attempted in late 2000 to reconcile this with his claims of network capacity growing astronomically. In a posting to Dave Farber’s IP (Interesting People) list, he claimed that for traffic “to double every year, network capacity must double every 4 months or so”. He claimed that “[t]his is actually a pretty simple result from graph theory, once one gets the picture right (as are most results from graph theory — grin).” A more preposterous claim can hardly be imagined. If capacity has to grow 8x each year while traffic grows 2x, then the average utilization will drop by 4x each year. Hence, if O’Dell’s claim were true, and even if UUNet had run at 100 percent of capacity at year–end 1995, by year–end 1996 average utilization had to be at most 25 percent, by year–end 1997 it had to be down to below 6.25 percent, by year–end 1998 to below 1.6 percent, by year–end 1999 to below 0.4 percent, and by year–end 2000, when O’Dell was composing his e–mail, down to below 0.1 percent. Now it was, and is, true that average utilizations of data networks are low, something that surprised many people and contradicted some of the most cherished Internet dogmas (Odlyzko, 1999; 2000a). However, they are not that low, especially on the backbones of the Internet, which is what O’Dell was writing about. Had his claims been correct, the Internet would be the most dysfunctional communication technology imaginable, and we would likely have gone back to using dial modems over the voice networks.
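The utilization collapse that O’Dell’s claim implies can be traced year by year; this sketch simply compounds the 4x annual drop from the generous 100 percent starting point assumed above:

```python
# Capacity doubling every 4 months is 2**3 = 8x per year; traffic grows
# 2x per year; so average utilization must fall by 8/2 = 4x per year.
utilization = 100.0   # percent, generously assumed for year-end 1995
for year in range(1996, 2001):
    utilization /= 4
    print(year, utilization)
# 1996: 25.0, 1997: 6.25, 1998: 1.5625, 1999: 0.390625, 2000: 0.09765625
```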
Needless to say, nothing like the scenario outlined by O’Dell has materialized. Traffic over the Internet in the U.S. alone has grown by a factor of just about 100 over the last decade. During that period, average backbone utilizations, instead of becoming infinitesimal, have grown.
The interesting point about the O’Dell note lies less in its absurdity than in the reaction of its audience. Farber’s IP list reaches several tens of thousands of leading technorati around the world. It is hard to tell how they reacted to what O’Dell wrote. The list is moderated, so we cannot tell how many actually read the posting or tried to respond to it. But at least three people produced written attempts (outside the IP list) to duplicate O’Dell’s “pretty simple result from graph theory.” Those attempts were fallacious, but for three people to independently undertake the effort of producing an involved argument, and convince themselves at least to some extent of its correctness in spite of its falsity, is remarkable. It is a wonderful illustration of how strong the collective hallucination of the Internet bubble was.
Let us go back to the O’Dell presentation at Stanford, and consider the reactions of the audience. They applauded, and there is no sign on the videotape of anyone raising any questions about the veracity or plausibility of O’Dell’s claims. That is another testament to the power of the collective hallucination of the time, especially since the other events of the day provided plenty of grounds for raising serious questions.
Suppose it really is too much for people to remember figures from a slide flashed on the screen, too hard to do a compound interest calculation to show that those figures imply something absurd about the past, and too hard to actually dig up SEC documents to show that those figures contradict official filings. Even then there was plenty in the Stanford meeting at which O’Dell presented his talk to alert anyone with any sense, and not in the grip of a massive delusion, that the telecom industry was in grave danger, as it did not have any idea whatever about future demand for its services.
The O’Dell presentation was just one of three in the last of four sessions in the Stanford symposium that lasted all of 16 May 2000 (see http://www.dtc.umn.edu/~odlyzko/isources/ for more information). There were many statements about high growth rates throughout the meeting, some with specific figures. The remarkable thing is that those figures varied tremendously! Steve Alexander, Senior Vice President and CTO of Ciena, for example, casually mentioned 400 percent growth in two years, which is just 2.24x per year. Rao Arimilli, Vice President of Software Product Marketing at ONI Systems mentioned 35x growth in four years, which is 2.43x per year. Then, in the session just before O’Dell’s, Don Smith, the President of Optical Internet at Nortel, mentioned several times that his company had done extensive studies, and expected 100 to 200–fold growth over the next four years, for annual growth rates of 3.16x to 3.76x. Smith mentioned that his projections were higher than those in earlier sessions. And then came O’Dell, with his 10–fold growth per year, and many explicit statements that this was far faster than any of the previous estimates. All the talks were applauded. Each session was followed by a question and answer period (about half an hour in each case, following on about an hour for the formal presentations). Some of the questions after Smith’s session assumed explicitly his 100 to 200–fold growth in four years. And the questions for O’Dell all assumed the reality of his 10,000–fold growth projection over the next four years. Nobody asked how real the projections were, nor did anyone ask about the wide discrepancy.
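Annualizing the various claims made that day takes one line each. The labels below merely paraphrase the speakers as quoted above:

```python
# Converting each total-growth claim into an annual growth factor:
claims = {
    "Alexander (Ciena), 400% growth in 2 years (5x total)": 5 ** (1 / 2),
    "Arimilli (ONI), 35x in 4 years": 35 ** (1 / 4),
    "Smith (Nortel), 100x in 4 years": 100 ** (1 / 4),
    "Smith (Nortel), 200x in 4 years": 200 ** (1 / 4),
    "O'Dell (UUNet), 10x per year": 10.0,
}
for claim, rate in claims.items():
    print(f"{claim}: {rate:.2f}x per year")   # 2.24, 2.43, 3.16, 3.76, 10.00
```

Laid side by side like this, the estimates plainly could not all be describing the same industry.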
The inability to tell 4 from 10 was endemic. (Or at least the inability to recognize that it meant a huge difference for the future of the telecom industry.) It did not reign just on the Stanford campus on the day of the O’Dell talk. The general press, as was shown before, was full of claims of “Internet traffic doubling every 100 days.” But if we consider just January 2000, and look at the publications of IEEE, one of the premier professional organizations, we find wildly disparate estimates. Larry Roberts, for example, one of the “fathers of the Internet,” claimed that month in IEEE Computer that Internet traffic was growing 4x per year (Roberts, 2000). An even more interesting case is that of IEEE Internet Computing. The bulk of the January/February 2000 issue of this magazine was devoted to “An internet millennium mosaic,” a set of predictions for the next decade of the Internet by a collection of tech luminaries, including Bill Gates, Bill Joy, Leonard Kleinrock, and Eric Schmidt. Bob Metcalfe, the inventor of Ethernet and founder of 3Com, was then a technology columnist for InfoWorld (and is now a venture capitalist). His piece declared that “Internet traffic is doubling every four months.” Thus Metcalfe was claiming 8x annual growth, close to the “doubling every 100 days” myth. On the other hand, Ross Callon, the Chief Architect for IronBridge Networks, stated in that same issue:
“Bandwidth will continue to grow rapidly. The rate of growth will average a factor of 4 per year over five years (implying total growth will be roughly a factor of 1,000 relative to 1999 bandwidth).”
So here we had two well–connected technologists with greatly differing views on the magnitude of Internet growth. And yet there was no discussion about the difference, and what it might mean.
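The divergence compounds quickly. A side–by–side comparison of the two projections over Callon’s five–year horizon:

```python
# Metcalfe: doubling every four months, i.e., 2**3 = 8x per year, for five years:
metcalfe = 8 ** 5
# Callon: 4x per year for five years (the "roughly a factor of 1,000" in the quote):
callon = 4 ** 5
print(metcalfe, callon, metcalfe / callon)   # 32768 1024 32.0
```

A 32–fold disagreement about the size of the Internet five years out, published in the same issue of the same magazine, should have provoked at least some discussion.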
Some more of the variety of claims about Internet traffic will be discussed in Section 9, and in much more detail in Odlyzko (in preparation). It appears that, in spite of the “Internet traffic doubling every 100 days” mantra, meaning around 10x annual growth, that dominated the popular press, much of the telecom supplier sector (that is, companies like IronBridge Networks and Nortel) was assuming about 4x annual growth. But there was no discussion of the difference, nor any visible sign of it in the mainstream business press.
Simple compound interest calculations would have shown that, at the growth rates being discussed, even small differences could lead to catastrophic mistakes. At the Stanford symposium, O’Dell talked of the imperative need to grow the network 10x per year, and the need to start deploying some systems three years ahead of demand. But at the 4x growth rate projected by Nortel at that meeting, in three years Nortel production capacity would grow only 64x, while, according to O’Dell, the UUNet network would grow 1,000x. In other words, Nortel, which was at the time the dominant supplier of photonic systems to the industry (after making a bold leap to a more advanced transmission system that their competitors had not thought would be needed), was at risk of being relegated to holding just a six percent share of UUNet business in three years.
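The arithmetic behind that six percent figure is a three–line compound interest calculation:

```python
# Nortel plans for 4x/year growth; O'Dell demands 10x/year; the lead time
# is the three years O'Dell said deployments required:
nortel_capacity = 4 ** 3    # 64x growth in Nortel's planned output
uunet_demand = 10 ** 3      # 1,000x growth in the UUNet network
print(nortel_capacity / uunet_demand)   # 0.064, i.e., about a 6 percent share
```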
Cognitive dissonance is the term (citing Wikipedia) for “an uncomfortable feeling caused by holding contradictory ideas simultaneously.” Telecom bubble participants appeared to transcend cognitive dissonance by holding contradictory ideas simultaneously without any uncomfortable feelings.
The collective hallucination of the telecom bubble was so powerful that it not only prevented observers from noticing that 4 is not 10, it even kept them from seeing that 2 is not 10. There was a large and prominent sector of the telecom industry that based its plans on an assumption of an approximate doubling of Internet traffic each year. Yet, with one singular exception noted later, no one seemed to notice, and it was the claims of 10x annual growth rates that dominated public discussion and planning.
The submarine cable sector, with enterprises like Global Crossing, Project Oxygen, FLAG, and Gemini, attracted much attention and many billions of dollars. Tom Soja was one of the main consultants employed by this industry to estimate demand. First at KMI, and then at his own company, T Soja & Associates (TSA from now on), he prepared the business cases for Global Crossing and several other underwater ventures. The slide deck (Soja, 1998) from an April 1998 meeting shows information on the data sources used to make demand projections. The most conservative projection on slide 19 in Soja (1998), the 85 percent annual growth rate, was the key ingredient in the financial plans of Global Crossing and other carriers. It was also what Global Crossing told investors and the public that it was counting on. For example, slide 10 of the presentation by Mool Singhi, the Director of Network Planning for Global Crossing, at the April 2001 conference (Information Gatekeepers, 2001) shows what it called “Global Bandwidth Demand (Gbps)” growing at an annual rate of 82 percent from 1999 to 2004. An important point is that either implicitly or explicitly, these forecasts assumed that bandwidth demand for underwater capacity would grow at about the same rate as for terrestrial capacity.
Financial projections are not the focus of this paper, but an obvious question arises. If Soja’s traffic forecast was approximately correct, with traffic about doubling each year, how come Global Crossing went bankrupt? There are two interrelated reasons. One is that Soja did not appreciate how quickly technology would improve, and he did not get the financial dynamics right. And part of the unanticipated financial dynamics was the rise of many competing projects. Soja (personal communication) thinks that many of the bankers (and there were hundreds at the numerous presentations he made to finance people doing due diligence prior to funding projects) looked at his projections, saw all the other, far more optimistic growth scenarios in his charts (with the most optimistic one in slide 19 in Soja (1998) showing 300 percent CAGR), and assumed the true value would be someplace in between, and would provide plenty of room for other carriers to make profits. It is quite plausible that if no other trans–Atlantic cable had been built by new players, ones not affiliated with the established telcos, then the Global Crossing AC–1 cable would have been very profitable. (But then the traffic would likely have fallen short of Soja’s projections, as prices would have been higher than they turned out. Fortunately we don’t have to worry about such details here.)
TSA was not the only consulting firm making conservative, and, in retrospect, very reasonable, traffic projections. Arthur D. Little (ADL) was also in that camp. Arthur Solomon, formerly a Vice President and Director of that company, and head of its telecommunications industry consulting group, provided (private communication) a copy of the trans–Atlantic traffic forecast that his group had made for a client in 1999. It assumed growth rates declining from 76 percent in 1998 to 21 percent in 2009. The mix of applications it envisaged can be seen, in retrospect, to have been incorrect. (And so was the pricing forecast.) But the total traffic volumes turned out to be extremely accurate. An especially amusing observation is that for 2008, the forecast predicted average traffic of 859.2 Gbps. The estimate made by Telegeography in 2008 was of average traffic in the spring of that year of 861 Gbps! This amazing coincidence should not be taken too seriously. The ADL projection was for total traffic, including voice, and non–Internet data networks, while Telegeography measured only Internet traffic, and for April of 2008. (At the growth rates of Internet traffic, there is substantial variation even from month to month.) Still, by any standard, actual volumes were similar. Further, the projections made by the planners of the unnamed cable involved in that study were only about 20 percent higher than those made by ADL. (Solomon reports that in some other cases, cable promoters had far higher traffic expectations than those of ADL.)
The main point of these examples is that ADL, TSA, and many undersea cable promoters had expectations for Internet traffic growth that in retrospect were very reasonable, and at wide variance with the prevailing 10x annual growth myth. In other words,
“none of them drank the O’Dell/Sidgmore/WorldCom/UUNet Kool–Aid”
of “Internet traffic doubling every 100 days.” They were assuming approximately 2x annual growth, in capacity and traffic, on both terrestrial and subsea links.
Enthusiastic financial analysts, with the now-infamous Jack Grubman in the lead, had no difficulty acclaiming simultaneously the glowing prospects of WorldCom, with its 10x annual growth projections, and Global Crossing, with its 2x estimates. The discrepancy does not appear to have been noticed in any publication, with the singular exception of the Gilder Technology Report.
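The scale of that discrepancy is easy to exhibit with compound growth over a typical five–year planning horizon (an illustrative calculation of my own; the horizon is arbitrary):

```python
# Total growth implied by the two coexisting assumptions:
# 2x per year (Global Crossing) versus 10x per year (WorldCom/UUNet).
def compound(annual_factor, years):
    """Cumulative growth after `years` of constant annual growth."""
    return annual_factor ** years

print(compound(2, 5))   # 32x over five years
print(compound(10, 5))  # 100,000x over five years
```

In other words, business plans differing by a factor of about 3,000 in their five–year traffic expectations were being acclaimed by the same analysts at the same time.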
It is noteworthy that it was Gilder who noticed that 2x and 10x annual growth figures don’t jibe. Gilder did not have a Ph.D. from MIT, nor an MBA from Harvard, nor did he have a tenured post at a major research university. On the other hand, he was the greatest cheerleader for the Internet bubble in all its manifestations. Yet he seems to have been the only one to have the wits to realize the various Internet growth projections were inconsistent, and that this had serious consequences. Unfortunately, he drew the wrong conclusions from this observation, and this was instrumental in destroying his career, as well as in ruining many investors.
Gilder’s role in the Internet bubble deserves a book–length study, perhaps several. As it is, we have to make do with the informative but limited article by Gary Rivlin in Wired, “The madness of King George” (Rivlin, 2002). Gilder was tremendously influential in stimulating investor interest in technology stocks, and in affecting the thinking of business and technology readers. Even many of those who thought him close to the lunatic fringe paid attention to his speeches and writings, simply because so many others were doing so. At the peak, his flagship Gilder Technology Report, GTR from now on, had about 70,000 subscribers and brought in about US$20 million a year in revenues (Rivlin, 2002). The famous “Gilder effect” had share prices of obscure tech startups zoom into the stratosphere after they were added to his list of promising investments.
One thing that everybody agrees on is that Gilder is one of the most honest people around. (That he kept some of his doubts about Internet bubble high fliers to himself, as related in Rivlin (2002), is a reflection of the skewed, some would say perverted, attitudes that seem to affect almost everybody at the peak of any mania.) Seldom in doubt, very often wrong, he is unquestionably sincere. He personally lost much materially in the Internet bubble crash (aside from the damage to his reputation). Further, he is not a fool, as over the years he has had a number of brilliantly prophetic technological insights, some of which still have not been absorbed by the information and communication industries, nor by the research community. Thus he is another example of the difficulty of placing people important in stimulating destructive bubbles on the spectrum from rogue to fool.
Gilder’s fame and career in the 1980s and early 1990s were focused on computing, what he called the Microcosm. Then he realized that communications was the new frontier, and concentrated on what he named the Telecosm. The monthly GTR, which started appearing in July 1996, devoted great attention to Internet traffic, which it called “the real index of Internet expansion” (GTR, Sept. 1997, p. 2). Thus Gilder understood the role of Internet traffic both as a driver of the telecommunications industry expansion, and of the entire Internet revolution.
The very first issue of GTR had a big chart of Internet traffic measures on its front page, and later issues continued this attention. (Much of the work in tracking such measures was apparently done by Ken Ehrhart, the Director of Research at the Gilder Technology Group, who had his name jointly with Gilder on some of the published pieces in GTR. However, for simplicity, I will write as if GTR was just Gilder’s work.) GTR was the first at least semi-public work to systematically collect Internet traffic statistics after the phase–out of the U.S. government–funded NSF backbone in early 1995, far ahead of Coffman and Odlyzko (1998) and Sevcik (1999). The hard data in GTR unfortunately was based just on the traffic through the U.S. public Internet exchanges, the MAEs and NAPs. The statistics in GTR, if read with even a mildly open mind, showed a dramatic slowdown in traffic growth in 1997. However, Gilder managed to gloss that over, assisted by the widely acknowledged but not quantified fact that growth was faster at private exchanges for which traffic statistics were not available. One could write a long article just about GTR reports and opinions on Internet traffic. But when one reads the back issues of GTR, what comes across is Gilder’s struggle to understand what was happening, trying to reconcile fragmentary and conflicting snippets of information with each other and (more than anything else) with his hopes and expectations for revolutionary change. Thus we find him writing of
— “MCI reports of traffic on their network growing by 6% per month” (GTR, Sept. 1997, p. 5, which corresponds to traffic only doubling in a year)
— “The Law of the Telecosm, that each year will bring a 3–4 fold increase in bandwidth” (GTR, Jan. 1998, p. 4),
— Alan Taffel of UUNet being “up against the kilofold wall, a thousand–fold rise in network traffic every three years” (GTR, Dec. 1997, p. 1, which corresponds to traffic growing 10x each year, the “doubling every 100 days” myth)
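These figures are easy to compare if each is converted to an equivalent annual growth factor. A small illustrative computation (my own, not from GTR):

```python
# Convert each quoted growth figure to an equivalent annual factor.

# MCI: 6% per month, compounded, is roughly a doubling each year.
mci = 1.06 ** 12                 # ~2.01x per year

# UUNet's "kilofold wall": 1,000x every three years is 10x per year.
uunet = 1000 ** (1 / 3)          # ~10x per year

# "Doubling every 100 days" is even faster than 10x per year.
hundred_days = 2 ** (365 / 100)  # ~12.6x per year

print(round(mci, 2), round(uunet, 1), round(hundred_days, 1))
```

The rates quoted within a few months of each other in GTR thus span more than a factor of six in annual growth.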
There is quite a bit of innumeracy mixed in, numbers that are clearly not consistent being tossed together. There seems to be some initial skepticism about the WorldCom/UUNet growth claims. Thus we find Gilder writing of how Alan Taffel of UUNet “makes a truly stunning claim about Internet traffic, confirming my most extreme projections. At WorldCom–UUnet, so he says, traffic is increasing at a rate of some ten times per year” (GTR, March 1998, p. 3). But as time goes on, the temptation to accept the WorldCom/UUNet story proves irresistible, and we find Gilder writing without any hesitation of “doubling of Internet traffic every four months.”
And then Gilder discovered that 2 is not 10, and that this mattered. Up until the end of 1998, Gilder did not pay much attention to submarine cables, and his darlings were the terrestrial carriers, in particular WorldCom and Qwest. But then he looked at Global Crossing, and had a revelation. Gilder discovered that its business plans were based on a mere doubling of traffic each year, and that undersea capacity had grown at lower rates over the preceding decade than terrestrial capacity. (He confused the potential capacity of fiber when lit with state–of–the–art equipment with actual used capacity, but that’s another story.) Hence he espied fantastic profit opportunities for Global Crossing. The Nov. 1998 issue of GTR was entitled “Cosmic Crossing: 1998’s best opportunity,” and argued that
“undersea capacity increased some 42 fold since 1990 and will rise another 82 times over the next three years. That’s a total of 3,444 times. That means that between 1990 and 2001, terrestrial capacity will have increased by a thousand times more than undersea capacity. …”
“Assuming that global Internet traffic will prove to be growing at less than millionfold every six years projected by UUNet, the expansion will still be huge. …”
“… undersea traffic will grow several times faster than terrestrial traffic. Take my word for it. Over the next five years, the submarine portions of the Internet will prove to be an agonizing choke point. Thus Global Crossing has a truly cosmic position as the supplier of the missing element that completes the global system.”
Gilder’s analysis showed considerable sophistication, far greater than that of most contemporary observers. He realized that Internet traffic was very likely to become more local with time, following the pattern of other communication technologies. (Not understanding the importance of locality has hobbled many people over the ages, and was a key element in the financial disaster of the British Railway Mania of the 1840s [Odlyzko, manuscript].) Still, Gilder had enough of a sense for the power of compound interest to realize that doubling each year (as in Global Crossing business plans) would lead to growth by a factor of eight in three years, while doubling every 100 days would produce growth of at least 1,000 times over that period. Hence if the WorldCom/UUNet “doubling every 100 days” story were true, even the most extreme imaginable localization of traffic would still produce demand for subsea capacity far in excess of the eight–fold growth anticipated by Global Crossing. Therefore he became a raging bull on the shares of this company. Gilder stayed a bull on Global Crossing until its bankruptcy in January 2002, a bankruptcy that took Gilder’s reputation and his fortune (as well as those of many of his followers) down the tubes.
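Gilder’s key calculation is simple to reproduce (a sketch of the arithmetic just described, not code from any cited source):

```python
# Three-year growth under the two assumptions Gilder compared.
days = 3 * 365

annual_doubling = 2 ** 3          # 8x, the Global Crossing planning basis
per_100_days = 2 ** (days / 100)  # ~1,980x if traffic doubles every 100 days

print(annual_doubling, round(per_100_days))
```

Even under extreme localization of traffic, growth of well over 1,000–fold in total traffic could not be accommodated by an eight–fold growth in subsea capacity, which is what made the Global Crossing story look so compelling to him.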
What Gilder appeared never to have entertained seriously was the idea that the “doubling every 100 days” story was an outlandish fantasy. Thus he presents a wonderful, if pitiful, example of extreme gullibility. Moreover, Gilder demonstrates the durability of delusions. As late as 2005, he could not bring himself to say that the WorldCom/UUNet tale was a fraud, and was writing about “an alleged Worldcom lie that Internet traffic was doubling every 100 days,” in the familiar pattern of people reluctant to admit they have been duped.
A foundational assumption of the efficient markets hypothesis is that asset prices in liquid markets incorporate all relevant information very quickly. (There are several versions of the hypothesis, let us not get involved in technical details.) Recent events show this is clearly not true, as will be discussed later in Section 12. However, that was also not true during the Railway Mania of the 1840s (Odlyzko, manuscript), and it was not true during the telecom bubble. The simplest example from this recent episode is the one discussed in the preceding sections, in which wildly differing growth rate assumptions were held by different segments of the industry without the public being aware of this.
As usual in bubbles, in the telecom mania there were from the beginning various skeptics and doom–sayers. The GTR of September 1999 reported that its online Forum “writhes and wriggles with bandwidth glut anxiety.” Telecom consultant and financial analyst reports estimated the then–current and future Internet growth rates at anywhere from the unrealistically low 30 percent per year up to the WorldCom/UUNet fable’s 1,000 percent per year. (A more detailed description of the various estimates will be presented in Odlyzko [in preparation].)
There were numerous people who knew the truth. Tom Stluka and a group of other WorldCom employees, embarrassed by the myth and concerned about the damage it could do to their company, had hard direct knowledge from internal data that the myth was false. Those outside could not be certain, but often had very convincing evidence that Internet traffic was not growing astronomically. For example, Scott Marcus, who was CTO of GTE Internetworking (which became Genuity), one of the Tier–1 ISPs of the late 1990s, reports (private communication) that he and his colleagues were quite sure, from marketing data and statistics on traffic exchanged with other carriers, that traffic of both UUNet and the entire Internet was not growing much faster than a doubling once a year. However, they kept quiet, since
“our firm could not benefit by publicizing the emperor’s lack of clothes. The myth was well entrenched by then. We felt that the public would assume that our traffic was growing more slowly than the norm, and that this perception would hurt us in the marketplace.
So the engineers at Genuity had a good sense of the underlying reality. Our senior management knew as well, but occasionally the marketing types seemed to forget. Or perhaps they did not completely believe their engineers, in the face of the steady droning from the trade press about traffic for our competitors that was supposedly doubling every hundred days.”
At AT&T, top management and the sales force believed the 10x annual growth myth. AT&T salespeople told customers that Internet traffic was growing that fast, and, when pressed, revealed that AT&T’s own traffic was growing about 4x per year (which was true, as AT&T was going all out for market share and was growing faster than the industry as a whole). Among AT&T engineers, there was a mix of opinions (but no perceptible debate). Many knew that the 10x fable was false, and that general Internet traffic was not growing much faster than 2x per year, even while others continued to hold onto the “doubling every 100 days” tale.
Many other people in the telecom industry knew the “doubling every 100 days” myth was false, with various degrees of certitude. For example, much of the telecom supplier sector seemed to be planning on 4x growth, in some cases on less, which implicitly meant they did not accept the O’Dell and Sidgmore fables. (Note that for suppliers, traffic is not directly relevant, it is capacity they get paid for. Thus it is high rates of growth of capacity, what O’Dell and Sidgmore talked about, that mattered.)
There will be more examples and discussion of this topic in Odlyzko (in preparation), but for the moment let us just note that, just as in the Madoff Ponzi scheme (see Section 12), the negative information was quite widely dispersed, was not held as a great secret, yet it did not affect press coverage, nor market valuations nor investments. Along the same lines, consider George Gilder’s discovery of the obvious, namely that some segments of the industry were planning on 2x annual growth, and others on 10x growth. He did publish it, so it reached the 70,000 subscribers of GTR and all the people that the subscribers shared this issue with. So tens, and more likely hundreds, of thousands of people saw it written out explicitly, that there was a large discrepancy, and that it had important consequences for the industry. They did not have to follow Gilder’s fallacious reconciliation of the paradox, they could have done their own analysis, and gone to the effort of collecting additional relevant information. But if any did, they did not leave any public traces.
Any serious investigation would have shown, even to technically unsophisticated observers, that the rapid growth of Internet traffic in 1995–96, at about the proverbial “doubling every 100 days rate,” had come to an end in 1997. There were many snippets of reasonably hard data that demonstrated this. For example, the data released occasionally by MCI, one of the largest Internet backbone providers, showed this, as can be seen in issues of GTR or in slide 14 of Soja (1998) (although both these sources put a more positive interpretation on the data than it warranted in retrospect). And then there was an explicit e–mail, posted on a public Web site, from Vint Cerf, one of the “fathers of the Internet” and the MCI Internet guru, declaring at the end of 1997 that
“traffic in our backbone is now at 100% per year compounded annually. Others report even higher values. Last year traffic increased by 500%.” 
Then came the papers (Coffman and Odlyzko, 1998; Sevcik, 1999), with extensive data and documentation as to where it came from. Both were widely read. The circulation of the Business Communications Review in 1999 was around 13,000: about half were enterprise users, largely telecom/IT managers (some in upper management through the CIO level), with a substantial fraction among telecom service providers, and some on Wall Street. The paper (Coffman and Odlyzko, 1998) was released for outside distribution by AT&T in early July 1998, was immediately placed on a publicly accessible Web server, and was announced on various mailing lists, resulting in around 2,000 downloads in 1998. It was also the most frequently accessed paper on First Monday in 1999, with over 28,000 downloads. However, these two papers (Coffman and Odlyzko, 1998; Sevcik, 1999) seemed to have no noticeable effect. The Sevcik work did attract a brief note in Upside magazine (Futrelle, 1999), but this note did not convey the main import of Sevcik (1999). Otherwise the world just went on its merry way, until around the middle of 2000, when a sense of alarm started developing, and reporters and financial analysts started looking around for solid information about Internet traffic.
Often rejection of accurate information was wilful. For example, Gilder was aware of the Sevcik report (Sevcik, 1999), and attacked it and the Upside article about it (Futrelle, 1999) in the April 1999 issue of GTR. If not he, at least his staff were also aware of Coffman and Odlyzko (1998). However, they chose not to believe those works, and apparently did not check the numerous sources of data presented in Coffman and Odlyzko (1998) and Sevcik (1999) that were publicly available.
An interesting perspective on information diffusion is provided by Hui Pan and Paul Polishuk of Information Gatekeepers, a telecom consulting firm. They are certainly well connected in the industry. However, what prompted them, around the middle of 2000, to organize the April 2001 conference on fiber glut (Information Gatekeepers, 2001) was not hard knowledge that there was a big mismatch between supply and demand. Instead, they just heard various expressions of unease and concern about the large number of players jumping into the industry (private communication). Another, similar, perspective is provided by the famous mid–2000 memo of Leo Hindery, the CEO of Global Crossing, that his company and its competitors were “like the resplendently colored salmon going up river to spawn, at the end of our journey our niche too is going to die rather than live and prosper.” He also apparently did not have hard facts about the disaster that was coming, and was only deducing this from the behavior of competing firms. This lack of awareness of publicly and easily available information is at wide variance with the assumption of the efficient market theory that information diffuses quickly.
The telecom bubble also shows the limited reliance one can place on the press to dig up and distribute information contrary to the accepted wisdom. Reporters did not notice the many discrepancies and implausibilities in the stories about Internet traffic, and for a long time ignored serious published data. Many reporters, even when prompted to investigate, had no interest. When one got intrigued enough to write a story in May 2001 about the falsity of the “Internet traffic doubling every 100 days” fable, she e–mailed that she had “submitted a piece to [her] editors, but they haven’t had the space to run it.” (This was a year before the WorldCom bankruptcy, and an inquisitive look at the traffic myth might have led to the accounting frauds that destroyed the company, and so might have brought that house of cards down a year earlier.)
The general picture from the telecom bubble is similar to the one from the Madoff fraud, and also from the British Railway Mania of the 1840s. The herd instinct dominates, and there is great reluctance to question the accepted wisdom and upset the status quo. This is contrary to the efficient markets theory, but very similar to what has been observed in politics. There, too, splinter groups get essentially no attention, until either they grow big enough, or some prominent person or institution takes notice of them, or else some giant disaster takes place. That politics is similar to business should perhaps not be too surprising. Both depend on tales, “beautiful illusions,” as well as credulous simplicity.
“Information viscosity” will be considered again in Section 12. First we consider some more general issues of how our society and economy function.
Many people express incredulity at the repeated failures of regulators to shut down the Madoff and R. Allen Stanford scams, in spite of the extensive evidence that was accumulating to show their enterprises were fraudulent. However, it is not absolutely necessary to invoke malfeasance or perverted ideology to explain this phenomenon. Credulous simplicity is pervasive in our society (and, as will be argued in Section 13, may be vital to its functioning). Just consider the Nigerian 419 scams, with e–mail messages that promise you 30 percent of US$14.2 million, say, if you help the daughter of Jonas Savimbi move the entire amount to your country. The name of this scam comes from the section of the Nigerian penal code that covers it. The scam itself is old, predating the Internet. So one would think that by now, all the people naive enough to fall for it would have been fleeced, and the scammers would have nothing to gain. Instead, the scam flourishes, and in 2006, the U.S. Secret Service “estimate[d] that 419 swindlers gross[ed] hundreds of millions of dollars a year” (Zuckoff, 2006). Contrary to popular impressions, educated and supposedly sophisticated people are often among the most gullible. We see that in the numerous fads in academia, and we see that in the account in Zuckoff (2006) of a Massachusetts psychotherapist who fell for the Nigerian 419 scams.
Other examples of extreme gullibility among trained professionals abound. The Jérôme Kerviel affair, which cost the Société Générale bank about US$7 billion, and brought it to the brink of ruin, “revealed extensive management failures and weaknesses in the bank’s risk–control systems, which had allowed at least 74 alerts about Mr. Kerviel’s activities to go unheeded over the course of 18 months.” (Clark, 2010) The Madoff affair provides multiple additional examples. Victims were lured by a variety of methods, including the appeal of being in an exclusive club, with threats of expulsion if they became obstreperously inquisitive, or, in cases of some large and sophisticated investors, apparently by the implications that the profits were coming from illicit trading (Markopolos, 2010). Thus the Madoff scheme built on several well–known principles that scams are built on (cf., University of Exeter, 2009; Sundby, 2010). (It will be argued in Section 13 that these principles, as incorporated into legitimate practices, may be key to economic progress.) But perhaps the most interesting instances of the Madoff fraud were the numerous trustees of charitable and educational institutions who entrusted to him the endowments they were in charge of. These were supposedly “sophisticated” individuals, who clearly had the best interests of their institutions at heart. Yet they failed to do even basic due diligence.
And then there were regulators, private and government. “Between 2003 and 2005, [the predecessor of FINRA, the financial industry’s independent watchdog] received credible information from at least five different sources claiming that the Stanford CDs were a potential fraud.” (FINRA, 2009, p. 2) Further, “[n]obody was more surprised that the Securities and Exchange Commission did not discover Bernard L. Madoff’s enormous Ponzi scheme years ago than Mr. Madoff himself.” (Henriques, 2009)
At the pinnacle of the regulatory establishment we find three professional economists, Alan Greenspan, Ben Bernanke, and Larry Summers. They were among the most gullible of a very gullible profession. Alan Greenspan has been mentioned already. He is actually capable of learning. The Internet crash of a decade ago convinced him that accountants needed to be regulated (Goldstein, 2002). And the 2008 crash famously convinced him that bankers could not be trusted either. Shareholders and taxpayers might complain about the length and especially cost of these lessons, but they do show his ability to learn. The more interesting question is how come somebody so naïve could stay at the helm of the world’s most influential central bank for two decades without anyone in authority taking note? Is it perhaps an indication of the system’s bias towards credulous simplicity?
Greenspan (2010) has tried to defend his inaction and lack of curiosity about the dangers of the real estate and finance bubble by arguing that there are always dangers and there are always doomsayers. However, what we had during the last decade was a combination of developments, each of which should have raised warning flags. Total debt (as a fraction of GDP) reached record levels, higher than in 1929. Housing prices (relative to household incomes) reached records. Corporate profits soared to record levels, even while long term interest rates were very low. Balance of payments deficits were higher than ever before. Financial industry profits as a fraction of total profits reached unprecedented levels. There was an explosion of derivative trading, without any commensurate effect on society’s well–being. And so on. Yet no serious investigation was undertaken by the Fed. (And, in a noteworthy development, there was little curiosity about these developments among economists.)
Greenspan’s dogmatic conviction that there was no need to be concerned about bubbles was fortified by some oft–cited academic papers of Ben Bernanke. A decade ago, before he became the head of the Federal Reserve, Bernanke published (together with Mark Gertler) a paper which claimed, for example, that “[a]dvocates of bubbles would probably be forced to admit that it is difficult or impossible to identify any particular episode conclusively as a bubble, even after the fact” (Bernanke and Gertler, 1999). Ben Bernanke’s contributions to the benign neglect of bubbles were not limited to his academic work with Gertler. His many official pronouncements, as head of the Fed, in the runup to the crash of 2008, that the financial system was in good shape do not need retelling.
Larry Summers contributed to the recent events in multiple ways. At the Treasury in the Clinton administration, he apparently played a key role in preventing regulation of derivatives. Afterwards, at Harvard, he seems to have pushed that institution’s endowment into some disastrous investments. And, when Raghuram Rajan, one of the few economists to take a serious look at dangers developing in the financial system, presented his findings, Summers publicly found “the basic, slightly Luddite premise of [Mr. Rajan’s] paper to be misguided” (Lahart, 2009).
The aim here is less to point out mistakes made by these three economic policy–makers, and more to observe that their views were well known when they were placed in positions of power. Moreover, two of them, Bernanke and Summers, are now in charge of much of U.S. economic policy, and of carrying out reforms. Can they be expected to do that well? Are they even truly expected to do so? After all, the political process that had placed them in positions of power years ago is still in operation. Greenspan adamantly refuses to admit any serious fundamental mistakes (Greenspan, 2010). Bernanke and Summers have not produced similar documents, outlining their views on the recent crisis, but then they have not acknowledged any serious mistakes either, and have not apologized for their actions. Further, it appears that the papers (Bernanke and Gertler, 1999; 2001) have not been retracted (even though Bernanke’s actions as Chairman of the Federal Reserve effectively repudiate the conclusions of those papers).
The studious efforts of the economics profession to avoid any serious investigation of the Internet bubble are also consistent with the view that our society is wedded to cultivating gullibility. One finds a few hints in the literature on that mania that there were solid reasons to expect it to collapse (e.g., Lewis, 2002), but in the economics literature the issue is ignored. Instead, professional journals have plenty of attempts to “prove” that markets are efficient.
On the other hand, many convincing proofs that markets are inefficient are totally ignored by the scholarly literature. I am referring here to the extensive work that went into the numerous class action as well as private lawsuits against investment banks after the Internet bubble burst. Plaintiffs’ lawyers in many cases assembled teams of economists, accountants, engineers, and finance experts. These groups prepared cases demonstrating negligence on the part of investment analysts, in not evaluating available information properly. These cases were convincing enough to obtain payments of billions of dollars from the investment banks (without any admission of guilt, of course), but unfortunately the evidence is kept confidential, and goes unnoticed in the scholarly literature.
The real estate and finance bubble that burst in 2008 was detectable. In fact it was detected, and in spectacularly profitable ways, as has been described in Lewis (2010) and Zuckerman (2009). Some hedge fund managers, John Paulson most prominently, figured out why and approximately when and how the housing market in the U.S. was going to decline, and earned huge sums for themselves and their shareholders.
However, determining that this giant bubble was going to burst in destructive ways was not a simple matter that could be done by any intelligent observer spending a few hours Web surfing. The Economist, Robert Shiller, and many others had been warning for years that real estate prices were rising to an unsustainable level. (And others, such as Raghuram Rajan and Nouriel Roubini, warned of the dangers to the banking system.) However, warnings about housing prices exceeding recent norms, while they should have stimulated more detailed studies, were not definitive. The problem is that housing prices do vary, often in counter–intuitive ways. For example, U.S. President (2010) has a chart of inflation–adjusted housing prices in the U.S., starting around 1900. It shows that prices declined substantially in the late 1910s, stayed low through the boom of the 1920s, increased during the Great Depression of the 1930s, and took a big upward jump in the 1940s. They then remained nearly level until almost the end of the twentieth century, when they went up dramatically.
How could one possibly justify the big upward move in housing prices at the start of the twenty–first century? Well, there was a multitude of experts with plausible explanations. (An intriguing observation is that in this bubble, as well as in others, there was far more effort, especially when measured by the volume of scholarly publications, devoted to justifying the bubble than to debunking it. This may be another manifestation of society’s trend towards increased gullibility.) One was that stricter zoning regulations had limited the supply of housing (Gertner, 2006). Another was that low long–term interest rates, together with the Great Moderation, the observation that the world economy had “experienced a striking decline in the volatility of aggregate economic activity since the early 1990s,” made people able and willing to pay more for housing.
In retrospect, those explanations were fallacious. But they were supported by papers that were written by experts and published in peer–reviewed journals. To disprove them required some hard work. Many of the hedge fund managers who made huge profits collected data about the quality of mortgages, and constructed quantitative models of the housing market. This enabled them to see that the real estate market was supported by marginal buyers, who could only make their payments if house prices continued to go up rapidly forever. Thus they were not acting on hunches, but betting on a sure thing (Gladwell, 2010). In other words, they did their homework.
On the other hand, U.S. regulators (and those in most other countries) for the most part refused to even recognize there was any homework to do. It was only in 2006, after Sheila Bair took over at the Federal Deposit Insurance Corporation (FDIC), that, at her urging, this agency “bought a database of subprime loans from a company called Loan Performance in order to study the problem more closely, something that, apparently, no other government regulator had thought to do” (Lizza, 2009). A better demonstration of gullibility can hardly be asked for. However, it is easy to find one, in the person of Alan Greenspan. Even in the face of all the evidence, supported by billions of dollars in profits, that some hedge funds succeeded through simple investigations of the type that one might expect regulators to do routinely, Alan Greenspan still obdurately refuses to acknowledge the obvious, and calls those profits “statistical illusions” (Burry, 2010).
Still, one has to admit that Greenspan’s non–apology (Greenspan, 2010) does have valid points. Not all bubbles are easy to detect, and there are always people around who warn that the end of the world is nigh (as well as others who claim that the Millennium will arrive tomorrow). As was mentioned in Section 3, the dot–coms, which are often held up as examples of extreme irrationality, were by some measures a great success. Further, there have been many technology manias where social gains likely outweighed investor losses. (It is hard to think of a purely financial mania, like the recent real estate and banking one, where this was true.) Even for the telecom bubble one could argue that it produced social good on balance. The excess fiber that was laid down, terrestrially as well as subsea, did contribute to a restructuring of the telecom industry, spread of new technologies, globalization, etc.
The British Railway Mania of the 1840s, by far the greatest technology mania in history, if measured by real investments as a fraction of GDP, was almost surely productive for society as a whole (Odlyzko, manuscript). That episode is particularly interesting for several reasons. Investors in it included Charles Darwin, John Stuart Mill, and the Brontë sisters. The fallacies of that bubble were not as obvious as those of the telecom mania, but they were very visible to anyone with a skeptical turn of mind and access to some basic statistical information, and were identified in print by at least one observer, see Odlyzko (manuscript). Still, investors ignored those warning signs and proceeded to plow the equivalent (for the U.S. today) of US$4 trillion into the construction of the new infrastructure. Losses to private investors were heavy, but hardly any of the lines shut down, and Britain acquired the most modern transportation infrastructure in the world. What is especially interesting about the Railway Mania is that it had many powerful, influential, and generally insightful opponents, such as the Times and the Economist. However, as is detailed in Odlyzko (manuscript), these opponents fell subject to another collective hallucination, concentrated on the wrong danger signs, and as a result their warnings likely served to stimulate investment, not retard it.
An even more interesting example is the smaller British railway mania of the 1830s (Odlyzko, manuscript; Odlyzko, 2010). It involved real investment comparable, for the U.S. today, to US$2 trillion. It was a wildly speculative affair, venturing huge sums on an unproved technology with unproved demand estimation techniques. It definitely did merit the skepticism of its many critics, who included John Stuart Mill. Given the size of the investment, it is easy to claim that this mania was more speculative than the dot–com one. Yet this episode of extreme investor exuberance turned out to be rational. Britain derived great benefit from the new transportation infrastructure, and in addition investors earned above–market returns.
Thus technology manias do often contribute to economic progress, and perhaps gullibility among policy–makers should be encouraged for that reason. But there might be other reasons as well, some discussed in Section 13. Some of these reasons were understood by particularly perceptive observers among the early Victorians. Bubbles (technobubbles as well as purely financial ones) provide excitement, the type of excitement that entertainment and gambling provide for many. As our society evolves and becomes more prosperous, bread is becoming less important than circuses. And bubbles do provide much of the entertainment value of circuses (as well as the value of gambling). They may even serve to divert some of the energies that used to be channeled into violence, and especially into war.
Section 9 introduced the concept of “information viscosity,” in which relevant information does not get incorporated into pricing. Some types of information disseminate extremely fast. For example, the high frequency traders position their computers in the same buildings where the electronic trading platforms are located, since even milliseconds make a difference in their business. However, other types of information diffuse very slowly, or else even stay confined in some parts of society without penetrating others. We saw that with the telecom bubble in Section 9. Other examples come from the recent real estate and finance bubble. The U.S. hedge fund managers who modeled the real estate market and earned huge returns were apparently not particularly secretive. (They had to recruit investors, had to deal with investment banks and insurance companies, …) Yet this information was not incorporated into pricing, and the market took a couple of years to crack.
Another example comes from the Madoff affair. It is not as directly relevant to the issue of market efficiency, since there was no Madoff security to trade or short. However, it does demonstrate how ineffective modern information dissemination methods are, even in the age of the Internet. Even aside from the activity of Harry Markopolos (2010) in trying to unmask the Madoff fraud, there were prominent publications (especially a piece in Barron’s) that pointed out the suspicious nature of Madoff’s venture. Furthermore, any time any competent group did true due diligence, by examining Madoff’s claimed methods and results, they quickly concluded that something was fishy, and refused to invest. This appeared to be quite widely known among investment advisers. Yet there were some supposedly sophisticated and well–connected people in the financial industry who appeared to be unaware of these suspicions, and put the money of the institutions they were trustees of in Madoff’s care. The press was also remarkably uninterested in investigating the affair. One would think that the prospect of unmasking a giant fraud would draw reporters like honey draws flies, but instead, the ones that Markopolos contacted always had more pressing questions to pursue (Markopolos, 2010).
All these examples suggest that to understand market behavior, especially in mania times, one needs to have a measure not just of gullibility, but of information viscosity, the tendency of important data items to stay confined in narrow circles, and not reach (or at least not be accepted by) the appropriate decision–makers.
A natural hypothesis, based on the few examples cited here, is that as gullibility increases, so does information viscosity. But to establish this will require concrete measures of both quantities to be developed and validated.
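The hypothesis would ultimately have to be tested against real data. Purely as an illustrative toy (not from this paper, with all parameters invented), the following sketch treats “information viscosity” as the number of steps an item takes to spread across a ring of agents, where an agent accepts the item from an informed neighbor with some probability; lower acceptance corresponds to higher viscosity.

```python
import random

# Toy model, for illustration only: agents sit on a ring, one agent starts
# with a piece of information, and each step every informed agent passes it
# to each uninformed neighbor with probability `acceptance`.  "Viscosity"
# here is simply the number of steps until everyone is informed.

def steps_to_full_spread(n_agents, acceptance, seed=0):
    """Steps until the information reaches all agents on the ring."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    informed = [False] * n_agents
    informed[0] = True
    steps = 0
    while not all(informed):
        steps += 1
        for i in range(n_agents):
            if informed[i]:
                for j in ((i - 1) % n_agents, (i + 1) % n_agents):
                    if not informed[j] and rng.random() < acceptance:
                        informed[j] = True
    return steps

# Lower acceptance of the item means markedly slower spread:
fast = steps_to_full_spread(50, acceptance=0.9)
slow = steps_to_full_spread(50, acceptance=0.1)
print(fast, slow)
```

A real measure would of course have to use empirical social networks and observed acceptance behavior; the point of the toy is only that “viscosity” is a quantity one can define and compute once those inputs exist.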
We all live in Lake Wobegon, “where all children are above average.” Surveys around the world consistently find that people rank themselves higher than they really are, whether it is in intelligence, looks, honesty, or any of myriad other desirable characteristics. While such findings have occasioned much research and speculation, and much hand–wringing, the universality of the Lake Wobegon effect suggests it has survival value for human society, a suggestion strengthened by the finding that it applies even more strongly to leaders than to the population at large. In any case, let us simply accept this phenomenon for now as something that is well established and close to universal, and one not likely to be changed any time in the near future. It may be that Homo sapiens has difficulty operating on facts. Or, perhaps more likely, the human race simply functions more effectively under some degree of delusion.
This suggestion is consistent with what we observe in sports, where individual athletes and entire teams get “psyched up,” and become convinced they are ready for the world championships. It is also consistent with the solidly documented observation that people are far more certain of their opinions and decisions than warranted by their knowledge and situation. Such behavior may be optimal for society. As Yogi Berra is said to have declared,
“When you come to a fork in the road, take it.”
It might indeed be best for society to have cohesive teams, each with a different view, but with all members of a team convinced of the group’s view as the one and only true path to the Holy Grail. That might well maximize the chances that at least one of the teams will accomplish something useful. When revolutionary technologies appear that increase uncertainty, the benefits of having many committed teams take every branch of the fork increase.
Suggestions have been made by previous observers that bubbles, often small ones that escape attention, are frequent (Gisler and Sornette, 2010; Haacke, 2004), and play an important role in economic activity. The discussion in this section suggests a more general phenomenon, in which human society as a whole is based on a multiplicity of delusions on different scales. (We see that in politics, where parties construct alternate views of the world, and their adherents adopt them, to the point that, as has been documented through scientific studies, they interpret the same news and events differently.) Occasionally reality intrudes, more frequently in business than in politics. But even in business, enough hot air can keep an illusion inflated for a long time.
Most of the time, the divergent views of reality that are held by different groups cancel each other out in the marketplace. Economic models, based on the assumption of economic reality and rationality, do provide good representations much of the time for this reason. Why do they then fail in times of mania or crisis?
In this view, the large bubbles that have attracted public and scholarly attention come from exceptional circumstances. A powerful new impulse, either a revolutionary new technology, or simply a very attractive “beautiful illusion,” grips society. As a result, instead of different teams taking different branches of the fork in the road, they all pile into a single one, and in the heated race for the perceived prize, they are oblivious of the signs warning they are all about to fall off a cliff.
Large bubbles are likely not just the result of accidental growth of small bubbles. Their formation is facilitated by interested parties, the creators and maintainers of “beautiful illusions.” An important synergistic factor is likely also the affinity of Homo sapiens for joining crowds. Recent research has elucidated and quantified the extent to which people’s opinions and actions depend more on the opinions of family, friends, and colleagues than on facts or arguments. In the interests of brevity, let us not get into details, but just posit that people like to be part of a crowd, however irrational that might be. An excellent illustration is provided by sports. There we have tens of thousands of ardent fans screaming themselves hoarse in a ballpark, cheering their chosen team of athletes. It matters little that those athletes are of different gender, race, and religion than the fans, are far wealthier and probably live elsewhere, and are in incomparably better physical condition than those fans. Much of the attraction of such events appears to come from the opportunity to join in with all those other fans, and the excitement reaches levels that once even helped spark a war. Just like small group bubbles, this phenomenon depends on gullibility for its operation (even if in many cases, at some level, people know they are suspending their better judgment when they join a crowd).
Last but not least in the list of important human factors that affect the desirability of gullibility is the issue of trust. Trust is essentially inseparable from gullibility, although they are not the same. Without trust, our economy could not function. Consider the Jérôme Kerviel affair that nearly brought down the bank Société Générale, and which was cited in Section 10. To quote Clark and Benhold (2010):
“Mr. Mustier, who left the bank in 2009 amid an unrelated insider trading investigation, argued that rather than having fostered a culture of excessive risk–taking, Société Générale had failed by creating an environment where there was ‘too much trust.’ ‘I take responsibility for that,’ he said.”
As society becomes more complicated, individuals understand smaller and smaller fractions of what is going on around them, and rely on others. As an example, today doctors are among the most trusted professionals. They can do incomparably more than their predecessors could in the early Victorian times. Those had not progressed too much beyond blood–letting, and the placebo effect of snake oil and other “patent medicines” was among their most effective tools. However, while those earlier physicians could be said to have complete mastery (however little it was worth) of their craft, today their heirs have to rely on labs to perform tests correctly (and not mix up samples), on specialists to provide correct diagnoses, on pharmaceuticals to be manufactured according to specifications, etc. Thus there is a need for growth in trust to parallel the growth in complexity of society. (There is an extensive scholarly literature on the role of trust in modern economies. For surveys and references, see Collins (2005) and Lascaux (2005), for example.)
When we consider the important role of trust, we can see the Madoff fraud as a positive indicator of the level of trust (and gullibility) in our society. That he was able to fool so many people for so long indicates the deep reservoir of trust that exists. That provides hope for future ventures and further development of society.
The brief discussion above of how human nature is involved in economic activity, and how gullibility in particular is important for progress, suggests that more attention should be paid to such factors when considering economic growth. It is now widely known that there was no sharp transition to the Industrial Revolution. Instead, there was a gradual process of development, and gradual acceleration of growth rates. However, there was a perceptible speed–up in the rate of growth around 1850. It can be argued that this was associated with the development of laws, regulations, and institutions, especially corporations. That development, though, appears to have been paralleled and enabled by the development of trust and gullibility. It was also paralleled by the growth in “beautiful illusions,” and growth in rewards for those who create and maintain them.
The institutions that have evolved to produce economic progress draw on various facets of human nature. The prospect of gain is certainly very important. But we should not neglect others. Altruism certainly plays some part. And so do gambling and entertainment, especially as our society’s priorities move from bread to circuses. This development does not appear to have been carefully planned out in advance by anyone. However, there is evidence that some perceptive observers in the early Victorian times were aware of the important contributions of such factors to economic growth, and may have nudged development of laws and institutions so as to maximize their effectiveness.
The many varied effects that contribute to economic progress today were very clearly presaged by a tale from late Victorian times, and from the other side of the Atlantic. There is no indication that Mark Twain had this in mind, but the adventure of Tom Sawyer and Aunt Polly’s fence is very prophetic. Tom, condemned to a Saturday of drudgery, painting the fence, had “a great, magnificent inspiration.” When another boy came along and made fun of Tom for supposedly liking to work, Tom’s response was:
“Like it? Well, I don’t see why I oughtn’t to like it. Does a boy get a chance to whitewash a fence every day?”
That changed the mind of the new arrival, and after a great show of reluctance by Tom, the other boy bought himself the right to do some of the painting in exchange for an apple.
“Tom gave up the brush with reluctance in his face, but alacrity in his heart. And while the [other boy] worked and sweated in the sun, the retired artist sat on a barrel in the shade close by, dangled his legs, munched his apple, and planned the slaughter of more innocents. There was no lack of material; boys happened along every little while; they came to jeer, but remained to whitewash. … And when the middle of the afternoon came, from being a poor poverty–stricken boy in the morning, Tom was literally rolling in wealth. He had besides the things before mentioned, twelve marbles, part of a jews–harp, a piece of blue bottle–glass to look through, a spool cannon, …
He had had a nice, good, idle time all the while — plenty of company — and the fence had three coats of whitewash on it! If he hadn’t run out of whitewash he would have bankrupted every boy in the village.”
In Twain’s words, Tom Sawyer “had discovered a great law of human action, without knowing it — namely, that in order to make a man or a boy covet a thing, it is only necessary to make the thing difficult to attain.” It appears that many have discovered this, including some policy–makers. Some have likely drawn additional lessons from it. Tom Sawyer created a “beautiful illusion” that induced the village boys to exert themselves for the public good, while feeling happy about what they paid Tom. (And is that much different from what we observe in free software, say, aside from the fact that free software projects have not yet advanced to the stage of being able to charge participants for contributing?) That the rewards all went to Tom was not particularly material. And that may be why, in recent times, so many of the material rewards of economic growth have gone to other creators of “beautiful illusions.”
It should be said that not all creators of “beautiful illusions” are as cynical as Tom Sawyer. In practice, it appears that the most effective promoters are the ones who are gullible enough to believe their tales, to “drink their own Kool–Aid.” People like Benjamin Disraeli, who induced many Britons to lose their shirts in the bubble of the mid–1820s, but who also lost his own, and did not settle the last of his debts from that episode until a quarter century later (Chapter 5 of Odlyzko [manuscript]).
Of course, “beautiful illusions” cannot accomplish much unless there is plenty of credulous simplicity. Fortunately there is no shortage of it, courtesy of widespread gullibility.
No claim is made here that the encouragement of gullibility is the result of a conscious policy decision by any policy–makers, legislators, or any secret cabal. It more likely evolved through trial and error, in the complex adaptive process that has led to our modern laws, regulations, and institutions, and our entire institutional culture. How that operates can be illustrated by the career of Henry Blodget. During the Internet bubble, he gained prominence through an extravagant prediction of a giant jump in the price of Amazon shares that was fulfilled within a month. This, together with his gift of gab and photogenic presence, catapulted him to the top ranks of Wall Street analysts. After the dot–com crash, investigations unearthed e–mail messages Blodget had sent which demonstrated he had been convinced that many of the stocks he had been recommending to investors were garbage. This led to a US$4 million penalty, and being barred from the securities industry for life.
Note that had Blodget been more gullible, and believed the hype he was spreading, he would likely still be a star analyst on Wall Street. And had he been more clever, and just pretended to believe the hype, without putting down his contrary thoughts in an e–mail message, the outcome would likely have been equally positive for him. This and similar incidents have probably not gone unnoticed, and business and technology leaders now know that gullibility is an effective defense against claims of malfeasance. (Note that it is not clear whether any high–level managers will go to jail over the collapse of the real estate and finance bubble. There is such a bewildering array of players, the appraisers, mortgage originators, rating agencies, investment banks, etc. who were involved, and bear some part of the blame, that pinning responsibility on any one may be impossible.) Thus the growth that appears to be taking place may be partly a matter of selecting the most credulous, and partly from people learning to suspend their curiosity and skepticism.
Note that in spite of his disgrace and not being able to work in the securities industry, Blodget now has a new career as a writer of often very perceptive comments on the financial industry. In the meantime, the Merrill Lynch analyst who was replaced by Blodget because he was too conservative has apparently disappeared from public view.
Much more can be said on this topic, especially on the influence of monetary rewards (through bonuses, stock options, and the like) on the propensity to engage in “willing suspension of disbelief.” It has been observed that the investment analysts, the ones in the best position to notice the implausibilities and outright impossibilities in the investment stories peddled to the public, were “paid a lot not to notice.” Even seemingly disinterested participants often had much tied to the success of the bubble. For example, academic researchers had an interest in increasing the flow of research funding to their field, newspaper reporters had an interest in getting more coverage in their areas of specialization, and so on. But let us leave that aside for now.
If we accept Alan Greenspan’s (2010) view about bubbles, then everyone in the telecom bubble acted rationally, coping as best they could with a world where uncertainty had risen to new heights. But then we are left with the puzzle of leading business and technology leaders showing such appalling lack of quantitative reasoning ability that they should not have been allowed to leave grade school. Yet they not only graduated from grade schools, but went on to finish high schools, colleges, get PhDs from MIT and MBAs from Harvard, and become CEOs and CTOs of leading technology companies, Wall Street analysts, and tenured professors at renowned research universities.
An alternative view is that all those business, technology, and research leaders had earned their credentials honestly, and were blinded temporarily by a collective hallucination induced by the promise of a revolutionary new technology (assisted by the usual proclivities of Homo sapiens for irrational herd behavior, and by artful creation of “beautiful illusions” by some talented individuals). That view implies that Alan Greenspan was (and still is) blinded by a durable and apparently incurable collective hallucination about the way the world works.
Occam’s razor implies that the second explanation should be preferred. This of course causes some problems. It undermines the foundations of much of modern economics, for one. It also raises questions about the “Singularity” concept (Vance, 2010). There are many serious doubts about the reality and significance of this concept already. However, the analysis of this paper suggests in addition that instead of worrying about future computer intelligence surpassing that of Homo sapiens, we should worry more about human intelligence (at least as it is manifested in collective decision–making) descending to the level of current computers.
Collective hallucinations that affect thinking and decision–making by top business and technology leaders also suggest limitations on the effectiveness of AI, data mining, business analytics, and many other highly touted technologies. Perhaps we can develop techniques that will conclude, from the spectrum of light reaching us, that the sky is blue, but if people are driven by a collective delusion to insist that it is purple, how much good will those technologies do?
Still, all this discussion reinforces the call for devising objective measures of gullibility and rationality. Note that we cannot expect too much from them. There does not appear to have been any simple rule that would have decided that the British railway mania of the 1830s was going to succeed, and the one of the 1840s was going to fail. For that, deeper investigations appear necessary. Still, even a simple thermometer is a useful tool for physicians, although it does not show when a patient is about to have a stroke.
Can one construct an objective measure of gullibility? Modern research offers hope. Asking financial regulators what is 2 + 2 is clearly not going to be productive, but more subtle approaches could bear fruit. Occasional surveys of investors appear to indicate that in bubble times their expectations for returns from stock investments soar to the 20 percent per year level, as opposed to something like 10 percent during more sober times. Such surveys should be conducted on a monthly basis, distinguishing between various categories of people, such as the general public, private investors, investment managers, regulators, etc.
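As a minimal sketch of how such survey data might be turned into an index, the following toy computation (all numbers invented, nothing here is from actual surveys) divides each cohort’s average expected annual return by a long–run norm of about 10 percent:

```python
# Hypothetical sketch of a survey-based "gullibility index": the ratio of a
# cohort's average expected annual stock return to a long-run norm.
# All figures below are invented for illustration only.

HISTORICAL_NORM = 0.10  # roughly 10% per year in sober times

def gullibility_index(expected_returns):
    """Average expected return divided by the historical norm.
    Values near 1.0 suggest sober expectations; values near 2.0 match
    the ~20%/year expectations reportedly seen in bubble times."""
    avg = sum(expected_returns) / len(expected_returns)
    return avg / HISTORICAL_NORM

# Invented monthly survey responses by cohort (expected return per year):
survey = {
    "general public":      [0.18, 0.22, 0.20],
    "investment managers": [0.12, 0.14, 0.13],
    "regulators":          [0.10, 0.11, 0.10],
}

for cohort, answers in survey.items():
    print(cohort, round(gullibility_index(answers), 2))
```

Tracked monthly and broken out by cohort, such a ratio would give exactly the kind of simple thermometer the text calls for: not a stroke predictor, but a fever indicator.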
Even more promising might be a systematic search for cases of innumeracy at various levels. There is certainly plenty of it at all times, but, as shown by the telecom bubble, it seems to grow in frequency and seriousness in boom times. Modern technologies (semantic web, Wolfram Alpha, …) appear to provide promising approaches to detecting such phenomena. It should be possible to devise automated systems that would have detected, from publicly available information, that Internet traffic growth estimates were important in decision–making during the telecom bubble, and that there were striking disparities in what different groups assumed, from 2x through 4x to 10x per year. That should have rung alarm bells for any sensible person.
Other approaches might also be fruitful. We could monitor the frequency with which Nigerian 419 scam attempts succeed (and try to correlate it with the degree of bubble thinking, as well as geography, socioeconomic status, etc. of the victims). We could even attempt active experiments with sending out 419 scam e–mail messages, extending the work that has been done (University of Exeter, 2009). How does the rate of response vary with the amount of money being mentioned, and the plausibility of the story in the message? The approaches that are already being used to investigate the spread of gossip could be extended to test susceptibility to “beautiful illusions” such as that “Internet traffic doubling every 100 days.” Some work on measuring the impact of electronic message board rumors on stock prices has already been done (cf., Bettman, et al., 2010), and could be extended. We could try passing out tales of various degrees of credibility, combined with various degrees of material rewards, to see how people react (and how that changes, depending on the presence or absence of a boom mentality). The degree to which mutually contradictory rumors coexist in the network would also provide a measure of “information viscosity.” These are all just some simple preliminary ideas, but they do suggest that one might be able to construct a scientifically sound measure of gullibility and other aspects of human behavior that are implicated in bubbles.
Will there be enough opportunities to test such measures in the real world? After the Internet bubble crashed, many observers predicted that this was a once–in–a–generation event. But less than a decade later, we had the real estate and finance bubble, an even bigger one. Further, Alan Greenspan, that enthusiastic disciple of Ayn Rand, recently shocked some, and amused others, by speculating about the possible need to nationalize the financial system once a century. It is therefore reasonable to expect that we will soon have other central bankers, just as fervent in their advocacy of free markets as Greenspan, but even more gullible, presiding over actual nationalizations of the financial system every decade. Hence, on present course, there will be plenty of opportunities to test and validate any indices of gullibility, information viscosity, or other quantitative objective measures relevant to bubbles that might be developed.
About the author
Andrew Odlyzko is Professor in the School of Mathematics at the University of Minnesota.
E–mail: odlyzko [at] umn [dot] edu
The many individuals and institutions that assisted in the project on technology bubbles from which this paper is derived are listed at http://www.dtc.umn.edu/~odlyzko/doc/mania-ack.html.
1. Punch, volume 9, July–December 1845, p. 193. The quote itself comes from Thomas Gray’s 1742 poem Ode on a Distant Prospect of Eton College.
2. Much higher figures are often mentioned. For example, the article “Crash course” in the 28 February 2009 issue of the Economist estimated that the decline in stock market valuations of telecom companies from the Internet bubble crash was US$2.8 trillion, as opposed to US$4.6 trillion for banks in the crash of 2008. However, those were just stock market valuations, which were far, far higher than real tangible investments. The growing disparity between real investment and valuations is one of the interesting trends of the last two centuries.
The entire direct real investment in the telecom bubble in the U.S. was only about US$150 billion. This estimate is obtained by subtracting from total telecom investment by the service providers during that period the trend line from before and after that bubble. Of that, somewhere close to US$100 billion appears to have been devoted to long–haul fiber networks. However, that includes just the investments of service providers. If one includes the rest of the information and communication technologies industries, one obtains figures several times as high, although still far short of the US$2.8 trillion of the Economist estimate for the decline in share valuations.
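The trend–line subtraction behind this estimate can be sketched in a few lines of code. The annual capital–expenditure figures below are entirely hypothetical, chosen only to illustrate the method (fit a trend to the non–bubble years, then sum each bubble year's excess over that trend); they are not the actual data underlying the US$150 billion figure.

```python
# Illustration of the trend-line method described above, using
# made-up annual telecom capex figures (US$ billions), NOT real data.

def trend(xs, ys):
    """Least-squares line through (xs, ys); returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical capex by year; treat 1998-2001 as the bubble window.
capex = {1995: 20, 1996: 22, 1997: 24, 1998: 45, 1999: 60,
         2000: 70, 2001: 50, 2002: 28, 2003: 30}

# Fit the trend only to the years outside the bubble window.
normal_years = sorted(y for y in capex if y < 1998 or y > 2001)
slope, icept = trend(normal_years, [capex[y] for y in normal_years])

# Excess of actual spending over trend, summed across the bubble years.
bubble_excess = sum(capex[y] - (slope * y + icept)
                    for y in range(1998, 2002))
print(f"excess investment attributable to bubble: ${bubble_excess:.0f}B")
```

With these invented numbers the excess comes out to roughly US$120 billion; the point is only to show how a bubble–attributable investment figure can be read off as the gap between actual spending and the surrounding trend.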
The real dot–com investments were far lower.
3. The work on herding and informational cascades (see Baddeley (2010), for example, for references) is clearly relevant. However, those concepts are applied in economics primarily to rational behavior in the presence of uncertainty. What is demonstrated here is behavior that ignores available hard evidence.
4. The Times (27 November 1847), p. 4.
5. On 13 March, 20 May, 11 August, and 14 October, as well as in the 19 March story (Hershey, 2000) cited above.
6. It is quite possible that the dot–com mania, and the European 3G spectrum auction, would have occurred even if the true growth rate of Internet traffic were known. After all, the real success story in telecom over the last three decades has been that of wireless, and primarily in low–bandwidth voice and texting wireless. However, there would have been no way to justify the buildout of the fiber networks, at least not of the size that took place, had the truth been known.
7. E–mail of 22 April 1998. The 1999 edition of this report did not repeat the claims of Internet traffic doubling every 100 days.
8. It might be worth noting that at the end of 1999, Microsoft invested US$175 million in Asia Global Crossing, a joint venture with Global Crossing and Softbank to build an underwater cable system in East Asia. Thus the ranks of believers in the bandwidth shortage story included many of the most sophisticated technology leaders.
It is also worth remarking that the torrent of money pouring into the telecom sector started slowing down about the time of the peak in NASDAQ and the dot–com market in March 2000. However, the telecom sector did not start crashing until late 2001, reaching bottom in mid–2002 with the bankruptcy filing of WorldCom. Most of the discussion here is about the information that was, or was not, available through early 2000.
9. See Odlyzko (2003) for more information. Today, the growth rate is down to the 40–50 percent per year range.
10. See, for example, some of the citations in Malik (2003).
11. This was already noted in the first two published papers to document the dramatic decline in Internet traffic growth in 1997, (Coffman and Odlyzko, 1998; Sevcik, 1999).
12. Several are discussed in Odlyzko (2003). As yet another example, consider the WorldCom Supplement to 1998 Annual Report. It was not part of the annual report that was filed with the SEC, which is available at http://www.sec.gov/Archives/edgar/data/723527/0000950134-99-002242-index.html. However, it is available today through the Internet Archive at http://web.archive.org/web/20030329185301/http://www.worldcom.com/global/investor_relations/annual_reports/1998/financials/internet/. It claimed that
“Five years ago demand for Internet services bandwidth was doubling every year. That seemed like a steep curve. But since 1996, the demand has grown at an annual rate of 1,000 percent. One thousand percent growth means DOUBLING EVERY THREE AND ONE HALF MONTHS. This growth rate dwarfs the rate in the famous “Moore’s Law,” which describes computer performance growth as doubling every 18 months.
If this growth continues, and all signs indicate that it will, the Internet will consume half the bandwidth in the world by the Year 2000. By 2003 the Internet will consume 90 percent of the world’s bandwidth. Traditional voice traffic is not going to disappear. In fact it continues to grow, but at a pace much more in line with population growth.”
Wonderful, stirring words. However, if the Internet were to be half of the global network in 2000, and continued growing at 1,000 percent per year, it would grow by a factor of over 1,000 by 2003. If the other half of the global network were to grow “in line with population growth,” then the Internet in 2003 would be 1,000 times larger than the non–Internet piece, or, in other words, would be 99.9 percent of the total, not just 90 percent. Such a discrepancy could be due to the presence of non–Internet, non–voice networks. (Although seldom discussed, and not mentioned in the WorldCom document, they took up about half the bandwidth at the end of 1997; see Coffman and Odlyzko.) More likely, though, this was the result of simple innumeracy. Still, this should have raised some questions as to the competence, and possibly the veracity, of WorldCom.
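The inconsistency is easy to verify. Taking WorldCom's own premises — the Internet at half of world bandwidth in 2000, growing at 1,000 percent per year (an 11x annual multiple, matching their “doubling every three and a half months”), the other half growing roughly with population — a three–line computation shows where the Internet's share would actually land by 2003:

```python
# Check of the arithmetic above, starting from WorldCom's own premises.
internet, other = 1.0, 1.0          # equal halves of world bandwidth in 2000
for year in range(2000, 2003):      # three years of growth, 2000 -> 2003
    internet *= 11                  # 1,000 percent annual growth = 11x multiple
    other *= 1.01                   # roughly "in line with population growth"

share = internet / (internet + other)
print(f"Internet share of world bandwidth in 2003: {share:.2%}")
```

The share comes out above 99.9 percent, not the 90 percent WorldCom claimed — the discrepancy discussed in the note above.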
Not everybody forgot how to do compound interest in those days, and some people did notice the glaring discrepancies in WorldCom claims. For example, Tom Soja’s notes, made during the preparation of the 1998 presentation deck about bandwidth estimates that is available at http://www.dtc.umn.edu/~odlyzko/isources/, show computations demonstrating that John Sidgmore’s claims on when data traffic would overtake voice traffic did not make sense. (These notes became semi–public by being subpoenaed during the numerous class–action lawsuits following the collapse of the telecom bubble.)
13. The term “inactionable puffery” was apparently coined inside the American legal system to denote claims, such as the boast of having the world’s best apple pie, for which that system disclaims any responsibility for determining truth or requiring correction. Hence no legal action can be brought to correct them.
14. This form was filed on paper only, and so is not in the online SEC Edgar database (http://www.sec.gov/edgar/quickedgar.htm). However, any financial analyst covering technology would have had access to it at work.
15. Grann (2010). This also fits Principle 3.6 of Stajano and Wilson (2009): “Your needs and desires make you vulnerable. Once hustlers know what you really want, they can easily manipulate you.” And most of the people during the Internet bubble wanted to be manipulated.
16. Peter Sevcik’s investigation, which led to Sevcik (1999), was motivated by several factors (Private communication from Peter Sevcik). One was that UUNet was claiming implausibly high growth rates, far higher than any other ISP. Another was that they had been frequently reporting growth at the rate of “doubling every three and a half months,” but just what was doubling that fast seemed to change from one occasion to another. See Futrelle (1999).
17. Message available at http://www.interesting-people.org/archives/interesting-people/200011/msg00058.html. It was a response to an earlier message, http://www.interesting-people.org/archives/interesting-people/200011/msg00055.html.
18. This assumes that the average distance packets travel stays constant. If traffic becomes more local, as evidence from UUNet and other ISPs indicated, then the drop in utilization will be even greater.
19. On terrestrial backbones, average utilizations appear to have moved up to the 20–25 percent range, from the much lower ranges that prevailed a decade ago, which are noted in Odlyzko (1999; 2000a). On transoceanic links, they are even higher, in the range of 40 percent.
20. The most skeptical of the three was a 2001 report from Probe Research, a telecom consulting outfit. It was entitled “The economics and evolution of wired and wireless networks: Current thinking on network evolution and its ‘laws’,” and had only the name of Allan Tumolillo listed. It did not endorse the O’Dell claims, but did not examine them in detail, nor state that they were preposterous, either.
21. This was likely a high–level simplified summary. The Gilder Technology Report (GTR) for December 1999 stated
“Nortel was bullish on bandwidth and had the data to prove it. Pointing out some 500 previous industry bandwidth projections had all drastically underestimated bandwidth demand, Anil Khatod, President of Nortel’s optical Internet project, stepped me through an authoritative demonstration that he had developed initially for Jim Crowe of Level 3 (LVLT). … Putting all these numbers together and adjusting for double counting, Khatod shows that available bandwidth will increase 80 fold over the next four years while demand will rise between 100 and 260 times. The result is a capacity crunch.”
22. For example, GTR, November 2000, p. 3 says that “John Sidgmore declares that IP traffic on UUNET continues to soar at close to tenfold every year or so, which means some one thousand fold in half a decade.” Whether the fault is Sidgmore’s or Gilder’s, this is nonsense. Tenfold growth each year means thousand fold growth in three years, not in five.
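The compounding slip in this note is worth making explicit: tenfold annual growth compounds to a thousandfold in three years, since growth factors multiply across years.

```python
# Tenfold growth per year compounds multiplicatively, not additively.
growth_per_year = 10
after_3_years = growth_per_year ** 3   # thousandfold after three years
after_5_years = growth_per_year ** 5   # a hundred-thousandfold after five
print(after_3_years, after_5_years)    # 1000 100000
```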
23. GTR, September 1999, p. 3. It appears that Gilder, in common with most observers, never grasped the difference between capacity and traffic, which was made so much of by WorldCom/UUNet folks.
24. To explain the slow growth assumed by Global Crossing, Gilder came up with an imaginative scenario. In the November 1998 GTR he wrote:
“[Global Crossing] offers embarrassingly conservative estimates of the global growth of data traffic. The company has hired some of the best talent from the leading telcos, from Carter of AT&T to CEO Jack Scanlon, who served 24 years at AT&T before moving to Motorola (MOT). They may be prone to Telco nostalgia, cherishing the importance of voice and underestimating the Internet. Winnick [the founder and controlling shareholder] will have to keep an eye on them.”
Gilder, unlike many observers outside the subsea cable industry, understood that increasing the capacity of underwater cables was much harder than that of terrestrial networks. The number of fibers in a submarine cable was then limited by technology to about a dozen, as opposed to the hundreds that could be placed on land.
25. The Gilder Friday Letter, issue 199.0 (29 April 2005). Note that there is a segment of the underground economy that preys on people who fell victim to scams. Such people are often scammed again, through approaches such as a promise to help them recover their lost funds.
26. E–mail to Henning Schulzrinne, posted on his Web page at http://www.cs.columbia.edu/~hgs/internet/traffic.html and available to this day.
27. Coffman and Odlyzko (1998) had to be based on externally available, open data, since AT&T would not allow the use of any of its internal statistics, which would have made the arguments even more persuasive.
28. Personal communication from Eric Krapf, the editor of the Business Communications Review.
29. Usage data for First Monday is not available for 1998.
30. Copy of the 5 June 2000 memo, from Leo Hindery to Gary Winnick, the Chairman of the Board of Global Crossing, and a small group of other executives, is available at http://www.dtc.umn.edu/~odlyzko/isources/.
31. See also Bernanke and Gertler (2001). The main argument of those works was that on the basis of the authors’ mathematical models, it was better to let any bubbles that might occur blow off, and use monetary policy to mop up afterwards.
32. Figure 1–1 on p. 27 of U.S. President (2010), data derived from Shiller’s work.
33. Words taken from Davis and Kahn (2008), one of several papers about the Great Moderation that appeared in prestigious economics journals just as the world was going through an economic upheaval.
34. Both of the British railway manias, the one of the 1830s and the one of the 1840s, also illustrate the limitations of monetary policy in dealing with powerful bubbles. Both of the manias faced several instances of adverse monetary conditions, in one case a systemic financial crisis. Yet the investments in railways proceeded, little hindered by these circumstances.
35. The “Football War,” also known as the “Soccer War,” in 1969 between El Salvador and Honduras.
36. Scholarly literature has extensive coverage of the concept of “bounded rationality.” It is very relevant to the subject of this paper, but is not discussed here, in order to keep the presentation simple. The examples presented here are ones of rationality in times of mania becoming extremely bounded.
37. K. Guha and E. Luce, “Greenspan backs bank nationalisation,” Financial Times (18 February 2009) cited Greenspan as saying “It may be necessary to temporarily nationalise some banks in order to facilitate a swift and orderly restructuring. I understand that once in a hundred years this is what you do.”
38. Perhaps even more frequently than once a decade. Jamie Dimon, the chairman and CEO of JPMorgan Chase, one of the banks that survived the crash of 2008, and is now in the Much Too Big to Fail category, has opined that a financial crisis is “the kind of thing that happens every five to seven years.” (Sundby, 2010) And Hank Paulson more recently said that “the next financial crisis … is inevitable, probably within the next six to 10 years” (Wall Street Journal, 2010b).
G.A. Akerlof and R.J. Shiller, 2010. Animal spirits: How human psychology drives the economy, and why it matters for global capitalism. Ninth (paperback) printing. Princeton, N.J.: Princeton University Press.
M. Baddeley, 2010. “Herding, social influence and economic decision–making: Socio–psychological and neuroscientific analyses,” Philosophical Transactions of the Royal Society B, volume 365, number 1538 (27 January), pp. 281–290.
B. Bernanke and M. Gertler, 2001. “Should central banks respond to movements in asset prices?” American Economic Review, volume 91, number 2, pp. 253–257.
B. Bernanke and M. Gertler, 1999. “Monetary policy and asset price volatility,” Federal Reserve Bank of Kansas City Economic Review, (Fourth Quarter), pp. 17–52; available at http://www.kc.frb.org/publicat/sympos/1999/4q99bern.pdf.
J.L. Bettman, A.G. Hallett, and S. Sault, 2010. “Exploring the impact of electronic message board takeover rumors on the US equity market,” (5 August manuscript), at http://ssrn.com/abstract=1654142.
M.J. Burry, 2010. “I saw the crisis coming. Why didn’t the Fed?” New York Times (4 April), at http://www.nytimes.com/2010/04/04/opinion/04burry.html.
N. Clark, 2010. “Former SocGen Chairman has sharp words for trader,” New York Times (23 June), at http://www.nytimes.com/2010/06/23/business/global/23socgen.html.
N. Clark and K. Benhold, 2010. “A Société Générale trader remains a mystery as his criminal trial ends,” New York Times (26 June), at http://www.nytimes.com/2010/06/26/business/global/26socgen.html.
K.G. Coffman and A.M. Odlyzko, 2002. “Internet growth: Is there a ‘Moore’s Law’ for data traffic?,” In: J. Abello, P.M. Pardalos, and M.G.C. Resende (editors). Handbook of massive data sets. Boston: Kluwer, pp. 47–93; also at http://www.dtc.umn.edu/~odlyzko/doc/internet.moore.pdf. Preprint released for outside distribution by AT&T on 12 July 2000, and posted on the AT&T Labs — Research Web server that day.
K.G. Coffman and A.M. Odlyzko, 1998. “The size and growth rate of the Internet,” First Monday, volume 3, number 10, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/620/541. Preprint released for outside distribution by AT&T on 8 July 1998, and posted on the AT&T Labs — Research Web server that day.
G. Collins, 2005. “Trust in post–bureaucratic organizations,” In: J. Finch and M. Orillard (editors). Complexity and the economy: Implications for economic policy. Northampton, Mass.: Edward Elgar, pp. 172–190.
S.J. Davis and J.A. Kahn, 2008. “Interpreting the Great Moderation: Changes in the volatility of economic activity at the macro and micro levels,” Journal of Economic Perspectives, volume 22, number 4, pp. 155–180.
B.J. Feder, 2003. “John Sidgmore, 52, dies; Headed WorldCom,” New York Times (12 December), at http://www.nytimes.com/2003/12/12/business/john-sidgmore-52-dies-headed-worldcom.html.
Financial Industry Regulatory Authority (FINRA), 2009. Report of the 2009 Special Review Committee on FINRA’s examination program in light of the Stanford and Madoff schemes, at http://www.finra.org/web/groups/corporate/@corp/documents/corporate/p120078.pdf.
D. Futrelle, 1999. “Bandwidth overkill: Is Net growth really ballooning?” Upside (July), p. 38.
J. Gertner, 2006. “Home economics,” New York Times (5 March), at http://www.nytimes.com/2006/03/05/magazine/305glaeser.1.html.
M. Gisler and D. Sornette, 2010. “Bubbles everywhere in human affairs” (May manuscript), at http://ssrn.com/abstract=1590816.
M. Gladwell, 2010. “The sure thing: How entrepreneurs really succeed,” New Yorker (18 January).
B. Goldstein, 2002. “Word for word/‘Greenspan shrugged’; When greed was a virtue and regulation the enemy,” New York Times (21 July), at http://www.nytimes.com/2002/07/21/weekinreview/word-for-word-greenspan-shrugged-when-greed-was-virtue-regulation-enemy.html.
D. Grann, 2010. “The mark of a masterpiece: The man who keeps finding famous fingerprints on uncelebrated works of art,” New Yorker (12 and 19 July), at http://www.newyorker.com/reporting/2010/07/12/100712fa_fact_grann.
A. Greenspan, 2010. “The crisis,” white paper (9 March), at http://www.brookings.edu/~/media/Files/Programs/ES/BPEA/2010_spring_bpea_papers/spring2010_greenspan.pdf.
C. Grice, 2000. “What lured Microsoft exec to Canadian start–up?” CNET (5 January), at http://news.cnet.com/What-lured-Microsoft-exec-to-Canadian-start-up/2100-1033_3-235223.html.
C. Haacke, 2004. Frenzy: Bubbles, busts, and how to come out ahead. New York: Palgrave Macmillan.
M. Heinzl, 2000. “Nortel’s rivals seek the firm’s gigabit crown,” Wall Street Journal (11 May).
D.B. Henriques, 2009. “Lapses kept scheme alive, Madoff told investigators,” New York Times (31 October), at http://www.nytimes.com/2009/10/31/business/31sec.html.
R.D. Hershey, Jr., 2000. “A Nasdaq correction; Now back to business,” New York Times (19 March), at http://www.nytimes.com/2000/03/19/business/market-insight-a-nasdaq-correction-now-back-to-business.html.
R. Hundt, 2000. You say you want a revolution: A story of information age politics. New Haven, Conn.: Yale University Press.
IBM, Global Technology Services, 2006. “The toxic terabyte: How data–dumping threatens business efficiency,” July white paper, at http://www-03.ibm.com/systems/resources/systems_storage_solutions_pdf_toxic_tb.pdf.
Information Gatekeepers, 2001. “Photonics briefing #8: Fiber bandwidth glut — Fact or fiction” (18–19 April), proceedings at http://www.telecombriefings.com/pastbrief/brief8.html.
W.E. Kennard, 2000. Report card on implementation, draft strategic plan: A new FCC for the 21st century, FCC (March), at http://www.fcc.gov/21st_century/report_card_march2000.txt.
J. Lahart, 2009. “Mr. Rajan was unpopular (but prescient) at Greenspan party,” Wall Street Journal (2 January).
A. Lascaux, 2005. “Trust and transaction costs,” In: J. Finch and M. Orillard (editors). Complexity and the economy: Implications for economic policy. Northampton, Mass.: Edward Elgar, pp. 151–171.
M. Lewis, 2010. The big short: Inside the doomsday machine. New York: Norton.
M. Lewis, 2002. “In defense of the boom,” New York Times (27 October), at http://www.nytimes.com/2002/10/27/magazine/27DEFENSE.html.
R. Lizza, 2009. “The contrarian: Sheila Bair and the White House financial debate,” New Yorker (6–13 July), at http://www.newyorker.com/reporting/2009/07/06/090706fa_fact_lizza.
A.W. Lo and M.T. Mueller, 2010. “WARNING: Physics envy may be hazardous to your wealth,” 19 March manuscript, at http://arxiv.org/abs/1003.2688.
O. Malik, 2003. Broadbandits: Inside the $750 billion telecom heist. Hoboken, N.J.: Wiley.
H. Markopolos, 2010. No one would listen: A true financial thriller. Hoboken, N.J.: Wiley.
M.D. O’Dell, 2000. “Racing with an exponential or the dangers of linear thinking in an exponential world,” presentation at the 16 May Stanford University symposium on “The optical Internet: The next generation;” Transcript available at http://www.dtc.umn.edu/~odlyzko/isources/.
A.M. Odlyzko, in preparation. Beautiful illusions and credulous simplicity: Technology manias from railroads to the Internet and beyond, book manuscript in preparation.
A.M. Odlyzko, manuscript. “Collective hallucinations and inefficient markets: The British Railway Mania of the 1840s,” available at http://www.dtc.umn.edu/~odlyzko/doc/hallucinations.pdf.
A.M. Odlyzko, 2010. “This time is different: An example of a giant, wildly speculative, and successful investment mania,” B.E. Journal of Economic Analysis & Policy, volume 10, number 1, article 60, at http://www.bepress.com/bejeap/vol10/iss1/art60. Preprint available at http://www.dtc.umn.edu/~odlyzko/doc/mania01.pdf.
A.M. Odlyzko, 2003. “Internet traffic growth: Sources and implications,” In: B.B. Dingel, W. Weiershausen, A.K. Dutta, and K.–I. Sato (editors). Optical transmission systems and equipment for WDM networking II, Proceedings of SPIE — the International Society for Optical Engineering, volume 5247, pp. 1–15. Available at http://www.dtc.umn.edu/~odlyzko/doc/itcom.internet.growth.pdf.
A.M. Odlyzko, 2000a. “The Internet and other networks: Utilization rates and their implications,” Information Economics & Policy, volume 12, pp. 341–365. Presented at the 1998 Telecommunications Policy Research Conference. Also available at http://www.dtc.umn.edu/~odlyzko/doc/internet.rates.pdf.
A.M. Odlyzko, 2000b. “Internet growth: Myth and reality, use and abuse,” iMP: Information Impacts Magazine (November); available at http://www.dtc.umn.edu/~odlyzko/doc/internet.growth.myth.pdf.
A.M. Odlyzko, 1999. “Data networks are mostly empty and for good reason,” IT Professional, volume 1, number 2, pp. 67–69. A preprint, entitled “The low utilization and high cost of data networks,” is available at http://www.dtc.umn.edu/~odlyzko/doc/high.network.cost.pdf.
J.A. Paulos, 2003. A mathematician plays the stock market. New York: Basic Books.
J.A. Paulos, 1988. Innumeracy: Mathematical illiteracy and its consequences. New York: Hill and Wang.
J. Rendleman, 2002. “WorldCom soldiers on with new service,” Information Week (30 August), at http://www.informationweek.com/story/IWK20020830S0010.
G. Rivlin, 2002. “The madness of King George,” Wired, volume 10, number 7, at http://www.wired.com/wired/archive/10.07/gilder.html.
L.G. Roberts, 2000. “Beyond Moore’s Law: Internet growth trends,” IEEE Computer (January), pp. 117–119.
P. Sevcik, 1999. “The myth of Internet growth,” Business Communications Review, volume 29, number 1, pp. 12–14. Available at http://www.netforecast.com/Articles/myth%20of%20growth.pdf.
J. Sidgmore, 1998. “The largest ISP and getting larger,” Vortex98 Conference Proceedings (19–22 May), pp. 157–165. Available at http://www.dtc.umn.edu/~odlyzko/isources/.
T.A. Soja, 1998. “Megatrends to megademand: A look at future demand scenarios,” presentation deck from the Global Crossing International Conference and Atlantic Crossing Project Update Meeting (6–8 April), Barcelona, Spain. Available at http://www.dtc.umn.edu/~odlyzko/isources/.
T. Standage, 2006. “Your television is ringing: A survey of telecom convergence,” Economist (14 October) at http://www.economist.com/surveys/displayStory.cfm?story_id=7995312.
F. Stajano and P. Wilson, 2009. “Understanding scam victims: Seven principles for systems security,” August manuscript, at http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-754.pdf.
A. Sundby, 2010. “Bank execs offer head–scratching answers,” CBS News (13 January), at http://www.cbsnews.com/8301-503983_162-6093076-503983.html.
U.S. Department of Commerce, 1998. The Emerging Digital Economy, April white paper, at http://govinfo.library.unt.edu/ecommerce/EDEreprt.pdf.
U.S. President, 2010. Economic report of the President, transmitted to the Congress in February. Available at http://www.whitehouse.gov/sites/default/files/microsites/economic-report-president.pdf.
University of Exeter, School of Psychology, 2009. The psychology of scams: Provoking and committing errors of judgment. May report prepared for the U.K. Office of Fair Trading, at http://www.oft.gov.uk/shared_oft/reports/consumer_protection/oft1070.pdf.
A. Vance, 2010. “Merely human? That’s so yesterday,” New York Times (13 June), at http://www.nytimes.com/2010/06/13/business/13sing.html.
Wall Street Journal, 2010a. “LeBron’s tax holiday,” editorial, Wall Street Journal (10 July).
Wall Street Journal, 2010b. “Grading the bill,” Wall Street Journal (16 July).
J. Willoughby, 2000. “Fiber bloat? 360networks faces a glut of capacity,” Barron’s (17 April).
G. Zuckerman, 2009. The greatest trade ever: The behind–the–scenes story of how John Paulson defied Wall Street and made financial history. New York: Broadway Books.
M. Zuckoff, 2006. “The perfect mark: How a Massachusetts psychotherapist fell for a Nigerian e–mail scam,” New Yorker (15 May), at http://www.newyorker.com/archive/2006/05/15/060515fa_fact.
Received 28 August 2010; accepted 3 September 2010.
“Bubbles, gullibility, and other challenges for economics, psychology, sociology, and information sciences” by Andrew Odlyzko is licensed under a Creative Commons Attribution–NonCommercial–NoDerivs 3.0 Unported License.
First Monday, Volume 15, Number 9 - 6 September 2010