First Monday

Economic Experiments in Internet Access Markets by Shane Greenstein

Innovation within Internet access markets can be usefully understood through the lens of economic experiments. Economic experiments yield lessons to market participants through market experience. The essay distinguishes between directed and undirected economic experiments. It discusses how the spreading of lessons transforms a market. As a lesson becomes common it becomes a part of industry know–how. Further innovations build on that know–how, renewing a cycle of experimentation.


What is an economic experiment?
Spreading experiments




Did economic experiments play an important role in the development of commercial Internet access? Economic experiments yield lessons to firms and these lessons can be acquired only through market experience. Usually these lessons pertain to the value of evolving goods and markets. Importantly, this type of learning cannot take place in a laboratory. Scientists, engineers, or marketing executives cannot distill equivalent lessons from simply building a prototype or merely interviewing potential customers and vendors.

There is good reason to think economic experiments played an important role in this market’s development. While the commercial network today generates tens of billions of dollars in revenue a year, [1] the passage of time gives a false sense of inevitability to this accomplishment. The firms who commercialized the Internet in the United States did not follow a prescribed road map. No firm in a young market such as this could have planned for all events. Learning and serendipity must have shaped actions during the earliest years, while value remained uncertain.

This essay begins with a somewhat narrow goal: to show that directed and undirected economic experiments shaped the evolution of Internet access markets. Directed experiments are those undertaken and learned by firms for their own purposes, while undirected experiments are those that arise from the interplay of many firms’ actions. Both types of economic experiments shaped commercial Internet access markets, influencing pricing, the quality of services, and the identity of leading firms.

The underlying motivation is broader than this narrow goal. This historical experience highlights that the accumulation of industry know–how depends on the spreading of lessons learned from economic experiments. Further innovations build on that know–how, renewing a cycle of accumulated lessons from experiments. This accumulation is a key driver of the market’s evolution, setting the conditions for innovative behavior.



What is an economic experiment?

Economic experiments may pertain to any change that alters knowledge about the value of a good or service or the costs for bringing a service to market. Economic experiments shape more than just technical invention; they also shape changes in business operations and organization that translate technology into economic value. By this broad definition, economic experiments encompass a range of market–based learning, as when, for example, market events reveal the previously unknown value of primitive technologies, make managers aware of broader applications for technologies invented for narrow applications, or help firms learn how to routinize a business process through customer–suggested refinements [2].

The emphasis from examining a market through the lens of economic experiments contrasts with the emphasis from another common model for R&D–intensive markets, what Ralph Gomory labeled and critiqued as the “ladder model.” [3] To stress the differences between the ladder model and the model of economic experiments, let me characterize the ladder model in a somewhat cartoonish form: It presumes that matters follow a sequence. Initially someone invests in basic R&D at either a university or within a corporate laboratory. As a result of such investment, a researcher invents something new. It might be possible for contemporaries to forecast its usefulness in the future, but all recognize the need for more investment. Eventually this leads managers at companies to make investments in marketing and distribution, which then leads to the launch of a product using the technology. Buyers then try the new product, make use of it, and give their response to sponsoring firms. After those sales, firms begin a product cycle composed of incremental upgrades to existing features. Through this sequential path the performance of the technology improves.

There is a significant grain of truth to the ladder model. For example, the original investment by DARPA in the fundamental science of packet switching led to a set of events that broadly fits this model. In other words, this model has a place in modern times and policy makers should not throw away its insights.

Policy–makers also should not rely on it exclusively, however. It overlooks a wide set of innovative conduct that helps improve the economic performance of society. In particular, the model of economic experiments does not necessarily begin with events in a laboratory. It is not a sequential model at all. It places emphasis on activities outside of a laboratory, highlighting innovations coming from market experience. It also focuses on a cycle of innovations that reinforce one another through the spreading of lessons, an insight the ladder model does not develop at all. As we shall see, the model of economic experiments also yields different insights about the role for policies, insights that the ladder model would not develop.

Prior research has identified four factors that shape the value of economic experiments. Here is a sketch; we will return to these throughout the essay. First, humans have a limited ability to imagine actual economic activity in all its complexity and detail, especially when new technologies enable new goods and services. As a result, experience may be the best path to teaching a decision maker about new technical or commercial opportunities.

Second, and related, many choices about the details of operations to serve buyers cannot be learned except through trial and error. For example, even market participants with extraordinary imaginations would still find it impossible to forecast how demand will change when prices decline, or how the majority of customers will react to different menus of products.

Third, investment in anticipatory learning can help but never overcome the first two limits. We should expect forward–looking managers at firms to invest in anticipating change and, nonetheless, still expect economic experiments to continue.

Fourth, and finally, firms have a difficult time forecasting the activities of others. That, in turn, makes it difficult for anyone to forecast any firm’s response after observing unanticipated activities from near competitors or business partners, i.e., those offering either substitutes or complements. Observing events in a market resolves open questions about pricing, features, and appeal to users, among other questions of this form.

These conditions motivate firms to undertake two broad types of learning activities. First, they motivate some firms to experiment in their own operations in ways that benefit their business. Second, they motivate firms to monitor the conduct of others, seeking to learn lessons from the experience of others. These motivations pervade a wide range of activities. That makes it comparatively easy to illustrate their importance in specific instances, as we do below.

Economic experiments come in two observable forms. In one form firms learn through deliberate action or investment, which I will label “directed.” In another type of experiment firms learn through the interplay of their activities with one another, which collectively yields (often unanticipated) lessons. I will call these “undirected.” After illustrating both with recent events, the essay turns to discussing the institutions that encourage and discourage experiments.

Directed economic experiments

The most common directed experiment is incremental in its technical scope and ambition. It aims at learning lessons with immediate consequences for a business. Though incremental, it can involve decisions of the utmost importance to the business, such as learning about the pricing for a new service using a new technology. For example, at the very outset of the browser–based commercial Internet in 1995, many ISPs wrestled with fundamental decisions about how to commercialize the innovation.

Recall how the first widespread directed experiments arose. The commercialization of the Web browser contributed because it raised expectations about future demand. The Mosaic browser was released in the fall of 1993. Netscape’s beta browser was released in the winter of 1995 and its IPO followed in August. Microsoft unveiled Internet Explorer in December. Around the same time a number of other entrants began exploring new businesses, including Yahoo!, eBay, Amazon, Vermeer, and others. These events fueled expectations among industry insiders, futurists, and venture investors that substantial demand for the Internet at households and businesses would emerge quickly.

By 1996, ISPs offered service in every major U.S. city, and many large firms had begun building national networks. The growth was rather astounding to mainstream firms who had not closely followed the spread of the PC and bulletin boards. By the fall of 1996 there were over 12,000 local phone numbers in the U.S. to call for commercial Internet access, and more than 65,000 by fall 1998 [4]. That build–out involved both scores of large national firms and thousands of small local firms.

The build–out of ISPs did not happen without considerable experimentation to resolve many open questions. A crucial question at the outset concerned the design of the opening page users would see when they first clicked on their browser, a page later given the label “portal.” What should an ISP do? Should it design its own portal (potentially at great expense), default to another’s (such as Excite or Yahoo!), or leave the decision to users altogether?

There were many contrasting strategies for addressing the question. Different ISPs made quite distinct choices and learned quite different lessons about the trade–offs between these choices. No single choice dominated, and, related, as firms learned more, perceptions about the costs and benefits of each changed over time.

Some ISPs maintained quite minimal home pages. Many marketed this choice as a virtuous attempt to give users freedom to choose for themselves among Yahoo!, Excite, Lycos and a myriad of other young portals springing up at that time. Some ISPs succeeded with this choice (or, in some views, in spite of this choice).

After the fact it is possible to rationalize why a firm made the choices it made. For example, AOL chose to continue activities it already performed in the era of bulletin boards, perceiving that its prior investments in community building would continue to have value as its users transitioned into using the Internet more frequently. Its portal decisions continued to nurture those communities.

While AOL’s choice may seem savvy in retrospect, many Internet enthusiasts regarded it as risky at the time. Indeed, AOL was the only firm among the prior large “online service providers” to succeed with this strategic choice in the medium term, so the concern had some merit. For example, AOL was the only firm to attract the mass market user with investment in a “walled garden,” which controlled a large fraction of the user experience while sacrificing sophisticated users to other suppliers. CompuServe, Prodigy and Genie all failed at this. MSN also attempted a similar strategy, and with the help of Microsoft’s marketing advantages and budgetary tolerance for operating losses did not exit. However, MSN was no better than a distant second in market share throughout the 1990s.

Not every one of these types of experiments turned out well, nor should we expect them to. For example, in the mid– to late–1990s some cable companies believed they did not understand the requirements of the marketplace, so they initially ceded these decisions to others, making deals with @home (the most well–known example). Eventually @home merged with Excite, a decision that several cable firms would later regret. When the cooperation between cable firms and @home/Excite ended, it produced a large amount of recrimination [5].

While it did not turn out well for the parties involved (in a financial sense), the surviving firms — cable companies, in this case — learned valuable lessons about how to structure their ISP services. Those lessons proved valuable in the future in two senses: certain useful “investments” were recreated, such as geographic caching of content, and certain “mistakes” were avoided, such as not depending on advertising for revenue.

Exploration also focused on other fundamental determinants of value, such as the price paid for services. For example, from 1995 to 1998 many firms experimented with different contracting plans to offer households. More specifically, by 1995 there was already a general movement to offer unlimited monthly service for a fixed price. After AT&T WorldNet announced its intention to enter the household market with a US$20/month contract, this contractual form became the focal standard, eventually leading to the end of marginal pricing of services. AOL’s conversion in late 1996 was the last, most publicized, and most difficult of these conversions among the largest ISPs at the time [6].

It would be an error to think that AOL’s well–publicized troubles (and marketing recovery from them!) were the end of the experiments with prices. They continued for years, but only the major successes received wide publicity. There were many attempts to give users choices among plans with monthly hourly limits in exchange for discounts. Most of these experiments did not generate large reactions. One such experiment did in 1999: a set of entrepreneurial firms experimented with formats that offered free dial–up access in exchange for requiring users to view advertising. NetZero eventually was the most successful of them, though, arguably, that success arose because it departed from its initial strategy and eventually charged for access [7]. In other words, the most fundamental determinant of value in the retail household market — the contracting terms and pricing norms for access — continued to evolve throughout the entire first decade of the commercial Internet.

During this same period, many firms also experimented with the range of services offered. Virtually all ISPs experimented with changes to the standard bundle offered, such as default e–mail memory, instant messaging support and hosting services where the ISP maintained Web pages for clients. As another example, in response to user requests, some local ISPs arranged for the availability of phone numbers in other locales for traveling clients. A wide range of regional ISPs experimented with performing services complementary to access, such as hosting services, networking services and Web design. In general, very few of these product line decisions remained fixed for very long.

Nobody was immune from this type of experimentation. Even the dominant firms extensively experimented with their product lines. AOL greatly expanded the range of services in the latter half of the 1990s, achieving this through a mixture of in–house development, purchasing other innovative companies, and making many alliances. MSN also tried to provide a similar experience.

Part of AOL’s expansion matched a similar (and almost parallel) expansion of services occurring at online portals, such as Yahoo!, Excite, Lycos and so on. This matching and initiating went back and forth for years. As the portals gained new features, they became increasingly important for some small ISPs, which had chosen not to invest heavily in their own portals. In this sense, many small ISPs and these portals together competed with AOL and MSN.

Whether it was the redesign of a home page, the offering of new contractual forms, or a change in the range of services offered, all of these experiments were directed. The firms who conducted them expended resources to learn something of value to their business, usually while also performing their routine business activities. And it was not just a few leading firms or a few entrepreneurial climbers who did this. These types of experiments took place at virtually all the firms conducting business during the latter part of the 1990s.

Looked at from one perspective, this activity was quite mundane and almost routine. Managers would authorize the expenditure of resources, redirect personnel, alter a feature of an existing service or develop a new one, advertise it or not, and then wait to find out whether these investments paid off in terms of additional revenue, market share, or pricing authority. Failure was not regarded automatically as a waste of resources if it led to valuable learning, e.g., if a small scale experiment helped managers avoid costly mistakes at larger scale.

Interpreted broadly through the lens of economic experiments, this activity should be understood as risky and knowledge–building. Investments in and commitments to these actions had to be made before any of the managers at these firms fully knew the additional gain generated from the existing customer base. Firms were learning about customer responses they could not fully imagine, using experiences to understand how to refine key business decisions, deliberately learning through trial and error in market experience what could not be learned in a laboratory. In short, that learning led firms to change what they did.

Undirected economic experiments

Some economic experiments resulted from the interplay of one firm’s actions with another’s. While directed experiments might have partially motivated the actions of any single firm, it would be an error to regard the lessons as resulting from only one firm’s actions. Rather, the interplay of firms yielded a form of serendipity in learning, learning that results from the unanticipated combination of lessons learned from others. I will use one example to illustrate the broad point.

Developments in Wi–Fi access technologies illustrate how such unanticipated serendipity can arise as firms learn from one another. This example is particularly good for our purposes because some form of a market for wireless data transfer was expected. However, the market took an unexpected direction towards one mode.

Futurists had predicted the rise of mobile computing even before the rise of the commercial Internet. After the boom in Internet access investment that began in 1995 those predictions were made with additional urgency. Numerous efforts arose to anticipate it, including several efforts to design short–range data communications standards. One prominent effort was known as HomeRF and another as Bluetooth. Both were founded in 1998. The former was organized by firms such as Motorola and Siemens and at its peak involved over a hundred companies before it disbanded, while the latter was established by Ericsson, IBM, Intel, Toshiba and Nokia, and still exists today, involving thousands of firms.

That was not all. Because of the tremendous number of investments made by cellular equipment providers and carriers in technology to carry data over their infrastructure, a substantial number of futurists foresaw wireless data services emerging out of the cellular phone industry, as part of a number of initiatives in 3G technologies. This effort was also large, involving virtually every equipment firm and carrier in the cellular phone business, as well as many others.

Most of those predictions turned out to be correct in a broad sense — i.e., there was demand for wireless data communication technologies. Yet, they turned out to be far off the mark in a specific sense — i.e., HomeRF did not generate the enthusiastic sales its designers predicted, even though they viewed it as technically superior to alternatives [8]. After a slow start, Bluetooth eventually found its way into a variety of products, particularly attachments to cell phones and many other consumer devices. The 3G products and services also did not grow as hyped, gaining little traction with U.S. consumers at first; they did not start to make a dent in the U.S. until recently.

More surprising, a technology popularly known as Wi–Fi became dominant. Wi–Fi arose out of undirected economic experiments. More to the point, development of Wi–Fi did not arise from a single firm’s innovative experiment with it. Rather, it began as something rather different, and it evolved through economic experiments at many firms.

What eventually became Wi–Fi originated in discussions about a technical standard within a subcommittee of IEEE Committee 802. The IEEE sponsors many committees to design standards. Committee 802 was formed in the early 1980s, before the commercial Internet was ever proposed. It was well known among computing and electronics engineers because it had helped design and diffuse the Ethernet standard [9]. By the mid–1990s it had grown larger, establishing subcommittees for many areas, ostensibly to extend the range of uses for Ethernet.

Subcommittee 802.11, like all subcommittees of this broad family of committees, concerned itself with a specific topic, in this case, designs for interoperability standards to enable wireless data traffic using the Ethernet protocol at short ranges. As with all such committees, any standards emerging from these discussions were not legally binding on industry participants, but the committee was formed with the hope that such a standard could act as a focal point, helping different firms make equipment that was interoperable, such as routers and receivers. As with most such committees, it tried to involve members who brought appropriate technical expertise and who represented the views of most of the major suppliers and users of the type of equipment in which this standard would be embedded. Since participation was voluntary it might be appropriate to say, broadly, that participants showed up to learn about what others were proposing and because many wanted new products to emerge from their efforts.

At first the designers focused on the needs of big users of local area network technologies (e.g., FedEx, UPS [United Parcel Service], Wal–Mart, Sears, and Boeing), who, it was believed, would find valuable uses for short–range wireless Ethernet, such as in large warehouses with complex logistical operations. To be clear, there were many potential business applications for this standard and focusing on any of them was not a bad idea at all, since it is often a smart strategy to focus development on a valuable use or user with a history of tolerance for the technical challenges affiliated with being an early adopter. At the same time, in this sense, the original charter and motivation for this committee was somewhat narrow, not focused on what eventually became a large market in homes and public spaces, such as coffee shops. Related, and broadly speaking, what happened next fits a category of unanticipated learning that Rosenberg labels (paraphrasing) “an invention motivated by a specific application that unexpectedly finds broader use.” [10]

Events proceeded as follows: The committee first proposed a standard in 1997 that received many beta uses, but failed to resolve many compatibility problems, among other technical issues. What came to be known as 802.11a was ratified in 1999. At the same time the committee published Standard 802.11b, which altered some features (changing the frequency of spectrum it used, among other things). The latter caught on quickly and eventually widely.

Because many vendors had experimented with earlier variations of this standard, the publication of 802.11b generated a vendor response from those who were already making equipment — and others soon thereafter. As it turned out, it also generated a response from Internet enthusiasts, who at the time began using this equipment in a variety of settings: campuses, buildings, public parks, and coffee shops. Unsurprisingly, vendors tried to meet this demand as well.

Around the same time as the publication of 802.11b, firms who had helped pioneer the standard — including 3Com, Aironet (now a division of Cisco), Harris Semiconductor (now Intersil), Lucent (now Agere), Nokia, and Symbol Technologies — formed the Wireless Ethernet Compatibility Alliance (WECA). WECA branded the new technology Wi–Fi. This was a marketing ploy for the mass market, recognizing that “802.11b” was a much less appealing label. The aim was clear: nurture what enthusiasts were doing and broaden it into sales to many users.

WECA also arranged to perform testing to ensure equipment conformed to the standard, certifying interoperability of antennae and receivers made by different firms, for example. This is quite valuable when the set of vendors becomes large and heterogeneous, as it helps to maintain maximum service for users with little effort on their part. The earliest experience with 802.11 had reiterated the importance of such activity. In other words, while the IEEE committee designed the standard, a different body performed conformance testing. This activity further promoted interoperability between equipment from different vendors, which made sure an issue with the earliest designs did not reappear.

Events then took on a momentum all their own. Technical successes became widely publicized. Numerous businesses began directed experiments supporting what became known as “hot spots,” an altogether innovative idea. A hot spot in a public space could be free, maintained by a building association for all building residents, for example, or supported by a café or restaurant trying to serve its local user base. It also could be subscription–based, with users signing contracts with providers. Both approaches became common. The latter would appear at Starbucks, for example, which subcontracted with T–Mobile to provide the service throughout its cafés.

A hot spot was a use far outside the original motivation for the standard. However, as long as nothing precluded this unanticipated use from growing, grow it did. It grew in business buildings, in homes, in public parks and in a wide variety of other settings, eventually leading the firms behind HomeRF to give up. The growing use of Wi–Fi raised numerous unexpected technical issues about interference, privacy, and rights to signals. Most of these did not slow Wi–Fi’s growing popularity. Web sites sprouted up to give users, especially travelers, directions to the nearest hot spot. As demand grew suppliers gladly met it [11]. As in a classic network bandwagon, the growing number of users attracted more suppliers and vice versa.

Unlike the prior examples of directed economic experiments, no single firm initiated the economic experiment that altered the state of knowledge about how best to operate equipment using IEEE standard 802.11b. However, as in those examples, many firms responded to user demand, demonstrations of new applications, tangible market experience, vendor reactions to new market situations, and other events that they could not forecast but that yielded useful insights about the most efficient business actions to generate value.

Directed experiments built on top of undirected

Later events in the development of Wi–Fi illustrate how directed learning can build on top of an undirected economic experiment. Specifically, in recognition of Wi–Fi’s growing diffusion, WECA renamed itself the Wi–Fi Alliance in October 2002. At about the same time, Intel announced a large program to install wireless capability in its notebooks, branding it Centrino. By this point, a new upgrade, 802.11g, was coming to market with high expectations.

This Centrino program is easy to misunderstand. Embedding a Wi–Fi connection in all notebooks that used Intel microprocessors did not involve redesigning the Intel microprocessor, i.e., the component for which Intel is best known. It involved redesigning the motherboard for desktop PCs and notebooks, adding new parts. This came with one obvious benefit, eliminating the need for an external card for the notebook, usually supplied by a firm other than Intel, and installed by users (or OEMs) in an expansion slot.

Intel had crept into the motherboard business slowly over the prior decade as it initiated a variety of improvements to the designs of computers using its microprocessors. Years earlier Intel designed prototypes of these motherboards and by the time it announced this program, it was making some, branding them, and encouraging many of its business partners to make similar designs. To be clear, Intel did this for a variety of reasons having to do with its own forecasts about what was most valuable in the PC market and how it could help the entire value chain improve [12]. Enabling “wireless Ethernet” was not part of the grand Intel strategy when this started in the early 1990s in any explicit form, though it was not precluded either.

Centrino diffused into a mix of support, ambivalence and hostility in the value chain. Intel’s motherboard designs increased the efficiencies of computers, but that benefit was not welcomed by every OEM who assembled PCs. As Intel’s design came to be employed more frequently, it eliminated some differences among OEMs and other component providers. Many of these firms resented losing control over their designs and losing the ability to strategically differentiate with their own designs. Other OEMs liked the Intel design, since it allowed them to concentrate on other facets of their business.

Intel hoped that its endorsement would increase demand for wireless capabilities within notebooks by, among other things, reducing weight and size, while offering users simplicity and technical assurances in a standardized function. It also anticipated that the branding would help sell notebooks using Intel chips and motherboard designs instead of AMD’s. Antenna and router equipment makers further anticipated it might help raise demand for their goods.

Intel ran into several snafus at first, such as insufficient parts for the preferred design and a trademark dispute over the use of the butterfly, its preferred symbol for the program. Also, and significantly, many motherboard suppliers, card makers, and OEMs (original equipment manufacturers) did not like Intel’s action, as it removed some of their discretion over the design of notebooks.

Only Dell was able to put up any substantial resistance, however, insisting on selling its own branded Wi–Fi products right next to Intel’s, thereby supporting some of the card makers. Despite Dell’s resistance, the cooperation from antenna makers and (importantly) users helped Intel reach its goals. By embedding the standards in its products, Intel made Wi–Fi, or rather Centrino, easy to use, which proved popular with many users.

Intel’s management liked the outcome, learned many things from their experience, and initiated several new projects as a result. Intel’s management invested in further related activities, such as writing upgrades in IEEE committee 802.11 (to 802.11n) and writing an upgrade to a whole new wireless standard for longer ranges (to 802.16, a.k.a. Wi–Max, and related, 802.20).

This example illustrates the array of deliberate firm activities taken during a short period, building on top of learning from an earlier undirected economic experiment. The activities in IEEE Committee 802.11 ended up touching the activities of many other firms, such as equipment manufacturers, laptop makers, chip makers, and coffee shops, which then shaped new activities at the committee as well. That change in purpose altered many business plans, such as investments in equipment design and distribution, as well as marketing campaigns. More to the point, undirected economic experiments often can and do involve at least some directed experiments as well. In this case an undirected economic experiment took an entirely new direction after a large firm decided that it could invest in shaping events in ways that served its commercial needs.



Spreading experiments

We have seen from several examples that few lessons learned from these experiments stayed at a single firm. Rather, economic experiments generated lessons that spread, and additional experiments built on the prior ones. Though no conscious collective purpose guided the process at every step, it displayed many systematic features, which we now discuss.

What happens as lessons spread to firms that did not conduct the experiment in the first place? Two types of patterns ensue, depending on whether the lessons arose from directed or undirected economic experiments.

Consider directed economic experiments. There are many types of lessons: those that guide firms to avoid mistakes in the future, those that guide firms to invest in services with positive returns, those that inform firms about customer needs, and so on. Most such experiments helped a firm understand the value of some unknown aspect of its business activities. Whether or not the same lesson applied to another firm was secondary to the private motivation for conducting the experiment.

A lesson is most valuable for private purposes when only one firm makes use of it. It gains its value by generating more revenue through improvement of an existing service, by enhancing profits through lower costs, or by enhancing pricing power through making the firm different from its nearest rivals. In general, a lesson loses some of its value as it becomes more common, that is, as many firms put it to use. In particular, it loses the part of its value that made a firm unique, because spreading eliminates differences across firms. The loss of that type of value motivates firms to try to prevent some types of lessons from spreading. For reasons discussed below, such efforts typically fail completely or work for only a short period.

Many lessons do not lose much value as they spread. For example, a lesson about how to lower the cost of service in a rural location may help another firm in another rural location, yet carry few, if any, competitive implications. More broadly, a lesson useful for one firm typically has features that make it valuable for others, and its spread may not have large competitive implications.

Looking more closely, distinct types of lessons — here labeled technical, heuristic, and complex lessons — exhibit different patterns of spreading. Technical knowledge pertains to the design of a piece of equipment. Heuristic lessons combine technical knowledge with operational knowledge about how employees behave within firms and how customers react to firm behavior. Complex lessons are marketing and operational lessons that involve many functions inside an organization. For reasons discussed below, virtually all technical lessons tend to spread rather fast, as do some heuristic lessons, though not all of them. Complex lessons display a much wider variance of spreading speeds.

In the case of undirected economic experiments, many of the same observations hold, except that typically no single firm conducted the economic experiment that yielded the lessons. Hence, no single firm may act to prevent the spread of lessons that alter comparative value, so spreading occurs almost by definition. The events involve actions at multiple firms, and, as one firm monitored the experience of another, these actions interplayed with one another. In most practical circumstances, multiple firms know the technical and heuristic lessons because all participating firms monitor market events or participate in them, whether those events involve the demonstration of a new technology or the rollout of a new service.

While numerous firms contribute to the emergence of an undirected experiment, the open question concerns how many others the lessons will spread to. Generally, the answer is “all market participants who make the effort to learn the pertinent lessons.”

In both directed and undirected economic experiments, the spreading of a lesson changes its role. The lesson becomes part of an industry’s accumulated knowledge base, what Richard Nelson has called an industry’s know–how [13]. The lesson remains valuable in that second role because the entire knowledge base seeds and supports more experiments. However, there is no longer any distinct difference between the know–how at one firm and another. In that sense the lesson becomes more common as it spreads.

Technical lessons spread to many locations and firms because most computing and electronics markets in the United States display dispersed technical leadership [14]. Markets with dispersed technical leadership are markets where small or medium–sized teams of technically skilled engineers possess the knowledge to quickly comprehend technical issues and contribute to valuable activities. Participants in such markets take for granted that any small team of technically skilled personnel can learn the latest publicly available technical information in their industry. As a practical matter, existing firms either already possess such teams or can assemble them by reassigning employees; if such employees are not yet on staff, they are easily hired from labor markets.

Dispersed technical leadership cannot exist without many market participants informing each other. While industry conferences, consulting reports, and trade magazines have always informed market participants, today Web pages and community/industry forums supplement them. Any reasonably sized product market attracts an abundance of product reviewers and bloggers who track gossip about business initiatives and point out design flaws or triumphs.

The United States in the 1990s and beyond happened to be a time and place in which many technically skilled people lived in many locations. Fast communication among such people about new technical developments can produce the same developments in many locales across wide geographic spaces, sometimes quickly. This means the lesson from an economic experiment in one location can become known to decision–makers in other locations, again, sometimes rather quickly. It also means the accumulation of lessons involves learning done in many locations.

Fast communication has one other consequence. Because firms monitor each other, they end up imitating one another, even when they are not in close competitive contact. It may even appear as if firms are acting in concert as they imitate one another — for example, as many coffee shops in vastly different locations did when they each installed hot spots. Such imitation also may take place over long distances, as a result of firms in different locations monitoring one another — for example, as small ISPs tended to do with one another. More interesting, because the process involves lessons arising out of the experiences of a variety of firms taking a variety of approaches, the accumulated lessons may become larger than any single firm would have or could have developed on its own.


About the author

Shane Greenstein is the Elinor and Wendell Hobbs Professor of Management and Strategy in the Kellogg School of Management at Northwestern University.
E–mail: greenstein [at] kellogg [dot] northwestern [dot] edu



I am grateful for useful conversations with Brian Kahin, Scott Stern, and Joel West. I am responsible for all errors.



1. As of 2004, Internet access markets generated US$24B in revenue, not counting online auctions, advertising, hosting and myriad other online activities.

2. There is a vast literature on firm learning during the evolution of industries. See, e.g., Nelson (2007), Rosenberg (1982), Stern (2005), Utterback (1994).

3. See Gomory (1997) for a discussion about why exposing this model to scrutiny would help eliminate the mental monopoly it held on the actions of many managers.

4. See Downes and Greenstein (2002) for a description of the dial–up market, or Downes and Greenstein (in press) for an analysis for why some areas had more entry than others.

5. See, e.g., Rosston (2006).

6. See Swisher’s (1998) account of this crisis.

7. This strategy turned out to be effective for entry, but not for a sustainable business. Eventually, after growing the service to several million users and then merging with another firm, Juno, the organization adopted a different pricing contract, one with a minimal charge.

8. For speculation about why homeRF failed, see, e.g.,

9. The story of the growth of a local area network market around the activities in committee 802 is well told in Burg (2001).

10. See e.g., Rosenberg (1982).

11. For example, in high–density settings there could be interference among the channels, or interference with other users of the unlicensed spectrum reserved by the FCC, such as cordless telephones. The diffusion of so many devices also raised questions about norms for paying for access in apartment buildings, from neighbors, and others. See Sandvig (2004) on the latter.

12. For analysis of Intel’s investments in different projects and why it chose to invest heavily in some complementary technologies and not others, see, e.g., Gawer and Henderson (2007).

13. See Nelson (2007).

14. See Bresnahan and Greenstein (1999) for a discussion of the role of dispersed commercial leadership in the development of platform strategies during the evolution of the computer industry.



Timothy Bresnahan and Shane Greenstein, 1999. “Technological Competition and the Structure of the Computer Industry,” Journal of Industrial Economics, volume 47, pp. 1–40.

Urs von Burg, 2001. The Triumph of Ethernet: Technological Communities and the Battle for the LAN Standard. Stanford, Calif.: Stanford University Press.

Tom Downes and Shane Greenstein, in press. “Understanding Why Universal Service Obligations May Be Unnecessary: The Private Development of Local Internet Access Markets,” Journal of Urban Economics.

Tom Downes and Shane Greenstein, 2002. “Universal Access and Local Commercial Internet Markets,” Research Policy, volume 31, pp. 1035–1052.

Annabelle Gawer and Rebecca Henderson, 2007. “Platform Owner Entry and Innovation in Complementary Markets: Evidence from Intel,” Journal of Economics and Management Strategy, volume 16, pp. 1–34.

Ralph Gomory, 1997. “The Technology–Product Relationship: Early and Late Stages,” In: Michael L. Tushman and Philip Anderson (editors). Managing Strategic Innovation and Change: A Collection of Readings. New York: Oxford University Press, pp. 383–394.

Richard Nelson, 2007. “On the Evolution of Human Know–how,” mimeo, Columbia University.

Nathan Rosenberg, 1982. “Economic Experiments,” In: Inside the Black Box: Technology and Economics. New York: Cambridge University Press.

Gregory L. Rosston, 2006. “The Evolution of High–Speed Internet Access 1995–2001,” mimeo, SIEPR, Stanford University, at, accessed 7 June 2007.

Christian Sandvig, 2004. “An Initial Assessment of Cooperative Action in Wi–Fi Networking,” Telecommunications Policy, volume 28, numbers 7/8, pp. 579–602.

Kara Swisher, 1998. AOL.COM: How Steve Case Beat Bill Gates, Nailed the Netheads, and Made Millions in the War for the Web. New York: Times Books.

Scott Stern, 2005. “Economic Experiments: The Role of Entrepreneurship in Economic Prosperity,” In: Understanding Entrepreneurship: A Research and Policy Report. Kansas City, Mo.: Ewing Marion Kaufman Foundation, at, accessed 7 June 2007.

James M. Utterback, 1994. Mastering the Dynamics of Innovation: How Companies Can Seize Opportunities in the Face of Technological Change. Boston: Harvard Business School Press.




This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License

Economic Experiments in Internet Access Markets by Shane Greenstein
First Monday, volume 12, number 6 (June 2007),