This paper establishes a context for the work of Eric Raymond and his description of the Linux phenomenon by examining the emerging science of complex adaptive systems pioneered by John Holland, Christopher Langton, and Robert Axelrod, among others. Raymond's evolutionary view is given an extended and more formal treatment in the terms of chaos and complexity, and chaos and complexity in the terms of sociology. In addition, this paper presents an ethnographic account of Linux, amassed from a series of electronic mail interviews with kernel developers. These interviews examine Linux as a social phenomenon, one that has prompted wide interest and become a subject of heated discussion. Comments and feedback on this paper can be found at http://www.cukezone.com/kk49/linux/contents.html.
Chapter 1
"The centralized mindset is deeply entrenched. When people see patterns and structures, they instinctively assume centralized causes or centralized control. They often see leaders and seeds where none exist. When something happens, they assume that one individual agent must be responsible."
- Mitchel Resnick, Turtles, Termites, and Traffic Jams
We take our eyes for granted. For all practical purposes, the eye is but an optical instrument, and nothing more. It is not unlike a conventional camera in its conception of design, consisting of a lens and a diaphragm that adjust the focus and control the amount of light coming through. In this modern era, we are more likely to be enthralled by the advancing technologies of digital cameras than to marvel at the very existence of the eye. We fail to appreciate the idea that few things in the natural and artificial history of the Earth have perhaps achieved the complexity and the functional elegance of the human eye. But beneath its simple and unassuming appearance unfolds a micro universe of awesome complexity beyond ordinary imagination. The retina at the back of the eye, directly behind the lens system, contains dense layers of photosensitive cells - 125 million in each eye - each with an intricate internal architecture specialized for receiving and processing single photons of light. The data from the photo cells are converted to electric impulses by the ganglion cells in the adjacent layer and passed on to the optic nerves that feed the brain. The result is a vision-recognition system of such elegance and precision that its functionality is yet to be matched by today's most advanced science.
Since the earliest days, the story of the eye has been told and retold by every generation of critics challenging the plausibility of Darwinian evolution - for how could a structure as complex and sophisticated as the human eye be shaped by nothing but brute forces of nature? Even before science had uncovered the precise anatomy of the eye, Darwin himself confessed to an American friend that "[the] eye to this day gives me a cold shudder." For decades after Darwin's first postulation, scientific communities and the general public alike resisted the notion that mankind was but an accident of nature. Instead, the moral and intellectual climate of the time and long thereafter continued to favor the so-called watchmaker theory of the eighteenth-century theologian William Paley, claiming that all living forms must have been designed purposefully, just as a watch is created not by chance but by a trained craftsman. It was well into the twentieth century before proponents of evolution gained ground in society with more robust theories and evidence revealing not only the indelible traces evolution has left behind, but also its blind yet omnipresent force in shaping the face of nature. Today, evolution commands firm and almost universal support in contemporary life science.
The Linux Operating System
The theory of evolution is perhaps yet to find an avid audience outside the biological sciences - which is not to say, however, that one might simply dismiss the close parallels between the human eye and the Linux operating system. Consider its history: Linux was first written in 1991 by Linus Torvalds, then a 21-year-old computer science student at the University of Helsinki in Finland, who was interested in writing a Unix-like system for his own PC. As he humbly noted, it was only a personal project - "nothing big and professional" - but after several months, he had successfully, if somewhat accidentally, written a working kernel. Although there was still much to be done, the project drew the immediate attention of curious programmers when Torvalds announced it to a newsgroup on the Internet, due in large part to the fact that the entire source code was available for free download to anyone interested in using and modifying the system. By releasing the source code for free, Torvalds quickly found help, support, and feedback from other programmers, many of whom were enthusiastic enough about the project to provide actual patches of code for bug fixes or new functions. Even after the first official version was released in 1994, changes were being made on a daily and weekly basis as Linux continued to mature into a powerful and versatile operating system.
To be sure, Linux is neither the first nor the only "open-source" software. From the very beginning of computer science, it has been a common practice among researchers, students, and engineers to share source code. More recently, Richard Stallman has led the Free Software Foundation (FSF) since its founding in 1984 in the development of a number of important and still widely used programs, including a text editor (Emacs - "the one true editor") and a multi-platform compiler (gcc), whose source code remains open to this day. The idea of an open-source operating system had also occurred to Stallman and many others well before the Linux project.
Linux nonetheless deserves a special place within the history of open-source software for several reasons. First, its stature as a first-class operating system places Linux at the pinnacle of computer system design. An operating system is the software foundation upon which all user applications are executed. Based on remarkably complex instructions, it controls the hardware and manages the flow of information and communication among various components of the computer by allocating the computer's system resources to programs in execution. In the case of Linux, the kernel alone consists of close to one million lines of code, with millions more in the peripheral programs that make up a commercial distribution of Linux. Building software as complex as an operating system is a daunting task even for the largest corporations, let alone part-time hackers scattered across the Internet. The analogy of the eye to Linux in this respect is particularly suggestive.
Second, the size of the Linux project is simply unprecedented in the history of software development. At times, thousands of programmers have volunteered their time and effort in the daily development of the numerous components and functions that comprise the operating system. According to one estimate, the project has involved over 40,000 people worldwide. Needless to say, none of these volunteers are financially compensated for their contributions. Nonetheless, the Linux project has grown to its immense size without formal organization around a central authority. Fred Brooks, an authority on software engineering, found that, due precisely to the rising costs of coordination, production time increases exponentially with the number of developers. In the corporate world, Brooks' Law has more or less become a truism. In the case of Linux, however, there has never existed a centralized organization to mediate communication between Torvalds and the thousands of contributors, nor are there project teams with prescribed tasks and responsibilities to which individual contributors are specifically assigned. Instead, from the beginning, it has been left to each person to decide what to work on at the moment, even at the potential risk of coordination difficulties. And yet, for all that, the Linux project has proceeded at a remarkable speed, with updates and version releases on a daily and weekly basis at times. For comparison, consider that Linux was conceived five years after Microsoft began the development of Windows NT. But already, Linux is considered a competitive alternative to NT. Here, then, the relation between the eye and Linux is one of contrast rather than strict analogy: whereas the eye evolved slowly and gradually over millions of years, Linux developed at an exceptional speed, even by the standards of today's technology. In this light, it is all the more remarkable that the Linux project succeeded as well as it did.
Finally, despite the complexity of the design, the size of the project, and the rate of development, Torvalds and his co-developers have succeeded - to say the least - in writing an operating system as powerful as Linux is today. It is said to have surpassed Microsoft Windows in many aspects of performance, including reliability, a quality often most desired yet equally difficult to achieve in operating systems. It is no surprise that, today, Linux boasts an installation base conservatively estimated at more than three million users worldwide, a figure that took Microsoft twice as long to reach.
Although the quality of the Linux kernel is significant in itself, what is particularly significant is the fact that Linux has achieved a standard of quality above that of proprietary software. Although Torvalds is a gifted programmer himself, as are many of his co-developers, it is one thing to produce a high-quality product for themselves to enjoy, and quite another to build a system that outperforms commercial products. And we must not underestimate the quality of corporate work forces, either. To say that Microsoft, for example, is just as capable of recruiting competent engineers is perhaps an understatement. Considering the immense resources Microsoft commands, it is difficult, and surely unfair, to simply dismiss the merits of monolithic organization. If one were to uphold Paley's discourse of creationism by a seasoned Watchmaker in software engineering, Microsoft is perhaps its best promise - and an impressive one at that. And yet, the fact remains that it was a group of part-time volunteers scattered across the Internet with no binding responsibilities, bound only by common interest, that turned out Linux, the operating system par excellence.
The analogy of the eye is by now self-evident. Like the eye, Linux has dazzled engineers, users, and critics alike with its immense complexity and extraordinary performance. And like the eye, its existence, as I wish to show, owes as much to accidental luck as to ingenious hack. It is a story of something out of nothing, a powerful testimony to the power of evolution. The next section lays out the evolutionary context of the Linux project in greater depth.
The Cathedral and the Bazaar
Among the very first to note the evolutionary dynamics of the Linux project was Eric Raymond, a long-time hacker with more than ten years of involvement in open-source projects prior to Linux. His interest in the Linux project stems from his own amazement at its unconventional development model when it caught his attention in 1993. As he recalls: "[the success of] Linux overturned much of what I thought I knew [about software engineering]." Since then, Raymond has taken it upon himself to reveal the curious dynamics of the Linux project. His keen analysis, summarized in a series of much-celebrated essays, owes much to several years of extensive participant observation and his own experimentation with a personal open-source project.
The title of his acclaimed essay, The Cathedral and the Bazaar, is a metaphorical reference to two fundamentally different styles of software engineering. On the one hand, common in commercial development, is the Cathedral model, characterized by centralized planning enforced from the top and implemented by specialized project teams around structured schedules. Efficiency is the motto of the Cathedral. It is a sober picture of rational organization under linear management, of a tireless watchmaker fitting gears and pins one by one as he has for years and years. On the other hand is the Bazaar model of the Linux project, with its decentralized development driven by the whims of volunteer hackers and little else. In contrast to the serene isolation of the cathedral from the outside, the bazaar is the clamor itself. Anyone is welcome - the more people, the louder the clamor, the better it is. It is a community by the people and for the people, a community for all to share and nurture. It also appears chaotic and unstructured, a community where no one alone is effectively in charge. Not all are heard or noticed, and not all are bound to enjoy the excitement. For others, however, the bazaar continues to bubble with life and opportunity.
However improbable it might seem that the chaotic, frenzied activities of the bazaar could ever outdo the top-down coordination of the cathedral, that is exactly what happened in the case of Linux. Raymond attributes the success of the Linux project to several important characteristics of the bazaar, summarized in the following imperatives:
"As useful as it is to have good ideas, recognizing good ideas from others can be even more important and productive. Build only what you need to. Be modular. Reuse software whenever possible.
Prepare to change your mind, and be flexible to changing your approach. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong. "Plan to throw one away; you will, anyhow."
Feedback is key to rapid, effective code improvement and debugging. Release early and often. Treat your customers as co-developers, and listen to their feedback. Treat your beta testers as your most valuable resource, and they will become your most valuable resource.
Peer review is essential. Given enough co-developers, problems will be characterized quickly and the fix will be obvious to someone: "Given enough eyeballs, all bugs are shallow.""
Most fundamentally, what these attributes suggest is the importance of frequent version updates. Conventionally, updates are released no more than once or twice a year, only after a majority of the bugs have been fixed, so as not to wear out the patience of the users. In comparison, updates of the Linux kernel were released as often as every day or every week. But rather than wearing out the users, the frequent updates not only kept the developers stimulated, but also exposed bugs directly to the eyes of the users for quick detection. Raymond elaborates:
"In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect. In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena - or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release."
Much of the same process is also at work in the development of the code. Given a user and developer base as large as the Linux community, Torvalds can safely rely on others to write and test the code for him. More likely than not, he can also expect multiple versions of the code from people who are more knowledgeable or interested in the specific component than he is. His task, then, is to simply select the best code to incorporate into the next release of the kernel.
As for the co-developers, the development of the kernel essentially amounts to a competitive process, in which they each seek to contribute the best code they can. Although none of them are financially compensated, Raymond argues that they derive enough satisfaction from the reputation their effort and skill earn them from one another. In effect, ego satisfaction through reputation becomes directly tied to the self-interest of the individual hackers, which in turn drives the Linux project.
If so, the story of the human eye is indeed telling. As Raymond argues, "the Linux world behaves in many respects like a free market or an ecology, a collection of selfish agents attempting to maximize utility which in the process produces a self-correcting spontaneous order more elaborate and efficient than any amount of central planning could have achieved."  Thus, in the same way that forces of nature created the human eye, the selective processes of parallel development and debugging eventually shaped line after line of code into a coherent and stable operating system of complexity and quality far beyond ordinary human design. What is really behind the dynamics of the Linux project, then, is evolution itself.
Linux: engineering from the bottom-up
The notion of evolution certainly goes far toward placing Raymond's astonishment at Linux alongside what evolutionists felt for the eye. For Raymond, evolution is more than a convenient analogy for the improbable development of Linux. In his rhetoric, one notes an assured tone: Raymond is referring to evolution in its true sense and form. And indeed, there has been a legitimate claim, with which I am sympathetic, that evolution is not only a biological phenomenon, but more generally, a law of probability that applies broadly to changes in non-biological populations as well. And yet, relatively little is understood about the mechanisms of non-biological evolution. Is it indeed more than an analogy? Can we safely apply the concepts of organic evolution to Linux? How does non-biological evolution differ from biological evolution? And ultimately, how does evolution, with its random and blind forces, create something as complex and yet powerful as Linux? Considering how much light evolutionary theory has cast over the hidden alleys of life science, these questions seem worth considering further than Raymond has. I feel, indeed, that there is much more to be said and learned about the Linux project as well as the basic nature of cultural epistemology by considering the issues more critically in the light of evolutionary theory. Moreover, to the extent that the theory of biological evolution has provided a conceptual framework for the development of organic life at all levels, from the unicellular architecture to the largest ecosystems, I suspect that a theory of cultural evolution will help us develop an equally broad and diverse perspective on the emergence of knowledge, from the "microscopic" level of the source code to the macro world of the Linux community as a distinct cultural group in the larger society. To this end, we will consider the fundamental concepts of evolution as the theoretical basis for evaluating the Linux project.
In particular, we will examine the theory of memetics, which has been gaining attention in certain segments of the scientific community as a promising application of ideas from genetics to cultural evolution. Memetics seems to promise a rich framework for our analysis as well.
Within the scope of our analysis, perhaps the most significant implication of Linux is as follows: Linux effectively challenges the top-down world-view - what Mitchel Resnick calls the "centralized mindset" - that is so deeply entrenched in our everyday thinking. Take the example of the free market. For more than a century, economists subscribed to the invisible hand of Adam Smith, hoping to describe everything from the behavior of the market to that of the individual actors according to the simple law of equilibrium. In reality, however, the market is more unstable and unpredictable than we often expect, and vastly more complex than suggested by the law of supply and demand. Likewise, we still struggle to understand how something as complex as the human eye could have emerged from rudimentary forces of nature, without a mastermind designing its exact shape and function.
In short, it is psychologically appealing to view aggregate phenomena in terms of simple universal laws of nature, but such a point of view neglects the true complexity of reality. The case of Linux suggests, instead, that aggregate behaviors are better understood as emergent properties - properties that emerge out of micro-level interactions among constituents - than as qualities intrinsic to the system and independent of its constituent parts. Moreover, systems that allow open and active interaction can be more adaptive and more dynamic than closed systems that restrict their constituents. In the case of Linux, nothing in its architectural design was particularly innovative or revolutionary. To the contrary, the monolithic design of the Linux kernel was deemed by many to be backward, a major setback for any portable system. For others, Linux was just another Unix clone. What set Linux apart from other operating systems in the level of quality and performance, Raymond submits, is not anything inherent in the original architecture, but its open and evolutionary development.
Before we qualify the Linux project as a complex system, however, there are several questions that must be answered. First and most obviously, is the Linux project really a "bazaar" as Raymond claims? Although the Cathedral-Bazaar analogy is convenient, it might be too simplistic. Moreover, in the words of Jonathan Eunice, The Cathedral and the Bazaar simply "assumes an open-is-good/closed-is-bad worldview," and as a result, Raymond confuses "development mechanisms and methodology with goals and philosophy." In fact, many of the attributes of the bazaar Raymond identifies - flexibility, parallel development, peer review, feedback loops - do not necessarily require open source code. And likewise, Eunice notes that Microsoft and other cathedral-builders are equally able to - and already do - incorporate these attributes in their development schemes.

If so, one is left to wonder whether the Linux project is indeed as open as it claims to be. As the de facto leader of the project, Torvalds exercises a degree of autonomy and discretionary power in making final decisions throughout the development of Linux. At the same time, the actual degree of his involvement in decision-making processes is not at all clear from Raymond's essay. How, then, are decisions made in the Linux community? Are there other leaders entrusted with decision-making authority? And more generally, how is the Linux project different, qualitatively and quantitatively, from cathedral-building? Where do we in fact draw the line between the Bazaar and the Cathedral?
These are questions that can be approached through a variety of methods, including participant observation, interviews, and literature reviews. Mailing lists and Usenet groups are particularly useful for observing the actual discussions and interactions that take place day-to-day in the Linux community. One is most likely to encounter a variety of qualitative and quantitative differences, many of which may be idiosyncratic attributes of the Linux community rather than categorical differences that distinguish the Cathedral and the Bazaar. One decisive difference to look for, however, is whether Linux is created from a preconceived design implemented by a prescient watchmaker with a purposive mind, or from a primordial techno-form, modified incrementally and experimentally by many hands without sight of its path. To what extent does Torvalds direct the course of the development, and to what extent does he determine in advance what needs to be written and modified?
If the Linux project turns out to be as open as it is purported to be, one must also question why it has not to this day collapsed under the weight of growing complexity and chaos. Two related issues are in order. The first regards motivation: What motivates people to contribute - and keep contributing - to the Linux project when they are not paid for their contribution? And why do they help each other when they are essentially strangers, linked together by nothing more than the tenuous ties of the Internet? As we saw, Raymond's answer is reputation. People volunteer their time and effort to the project in return for the respect they earn from their peers. Yet he notes elsewhere that there is a strict taboo in the hacker culture against self-promotion and ego-driven posturing, as summarized by the dictum: "you become a hacker only when other hackers call you a hacker."  How do we reconcile this paradox? I will argue that the reputation game is also a reinforcement mechanism sustained in evolutionary processes of interaction and embedded in a distinct cultural system within a certain political structure of the society. In this regard, reputation is not an end in itself for individual hackers, but part of a more general pattern of social and political landscape for the hacker community at large. We will see that the theory of the New Class, based on the Marxist idea of class conflict, provides one such scenario for understanding the Linux community at a larger level.
The second issue concerns coordination: How do contributors coordinate their efforts without rigorous planning under centralized authority? The importance of coordination cannot be stressed enough. On the one hand, people can be motivated and still fail to orchestrate their efforts effectively. On the other hand, coordination is also critical to explaining the quality of Linux in the very sense that quality, as we discussed earlier, is an emergent property of coordinated efforts. Coordination is hence a crucial element sustaining collective efforts, lending the Linux project the integrity that holds together its seemingly chaotic yet infinitely creative processes of evolution. In particular, the issue of coordination highlights an important and intriguing aspect of evolution that is generally overlooked: self-organization.
Popular views of evolution tend to focus on the random and arbitrary nature of mutation and genetic recombination while paying much less attention to the patterns of order underlying evolution. It has been observed, however, that adaptive populations are just as capable of organizing themselves into coherent units and sub-units as they are of changing and mutating. This is not to say that there exists a certain global order or a natural equilibrium towards which our universe necessarily converges. To the contrary, self-organization is spontaneous and self-imposed, often as arbitrary as mutation. At the same time, self-organization and mutation are two faces of the random process that evolution is - the yin and yang of Mother Nature. Whereas mutation and recombination propel evolution toward what John Holland calls "perpetual novelty," self-organization keeps it from spiraling out into chaos and disorder. Evolution is incomplete without both. If we qualify the development of Linux as an evolutionary process, we should also expect to see self-organization at work.
Outline of this paper
The rest of the paper is outlined as follows: Chapter 2 gives a brief history of the Linux project, enough for us to trace the general contours of its spontaneous development from its historical birth. Chapter 3 introduces the emerging field of complexity theory. After a theoretical discussion, we will see, based on personal interviews with Linux developers, that the Linux project is best understood as a complex system. Chapter 4 lays out the basic concepts of evolutionary theory and its various flavors in discussing the dynamics of Linux as an open-source project. Particular attention will be given to the idea of evolution in the context of complexity. Finally, Chapter 5 concerns the issues of motivation and coordination in the context of game theory. In doing so, we will also highlight the concept of self-organization, which has become a bedrock of complexity theory.
Chapter 2
"[Linux] started as a program for my own use. I was blown away by how many people there were with similar needs."
- Linus Torvalds, in Michael E. Kanell, interview
"Releasing the source code was the best thing I did."
- Linus Torvalds, in Wired, interview
A Brief History of Linux
The Unix operating system was first developed between 1969 and 1974 by two programmers at AT&T Bell Labs, Ken Thompson and Dennis M. Ritchie. Unlike Multics, OS/360, and other software ventures at the time, Unix was inspired by a simple-is-beautiful approach to software design and remains an outstanding example of its implementation. It was also the first operating system written in an advanced systems language called C. Operating systems prior to Unix were written instead in assembly language, which is not only difficult to learn, but also machine-dependent: programs written in assembly languages needed to be modified for each family of machines. The C language, invented concurrently by Ritchie in 1972, was a dramatic improvement over assembly languages in its simple and logical structure as much as in its portability. Programs written in C could be ported to any machine that had a C compiler, a program that translates C into machine-readable code. Over the following years, the minimalist design and the great portability of Unix quickly found growing support and acceptance within academia as well as in industry.
As a student in computer science, Torvalds was interested in the task-switching property of Unix. Quite literally, task-switching allows the operating system to switch between multiple processes running simultaneously. It is a central component of multi-tasking, which in turn is one of the most important features that distinguish Unix from DOS. Although DOS had grown in popularity among the first generations of personal computer users, it was technically no match for Unix as far as Torvalds was concerned. However, Unix was extremely expensive for personal users and much too cumbersome for the personal computers of the time. The University of Helsinki did have a limited number of machines running Unix for student use, but the hardware allowed only sixteen users at a time. Rather than wait in the long line every day to access a terminal, Torvalds decided to write his own Unix. He was twenty-one years old.
Torvalds knew he had a knack for programming. He wrote his first program in assembly language as an early teen. But writing a Unix kernel is not a casual task - not even for full-time professionals, much less for a college student. Neither could he by any means call himself a seasoned Unix expert. It was only in the autumn of 1990, the semester before beginning his first hack, that he took his first course on Unix. In the end, however, his inexperience proved fortunate: "Being completely ignorant about the size of the project, I didn't have any inhibitions about doing something stupid. I could say that if I had known, I wouldn't have started."
His early inspiration was a Unix clone called Minix. It was written by Andrew Tanenbaum at the Vrije Universiteit in Amsterdam for teaching purposes, for the sake of which he had sacrificed some essential features of standard Unix systems. Furthermore, it was copyrighted to the author, which forbade unauthorized use or modification of the source code without his permission. Nonetheless, its size - powerful yet small enough to run on PCs - and the availability of the source code at no cost struck a chord among students as well as general users. "Within two months of its release in 1987," reports Tanenbaum, "there was a newsgroup with over 40,000 users worldwide."
Torvalds came across Minix in one of his textbooks, Operating Systems: Design and Implementation, also by Tanenbaum. He recalls: "That's when I actually broke down and got a PC." He installed Minix on his machine, a PC with an Intel 80386 processor and four megabytes of memory, which thus became his scaffold for developing what soon became the first version of the Linux kernel.
The kernel of a Unix system is responsible for controlling access to the various resources on the machine, everything from the CPU to memory to network interfaces, graphics boards to mice to keyboards, and more. On the one hand, the kernel enables multi-tasking by allowing multiple applications to share the same CPU; on the other hand, the kernel prevents the processes, or the applications in execution, from interfering with each other in accessing the system resources. The term kernel describes the structure of the system, in which the kernel forms the core, surrounded by an outer layer of user-end applications - much like the pit of a fruit. The kernel in turn consists of a number of components working together as a coherent unit, most importantly virtual memory, the file systems, and the device drivers. Although the term Unix refers colloquially to the entire working environment, it is technically the kernel that constitutes the actual operating system.
To be sure, Linus never actually intended to write a complete kernel. Much of his hacking during the first six months involved writing simple task-switching programs that ran on Minix: "I made two processes and made them write to the screen and had a timer that switched tasks. One process wrote A, the other wrote B, so I saw AAAA, BBBB, and so on."  It was not long, however, before he had a simple terminal emulation program for accessing newsgroups. As he saw it, it was only a matter of modifying the As and Bs into something else - sending output from the keyboard to the modem for one, and reading input from the modem to the screen for the other. Then, in the summer of 1991, he wrote a disk driver and a small file system so that he could download files and write them to a disk. "When you have task-switching, a file system, and device drivers, that's Unix."  Linux 0.01 was born.
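The mechanism Torvalds describes can be sketched in a few lines of modern code. The following Python toy is purely illustrative - his actual implementation was in 386 assembly and C - but it captures the essence of timer-driven task-switching: two "processes" each write a letter, and a scheduler switches between them after a fixed quantum of steps, producing the alternating bursts he saw on screen.

```python
# Toy model of Torvalds' first experiment: two "processes" that each
# write a letter, and a scheduler that switches between them the way
# his timer interrupt did.  Generators stand in for real processes.

def writer(letter, count):
    """A toy process that emits one character per step."""
    for _ in range(count):
        yield letter

def round_robin(tasks, quantum=4):
    """Run each task for `quantum` steps, then switch to the next."""
    output = []
    while tasks:
        task = tasks.pop(0)
        for _ in range(quantum):
            try:
                output.append(next(task))
            except StopIteration:
                break            # task finished; do not re-queue it
        else:
            tasks.append(task)   # quantum expired; back of the queue
    return "".join(output)

# Two tasks, switched every four steps: "AAAA, BBBB, and so on."
print(round_robin([writer("A", 8), writer("B", 8)]))  # AAAABBBBAAAABBBB
```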
Version 0.01 "wasn't pretty, it had no floppy driver, and it couldn't do much of anything." It contained nothing more than the bare rudiments of a Unix kernel, and it was still hosted by Minix, i.e. the kernel could not run without Minix. Moreover, even as he announced his project to comp.os.minix, an Internet newsgroup, he had no clear intention to make his system as portable or, for that matter, as complete as a commercial Unix kernel:

Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things) [...] I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-hard disks, as that's all I have :-(
It was not long, however, before he began to entertain the idea that he could at least "chuck out Minix." By October 5, 1991, version 0.02 was able to run bash, a type of program called a shell that provides an interface for sending commands to the kernel, as well as gcc, the GNU C compiler. The source code was also released at this point:

"Do you pine for the nice days of Minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on an OS you can try to modify for your needs? Are you finding it frustrating when everything works on Minix? No more all-nighters to get a nifty program working? Then this post might be just for you.
As I mentioned a month ago, I'm working on a free version of a Minix-look-alike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be, depending on what you want), and I am willing to put out the sources for wider distribution [...] This is a program for hackers by a hacker. I've enjoyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I'm looking forward to any comments you might have. I'm also interested in hearing from anybody who has written any of the utilities/library functions for minix. If your efforts are freely distributable (under copyright or even public domain), I'd like to hear from you, so I can add them to the system." 
The response to the prospect of a free and modifiable operating system was instantly positive. Among the first few to read the message, Ari Lemmke immediately offered to host Torvalds' system for public access and download on an FTP server he maintained at the Helsinki University of Technology. Incidentally, until this point, Torvalds had felt the name "Linux" was too self-promoting and instead called his Unix "Freax" (free+freak+x). It was only at Lemmke's insistence that Linux became the official label.
Of the first ten people to download Linux, five sent back bug fixes, code improvements, and new features. It was the beginning of what became a global hack, involving millions of lines of code contributed by thousands of programmers. By the end of the year, when Linux finally became a stand-alone system in version 0.11, more than a hundred people worldwide had joined the Linux newsgroup and mailing list. The development of Linux continued at an accelerated pace even after the release of version 1.0, the first official Linux, in 1994. Updates occurred on a daily and weekly basis throughout its development. The latest kernel contains more than three million lines of code and enjoys an installation base of more than three million users worldwide.
Almost a decade since its humble beginning, the success of Linux is indeed a surprise to all, not least Torvalds himself. Despite his modest early expectations, not only has Linux surpassed the popularity of commercial Unix systems, but many regard Linux as the best brand of Unix, one in which the philosophy of Unix - its simplicity and portability - has found a new voice. It is the most widely ported system available for personal computers today, and the only system besides Microsoft Windows that is gaining market share. In the meantime, Torvalds himself has become the poster child of the open source movement.
Amidst such glitz and glamour of the sudden media attention, the obscure beginning of Linux is all the more unreal. How, in fact, did a project that started out as nothing more than a simple hobby end up with an operating system superior to anything the world's computer industry has produced? We now turn to a closer look at the dynamics and the structure of its improbable development.
Chapter 3
"The important thing is to observe the actual living economy out there. It's path-dependent, it's complicated, it's evolving, it's open, and it's organic."
- Brian Arthur, quoted in Kelly, Out of Control
A Cathedral or a Bazaar?
The Cathedral and the Bazaar metaphor has served us thus far in illustrating two contrasting approaches to software development. In today's software industry dominated by the Cathedral, Linux, with its evolutionary forces driven from the "bottom up," has suddenly risen to astonish and challenge the hegemony of the top-down rationality. But before we explore the evolutionary processes in the Linux project, several questions are in order. Where Torvalds and Linux developers are concerned, is the Linux community really a Bazaar? What is the source of its bubbling creativity? Can we define the essential properties of the Bazaar in more formal terms?
In answering these questions, we will refer to a newly emerging branch of science known as the theory of complexity. Its ideas and questions are directly relevant to our discussion of the Bazaar model.
Complexity: An emerging science
So far I have used the term "complex" rather casually in describing Linux, only to stress its seemingly implausible development. On the one hand, this might very well suffice, for most of us have an intuitive feeling of what complex implies. It might indeed be futile to try to define complexity in formal terms, for the nature of complexity depends broadly upon one's perspective. On the other hand, however, complexity has pivotal significance for our understanding of Linux as an evolutionary system, inasmuch as evolution is directly concerned with complexity. Evolutionist Richard Dawkins goes so far as to claim that, whereas physics is a science of analytical simplicity, of "ultimate origins and ultimate natural laws," biology is a science of [real-world] complexity. On this ground, it would be useful to capture more clearly what is meant by complexity and what makes complexity so special - not so much to determine for ourselves how complex Linux is, but rather to bring our attention closer to the common principles of complexity and step beyond an intuitive understanding of Linux as a complex system.
Richard Dawkins defines complexity as "statistically improbable in a direction that is specified without hindsight." By statistically improbable, he means that the object is unlikely to have arisen by chance alone. By without hindsight, he means that the particular composition of the object is functionally unique among other possible but nonfunctional combinations. Simply put, there are vastly more ways to be dead than alive. This is essentially the aspect of complexity we have been concerned with until now. For all his interest in complexity, however, it is unfortunate that he leaves us with a definition so ambiguous, devoid of more precise characterization.
In comparison to Dawkins' evolutionary biology, the science of complexity represents a more systematic effort at unveiling the basic aspects of complex systems. It draws its core ideas from a wide range of disciplines, most notably biology, economics, and computer science. The practical significance of computer science in the study of complexity is particularly critical. Before the widespread availability of personal computers, science was based on theory and experiment: a theory formulated hypotheses, and experiments confirmed or countered them. Because computing every detail of a system was plainly infeasible, however, scientific models traditionally focused on its aggregate or high-level behaviors. High-level models predict patterns like fluctuations in the stock market from fluctuations in the previous weeks, while ignoring the decisions and behaviors of individual stock brokers at the micro level.
Since the 1960s, computers have introduced a new mode of science: agent-based simulation. In a way, simulation represents a new synthesis of theory and experiment. With simulation, scientists are able to construct artificial systems based on some rules and equations (theory), fill them with autonomous decision-makers (experiment), and observe aggregate patterns emerge from their interactions - from the "bottom up." While our discussion will not rely on simulation in a direct manner, it will become clear presently that the study of complexity is centrally concerned with the idea of emergence from the bottom up. It seeks to explain how simple rules of interaction among individual agents produce outcomes that are vastly complex and often surprising.
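To make the idea concrete, consider a minimal bottom-up model, sketched here in Python. The rule and the ring topology are my own illustrative assumptions rather than a model drawn from the literature: each agent follows a single local rule - adopt the majority state among itself and its two neighbours - and yet the population as a whole organizes itself into solid blocs that no individual agent planned.

```python
# A minimal agent-based simulation: one local rule, a global pattern.
# Agents sit on a ring; 1 and 0 are two possible states or opinions.

def step(states):
    """Each agent adopts the majority among itself and its two neighbours."""
    n = len(states)
    return [
        1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def run(states, rounds):
    """Iterate the local rule and return the aggregate pattern."""
    for _ in range(rounds):
        states = step(states)
    return states

# A scattered starting pattern settles into coherent blocs:
print(run([1, 0, 1, 1, 0, 0, 0, 1, 1, 0], 5))
```

Nothing in the rule mentions blocs, yet blocs are what emerges - the aggregate order is a property of the interactions, not of any agent.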
The idea of complexity is directly pertinent to social sciences as well. Economist Herbert Simon claims that "human beings, viewed as behaving systems, are quite simple."  As Michael Macy elaborates, much of our everyday life is governed by simple rules of behavior, "in the form of norms, conventions, protocols, moral and social habits, and heuristics." For Simon, it is not in these rules but, instead, in social interactions that the human society builds its complexity of social life .
Unfortunately, the idea of complexity has been rather slow to gain a foothold in modern sociology, and I am therefore forced to refer to bodies of literature in other disciplines to anchor my thesis. In the meantime, it is my hope to show before the conclusion of my thesis that there is much a sociologist would find appealing in the bottom-up approach to understanding social phenomena, and that the reader will come to a greater appreciation of sociology and complexity theory as complementary disciplines.
For a complexity theorist, a complex system, quite simply, is a large number of diverse agents or actors interacting with each other in a great many ways . Often, these agents are adaptive decision-makers, constantly acting on and reacting to each other in parallel and evolving hand in hand - or co-evolving - as subunits of the system. At other times, these agents are non-autonomous entities, such as inanimate parts of a machine, that are nonetheless at constant interplay with each other, producing an exponentially wide range of behaviors in turn. In both cases, the agents are strategically interdependent, meaning "the consequences of each agent's decisions depend in part on the choices of others."  This rich connectivity is what underlies the highly protean nature of complex systems. In effect, a complex system forms a dynamic network of adaptive and interdependent agents, rummaging and searching ceaselessly for best behavioral responses and yet never settling into a stasis.
In other words, the agents of a complex system, while adaptive, are not truly optimizing. The reason concerns the unavailability of global information to individual agents, on the one hand, and the absence of external intervention by central authority, on the other, as a basis for determining optimal responses in interactions. As John Casti of the Santa Fe Institute elaborates,

"In [complex systems] no single agent has access to what all the other agents are doing. At most each agent gets information from a relatively small subset of the set of agents, and processes this 'local' information to come to a decision as to how he or she will act."
It is not to be assumed, however, that agents of a complex system are totally unaware of the macroscopic order or the aggregate behaviors of the system as a whole. Actors in the human world are constantly inferring the state of the system at each moment. The stock market is a familiar example. In a complex system, however, their inferences are largely inductive, conjecturing about global patterns from what is observable and definable locally.
Finally, a complex system typically consists of multiple layers of organization, with each level complementing or composing a higher level. Economist Herbert Simon calls such structures hierarchic systems. This, however, is not to be understood in the narrow sense of the word as expressing a vertical relation of authority. More generally, a hierarchic system is a collection of nested subsystems, often without any relation of subordination, and vastly more complex than the linear patterns of formal organizations.
The implications of the properties of complexity - adaptive interactions based on localized information at multiple levels of organization - are critical and often surprising. First, from the centrality of local interactions based on inductive inferences, it follows that local fluctuations and disturbances can often create cascading effects across the entire system. This idea is well captured in the famous butterfly effect: a butterfly flapping its wings in Florida causes a storm in Hawaii . According to John Casti, the nonlinear amplification of local instabilities is a defining fingerprint of a complex system .
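The butterfly effect can be made concrete with the logistic map, a standard textbook example of chaotic dynamics (my own choice of illustration, not one drawn from the sources cited here). Two trajectories that begin a hair's breadth apart end up bearing no resemblance to each other:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# At r = 4.0 the map is chaotic: tiny perturbations are amplified
# at each step until the two trajectories are effectively unrelated.

def logistic(x, r=4.0, steps=30):
    """Iterate the logistic map `steps` times from initial value x."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.2)          # one starting point
b = logistic(0.2 + 1e-7)   # a wing-flap of difference
print(a, b)                # after 30 steps, the trajectories have diverged
```

The perturbation of one part in ten million roughly doubles at every iteration; after thirty iterations the two histories share nothing but their origin.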
The sensitivity of complex systems to local conditions need not imply chaos, however. For economist Brian Arthur, this sensitivity is precisely what underlies the dynamics of positive feedback, or increasing returns - essentially, the idea that initial conditions are, by virtue of the mutual interactivity of agents and its amplifying property, self-reinforcing and self-replicating in their ensuing patterns. Consider the case of the VCR market: VHS and Beta systems were introduced around the same time and began with approximately equal market shares. Why, then, did VHS dominate the market, when Beta was considered technically superior? Because more people happened to buy VHS in the beginning, which led to larger stocks of VHS at storefronts, which led more people to buy VHS, and so on. In other words, "Them that hath gets."
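Arthur's increasing returns can likewise be illustrated with a toy Polya-urn model, sketched below in Python. The parameters are invented for illustration and have nothing to do with the actual VCR market; what matters is the mechanism: each new buyer picks a format with probability proportional to its current market share, so a small early lead feeds on itself.

```python
# A toy Polya-urn model of increasing returns: two formats start on
# equal footing, and each purchase tilts the odds of the next one.
import random

def market_share(steps, seed):
    """Return the first format's final share after `steps` purchases."""
    rng = random.Random(seed)
    vhs, beta = 1, 1                  # equal initial footing
    for _ in range(steps):
        if rng.random() < vhs / (vhs + beta):
            vhs += 1                  # a VHS sale makes the next likelier
        else:
            beta += 1
    return vhs / (vhs + beta)

# Re-running history with different random seeds locks in very
# different final shares - the early accidents decide the outcome:
print([round(market_share(2000, seed), 2) for seed in range(5)])
```

Each run is internally self-reinforcing, yet which format wins varies from run to run: the outcome is path-dependent, not predetermined by merit.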
In short, the first property of a complex system concerns the role of local interactions in producing large and far-reaching effects through nonlinear amplification of microscopic changes and activities. Despite what is suggested by the magical butterfly of chaos theory, however, positive feedback can be as stabilizing as it is destabilizing. Although small changes can alter the face of the entire system, once they settle, they can also lock the system into a new path and reorganize the agents into increasingly coherent patterns, as in the case of the VCR industry. Similarly, I will argue that self-organization has played a critical role in shaping the course of the kernel development and containing the Linux project in relative order.
A second aspect of the complex system concerns the importance of locally interacting agents in what is called the phenomenon of emergence. Colloquially, emergence refers to the idea that the whole is greater than the sum of its parts. Sociologists might be reminded of Durkheim's social fact, the idea that an external force or structure constrains the very individuals constituting it. Similarly, emergence refers to the property of local interactions generating large-scale effects that are non-existent in the individual agents. In chemistry, hydrogen and oxygen gases chemically bonded together produce liquid water. In economics, individuals make up a firm that often behaves hardly like an individual person. And for that matter, life itself is an emergent property, arising from proteins, DNA, and other biomolecules.
Emergent properties are, by their very nature, difficult to predict. They can hardly be understood on their own terms or by inferring from individual agents in isolation. They are, in other words, irreducible to local properties . Emergence must instead be modeled from the bottom up, from the complex chain of local interactions as its foundation. Yet, computing such mass behavior accurately is a task not easily done. For observers, emergence therefore adds to the system an element of ever-present possibility for surprise, such as we see in Linux .
Linux as a complex system
The Linux project can also be understood as a complex system in its own right. It is, first of all, a hierarchical system, consisting of at least two distinct levels of local interaction. At one level is the source code. Linux, like all other operating systems, consists of numerous lines of code organized into logical hierarchies of discrete units and subunits. The units of code in turn mediate each other in executing a task, forming an immense web of interdependent functions for even the most basic system operations. At the operating system level at large, this rich connectivity allows the operating system to process a wide range of input from the user and respond with an equally wide array of logical behaviors in a remarkably interactive manner.
Of course, the operating system is interactive only to the extent that it is specifically designed by its developers. And likewise, it evolves only insofar as the developers modify it. At a second level of the hierarchical structure, then, is the community of developers and users, who form their own web of locally interacting agents.
In short, our interest in the evolution of Linux is two-fold. For one, we must appreciate the sheer complexity of the operating system on its own terms. To the extent that the performance of an operating system is a close function of tightly integrated components, a seemingly innocuous bug (a digital butterfly, if you will) in a single line of code might trigger a storm elsewhere, unexpected and not easily traceable. At the same time, this logical and densely interconnected design of the operating system is to be sharply contrasted with the shapeless contours of the growing development community. In particular, we must understand the serious implications the decentralized structure of the community poses for the issues of motivation and coordination among developers. How do Linux developers manage to build a coherent operating system when nobody in particular is in charge? Analytically, Linux is therefore twice improbable - once for its technical complexity, and twice for its social complexity.
And yet, what makes the success of the Linux project all the more remarkable in practice is not the hierarchical structure per se, but the improbable combination of its technical complexity and its social complexity. Herbert Simon describes the path of an ant foraging on a beach. However complex and nonlinear the path may be, the complexity is not an indication of complexity inherent in the behavior of the ant, but a reflection of the rugged terrain. The idea is that complexity is not an innate quality of a system but an emergent property born out of the interactivity between the actor and the environment. Similarly, the complexity of the Linux operating system is a function of the growing community, its diverse needs and changing demands. What is interesting and remarkable about Linux from this standpoint is that its quality and performance came to be, not from particular design principles of the kernel alone, but much more so from the massive and localized interactivity among its developers. And as such, the complexity of Linux grew hand in hand with the growth of the community.
The localized nature of interactions in the Linux community is suggested by various patterns of development. For one, it is practically beyond any developer to keep up with the growing size and speed of the project, as well as the highly technical nature of the operating system:

"People have lives, and it takes a special person who can juggle their career and family lives to donate the time required to [L]inux. We'd all like [to], but a lot of us can't. I work a lot, I can't afford to spend a couple hours on [L]inux a day."
The linux-kernel mailing list is the central discussion forum for kernel developers, including Torvalds himself. On any ordinary day it is bubbling with activity. Zack Brown reports that the traffic on the linux-kernel mailing list can reach up to six megabytes of postings in a given week . This reflects numerous projects in progress, a wide range of new ideas, old bugs, persisting discussions, and recurring questions directly pertinent to the development of the kernel. One needs to look no further than the hustle and bustle on the linux-kernel to appreciate the metaphor of the Bazaar.
The size and the speed of the project are not the only qualities that distinguish the Linux project from the Cathedral, however. Also important are the differences in its internal organization, particularly the lack of global coordination under central authority, on the one hand, and the absence of top-down planning from design, on the other hand.
Torvalds is the undisputed leader of the Linux project. He oversees the project as a whole, keeping track of major developments and reviewing numerous patches of code submitted by his developers, from which he builds new versions of the kernel. New releases are announced on the linux-kernel mailing list:

"Hi, I just made 1.3.11 available on the normal ftp-sites (ftp.funet.fi and ftp.cs.helsinki.fi). I also have an experimental patch to go on top of that, and I'd like to know if it (a) works and (b) speeds up task-switching. Does anybody want to test this out [...]?"
Torvalds is also an obvious authority on the Linux system. On any typical day, he receives around 200 e-mail messages directly from Linux developers . His activity on linux-kernel reflects a small fraction of his presence in the actual project.
Torvalds works closely with the so-called "Inner Circle" of technical advisors, or "lieutenants," immediately around him. They are core developers who have established their status as competent programmers and proven their expertise valuable to the project through years of involvement. Opinions vary, however, as to who actually constitute the Inner Circle:

"There are a number of well trusted individuals like Alan Cox who could be said to exist in the inner circle. I'm not sure if there is a defined inner circle or not. Most likely it's a group of individuals who know the kernel inside and out and have similar goals in mind."
Others claim, in separate interviews:
"[It] is a small and well-defined group: Linus [Torvalds], Maddog Hall, Alan Cox, and somewhere between 6 and 12 others (varying at times)." 
"Watch the linux-kernel mailing list. The "Inner Circle" becomes very obvious. People go in and out of the Circle, so a list probably isn't possible [...] I would say it includes may be 2 dozen people." 
"There are many. Linus is obviously, and unquestionably at the top. There are many others down the line [...] whose positions are not so well definable."
"Within the Linux world, there's no strictly defined core team, not even a loose one[...] You can probably say that there are 100 very active kernel folks [...] but that still leaves a rather large gray area." 
The inconsistency and the ambiguity of these statements bear testimony to the fact that, despite the universal recognition of its presence and stature in the project, the Inner Circle is neither handpicked specifically by Torvalds nor clearly defined by the community. It is a group that formed naturally, without external intervention, simply by virtue of its members' involvement and expertise as acknowledged by the community. For example, Torvalds devotes most of his attention to the experimental version of the kernel, whereas Alan Cox is currently responsible for the maintenance of the stable, non-experimental version of the kernel for general users. The experimental and the stable versions refer to the pattern of kernel development, indicated by a standard numbering convention. Experimental patches and new, untested functions are applied to odd-numbered versions of the kernel, such as 1.1 or 2.3, for development, whereas only remedial bug-fixes are added to even-numbered versions for stable releases. Tim Waugh suggests that "this is more to do with Alan actively maintaining them than Linus delegating." Or, as Torvalds himself sees it, "[Alan] seems to have taken over the stable tree these days [...]."
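The numbering convention is simple enough to state as code. The helper below is hypothetical - my own restatement of the rule, not an actual Linux tool: the parity of the minor version number decides which tree a kernel release belongs to.

```python
# Hypothetical helper encoding the odd/even kernel numbering rule:
# odd minor numbers (2.3.x) mark the experimental development tree,
# even minor numbers (2.2.x) mark the stable tree.

def kernel_tree(version):
    """Classify a kernel version string by the parity of its minor number."""
    minor = int(version.split(".")[1])
    return "experimental" if minor % 2 else "stable"

print(kernel_tree("2.3.11"))  # experimental: odd minor number
print(kernel_tree("2.2.5"))   # stable: even minor number
```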
The Inner Circle, then, is more than just a convenient designation applied to a group of developers with certain recognition for their skill and experience. The members of the Inner Circle are entrusted with responsibilities over the development and the maintenance of specific components of the operating system they personally started or took over:

"Leonard Zubkoff wrote and maintains the Buslogic and Mylex SCSI drivers. Doug Ledford maintains the AIC7xxx driver for mid to somewhat high end Adaptec SCSI cards. Gerard Roudier maintains the NCR/Symbios 53c8xx SCSI driver. [As far as I am concerned], Linus pretty much trusts these people to submit patches to him for the drivers they maintain."
Because of the increasing volume of development in general and the technical details of the numerous internal components of the kernel in particular, Torvalds encourages developers to submit their ideas and patches of code to the primary maintainer of the component first:

"What happens is that if I get a patch against something where I'm not the primary developer (against a device driver or a [file system] I have nothing to do with, for example), I am less likely to react to that patch if it doesn't come from the person who is responsible for that particular piece of code.
If I get a networking patch, for example, it's a lot more likely to get a reaction if it is from Alan Cox, and then it usually goes in with no questions asked. Similarly [...], if it as a patch against the SCSI subsystem, I much prefer to have it from somebody I know works on SCSI [...]" 
With the continuing growth of the Linux project, Torvalds has been forced to delegate more and more coding to the Linux community, resigning himself to the role of general project manager. Since accepting a position as a software developer at Transmeta in California, and with his two daughters at home, he has even less time on his hands to review each patch of code reaching him, let alone code himself. As such, the Inner Circle and other primary maintainers of subsystems mediate between Torvalds and the development community, providing an effective filter that reduces the load reaching him - so effective that, while Torvalds still insists on reviewing every line of code he applies to the kernel, some people feel this is unnecessary (one person regards it as "probably the most significant Linux bottleneck"), suggesting the general reliability of the decentralized development below Torvalds.
Ultimately, all changes to the kernel go through Torvalds: "Linus has the final word on all kernel modifications." He generally has an absolute sense of what to avoid adding or modifying in the kernel, as these separate messages show:
"Sorry, no. I refuse to use a kernel that depends on kerneld. That's final." 
"Trust me, you don't want to do this. This results in horrible stuff when the file grows past the cut-off point, where you'd have to start shuffling the indirect pointers on disk around or something equally disgusting." 
Such responses from Torvalds are not at all uncommon. With a growing developer base, he has naturally become more selective in regard to accepting new code for the kernel:

"In Linux's early days, working code was far more important than quality of design [...] Today, Linux has a groundswell of developer support. This means that Linus can be much more demanding of particular quality standards."
Torvalds enforces not only quality standards, but stylistic standards as well. Several developers recounted personal experiences in which a patch they submitted was rejected simply because it was too complex. A particular case in point is Andreas Arcangeli from Italy, who started by submitting patches for the printer code. As Garst Reese explains,

"[Arcangeli] made substantial improvements, then branched out, but tended to do some pretty sloppy things - to the point where Linus said "go away." Andreas refused to go away, and eventually had major changes to the kernel accepted. All Linus did was enforce coding standards."
More generally, in his effort to keep the kernel at its minimum in size and complexity, Torvalds is increasingly wary of patches that add completely new functions without improving existing functions. Much of the recent growth in the size of the kernel has come from the addition of new device drivers outside the kernel proper .
Torvalds' decisions are generally well regarded for their technical merit. As Andrea Arcangeli notes,

"[I]t's amazing how [rarely] his opinions are wrong and how rarely he needs to change his first decisions. I believe he is almost always right due the large picture he has over the kernel."
On the other hand, he is also quick to admit when he is wrong. Developers report two primary reasons for Torvalds to change his initial decision. The first reason is technological. From time to time, changes in the state of technology have forced Torvalds to reconsider certain segments of the kernel. In a way, in fact, the entire development of the kernel has been driven to a large extent by technological innovations outside the Linux project. Examples abound:

"For example, when Linus first began work on a non-Intel architecture, he was forced to generalize the concepts behind memory management (as well as a number of other concepts) so that the very different mechanisms on different machines could still be dealt with using the same abstractions.
For example, the steady and continuing drop in memory prices has also forced him to restructure memory management - this time so that the kernel didn't waste time doing redundant operations while managing memory (the system would be way too slow, otherwise).
For example, the growth in the popularity of [Plug-and-Play] is gradually forcing rearchitecting of device initialization code." 
Aside from increasing hardware support in Linux, Torvalds is always looking for code improvements. As one developer reports, "Linus is somewhat stubborn on basic design decisions and development directions, but happily accepts improvements in implementation and (obviously) actual bug fixes." 
Torvalds might also change his initial decision for reasons more social. Specifically, a number of developers claim that, aside from keeping pace with technological changes, Torvalds has also been seen to concede under mounting pressure from dissatisfied developers, particularly from more trusted members of the community. Ballbach reports, for example, that he has seen "Alan yell at Linus, and [...] Linus yield." 
To be sure, it is difficult to disentangle social motivations from technical concerns. Take the example of the Raw I/O patch, submitted by Stephen Tweedie in late 1998 and rejected by Torvalds several times before the final version was applied to the kernel. While the technical discussion is beyond the scope of this paper, it suffices to say that, although Tweedie himself claims the issue was "purely a matter of finding a technical implementation which was clean," more than a few others mentioned this case as a recent example in which developers rallied to convince Torvalds to accept the patch.
At the same time, however, developers unanimously acknowledge that "you cannot just press him hard enough." It is a commonly held view within the Linux community that there always exist technical solutions to technical problems. On the linux-kernel mailing list, where every discussion is open to the general public, subjective opinions with little or no technical support are rarely taken seriously by anyone. Likewise, Torvalds rarely concedes to demands to modify the current system or consider alternative ideas without a logical argument on technical grounds. Conversely, Torvalds is meticulous about justifying his decisions thoroughly and responding to questions in technical detail, even if in a tone of voice that is at times more inciting than convincing. "[I]n reading things from Linus in the past, he does seem to value working code much more than complaints or rants or high-level descriptions. Show me the code, I think he said. Find a way to make it work and make it work." 
When disputes erupt, and no immediate consensus avails, there is no standard procedure to reach a final decision. Torvalds might consult particular maintainers or ask the community for feedback, but there is no formal vote or clear balance of power. As Ballbach notes, "for the most part he'll add his ideas, and if people don't like it, a flame war ensues, and he'll either accept the group consensus or ignore it."  Granted that this is a gross overgeneralization, it nonetheless serves to show that Torvalds alone reserves the final veto or approval in the Linux community.
On the other hand, "if people have problems [...] they don't make it a secret." Frustration at particular decisions or at a general lack of consensus surfaces in public discussions quite often. Ironically, the frustration is inflamed in part by the aforementioned belief that technical problems should always be resolved on technical grounds. Solutions are not always obvious, however, and issues that appear purely technical are often less so. Even when developers realize that an issue is not reducible to the hard facts of computer science, subjective preferences and experiences nonetheless enter personal opinions and muddle the nature of discussions, sometimes leading to flaming.
In the Linux project, the lack of a formal structure invested with decision-making procedures resonates with the absence of central planning from Torvalds. Indeed, he makes a deliberate effort to avoid imposing long-term plans and visions on the community: "That way I can more easily deal with anything new that comes up without having pre-conceptions of how I should deal with it. My only long-range plan has been and still is just the very general plan of making Linux better." 
To be sure, Torvalds is not completely without a general picture of what should be done and how. When occasions arise during discussions, he does not hesitate to throw out suggestions: "Right now I can't think of a good way to avoid this, except to actually disassemble the instruction that we hit on when taking the f00f bug. We probably need to do that anyway to handle the "int3" (one byte) and the "int0x3" (two bytes) instructions correctly to make debuggers happier ... Anybody want to tackle these issues?" 
Sometimes Torvalds delegates tasks to others, not to reduce his load per se or to leave them to those technically more qualified, but simply "to skip the boring problems and get the fun and challenging ones." "Actually, I've been thinking about adding a "mem=probe" command line type function, that would enable a probe routine. I've been too lazy to do it myself, though (hint hint)." 
The following appeared in June 1995:"Ok, in that case we'd want to add the "data" field to the interrupt handler regardless. Do you feel like you'd actually want to try this out? (hint, hint, Linus wants to get out of doing it himself)
Linus "get all the glory, do nothing" Torvalds" 
While such suggestions from Torvalds play an important role at times, they are hardly binding and less common than his technical consultations. When people approach him with questions or problems, his responses are often quite exact, as Riley Williams recounts: "I explained the [...] problem to Linus, and he promptly told me exactly what the problem was [...], and exactly how to fix it, and I promptly rolled off a variant of the patch that was legal [...]. that variant went into the following kernel, and the resulting patch has been unchanged since."
At the same time, however, "[...] I have no doubt that Linus could easily have implemented the fix himself, but he returned the patch to me with a note pointing that out, and I implemented the change to fix that and resubmitted it to him." 
Consistent with these statements, virtually every developer interviewed confirms that Torvalds generally refrains from supervising anyone's project or assigning specific tasks to particular developers. In the least, "assign is too strong a word. May be suggest, as in could you take a look at ... Still, this is not often."  It is not a mere coincidence that Torvalds has very rarely, if ever, started a thread - a chain of responses and counter-responses on a particular topic  - on the linux-kernel, except when announcing new kernel releases.
Likewise, none of the developers interviewed report ever having been assigned a specific task by anyone, Torvalds or others. Surely, the developers understand that they are volunteers, and that the entire project depends on them offering their services of their own free will. As long as there is no contractual obligation or formal compensation, monetary or otherwise, for their contributions, neither Torvalds nor the Inner Circle can force on developers tasks they personally do not find interesting or necessary. Instead, it is up to the individual developers to "figure out what needs doing, and what [they] want to do, and then just do it." 
The developers are not always satisfied with the lack of clear direction in the Linux project: "firstname.lastname@example.org: Linus, how about making a commitment to the [mailing] list [that Version] 2.3 will be short? Set some goals. there are a thousand wish lists out there ... How about one from you?
Torvalds: I don't want to set goals, because goals change, and I've been happier being more fluid. However I agree with you 100% that the 2.1 series was way too long. I'd be happier with a 2.3 that was half the length or less." 
But for the most part, the developers are quite content with Torvalds' policy: "It's a feeling of freedom," claims Daniel Egger. "In developing Linux, you have complete freedom to do whatever you feel like doing. There's no specification anywhere of what the Linux kernel has to end up doing, and as such there is no requirement for anyone to do anything they are not interested in." 
The upshot of decentralized development without global planning, then, is that the development of the kernel essentially comes down to a function of people's personal interests and needs. Nobody, not even Torvalds, knows what will be added or modified in the next release of the kernel. Torvalds simply picks out good code from the submissions he receives. To paraphrase Theodore Ts'o, "Linus' power is the power to say No." In effect, although Torvalds has a general sense of what will not be added to the kernel, he can hardly anticipate or dictate what will be added to it.
The relative indeterminacy of kernel development contributes to the success of the project in at least two ways. First, through decentralization, the rich and varied human resources of the community are allocated much more efficiently. Under a centralized mode of software development, people are assigned to tasks out of necessity or economic considerations. In the Linux project, by comparison, each developer is free to work on whatever project his skills, experience, and interest best dictate. Second, and more important for our discussion, the decentralized mode of production is a key underlying element in local, adaptive interactions, setting the stage for the parallel development of the Linux kernel, to which we turn in the next chapter.
From complexity to evolution
Using the theory of complexity as our starting point, we have found ample evidence that the Linux project marks a radical departure from the Cathedral model of software development. True, Torvalds depends heavily on the Inner Circle to maintain large segments of the kernel. Yet the hierarchy in the Linux community can hardly be understood in the narrow sense of the term. True, Torvalds does not hesitate to offer generous help when approached. Yet neither does he assign specific tasks or monitor his developers at any time. True, Torvalds plays a dominant role at the top in regulating final modifications to the kernel. Perhaps the Bazaar-Cathedral dichotomy overstates the openness of the Linux project. Yet Torvalds' veto need not imply a Cathedral in Linux. After all, selection in biological evolution also reserves the final say - a point we will revisit later - but it hardly amounts to evolution from design. Likewise, as unyielding as Nature, Torvalds selects only what works, i.e. the patches with "high fitness."
On the other hand, whereas the Cathedral thrives on monolithic management, the Linux project has neither top-down planning nor a central body vested with binding and enforcing authority. Its power, the source of its bubbling creativity, lies instead in the ceaseless interactivity among its developers. Herbert Simon claims that intelligence is not a local property inherent in individuals, but a global property, emerging out of local interactions. If he is correct, there is something encouraging in the finding that in the Linux project, too, ideas come from the bottom, in the form of actual code or discussions, and not always from Torvalds or the Inner Circle.
In the following chapters, we will consider two critical implications of complexity in the Linux project. First, consistent with the idea of emergence, the Linux operating system represents a synthesis of incremental changes contributed by developers that cannot be understood solely in terms of isolated contributions from individual developers. Rather, our discussion suggests that the quality of the operating system is better understood as an emergent property of interactions that amount to evolutionary processes.
Second, the interactions at the developer level have shown increasing tendencies toward self-organization. In particular, we will see that the Linux community has dealt with the lack of centralized organization through an implicit reputation system. That is, as much as reputation among peers might serve to motivate volunteers in the absence of material reward, as Raymond and others argue, I submit that reputation also underlies a self-reinforcing pattern of collaboration around core developers in a project without central coordination.
The present chapter illustrated the importance of local adaptive interactions in the making of Linux. However, we have yet to see how such interactions comprise the mechanisms that generate surprise and wonder. In this light, the two issues will motivate us to delve further into the nature of interaction and examine its critical role in the context of evolution.
Chapter 4
"There are only two ways we know of to make extremely complicated things. One is by engineering, and the other is evolution. And of the two, evolution will make the more complex."
- Danny Hillis, quoted in Out of Control
"Linux, it turns out, was no intentional masterstroke, but an incremental process, a combination of experiments, ideas, and tiny scraps of code that gradually coalesced into an organic whole." - Glyn Moody, Wired
The previous chapter was motivated by the idea, pronounced by Richard Dawkins, that an intimate link exists between complexity and evolution. But while we have gone to some length to establish complexity in Linux, questions remain: How does complexity relate to evolution? And how will understanding Linux as a complex system help us come to terms with its evolutionary dynamics?
In one sense, evolution is a natural element in a complex system in as much as local adaptation amounts to evolutionary processes. The idea is captured explicitly in what John Holland calls cas, or complex adaptive systems. For Holland, adaptation is what underscores the kaleidoscopic nature of a complex system and makes complex systems as complex as they are.
In turn, evolution builds greater complexity, whether in individual actors or in their aggregations. With time, an evolving system accumulates more niches, more diversity, more specialization. Hence higher life forms consist of cells increasingly diverse and specialized, just as today's society consists of more heterogeneous communities and populations. The idea that society evolves toward greater complexity should remind sociologists of The Division of Labor in Society, in which Durkheim argued that modern societies are characterized by greater division of labor .
The rest of the paper will scrutinize the idea that an evolutionary system without central planning will grow more complex and yet more coherent. The constructive aspect of evolution is often overlooked in the domineering shadow of its counterforce, natural selection. The tendency to associate natural selection with evolution is so pervasive, indeed, that the historical trend persists in depicting Darwinian evolution as a subtractive, even destructive, process, synonymous with the survival of the fittest, which is only one side of the story .
Curiously, the tendency to regard evolution as destructive resonates with one of the cornerstones of the physical sciences, the Second Law of Thermodynamics, which states that closed systems tend toward greater disorder, or higher entropy. Hence a drop of ink in a glass of still water diffuses to tint the water without ever reassembling into a discrete droplet. Yet in an evolutionary system, order arises out of disorder, complexity from simplicity. Evolution builds.
Our goal in this chapter is to present an overview of evolutionary theory that will guide us in exploring the issue of complexification in Linux. To this end, we will consider how the nature of the Linux project lends itself to evolution in the light of Darwinian and other flavors of evolutionary theory. Once we have qualified the Linux project as an evolutionary system, the time is ripe to address the central topic of the paper, namely, how evolution builds increasingly complex yet coherent design without vision and without purpose, which will lead us into the final chapter.
Evolution is centrally concerned with the flow of information among individuals. More precisely, vital to a theory of evolution is a system of information that 1) provides instructions for the internal processes of the individual and 2) mediates evolutionary processes in a particular population.
Evolutionists recognize three fundamental processes that must occur in an evolutionary system: variation, selection, and replication. First, there must be variation in the population, such that not all members are identical. Second, given a heterogeneous population, the process of competitive selection must weed out unfit members on the basis of differential reproductive success. Finally, the genes of surviving members must be passed on to, or replicated in, the offspring.
Biological evolution is mediated by the genetic system. First, much like a cooking recipe, the gene provides instructions for building proteins that make up much of the living body and its internal processes, defining the physical and behavioral characteristics of the organism. Second, the gene provides the physical medium for sharing genetic information from generation to generation. And in doing so, the gene creates copies of its genetic material by replicating itself through intricate processes of cell division .
Two distinct processes contribute to variation in genes. Mutation introduces variation through spontaneous and random changes in a particular gene. Crossover introduces variation through a sexual exchange of genetic material between organisms. Much as the term implies, crossover recombines - it is thus known also as sexual recombination - the genes of the parents into hybrid genes in the offspring. Like mutation, crossover is a random process - sex, for example, is determined by recombination - although mating ensures that recombination occurs in every generation. Together, mutation and crossover account for the endless variability introduced in each generation.
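The interplay of variation, selection, and replication can be made concrete with a toy genetic algorithm, in the spirit of Holland's work. The sketch below is purely illustrative - the bit-string genomes, the fitness function (counting 1-bits), and all parameter values are invented for this example rather than drawn from any real system:

```python
import random

random.seed(42)

GENOME_LEN = 16

def fitness(genome):
    """Toy fitness measure: the number of 1-bits in the genome."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Mutation: flip each bit independently with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(mom, dad):
    """Crossover: splice the parents' genomes at a random point."""
    point = random.randrange(1, len(mom))
    return mom[:point] + dad[point:]

def evolve(pop_size=30, generations=40):
    # A random initial population supplies the variation selection acts on.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population survives.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Replication with variation: offspring are mutated recombinations.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(fitness(g) for g in pop)

best = evolve()
print(best)  # selection drives the population toward the all-ones genome
```

No single step here is intelligent; the combination of random variation and nonrandom selection nonetheless climbs steadily toward high fitness.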
In turn, genetic variation sets the stage for natural selection. Thus given a population of genetically varied members, or a heterogeneous gene pool, selective pressures trigger competition for the propagation of one's genes based on the organism's reproductive fitness - the ability to mate and reproduce. Taking the idea further, Richard Dawkins argues that evolutionary games are best understood in terms of competition between genes and not between the organisms that carry them. What drives evolution is not the selfish interest of the organisms to survive and procreate, but the "selfish interest" of the genes to propagate themselves. The organism is merely a carrier of information, a physical substrate that plays out the competitive fitness of the genes in mediating the selective processes .
An important distinction is therefore made in biology between genotype and phenotype. Genotype refers to the genetic constitution of the organism. Phenotype refers to the observable attributes of the organism, physiological and behavioral, as determined by particular genetic combinations.
It must be stressed, finally, that the phenotype of an organism is determined by particular combinations of genes. Genetic information, unlike a blueprint, is not a point-to-point representation of the organism. There is no single gene for blue eyes or blond hair. Rather, just as a recipe combines a number of ingredients into a new flavor, or as hydrogen and oxygen combine to produce water, genes interact with each other to generate certain effects. It is often the case that some genes present in an organism are not expressed at all in certain combinations, with the implication that natural selection will not eliminate these latent genes, regardless of their fitness.
The Blind Watchmaker
"[The evolutionary model] eliminates one of the greatest hurdles in software design: specifying in advance all the features of a problem."
- John Holland, quoted in Out of Control
The wide appeal of evolutionary theory is due in large part to its simplicity. Aside from the technicalities of genetic science, the concepts of variation, selection, and heredity are easy to understand and to apply to a wide range of systems. Indeed, there is a firm and well-established recognition - an idea referred to as Universal Darwinism - that evolution is not a property exclusive to genetic and organic systems, and that any system with variation, selection, and heredity can evolve just as well. Ironically, the simplicity of the theory is also a cause of much misunderstanding and criticism. The idea that complexity evolves out of simple and mindless mechanisms is not at all obvious.
Perhaps the most convincing discourse in Darwinian evolution is summarized in The Blind Watchmaker by Richard Dawkins. His basic premise is that evolution is blind - blind because "it does not see ahead, does not plan consequences, has no purpose in view."  Evolution does not anticipate.
In extending the idea to the Linux project, I have prompted a critique from several developers, such as this: "So [...] the fact that instead of one person's technical vision, there are many people who are simultaneously looking into the future, and this makes it blind? If it's one person's guess of the future [...] and if we have a large multitude of people trying to guess the future, we call it "blind"?" 
This misunderstanding stems largely from failing to appreciate the complexity of a complex system - that no individual has effective global control over the system. Complexity theory was introduced in the previous chapter to set a framework for the idea of the Blind Watchmaker. This is not to say, however, that Blind Watchmaker theory amounts to calling the individual actors blind. The developers have direct control over what they code, and they are hardly blind in doing so. It is not the actors that are blind, but evolution that is blind.
We certainly must not rule out foresight on the part of developers in shaping the operating system. Linux is what it is today because of the experience and expertise of passionate developers worldwide, and foresight might very well explain the phenomenal speed of Linux development - although it is not the only explanation, as we will see. Considering the quality of human resources in corporate realms, however, neither foresight nor individual experience by itself adequately accounts for the superior quality of Linux over its rivals on the market. Nor does it sufficiently explain how the Linux project has produced an operating system of such complexity and coherence without central planning. If "blind" is too strong a word, we might substitute "short-sighted." But the point remains. As Rik van Riel asserts, "there is no way [Torvalds] could look into the future and see what needs to be done." I therefore seek an answer to the question of Linux's quality in evolutionary theory, which, without imposing unlikely assumptions of perfect rationality or full foresight on the actors, and without assuming a monolithic top-down system, presents an elegant explanation of how complex yet elegant designs arise in nature - the eye, for example.
The idea of the Blind Watchmaker is in fact nothing particularly new. Philosopher Daniel Dennett regards evolution as an algorithm, a mindless procedure that turns out an outcome. The significance of the Blind Watchmaker theory rests not on the novelty of its premise, but on its logical and lucid exposition of the evolutionary mechanisms that account for the emergence of complexity in the natural world in the absence of a Creator. While I do not wish to summarize the entire treatise, one core idea stands out and needs to be mentioned: cumulative selection.
Dawkins is quick to emphasize that evolution, although blind and non-purposive, is not necessarily random. Chance certainly plays a role in introducing genetic variation, whether through mutation or sexual recombination, although debates persist as to how random these processes actually are. But chance alone cannot, within a realistic frame of time, produce complex life forms in a single step. Dawkins estimates, quite plausibly, that a chimpanzee typing random keys on a typewriter will not produce a line of Shakespeare within the entire history of the Earth. Not in a single step. Not without editing.
Cumulative selection is akin to the idea of editing. Changes must be accumulated in a gradual, step-by-step fashion, much as a writer produces a series of drafts for publication. The cumulative process is directed by selection, which is fundamentally nonrandom - nonrandom because it eliminates only those unfit for survival. Nature, a.k.a. the Blind Watchmaker, determines who will survive and who will die, but it is not arbitrary. Fate may throw the dice, but it does not kill those who are fit to survive.
The idea of cumulative selection is more obvious than it sounds. When we code a computer program, we do not rewrite the entire thing every time something fails to work. Instead, we isolate problematic areas and edit them accordingly, hoping that the program runs more smoothly. Of course, with due respect, programmers are more intelligent problem-solvers than a flock of birds. Nonetheless, the logic of the process is the same for programmers as for the Blind Watchmaker.
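Dawkins dramatized this point with his famous "weasel" program. The sketch below is a loose reimplementation under assumed parameters (100 copies per generation, a 5% per-character mutation rate, neither of which is prescribed by Dawkins): cumulative selection reaches in a modest number of generations a target phrase that single-step random search would essentially never hit.

```python
import random
import string

random.seed(0)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "  # 27 possible characters

def score(attempt):
    """Count the characters that already match the target phrase."""
    return sum(a == t for a, t in zip(attempt, TARGET))

def cumulative_search(copies=100, rate=0.05):
    """Breed mutated copies each generation and keep the best one."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # The parent survives into the litter, so fitness never regresses.
        litter = [parent] + [
            "".join(random.choice(ALPHABET) if random.random() < rate else c
                    for c in parent)
            for _ in range(copies)
        ]
        parent = max(litter, key=score)  # selection retains partial successes
        generations += 1
    return generations

# Single-step selection would need on the order of 27**28 random phrases;
# cumulative selection edits its way to the target instead.
gens = cumulative_search()
print(gens)
```

The crucial move is that each generation starts from the best draft so far, exactly like a writer revising a manuscript rather than shredding it and starting over.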
Of course, cumulative selection, although a crucial process in evolution, is not unique to evolutionary systems. Editing is an integral part of Cathedral building as well. Time has seen proprietary software continue to mature in today's competitive and variable market. For what it's worth, the Windows operating system has surely blossomed from the pre-pubescent 3.x series to Windows 2000 for a new millennium, with vast improvements for our increasing technological needs.
In the end, what differentiates the Bazaar from the Cathedral is not the extent of editing per se, but the extent of parallel editing. In the absence of central planning - of a single guiding vision - there are multitudes of actors pursuing their visions simultaneously, as opposed to one central body standing in charge of the problem. This is the idea of meliorization, or "hill climbing." On a landscape with a single peak, it is easy to find the highest summit. It is the peak, obviously. But in a more rugged landscape, it is difficult to pick out the highest point without an aerial view. Unless the climber knows beforehand where the highest summit is, he is prone to climbing a local peak believing that it is the global peak. Evolutionary actors are not globally optimizing on their own, in the sense that each has a limited range of vision over the landscape of all possibilities; but an evolutionary system can be optimizing in that, with many climbers each scaling a different mountain, one of them is likely to find the highest peak. As Kevin Kelly notes, "parallelism is one of the ways around the inherent stupidity and blindness of random mutations." And further: "It is the great irony of life that a mindless act repeated in sequence can only lead to greater depth of absurdity, while a mindless act performed in parallel by a swarm of individuals can, under the proper conditions, lead to all that we find interesting." 
Restating the idea in the context of software development, "If top-down management asked for a stupid feature that would make the system worse, developers would think "If they want to pay me ..." The developer gets paid to do what is asked, not what is best." 
In the Linux project, however, "A free programmer will always have many goals and work in many directions as his freedom allows and so get a better idea of logical connections which results in more efficient products." 
In parallelism we hence come to perhaps the most crucial difference between the Cathedral and the Bazaar. Under the economic and bureaucratic constraints of the market, the Cathedral seeks to minimize parallel efforts by specifying the course of development beforehand, mandated from the top. Such constraints are practically absent in the Bazaar. In Linux, there is no bureaucracy and no fiscal bottom line, for the Linux project is sustained by the efforts of volunteers. At once, this frees the project from issues of efficiency and profitability that would not only interfere with its commitment to technical quality but also restrain the extent of parallel efforts. In the Cathedral, then, parallelism translates to redundancy and waste, but in the Bazaar, parallelism allows a much greater exploration of a problem-scape for the global summit. Consequently, whereas the Cathedral is constrained to a largely linear course of evolution, the Bazaar is able to harness a plethora of partial solutions for what amounts to parallel evolution.
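The hill-climbing metaphor lends itself to a short simulation. In the sketch below, the rugged one-dimensional fitness landscape and every parameter are made up for illustration; the point is only that thirty climbers started at scattered points explore the landscape far more thoroughly than a lone climber can:

```python
import random
from math import sin

random.seed(1)

def rugged(x):
    """A made-up fitness landscape with many local peaks."""
    return sin(x) + 0.4 * sin(3.1 * x) + 0.2 * sin(9.7 * x)

def hill_climb(x, step=0.05, iters=2000):
    """A single climber: take a random nearby step only if it leads uphill."""
    best = rugged(x)
    for _ in range(iters):
        nxt = x + random.uniform(-step, step)
        if 0.0 <= nxt <= 10.0 and rugged(nxt) > best:
            x, best = nxt, rugged(nxt)
    return best

# A lone climber is prone to getting stuck on whatever local peak is nearest;
# a swarm of climbers from random starting points rarely misses the summit.
results = [hill_climb(random.uniform(0.0, 10.0)) for _ in range(30)]
print(round(max(results), 3))
```

The swarm's best result is by construction at least as good as any single climber's, which is the Bazaar's advantage in a nutshell: parallelism substitutes breadth of search for foresight.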
Of course, the extent of parallel efforts in the Linux project should not be exaggerated, either. When asked if they ever witness parallel efforts in the Linux project, developers most typically answered "occasionally [...] but not very frequently," "every couple of months," and the like. Even then, "usually the two projects find each other very quickly and merge." Nor do they necessarily feel that parallelism is a healthy trend in the first place. As Michael Chastain explains, "We have communication mechanisms, and we use them. The linux-kernel mailing list is a prime point of contact for the announcement of kernel-related work in progress. It's to everyone's advantage to spread out and occupy lots of different niches rather than competing for the same niche." 
Alan Cox, who is presently responsible for the stable kernel tree, reports that he feels obliged to ensure that no parallel efforts occur in his projects without the developers' awareness. Short of discouraging parallel efforts entirely, such coordination serves to filter out unnecessarily redundant projects, unless developers feel strongly that they can write better code than others can. In the end, the Linux project is better able to explore the vast and variegated landscape of new ideas and possibilities than any single corporation can.
Evolution can be terribly inefficient. A certain joke posits, "How many evolutionists does it take to unscrew a light bulb?" The common answer is "One, but it takes him 10 million years." Of course we know now that a better answer should be "It takes a population of evolutionists over hundreds of generations, brainstorming in parallel." The joke notwithstanding, the beauty of evolutionary theory is that it explains how, given the essential ingredients of evolution - random variation, nonrandom selection, and retention - any system, natural or artificial, can evolve into a complex design through incremental changes explored in parallel. To reject the power of evolution on the basis of statistical improbability is to confuse single-step and cumulative selection, on the one hand, and singular and parallel search, on the other hand.
The fundamental processes of evolution are the same in the Linux project as for the biological world. The Linux kernel developed incrementally over the span of several years, through gradual additions and modifications in the hands of variation, selection, and replication. The development of Linux differs in some notable ways from genetic evolution, however, which is the topic of the next section.
Science of memetics
An equivalent of the genetic system in biological evolution is also at work in cultural evolution. The meme, introduced by Richard Dawkins, is "a unit of cultural transmission, or a unit of imitation."  It is a cognitive or behavioral pattern in the form of knowledge, ideas, rumor, fashion, songs, recipes, or anything else capable of spreading without undue mutation via replication. Like a gene, a meme is therefore a replicator, propagating itself across populations through "a process which, in the broad sense, can be called imitation." 
And a broad sense it is, indeed. Susan Blackmore gives an example of a story spreading among friends: "So if, for example, a friend tells you a story and you remember the gist and pass it on to someone else then that counts as imitation. You have not precisely imitated your friend's every action and word, but something (the gist of the story) has been copied from her to you and then on to someone else. This is the 'broad sense' in which we must understand the term 'imitation'. If in doubt, remember that something must have been copied."
To be sure, social scientists have long been interested in the role of imitative behaviors. Psychoanalyst Sigmund Freud addressed the issue of imitation and the contagion of group emotions in Group Psychology and the Analysis of the Ego. Psychologist Albert Bandura studied the role of imitation in television violence. And more recently, political scientist Robert Axelrod demonstrated the emergence of norms through imitation using computer simulation.
Sociologists are also interested in social learning, or learning influenced by the presence of other people, through observing and interacting with them. Douglas Heckathorn, for example, writes of observational learning, "in which actors compare their own outcomes to those of their peers, imitating those who do best. In essence, actors look around to see who is doing well, and take as role models those who appear most successful." 
Social learning as such assumes the basic premises of Skinnerian conditioning, based on the principles of trial and error, or reward and punishment. Through mutual observation and interaction, the positive and negative consequences of a person's behaviors are reflected on others, in turn providing an experiential basis for their decision making. Variation is introduced through mutation of a meme in the individual or recombination of memes from separate individuals. Social learning thus amounts to evolutionary learning in as much as natural selection reflects a consequence-driven approach to the emergence of novel patterns.
Mimetic learning, on the other hand, differs slightly from social or adaptive learning in that it involves true imitation as a core process. That is, whereas social learning simply induces latent behaviors through interaction with or observation of other people, true imitation introduces something new in the observer, such as a new popular tune catching on through radio. As Susan Blackmore notes, "imitation is learning something about the form of behaviour through observing others, while social learning is learning about the environment through observing others." 
The idea of learning brings us to the limit of the gene-meme analogy. Outside of asexual reproduction among relatively small groups of lower life forms, a gene is transmitted only from the parent to the child, from one generation to the next. It cannot travel laterally, from one organism to another within the same generation. Each organism therefore carries genes from two parents, a male and a female, and no more, no less. And incremental changes in biological evolution occur only across generations, which amounts to an extremely slow and long process just to introduce minor changes.
In comparison, a meme can disseminate laterally and vertically, across time and across populations. A meme can spread from multiple parents or a single parent. And by virtue of lateral exchange as well as non-specific parenting, a meme can spread much faster than a gene can - even overnight, as anyone involved in a scandalous rumor would know. Noting these dynamics, Dawkins and others have adopted the virus-meme analogy over the gene-meme analogy. Yet others, noting that an individual can acquire new memes during one's lifetime - that an individual can learn - have brought back the idea of Lamarckian evolution.
Jean Baptiste Lamarck, a French natural scientist active before Darwin, proposed the idea of the inheritance of acquired traits, an idea that continues to earn him the reputation of a misguided intellect. Lamarck suggested that an organism can acquire new traits, such as muscles built up from rigorous training, which are then passed on to its descendants. The difficulty of the theory is that empirical studies have not confirmed such inheritance of acquired traits in animals, nor have advances in genetics found evidence that such phenotypic changes can be re-coded in the genome of the organism and passed on to the offspring.
Despite the evidence against the inheritance of acquired traits in the natural world, Lamarckian evolution is a plausible and viable model of cultural evolution. Unlike genes, memes bypass physical media for assimilation into the individual, however much they might rely upon physical media for storage. Information from the outside enters through our eyes and ears and is processed in our brain as electrical signals. And to every student's lament, knowledge cannot be assimilated internally, through pills and such. In the absence of physical laws governing the assimilation of information, memes can spread like an epidemic, traveling from individual to individual through social, rather than physical, contact. A vivid picture, a cogent lecture, or repeated air time on the radio is enough to diffuse a meme. When a new meme is acquired, we might say that it constitutes learning.
The importance of Lamarckian evolution cannot be overstated. Using computer simulation, David Ackley and Michael Littman found that Lamarckian evolution can outperform Darwinian evolution by significant measures. Classical Darwinian evolution, we have seen, is dumb and blind. It neither allows individual learning - it only eliminates the unfit - nor anticipates the future. Lamarckian evolution, if blind or short-sighted, is certainly not "dumb." In particular, the capacity to learn, coupled with the gift of foresight, means that variation is less random and more self-directed in Lamarckian evolution. As such, the Linux project is not completely at the mercy of random mutations as Nature might be.
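The Ackley and Littman result can be sketched with a toy hill-climbing model (everything here - the fitness landscape, the learning rule, and the parameters - is an illustrative assumption, not their actual experiment). Both variants select on learned, lifetime fitness; only the Lamarckian one writes what was learned back into the inherited "genome":

```python
import random

def fitness(x):
    # A toy one-dimensional landscape: the optimum sits at zero.
    return -abs(x)

def learn(x, steps=20, step_size=0.25):
    # Individual learning within one lifetime: keep random trial
    # moves only if they improve fitness (simple hill climbing).
    for _ in range(steps):
        trial = x + random.uniform(-step_size, step_size)
        if fitness(trial) > fitness(x):
            x = trial
    return x

def evolve(lamarckian, generations=30, pop_size=20):
    random.seed(1)
    population = [10.0] * pop_size  # start far from the optimum
    for _ in range(generations):
        learned = [learn(g) for g in population]
        # Selection acts on learned (phenotypic) fitness in both variants.
        ranked = sorted(range(pop_size), key=lambda i: fitness(learned[i]),
                        reverse=True)
        parents = ranked[:pop_size // 2]
        # Lamarckian inheritance passes on the learned trait itself;
        # Darwinian inheritance passes on the unmodified genotype.
        source = learned if lamarckian else population
        population = [source[i] + random.gauss(0, 0.05)
                      for i in random.choices(parents, k=pop_size)]
    return max(fitness(g) for g in population)

print(evolve(lamarckian=True), evolve(lamarckian=False))
```

In this sketch the Lamarckian population converges on the optimum within a handful of generations, while the Darwinian population, inheriting none of its lifetime gains, barely moves in the same time - the "less random, more self-directed" variation described above.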
The theory of memetics therefore suggests a model of evolutionary epistemology more dynamic, if you will, than that suggested by biological evolution. In particular, lateral dissemination of memes extends the evolutionary basis of human learning in two ways. First, genetic evolution works at the population level, while memetic evolution works at the population level as well as the individual level. The difference might be summarized as follows: genetic evolution introduces changes in the frequency distribution of certain traits in the population. If big wings are favored in a population of a particular type of bird, natural selection will simply rid the population of those with smaller wings while favoring those with big wings to reproduce chicks with big wings. To this, memetics adds a new dimension of population change by allowing individual learning, on the one hand, and the inheritance of acquired traits, on the other. Memetic evolution therefore introduces changes in the probability distribution of a behavioral or cognitive pattern in the actor's repertoire. Second, whereas social learning focuses on adaptive learning on the part of the actor, mimetic learning might occur independent of the consequences of the behavior, outside of the narrow context of its advantage or disadvantage to one's survival in society. In social learning, behaviors are shaped according to their positive and negative values to the actor, whether through other people's experiences or her own. In imitation, there need not be such feedback. It is not through Skinnerian conditioning that we learn to hum a tune, but simply through repeated exposure to it. Conversely, the tune spreads simply because it is catchy, not necessarily because of its competitive value to the person.
This is not to say, of course, that selective pressure does not apply to mimetic learning at all. On the contrary, it might perhaps be inevitable that the process of trial and error eventually mediates mimetic learning to reinforce it positively or negatively. Mimetic learning might in fact depend on reinforcement to some degree. To this extent, it is difficult to separate the processes of imitation and adaptation in our everyday learning, and it remains to be settled whether and how imitation actually spreads memes in reality. Unfortunately, this is a debate well beyond our topic. For now, it seems safe to consider imitation and adaptation as two sides of the same coin.
Much more empirical research is also needed to establish that memetics is more than a convenient analogy - a "meaningless metaphor," in the words of Stephen Jay Gould. Genetics has made leaps since the discovery of DNA, its structure and its mechanisms. In comparison, the science of memetics has yet to discover in the brain or elsewhere a singular tangible entity that corresponds to what we might believe to be the physical trace of a meme. And until then, we can specify neither the unit of information transmission nor the exact mechanisms of replication and retention, as genetics has done. On the other hand, the fact that we have not uncovered these facts does not necessarily disqualify memetics as a promising approach to understanding cultural evolution. It was not until 1953, 94 years after Darwin's Origin of Species was published, that the structure of DNA was unveiled by Watson and Crick. And even now, science is still struggling with fundamental mysteries of genetics. As a new discipline in its incipience, memetics has already shown signs of promise, and it certainly seems reasonable to examine the development of Linux under the terms of memetic evolution.
Memetics, Evolution, and Linux
"It's important to have people who steal ideas!"
- Louis Branscomb, in Waldrop, Complexity
It was mentioned earlier that genes interact with a number of other genes to produce a given phenotypic effect - that there is no single gene for blue eyes and such. The same might be said, to a certain degree, of memes, for memes infrequently occur in isolation from each other. Very often, the success of memes depends critically on their ability to form a coherent memeplex, on the premise that memes spread better in a group of co-adapted, mutually reinforcing memes. In a religion, for example, a meme prohibiting certain types of meat might be reinforced by another meme claiming that god himself ordained the rule.
Linux might also be considered such a memeplex. The kernel consists of numerous patches of source code that are co-adapted, so to say, in the very sense that they are fine-tuned to each other to work seamlessly. And much like a recipe, much like a gene, these patches of code contain step-wise instructions for the various internal processes of the operating system. If Linux is a memeplex, its patches of code are the memes.
Memes spread by virtue of certain qualities. What constitutes viable or successful qualities in a meme depends greatly on the environment and the context of evolution. Francis Heylighen lists ten attributes that influence how much a meme might proliferate: "1) coherence: the meme is internally consistent, and does not contradict other beliefs the individual already has; 2) novelty: the meme adds something new [...]; 3) simplicity: [the meme] is easy to grasp and to remember; 4) individual utility: the meme helps the individual to further his or her personal goals; 5) salience: the meme is easily noticed by others [...]; 6) expressivity: the meme is easily expressed in language or other codes of communication; 7) formality: interpretation of the meme's expression depends little on person or context; 8) infectiveness: the individuals who carry the meme are inclined to spread [it]; 9) conformism: the meme is supported by what the majority believe; 10) collective utility: the meme is useful for the group, without necessarily being useful for an individual (e.g. traffic code)."
It is easily seen that a number of Heylighen's selective criteria, if not all, are relevant to the Linux project. A new patch of code must be coherent, bug-free, and functionally novel or superior to the existing or alternative code. As well, the function must be immediately needed by the Linux community. And finally, as Torvalds stresses, the code should be simple and easily understood by others looking to use it and modify it. As noted on the linux-kernel mailing list FAQ page, developers are urged to read and comply with the specific coding guidelines in the CodingStyle file, distributed with the kernel source code. The increasing recognition of Linux, as a memeplex, is a function of these qualities in its source code, which complement each other in shaping a first-rate operating system. None of these ideas are particularly surprising, but they nonetheless serve to illustrate the selective pressures in the Linux project as a case of memetic evolution.
Patches of code are but one type of meme. One need only stroll through the linux-kernel mailing list archive to see that ideas and comments are shared just as effectively and widely in English. As well, it is a mistake to regard the Linux project as an isolated, self-contained memepool. A memepool, analogous to a gene pool, is the set of all memes in a population. Just as genes can travel from one population to another, as in the case of interracial or intercultural marriage in human societies, a meme can travel from one memepool to another.
The idea of Linux, to begin with, was conceptualized from Minix, a popular Unix clone. As the project took off, ideas from Unix, especially from particular flavors of Unix called the BSD systems, continued to find their way into Linux. Many networking features in Linux trace their origin to the BSD systems, including the original TCP/IP stack, a component allowing remote communication. The NCR SCSI driver was modified for Linux. Firewall rules from the BSD systems were also used in earlier versions of Linux. Like Linux, Unix was distributed for much of its history with the source code. To be sure, BSD code is covered under the BSD license, which allows free use, modification, and redistribution of the code, even under the terms of a new private license, whereas Linux code is covered under the GNU General Public License, also called Copyleft, which allows free use, modification, and redistribution of the code, provided that the modified code is also covered under the same license. Linux is therefore under more restrictive terms, which prohibit direct inclusion of BSD code in the kernel space (Copyleft will be discussed in more detail later).
Because the kernel was written from scratch, Linux is technically not Unix but a new operating system. Nonetheless, short of replicating the code from BSD, Linux owes much of its conceptual and architectural design principles to the example of BSD. At the same time, Linux has been careful to avoid the mistakes BSD has made. And finally, in its commitment to portability and simplicity, Linux has carried the torch, lit by the fathers of the original Unix, farther than any other system. The legacy of BSD and other Unix-based operating systems proved a valuable resource and inspiration for the development of Linux.
At the heart of the Linux project is active experimentation. The kernel must develop in the hands of innovative minds, and developers derive much joy and excitement from coding new programs. Experimentation spurs the kernel development as much as it stimulates the developers. As such, extra attention has been given by the core developers to maintaining the integrity and the modularity of the kernel to facilitate parallel hacking. I have already mentioned the stable and the experimental versions of the kernel, denoted by version numbers. Consistency of interfaces is another critical issue. Torvalds explains: "If someone wants to add something that involves a new system interface you need to be exceptionally careful. Once you give an interface to users they will start coding to it and once somebody starts coding it you are stuck with it. Do you want to support the exact same interface for the rest of your system's life?"
Of modularity he writes: "Without modularity I would have to check every file that changed, which would be a lot, to make sure nothing was changed that would [affect] anything else. With modularity, when someone sends me patches to do a new file system and I don't necessarily trust the patches per se, I can still trust the fact that if nobody's using this filesystem, it's not going to impact anything else."
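The logic Torvalds describes can be sketched in miniature. In the toy registry below (all names are hypothetical; this is an illustration, not kernel code), a contributed file system plugs into a fixed interface, so untrusted code stays isolated until someone actually uses it:

```python
# A toy plug-in registry. The core only records each module behind a
# common interface; it never needs to inspect the module's internals.
FILESYSTEMS = {}

def register_filesystem(name, mount_fn):
    FILESYSTEMS[name] = mount_fn

def mount(fstype, device):
    # Dispatch to whichever module registered this filesystem type.
    if fstype not in FILESYSTEMS:
        raise ValueError(f"unknown filesystem: {fstype}")
    return FILESYSTEMS[fstype](device)

# A contributed module adds itself without touching the core. If nobody
# mounts "toyfs", its code never runs, so a bug in it cannot affect
# anything else - exactly the trust boundary described in the quote.
def toyfs_mount(device):
    return {"type": "toyfs", "device": device, "files": {}}

register_filesystem("toyfs", toyfs_mount)
print(mount("toyfs", "/dev/sda1")["type"])  # -> toyfs
```

The design choice is that the core commits only to the registry interface, not to any module's behavior, which is what makes parallel, partially trusted contributions manageable.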
Like a Lego toy, modularity makes Linux an extremely flexible system. But to claim that these considerations were present from the beginning is to overestimate Torvalds's initial ambitions for his project. It was not until the kernel was rewritten, after an early attempt at modifying the system to run on another machine, or porting it, that portability and modularity came to the fore. As Torvalds explains, "[...] that rewrite was motivated by how to work with a growing community of developers." It was a management issue rather than a technical issue.
If so, Dawkins might be correct, once again, in pointing out that the evolvability of a system is a quality that itself undergoes evolution - a meta-evolution. Aside from the quality of its code, Linux is a credible system because its modularity allows constant improvement through parallel experimentation.
As suggested in the earlier discussion of the differences between adaptation and imitation, an underpinning element of experimentation by trial and error is feedback. When a new patch is submitted, Linus insists on testing the code personally, and when he releases a new kernel, he urges the developers to test its new functions on different types of machines. Code is shared and discussed before it is finalized in the system. In stressing the importance of feedback, Eric Raymond points to the role of peer review in academia. The terms are essentially synonymous.
Of course, feedback is vital to developers in any setting. The developer, whether in the Bazaar or the Cathedral, must examine and monitor the modified system closely to verify normal functioning. But it seems that, by decentralizing its efforts to local actors, the Bazaar model has reaped much more from feedback nested in microscopic interactions.
Feedback among Linux developers is facilitated by at least three factors that set the Linux project apart from previous ventures in software engineering. First of these is the Internet. There is no need to dwell at length on its significance as a medium of communication. From the beginning of the project, Linux has found its base on the Internet, and the project is deeply embedded in the expanding network of its developers online. Raymond reports that the growth of the World Wide Web from 1995 onward has been enormously critical to the Linux project.
The Internet brings us to the second, and much related, factor. In biology, genetic information is coded in specific sequences of nitrogenous base molecules and stored as a main component of DNA, the genetic raw material. In Linux, the system information is written in C, a high-level programming language that is translated into a machine language consisting of 0s and 1s. In both systems, relevant information is represented in discrete digits rather than continuously variable quantities. That is, they are both digital systems.
It is no coincidence that digital systems are particularly well-suited to evolutionary processes. Richard Dawkins identifies three essential qualities of a replicator that facilitate its propagation: fecundity, copying-fidelity, and longevity. Simply put, a replicator will spread more successfully the more copies it can produce, the more accurately it can produce them, and the longer these copies survive to replicate themselves. The nature of digital information satisfies each of these criteria to a high degree. In the computing world especially, advances in digital technology have dramatically reduced the cost of replicating and storing digital data and have brought data transmission to amazing speeds. The Internet has become a breeding ground for digital goods online, from music to graphics to text, and of course, software.
This much said, the Internet is but a barren land without the goods to fill its landscape. The Linux project has found fertile ground on the Internet only because the developers have committed to releasing the source code of the entire operating system. Recall that memes, like recipes, contain instructions. That is to say, they are unlike blueprints in that they are not linear representations of the end product. Unlike tracing a floor plan from an actual house, it is considerably more difficult to derive the exact recipe from a completed dish. Likewise, without the source code, developers are without much clue as to the actual ingredients of the system, unable to participate effectively in the project as programmers. In particular, they are unable to debug the system. Opening the source code not only allows developers to trace problems to exact lines in the code but, more important, also exposes bugs to more eyes, increasing the likelihood of their early detection. From Eric Raymond's paper: "Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. "Somebody finds the problem," he says, "and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge." But the point is that both things tend to happen quickly."
In January 1997, a virus attack was reported on a Linux newsgroup. Bliss turned out to be a relatively innocuous virus that infected Linux and other Unix-based systems alike. Bliss was rather atypical in that it kept a log of its actions and came with a disinfect capability. A newsgroup posting was found a few days after its discovery, in which the author of Bliss stated: "Bliss is not expected to survive in the wild. I have written this as proof that a Unix virus is possible, and because it is a fun program." While the virus alarmed the Linux community for a few days, its quick discovery, despite its relatively benign nature, also bore testimony to many that open source code effectively exposed problems at issue. Besides this incident, no other cases of virus attacks were reported by the Linux developers interviewed. While this is open to interpretation, it is certainly reasonable to attribute the lack of virus attacks in part to the robustness and the credibility of the open source system.
Earlier I argued that parallelism defines the single most notable difference between the Bazaar and the Cathedral. Raymond takes the idea further in claiming that parallel debugging, more than parallel coding, is "the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. [...]
In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena-or, at least, that they turn shallow pretty quick when exposed to a thousand eager co-developers pounding on every single new release [...]" 
We have hence identified the forces underlying the quality of Linux, on the one hand, and the speed of its development, on the other. The same evolutionary processes that invented our eyes are also at work in the Linux project. Changes are accumulated incrementally over a great many generations, or version releases, through parallel processes of variation, selection, and retention. Blind as it may be, the Watchmaker nonetheless ensures that quality is built into the product, for there is no shortcut before the eyes of the Blind Watchmaker.
There is much to behold in the elegance of evolutionary theory. It is such a simple idea that explains so much. Still, it takes a leap of faith to believe that Linux, for all its hype, is no different from fish and frogs in its development. Linux, we saw, develops via a process very much similar in nature to the one that produced fish and frogs, but where differences exist, they are indeed crucial. For one, we saw that, by virtue of foresight and individual learning in human actors, variation is more self-guided and less random than it is in nature. For another, the Bazaar model, by opening the source code, facilitates lateral exchange of memes - ideas, code, feedback - sustained in a medium conducive to digital communication. As such, memetic evolution gives a convincing account of the unparalleled speed of the kernel development.
In addressing the issues of development speed and product quality in the Linux project, we have found our answer in evolution. But we are left to wonder how, in a complex system so protean and so volatile, order might arise against the currents of what Holland calls the perpetual novelty of evolution. In a Bazaar as great as the Linux project, wouldn't the clamor of parallel activities simply overwhelm the people and disrupt their transactions? Under the weight of increasing complexity, norms have nonetheless emerged across thin networks on the Internet.
As I mentioned in the introductory chapter, there is another side of evolution that has been neglected until recently, namely self-organization. If blind evolution is the Dionysian, spontaneous self-organization is the Apollonian, the source of restraint and order. Whereas evolution searches and explores, self-organization locks in. And in doing so, self-organization is mediated by positive feedback loops. We have already seen an example of self-organization at the individual level - learning. In the next chapter, we will consider the role of self-organization at large, with particular attention to the emergence of norms regulating the Linux community.
Chapter 5
"Most of the beautiful order seen in ontogeny is spontaneous, a natural expression of the stunning self-organization that abounds in very complex regulatory networks. We appear to have been profoundly wrong. Order, vast and generative, arises naturally."
- Stuart Kauffman, At Home in the Universe
Recall the story of the VCR market from Chapter 3. In a newly emerging market for VCRs, split roughly equally between two different systems, minor fluctuations in early sales cascaded across the market. As one chance event led to another, the VHS system grew and the Beta system dwindled, until the pattern became a self-sustaining cycle. Despite its technical inferiority to the Beta system, the VHS system came to dominate the market. Today, virtually every VCR produced is a VHS system.
The idea that minute changes are rapidly amplified through densely connected agents is known variously as positive feedback, cascaded increasing returns, echo effect, amplifier effect, or, more generally, nonlinearity. It was introduced earlier as one of the defining qualities of a complex system. Not surprisingly, positive feedback is implicit in the theory of evolution. Successful traits proliferate, which gives birth to more versions of the trait. But as central as positive feedback is to evolution, it has largely been neglected until quite recently. As Stuart Kauffman notes, "Darwin didn't know about self-organization." 
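The lock-in dynamic at work in the VCR market can be illustrated with a toy simulation in the spirit of a nonlinear Polya urn (the function and its parameters are illustrative assumptions, not a model drawn from the text). Each new buyer favors the format with the larger installed base, and a chance early lead amplifies itself into dominance:

```python
import random

def market_share(steps=20000, feedback=2.0, seed=None):
    # Nonlinear Polya-urn sketch of increasing returns: each new buyer
    # adopts a format with probability weighted by (installed base)^feedback.
    random.seed(seed)
    vhs, beta = 1.0, 1.0  # start from parity
    for _ in range(steps):
        w = vhs ** feedback
        if random.random() < w / (w + beta ** feedback):
            vhs += 1
        else:
            beta += 1
    return vhs / (vhs + beta)

# Identical rules, different early chance events: every run locks in to
# near-monopoly, but which format wins can differ from run to run.
for s in range(5):
    print(s, round(market_share(seed=s), 3))
```

No run settles near an even split: once the tiny initial imbalance is amplified, the outcome is stable and path-dependent, determined by early accidents rather than by any central decision.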
Where evolutionary theory is concerned with positive feedback, it seeks to explain population changes, i.e. how systems evolve from one phase to another. The underpinning motivation is akin to chaos theory, although not quite as extreme, in suggesting that positive feedback ensures a fundamental indeterminism at the system level. An evolutionary system searches and explores, rummaging through rugged terrain for new possibilities.
In contrast, the emerging theory of complexity seeks to bring evolution to terms with another aspect of positive feedback, namely, self-organization. Short of being determinate, a self-organizing system is nonetheless stable and path-dependent. Like the VCR market, it locks itself into a self-reinforcing equilibrium in the pattern of its own processes, without external intervention and without central authority. Stuart Kauffman goes so far as to claim that self-organization has little or nothing to do with natural selection. Order is not shaped by selective forces but simply builds on the initial configuration of the system. It is questionable whether selection can be ruled out completely as such, but Kauffman seems nonetheless correct in suggesting that chance events can provide a solid basis for coherent order to emerge. Hence the case suggests itself for a close link between parallelism from the previous chapter and self-organization from the present chapter: whereas variation explores multiple resources, positive feedback exploits a particular resource. In this chapter, we will explore how self-organization makes for a spontaneous order, self-imposed from the bottom up.
Homesteading the Noosphere
Eric Raymond observes a curious contradiction in open source projects. In "Homesteading the Noosphere," he reports: "Anyone who watches the busy, tremendously productive world of Internet open-source software for a while is bound to notice an interesting contradiction between what open-source hackers say they believe and the way they actually behave - between the official ideology of the open-source culture and its actual practice [...]"
What hackers believe is expressed in the Open Source Definition (OSD), derived in 1997 from common elements of existing open-source licenses, including the GNU General Public License used on Linux. Motivating the OSD is the idea that anyone can hack any open-source software. An open-source program must hence allow unrestricted modification and redistribution along with, of course, the entire source code.
The actual behavior of hackers, on the other hand, displays a social pattern rather contrary to the ideology of the OSD. In what is purportedly a culture intolerant of secrecy and hoarding, one nonetheless notices a norm of ownership regulating "who can modify software, the circumstances under which it can be modified, and (especially) who has the right to redistribute modified versions back to the community." The norm of ownership manifests itself as a set of peculiar taboos: "1) There is strong social pressure against forking projects [Author's note: forking refers to the splitting of a project from the same source code into parallel projects]. It does not happen except under plea of dire necessity, with much public self-justification [...]
2) Distributing changes to a project without the cooperation of the moderators is frowned upon, except in special cases like essentially trivial porting fixes.
3) Removing a person's name from a project history, credits or maintainers list is absolutely not done without the person's explicit consent." 
Considering its divisive nature, it is understandable that forking is a matter of great controversy in open source projects. But juxtaposed to the other taboos, the motivation against forking begins to blend into the more general pattern of project ownership Raymond describes so convincingly. It is no longer an issue just for the community collaborating to maximize its output, but also for the individual claiming his share on the frontiers of open-source software ventures.
Here, Raymond's analogy to the Lockean theory of property is quite well placed. English political philosopher John Locke theorized that there are three ways to acquire ownership of land: by homesteading land that has never been settled and preparing it for one's own use, by transfer of title, and by adverse possession, if one improves and claims title to a piece of land that has been abandoned. Raymond recognizes similar dynamics in open-source projects. Only, what people own are programming projects instead of land. In the hacker culture, "the owner(s) of a software project are those who have the exclusive right, recognized by the community at large, to re-distribute modified versions" (emphasis in the original).
The most curious historical parallel between the theory of land tenure and the hacker customs, however, is that they both "evolved organically in a context where central authority was weak or nonexistent." If the absence of external intervention characterizes frontiers, so do the social orders that spontaneously emerge in the matrix of daily interaction. Surely, in the Linux case, as in actual land settlement, patterns of demographic distribution are less random than they might appear. For example, people on the frontier may be self-selected for certain qualities or personalities. Nonetheless, to the extent that criteria for self-selection are subjectively determined, populations that inhabit the frontiers are also random and spontaneous. And yet, it is a spontaneous pattern as such that sets the basis for a global order in the community. As one Linux developer explains:

"If there is something that needs to be done, one of us simply steps forward and does it. Given the entire community, he might not be the absolute best person to do the task technically, but if we see that he is doing the job well, we continue to send him patches and assure him credit for his efforts because we know that he is a volunteer like the rest of us."
It is not a mere coincidence that Alan Cox and others in the Inner Circle entrusted with large projects today are among the first Linux volunteers. As technically qualified as they may have been, it was also "being in the right place at the right time" that led these individuals to the Linux project in the first place. And yet, today, they are names that command much respect in the Linux community. The length of their service has given them a competitive advantage in the form of reputation, experience, and knowledge over others new to the project. As maintainers, then, their voices are heard across the community.
We have already discussed the importance of maintainers as filters for Torvalds at the top. Common practice in the Linux project dictates that new patches of code are first submitted to particular maintainers for approval before they are sent to Torvalds. To this, Raymond adds a particularly critical implication of territoriality:

"According to the standard open-source licenses, all parties are equals in the evolutionary game. But in practice there is a very well-recognized distinction between 'official' patches, approved and integrated into the evolving software by the publicly recognized maintainers, and 'rogue' patches by third parties. Rogue patches are unusual, and generally not trusted."
Sensitivity to initial conditions is a hallmark of dynamic, complex systems. From the chance events that found the early volunteers of the Linux project and hence shaped its initial condition, the Linux community has locked itself into increasingly coherent and stable patterns of interaction around maintainers. In effect, Torvalds has found a self-organizing system of collaboration and spontaneous order around himself without consciously intending to build such a system.
We are thus reminded of evolution at two distinct levels in the Linux project. The previous chapter focused mainly on the evolution of Linux into a first-class operating system through massive parallelism. This chapter shifts the focus to the evolutionary processes at the developer level, i.e. how evolution shapes not the source code, but the community and its curious culture.
That much said, the norm of ownership leaves us with fundamental questions. On the one hand, why are open-source developers so keen on sharing and contributing to a public project? What is the incentive for the volunteers? On the other hand, what motivates territorial behaviors in a culture given to selfless sharing? If the norm of ownership is not externally sustained, as Raymond suggests, we must trace its origin to the internal dynamics of the Linux project. And finally, how do these motivations come to terms with each other in shaping a stable developer community even while motivating conflicting behaviors?
These questions might be rephrased in the language of what is known as the public goods dilemma in game theory. A public goods dilemma refers to situations in which individual interests are inconsistent with collective interests. Suppose that people join groups in order to jointly produce a good they cannot produce alone. Yet, once produced, a public good is by definition available equally to those who contributed and those who did not. Each individual might therefore be better off free riding, enjoying the good without shouldering the burden of producing it. But of course, if everyone free rides, the good is never produced.
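The payoff logic of this dilemma can be made concrete with a small sketch. The numbers below (group size, cost, value, and the production threshold) are hypothetical, chosen only to illustrate the structure of the dilemma, not drawn from any empirical case:

```python
# A minimal public goods dilemma (all parameters hypothetical).
# Each of N players either contributes (paying COST) or free-rides.
# The good, worth VALUE to every player, is produced only if at
# least THRESHOLD players contribute.
N, COST, VALUE, THRESHOLD = 10, 3, 5, 6

def payoff(contributes: bool, total_contributors: int) -> int:
    """Payoff to one player, given her choice and the total number of contributors."""
    produced = total_contributors >= THRESHOLD
    return (VALUE if produced else 0) - (COST if contributes else 0)

# If nobody contributes, nobody gets anything:
assert payoff(False, 0) == 0
# If everybody contributes, each player still comes out ahead:
assert payoff(True, N) == VALUE - COST
# Yet whenever her own contribution is not pivotal, a player does
# strictly better by free-riding:
assert payoff(False, THRESHOLD) > payoff(True, THRESHOLD)
```

Individually rational free-riding, adopted by everyone, thus leaves the good unproduced and everyone worse off than under universal contribution.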
The public goods dilemma poses two challenges. First, the individual members must be motivated to contribute to collective efforts despite the temptation to free-ride, and despite the efficacy problem, i.e. the perception that one person cannot make a visible difference to the community. Second, they must somehow coordinate their efforts if they are to produce anything more than a heap of junk. Hence, the fact that people are individually interested in the public good does not necessarily guarantee that the good will be provided. Only selective incentives (money, social sanctions, etc.), according to Mancur Olson, can ensure cooperation. Even then, collective efforts can fail if people fail to coordinate. The idea that they are contributing to a lost cause alone can prompt people to withdraw their commitment.
Noting the centrality of motivation and coordination in collective action, Mancur Olson argued that large groups are less likely to provide public goods. First, in large groups, individual members can more easily free-ride without noticeably undermining collective action or getting caught. Second, large groups pose a greater problem of efficacy for the individual members. Third, large groups are more difficult to coordinate. Although these insights might no longer strike us as particularly innovative, their historical significance should not be overlooked. As Marwell and Oliver explain, "Before Olson, almost all social scientists assumed that people would instinctively or naturally act on common interests, and that inaction needed to be explained. After Olson, most social scientists treat collective action as problematic."
Linux: "the impossible public good"?
Linux provides an intriguing case study in collective action for a number of reasons: 1) To this day, the Linux community consists almost entirely of volunteer programmers, none of whom are financially compensated or under binding obligations; 2) despite its burgeoning size, the Linux community has no formal organization to coordinate efforts or central authority vested with decision-making power; and finally, 3) Linux is a public good in the truest sense. On the one hand, its production is dependent on the efforts of a large community of contributors. On the other hand, Linux is available to anyone at no cost, and, as a digital good, it can be reproduced practically infinitely. It is with respect to these qualities that social psychologist Peter Kollock has called Linux an "impossible public good."
To be sure, Mancur Olson's treatise has become a subject of much debate and criticism in contemporary collective action and social movement theory. As Marwell and Oliver contend, Olson is particularly misleading in assuming that "every individual finds that the cost of action exceeds the benefit to him of the collective good and [...] that individual benefits cannot be increased by coordinating actions with others." These are assumptions that hold only under particular circumstances. Relaxing these assumptions yields an entirely different perspective on the public goods dilemma, one that reverses Olson's conclusions.
A public good is typically characterized by jointness of supply and non-excludability. Jointness of supply refers to the extent to which one person's consumption of the good diminishes the supply available to others. Zero jointness of supply means a single consumer can deplete the public good. Perfect jointness of supply means any number of people can consume the good without depleting it. Non-excludability refers to the extent to which a person cannot be prevented from consuming the good. A classic example of a public good is a lighthouse, which is non-excludable and pure in jointness of supply: it benefits any foreign ship just as well as local sailboats, at no added cost to the local users.
Under the assumption of the rational egoist, non-excludability is a strong disincentive to contribute, since the actor is able to enjoy the good regardless of whether she contributes or not. Hence the paradoxical conclusion that self-interest alone does not attain the public good. The logic of this argument, however, holds only so long as we assume a homogeneous group of egoists troubled by the idea of other people free-riding at their expense. It is more likely the case that, given a group of people in collective action, some are more interested in the public good than others, and that, for some, the benefit of the good might exceed the cost of action. As long as the good is high in jointness of supply - that is, the value of the good is not diminished by others' consumption - highly interested people may be willing to contribute, even if the good is non-excludable.
For Marwell and Oliver, then, "the problem of collective action is not whether it is possible to mobilize every single person who would benefit from a collective good [...] Rather, the issue is whether there is some social mechanism that connects enough people who have the appropriate interests and resources so that they can act." In other words, what matters in the public goods dilemma is not the free-rider problem, but what is known as the efficacy problem. Efficacy refers to the feeling that one's contribution makes a noticeable difference to group performance. If the actor sees that her contribution has a noticeable effect on the group performance, that her efforts count, she will continue to contribute. Michael Macy has brought the discourse of efficacy to evolutionary ground by reversing this logic, claiming that it is not individual efficacy that propels collective action, but group efficacy that motivates individual contribution. That is, the actor contributes if the group appears successful or promising. The process is one of social learning through feedback and reinforcement.
If the problem of collective action is not free-riding but efficacy, larger groups are more likely to succeed. The argument turns Olson's conclusion around. For Marwell and Oliver, however, the main issue is not the size of the group per se, but its resources and social organization. In a heterogeneous population, larger groups are more likely to have 'outliers' who have high levels of interest and resources. If only those people can find each other and be assured that enough of them will pitch in to the collective cause, larger groups are more likely to attain the critical mass of resources needed to jump-start the collective action.
Considering the Linux project in view of the efficacy discourse of Marwell and Oliver, what appears to be a shapeless collection of part-time volunteers becomes a rich source of interest and resources embedded in a network-rich environment. As Raymond notes, "Linux was the first project to make a conscious and successful effort to use the entire world as its talent pool." Given the global community, the odds of finding the likes of Torvalds and Alan Cox might still be extremely low. But it takes only a handful of them, perhaps even one, to clear the land. Once the land is cleared, there stands little in the way of people planting the seeds and tending the garden. Quoting Torvalds on open source projects:

"Somebody (usually one person) wrote the basic program to the state where it was already usable. The net community then takes over and refines and fixes problems, resulting in a much better program than the original, but the important part is to get it started (and channeling the development some way). The net works a bit like a committee: you'll need a few dedicated persons who do most of the stuff or nothing will get done."
Mancur Olson, too, was aware of the problem of efficacy. While largely underplaying its importance, Olson mentioned a special case of efficacy, the privileged group, in which a member is able and willing to provide the entire public good by him/herself. Peter Kollock notes that, while privileged groups are unusual in the offline world, they have become a common phenomenon on the Internet, with vital implications for collective action:

"The fact that many digital public goods can be provided by a single individual means that in these cases there are no coordination costs to bear and that there is no danger of being a sucker, in the sense of contributing to a good that requires the efforts of many, only to find that too few have contributed [...]"
Of course, Linux is hardly a product of a few dedicated volunteers, let alone a single individual. The release of the first Linux kernel less than a decade ago started a continuous process of parallel development and debugging by countless volunteers, whose individual contributions, however, are too minor to make noticeable differences on the scale of an entire operating system. What escapes Marwell and Oliver's argument in the end, then, is that, while large group size is an asset insofar as a large group is more likely to find the likes of Torvalds who can launch collective action, it might also become a liability. Providing a public good is not simply a matter of pooling efforts, but also of coordinating efforts. And large groups are obviously more difficult to coordinate.
In Linux, however, there is a strong case that developing software through evolutionary processes does not require globally coordinated efforts. As we have seen, the power of evolution lies in incremental changes through cumulative selection, not single-step selection. What is important is how agents evolve from generation to generation; from what they evolve is less critical. It is not difficult to see that evolution is not particularly resourceful. It does not anticipate or take short-cuts. Changes are gradual and stepwise. But so long as the community can harness enough resources from its members, so long as people can find each other with similar interests, evolution can roll on under its own weight without global coordination. By the same logic, Torvalds, by building a buggy but working system and releasing its source code early on, presented what Raymond calls a "plausible promise" that "[Linux] can be evolved into something really neat in the foreseeable future." Once the project kicked off, it was only a matter of time before a mature operating system came to be in the hands of a growing community.
With the paradox of group size resolved for Linux, the problem of motivation also begins to lose its urgency. As we defined the problem in the context of the public goods dilemma, people were by assumption unwilling to contribute to collective efforts without selective incentives. In comparison, the critical implication of Marwell and Oliver's argument is that people have different levels of interest in the public good and that, in a large group, somebody is willing to provide the good, even if others are ready to free-ride. The question is no longer why people contribute, but whether they can find each other and pool their resources, for "most public goods will never be provided by individuals acting in independent isolation."
From this perspective, the idea that the Linux project is a dilemma is perhaps misplaced, especially in view of its volunteer labor force. Robert Axelrod identifies "voluntary membership in a group working together for a common goal" as one of eight mechanisms that sustain cultural norms. Given the aim and the philosophy of the Linux project, self-selection might be enough to ensure that the volunteers are willing to cooperate with one another and share their code openly.
Which still begs the question, "Why do the volunteers volunteer?" To be sure, each person presumably has a number of personal reasons for deciding to contribute to Linux. It is certainly senseless to look for a single motivation to account for all Linux developers. As Jon Elster reminds us, "cooperation occurs when and because different motivations reinforce each other."  Nor do I intend to cover the full range and the depth of motivational theories in what remains of our discussion. At the risk of giving a shallow impression of the rich complexity of motivations at work, the remainder of our discussion instead seeks to at least give a feel for how different motivations might reinforce each other to bring about the norms of cooperation and ownership in Linux.
Motivation for hacking
Perhaps the most obvious reason for anyone contributing to the Linux project is enjoyment. Programmers program because they enjoy programming; the activity directly yields utility, and is its own reward. Michael Hechter refers to such goods as immanent goods.
As intuitively appealing as the explanation might be, however, enjoyment is an inadequate explanation by itself in that it fails to account for the norm of cooperation and the norm of ownership. Given the apparent inconsistency between the norms in the hacker culture, to say that developers enjoy sharing their code or claiming ownership of their projects amounts to an ad hoc explanation at best.
Another motivation commonly heard is that people's responsibilities at their primary jobs require Linux programming. According to reports, there seem to be at least two types of Linux programmers who are paid for their service. The first is the programmer who is hired specifically to develop Linux distributions, such as RedHat Linux and SuSE Linux. Although such developers are primarily concerned with adding new features to the existing kernel for commercial distribution, many of them play active roles in the actual kernel development. The second, more common type of Linux programmer is not hired specifically to develop Linux products, but his day-to-day responsibilities include occasional Linux programming. Because these programmers are not volunteers in the true sense, however, the logic of game theory suggests that they are more likely to free-ride than to contribute. The fact that many have become regular contributors, as my interviews found, indicates other motivations at work.
Peter Kollock proposes several motivations as well. "One possibility is that a person is motivated to contribute valuable information to the group in the expectation that one will receive useful help and information in return; that is, the motivation is an anticipated reciprocity." In the case of an online community mediated by remote (that is, occurring at some distance) and anonymous interactions, reciprocity is embedded in what is known as generalized exchange, in which exchanges occur between a person and the group rather than between two people. One's service to someone in the group is reciprocated by someone else in the group. As Kollock himself notes, however, in the absence of strong regulation, this system of exchange is substantially less stable than offline dyadic exchanges in that the temptation to free-ride looms larger than the assurance of reciprocity from a stranger in the group, making cooperation all the more unlikely.
A second motivation is reputation. The idea is that online communities are driven not by scarcity but by abundance, because digital goods can be reproduced practically infinitely. And in moving from scarcity to abundance, the basis of social relations shifts from dyadic exchange to gift giving, in which social status is associated not with "what you control but [with] what you give away." The norm of cooperation is shaped by the reputation game because people compete for peer recognition. For them, reputation is a real payment. Forking, deleting names from the credits file, and other behaviors become taboos to the extent that the community stands against those who undermine the reputation game. This is the gift culture within which Raymond finds the Linux community.

At the same time, reputation as a motivation to contribute has certain difficulties as well. Granted that one's reputation in an online community might have spill-over effects in real life, managing reputation seems far too difficult to be an end in itself. How does one - and does one ever? - assess the net benefit of doing a certain task in terms of reputation? To speak of "a calculus of reputation," as Ghosh does, implies a model of human behavior perhaps too rationalist in spirit.

Michael Macy's idea of group efficacy presented earlier suggests perhaps a more promising explanation for motivation. Whereas the concepts of generalized exchange and the reputation game suggest a forward-looking approach to cooperation on the tenuous grounds of rational expectation, group efficacy finds its foothold in the evolutionary dynamics of reinforcement. In particular, the efficacy argument suggests that, in the Linux community, cooperation is a strategy selected against alternative strategies, such as hoarding, because cooperation obtains, through peer reviews and joint hacking, better programs.
The case for the efficacy argument is strong indeed. In The Cathedral and the Bazaar, Eric Raymond notes the unusually short intervals between kernel releases in the Linux project. In the commercial software industry, version upgrades are released every several months, sometimes not for a year or two, so as to spare the patience of the users. But in the Linux project, Raymond observes that frequent releases serve to stimulate and reward, rather than frustrate, the contributors at the sight of their efforts realized in new kernel releases. The sense of efficacy is a function of their own satisfaction and peer approval.
The final motivation under consideration is attachment to the Linux community and the degree to which developers identify with the culture of computer programmers. Unlike the other motivations discussed above, identity represents a collective basis for cooperation. That is, the unit of analysis now is the community rather than the individual; the focus here is on how the individual finds his rationale for cooperation in his association with the larger community. Below I outline one particular approach to the hacker culture in the hope that it will place the Linux project in a historically and culturally more coherent context.
The so-called New Class Theory owes much to the Marxist tradition of sociology. Its basic scenario is the class conflict between those who control the means of production and those who do not. But it is motivated by at least two inadequacies of the Marxist theory. First, in an overwhelming majority of revolutionary struggles in the past century, the proletariat has failed to provide or even represent the core of class struggles. Second, Marx overlooked a category of workers that comprises a growing segment of the productive force in modern capitalism, namely intellectuals. The term intellectual is applied rather broadly to include a wide range of knowledge workers, or experts - academics, scientists, physicians, technicians, engineers, etc. Despite their occupational diversity, these experts have "attained a uniformity of thought and action" sufficient to warrant them a place separate from both the bourgeoisie and the proletariat. That is, the intellectuals comprise a legitimate class with distinct ideologies directly associated with their productive functions in society.
In particular, the ideologies of the New Class express unique concerns for its peculiar role as the class that belongs to neither the bourgeoisie nor the proletariat. Instead, the New Class finds itself between the other classes, sharing some of their class features while also shouldering their opposing interests. Like workers, intellectuals are employed by capitalists for their service, but like capitalists, intellectuals exercise significant control over issues of production. Yet, whereas capitalists simply own the means of production, intellectuals are directly involved in the actual processes of production. And whereas capitalists are concerned primarily with financial efficiency, intellectuals are much more concerned with technological efficiency. The New Class believes that science and technology, rather than profit, should be the basis of productivity. Intellectuals hence resent the financial concerns capitalists impose on decisions of production and lament that only in a world of technocracy will they ever command the respect and authority they feel their knowledge and expertise deserve.
Because of their close association with production, engineers have been particularly active in the discourse of the New Class. Their frustration is generally traced to the practice of Scientific Management by Frederick Taylor, aimed at increasing control over productivity and quality through standardization of manufacturing methods, division of labor, close observation of the workplace, and so on. In effect, scientific management seeks to centralize control over the actual execution of productive processes in the hands of management while stripping the workers of their independent decision-making.
Perhaps more than other types of engineers, computer programmers have found the principles of Scientific Management very much contrary to their own work habits and values. Today, the term "hacker" carries negative connotations - those of mischief and criminality - but the original hackers were researchers, students, engineers, and other computer professionals who formed informal circles in which to exchange ideas, promote knowledge, solve problems, and advance the science and the art of programming. At MIT, where the term "hack" first came into use to denote writing code, usually with a certain level of craftsmanship, students stayed up late writing programs, critiquing what others had written, and competing to write the best code. But the competition was always friendly and in due compliance with the implicit ethical standards hackers espoused firmly from early on. The so-called Hacker Ethic consists of a set of norms and beliefs, as discussed by Steven Levy: 
- All information should be free.
- Mistrust authority: reject hierarchies and promote decentralization.
- Practice is better than theory - always yield to the Hands-On Imperative.
- You can create art and beauty on a computer.
- Hackers should be judged based only on merit, i.e. hacking skills.
- Computers can change your life for the better.
It is debatable whether their particular work habits preceded the Hacker Ethic, or vice versa, but regardless, intellectual freedom has become an immensely important value, morally and pragmatically, for hackers. It is understandable why hackers and programmers so deeply resent the growing proprietary control that places legal restrictions on source code and disrupts hacker communities through rigid division of labor and specialization under corporate management.
The Linux project is therefore not simply a hobby for hackers. It also represents an attempt at reaffirming the hacker identity and expressing a legitimate concern at the brutality of capitalism that has undermined the practices of sharing and helping. And in doing so, hackers form a distinct culture of their own, a culture sustained by the Hacker Ethic. The motivations discussed earlier thus begin to converge on a common ground in the New Class argument. In particular, the Hacker Ethic prescribes sharing and cooperation, enforcing the norm of reciprocity and providing a meaningful context for the reputation game.

Of course, one common problem of such normative explanations is the problem of reification, the tendency to treat a group or an aggregation as an autonomous entity without establishing its microscopic foundations in the individual actors and their interactions. For what does it really mean to claim that a certain culture values freedom? It means that the individuals in the culture value freedom. But without specifying the mechanism of acculturation in the culture, reification only describes, but does not explain, the emergence of macro phenomena.
Once again, efficacy, bound in the processes of memetic evolution, might be one way out of the problem of reification in the Linux case. I concur with Raymond here in tracing the origin of the norm of cooperation not to the moral ideologies of the hacker community, but to the simple fact that it ensures higher quality in the product:

"Perhaps in the end the open-source culture will triumph not because cooperation is morally right or software 'hoarding' is morally wrong [...] but simply because the closed-source world cannot win an evolutionary arms race with open-source communities that can put orders of magnitude more skilled time into a problem."
The imperatives of the Hacker Ethic do not directly motivate hackers to share code; rather, they provide a meaningful context for Raymond's reputation game to unfold. What matters to Linux hackers is not reputation for programming skills per se, but the idea that, in playing the game, they are upholding and respecting intellectual freedom as a basic value for the community. The Hacker Ethic provides the "rules" of the reputation game, and conversely, the reputation game defines and expresses what is valued and expected in the community, thus providing a means for acculturation. Consistent with the learning model of efficacy, then, compliance with the Hacker Ethic, intended or unintended, aggregates into a successful outcome that affirms the underlying values of intellectual freedom, which in turn sustains the norm of sharing.
Thus, the evolutionary model of group efficacy drastically alters the pattern of motivation suggested by rational-choice theory. For rational actors, developing Linux is only a means to an end, be it reputation or the desire to become a Linux programmer. The actors choose to contribute only if they feel that they can make a noticeable difference and derive a net positive return from doing so. The rationalist model might be represented thus:
In comparison, the evolutionary model of stochastic learning advanced by Macy and others suggests that the pattern of motivation is reversed and recycled through reinforcement. Group efficacy means that the success of the project signals to new hackers that the project is worth joining and to old members that it is worth continuing. The hackers contribute because they are interested in the operating system for what it is, but they continue to contribute because the Linux project also affirms the value of intellectual freedom and the norm of sharing, which are reinforced in each individual by peer reputation. The community is gradually locked into a recycling pattern of positive feedback.
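This stochastic learning dynamic can be illustrated with a toy simulation. The model below is a minimal Bush-Mosteller-style sketch with assumed parameters, not Macy's actual specification: each agent holds a propensity to contribute, the group "succeeds" when enough agents contribute, and success reinforces the propensities of those who contributed, pulling the population toward cooperation:

```python
import random

random.seed(0)  # deterministic run for illustration

# Hypothetical parameters: 50 agents, 200 rounds, a modest learning
# rate, and "success" whenever at least 20% of agents contribute.
N, ROUNDS, RATE, THRESHOLD = 50, 200, 0.1, 0.2

propensities = [0.5] * N  # everyone starts undecided

for _ in range(ROUNDS):
    contributed = [random.random() < p for p in propensities]
    success = sum(contributed) / N >= THRESHOLD
    for i, did in enumerate(contributed):
        if did and success:        # effort paid off: reinforce
            propensities[i] += RATE * (1 - propensities[i])
        elif did and not success:  # wasted effort: discourage
            propensities[i] -= RATE * propensities[i]

mean_propensity = sum(propensities) / N
# The population locks into a high-cooperation equilibrium:
assert mean_propensity > 0.9
```

The point of the sketch is the feedback loop: early success makes contribution more likely, which makes further success more likely, a recycling pattern of positive feedback rather than a forward-looking calculation.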
I doubt that the rationalist account can be discounted entirely. But the evolutionary model of motivation offers a strong alternative explanation for the growing size of the Linux project as well. Before the end of the chapter, I will offer an analysis of the interviews with Linux developers I conducted for a closer look at the issue of motivation.
Copyleft - "All rights reversed"
In the software industry, the New Class ideology finds perhaps its purest expression in the GNU General Public License, written by the Free Software Foundation under Richard Stallman in 1989. Part legal document, part political manifesto, Copyleft formulates into concrete and binding terms the norm of cooperation in hacker communities that had begun to lose ground against the growing tide of commercialism. While it is not our interest to dissect the license term by term, it is important to realize its social significance. Copyleft, in summary, provides the basic terms of the Open Source Definition presented earlier. That is, a program under Copyleft must allow unrestricted modification and redistribution with the entire source code. In addition, Copyleft requires that works derived from the program also be covered under its terms. Copyleft thus ensures that no party can privatize or re-license the code under different terms. Once copylefted, the source code is forever available to the public. It is no surprise that, since Torvalds applied Copyleft to his operating system within half a year of its beginning, the Linux project has never had or needed anybody in charge of ensuring that the source code complies with proper legal terms, for any modified code taken from Linux is automatically placed under the same terms. In effect, Copyleft assures developers much long-term credibility in Linux as an evolving project.
Interviews: Self-organization revisited
From the linux-kernel mailing list archive, people were selected and contacted for an electronic mail interview if they had more than three postings in a given week, for five randomly selected weeks in the past three years. Unfortunately, considering the size of the developer base in the Linux project, the data set is perhaps too small - 32 responses - and too heavily confounded by self-selection on the part of the interviewees to provide an unbiased sample. But methodological issues aside, the responses nonetheless suggest a rich diversity of motivations. Every respondent identified not with a single motivation but with a list of motivations - again reminding us of the inadequacy of any single motivation to explain the Linux project.
Of 32 respondents, all reported personal satisfaction and enjoyment as a primary motivation for their involvement in the Linux project. But as Raul Miller elaborates, "Fun means that I can spend my time doing real work, not trying to work around someone else's bugs that I can't address." In other words, the pragmatic value of Linux, i.e. the fact that Linux allows the user to customize freely, enters one's utility equation. Enjoyment is tightly coupled with the intellectual freedom Linux affords its users.
Pragmatism is indeed a widely shared sentiment among Linux developers. Only three out of 32 reported that their job requires Linux programming, although many others acknowledged that their involvement has been invaluable for their jobs. Five more suggested that they have chosen to use Linux at their jobs, not necessarily because they identify with the Linux project or the hacker culture, it seems, but out of dissatisfaction with proprietary operating systems that provide no source code. An anonymous interviewee wrote: "I use Linux for my job [...] because I'm fed up of fighting bugs on proprietary software [...]. The most important [motivation] for me is to have been bitten by closed source software often enough to even consider [using] it."
Another claimed: "I like to be able to go under the hood to fix bugs that affect me. This is only possible if the source code is available."  In Raymond's words, "Linux lets [them] scratch their own itches."  A powerful motivation, indeed.
Twenty-four people admitted, some strongly, that, while they use Linux themselves, they are morally comfortable with other people using proprietary software: "I'm more pro-choice than anti-proprietary-software."  Two people went so far, perhaps in identifying with the pragmatic value of Linux, as to "violently reject the [ideology], knowledge wants to be free"  as a moral sentiment. This sort of statement is "only heard in the context of pirates stealing other people's work. This is about as far as you can get from the philosophy of free software."  The developers thus identify with Linux as a more credible alternative to proprietary software, but not so strongly, it appears, with its moral philosophy or historical roots per se.
The developers interviewed seemed to identify closely with the idea that sharing code helps author and user alike in improving the program, for "the user and the developer are one and the same in an open-source project." By sharing, they enhance the value of their work insofar as it is exposed to a wider audience for faster debugging and improvement. An anonymous respondent wrote, "If I can't fix the problem myself, I can always ask for help. As long as you share the code, there is always someone willing to help you, or at least, interested in what you are doing." The credibility of the Linux project thus comes not only from its open source code that allows individuals to "go under the hood," but just as much, if not more, from allowing people to share and cooperate on problems.
Copyleft, the developers consistently reported, is a critical part of their motivation to contribute to the Linux project. The reason also seems more pragmatic than ideological, however: "I feel more comfortable sharing code, knowing that it will not be misused against me later." Chastain notes explicitly that the GPL is "resistant to free riders" in that it ensures pure jointness of supply in all modifications and derived works. Given the cost of maintaining a forked project and the ease of exchanging open source code, the GPL effectively discourages casual forking.
The only explicit hint of generalized reciprocity as a moral imperative came from a developer noting that "I feel that if I got something for free I should return something back ... It's my main motivation." 
Reputation proved important for the Linux developers - but not necessarily, it seems, as a motivation for sharing per se. Three people claimed that reputation is a primary motivation, one of them identifying explicitly with the idea of gift culture. Another stated, "I suppose that there is a pride factor involved. It's nice to know that other people are getting value out of something you've done." From an anonymous developer: "Competition is one of the primary drives for improvement. Had people not for years bragged about BSD having better/faster networking code than Linux, I doubt Dave Miller would have worked so hard improving portions of the Linux networking code in the early 2.0.3x releases."
What fuels many developers is the idea that they can create beauty and quality in programming. Programming for them is not only an instrumental goal, but an expressive end in itself. Competition directly incites their passion for coding.
Implicit in many of these responses, however, is also the idea that reputation is often an inadvertent benefit of one's involvement. As many as 16 people, while acknowledging reputation as an integral element of their social system, suggested, some rather emphatically, that reputation is secondary to more pressing, technical issues. "It is nice to be recognized as doing a good job on the kernel," wrote Stephen Tweedie, "but that isn't a primary motivation." From Robert Jeffrey Morriss: "My primary motivation, really, for doing things which help Linux is that those things help solve a problem I have [...] My favorite editor isn't regularly packaged in RPM format? Well, I package it. Once I've done that, it's not much work to upload it to the Internet where others can download it and save [them] the trouble of building it. Then other people find something I did useful. That's a plus, too. I've only gotten feedback a couple of times, but just knowing that it's being used makes me happy ... I can't explain why."
Eric Raymond also notes in Homesteading the Noosphere: "You may not work to get reputation, but the reputation is a real payment with consequences if you do the job well. This is a subtle and important point. The reputation incentives continue to operate whether or not a craftsman is aware of them; thus, ultimately, whether or not a hacker understands his own behavior as part of the reputation game, his behavior will be shaped by that game."
Two faces of reputation seem to emerge from these findings. On the one hand, opportunities for reputation motivate programmers to contribute to the Linux project, as suggested by rational-choice theory. On the other hand, reputation also locks people into particular patterns of collaboration and interaction through reinforcement. Robert Axelrod argues that, in communities with frequent and repeated interaction, reputation serves to signal vital information about people. Reputation opens new opportunities and reinforces existing patterns, which further consolidates reputation. And as one gains more presence in the project, one is also fixed into new responsibilities as a maintainer - which at least seven people in my sample mentioned as a primary motivation. Thus, the norm of sharing code emerges to the extent that reputation signals the developer to remain with the project, and the norm of ownership emerges to the extent that reputation signals others that the project already has a recognized maintainer who has publicly proven himself capable of guiding its development.
In this way, Raymond observes, reputation is associated particularly strongly with people starting new projects rather than contributing to existing projects. This is a crucial point for us, as the logic of the public goods dilemma suggests that one of the central problems of a collective action is how to start its engine. And as Marwell and Oliver add, a collective action can roll on under its own weight once a few interested individuals with resources come together to provide the jump start.
Brian Skyrms notes that the emergence of conventions, including the very notion of property, lies in asymmetries - "broken symmetries," in his terms. Social interaction becomes fixed into certain patterns that come into their own self-sustained existence. In the end, I agree with Raymond in claiming that reputation mediates the norms of cooperation and ownership in the Linux project. But our findings have raised a new debate as to whether reputation is a motivation in its own right, and to what extent it is a second-order effect of cooperation that reinforces territorial patterns by signaling equilibria in asymmetry. By "equilibria in asymmetry," I refer to patterns of social interaction that have reached relative stability through chance events that lead the way for self-reinforcing path-dependence. Hence, a person spontaneously starting a small project becomes, by virtue of public recognition, a maintainer with increasing presence and fixed responsibilities. Recall that this is exactly the idea of self-organization discussed at the beginning of the chapter. In reputation, we therefore revisit self-organization, the other face of evolution.
Of course, the difference between reputation as a motivation and reputation as a reinforcement is hardly settled in our limited discussion. This is a subtle issue that requires a larger data set and a more sophisticated research design, perhaps computer simulation, which is beyond the scope of this paper.
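The intuition behind such self-reinforcing path-dependence can nonetheless be sketched informally. A classic toy model of chance events locking a system into a stable pattern is the Polya urn, in which each draw makes the drawn color slightly more likely in the future. The short Python sketch below is an illustration of my own (not part of the interview data or any simulation reported here), showing how different runs of the same process settle into different, self-sustaining equilibria:

```python
import random

def polya_urn(steps, seed=None):
    """Simulate a Polya urn: start with one red and one blue ball;
    each step, draw a ball at random and add another of the same
    color. Early chance draws are amplified into a stable long-run
    share - positive feedback producing path-dependence."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1   # success breeds success
        else:
            blue += 1
    return red / (red + blue)

# Different runs converge to different shares, each self-sustaining:
shares = [polya_urn(10000, seed=s) for s in range(5)]
```

Each run converges to some share of "red" that depends on its early history rather than on any intrinsic advantage - a minimal analogue of how early public recognition can fix a developer into the maintainer role.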
Finally, the group efficacy argument seems to hold up in our findings. People become Linux developers not simply because open source software affords individuals a greater sense of control than proprietary software does. Rather, people, in becoming Linux developers, come to understand that a first-class operating system is produced by part-time volunteers not because they can individually produce a great hack, but because they help each other out by sharing code. In the end, cooperation is much more likely to ensure higher quality and speedier development than isolated individual efforts are. That is the tale of the Blind Watchmaker, and for anyone who has worked with the Linux community, that is self-evident.
Chapter 6
"To survive in a variable environment, [a system] must be stable, to be sure, but not so stable that it remains forever static. Nor can it be so unstable that the slightest internal chemical fluctuation causes the whole teetering structure to collapse."
- Stuart Kauffman, At Home in the Universe
The Edge of Chaos
"Phase transition" is a term used in physics to describe the threshold between the gaseous and the fluid, the fluid and the solid, and so on. It is a point of transition, where ice begins to melt, water begins to evaporate, and vapor begins to condense. In phase transition, a system becomes dynamic and unstable, anticipating the beginning of something new.
For complex system theorist Chris Langton, the idea of phase transition also describes the sweet spot of evolution, where an emerging balance between the solid and the gaseous gives rise to new behaviors. Systems outside this dynamic and fluid balance, on the other hand, "tend to stall in two ways. They either repeat patterns in a crystalline fashion, or else space out into white noise." But within the sweet spot, one finds perpetual novelty in constant transition - a transition Langton calls the "edge of chaos."
The edge of chaos. Where a frozen order and an ethereal disorder meet in a fluid equilibrium. Where life is in endless flux. Where a system is so adaptive that it is only a breath away from spinning out of control. Another way perhaps of describing Linux and its people. We traced its improbable development to the bottom-up dynamics of the Bazaar model of software engineering. In Chapter 3, we identified essential qualities of a complex system in the Linux project. This discussion germinated the idea that the high levels of quality and performance obtained in Linux are emergent properties of local interactions amounting to evolutionary processes and self-organizing patterns, which in turn motivated the next two chapters. In Chapter 4, we extended the basic premises of the Blind Watchmaker in the light of memetic evolution. We also found arguably the most important element behind the development of Linux in the extent of parallelism afforded in the Bazaar. In Chapter 5, we addressed the issues of motivation and coordination that seemed to hamper the Linux project at first glance. Following Marwell and Oliver, however, we saw that Linux finds an asset, not a liability, in the volunteer efforts of the large and unstructured developer community. Finally, we saw that the reputation game, which, for Raymond, mediates the contradiction between the ideologies of the hacker culture and the norm of ownership in open-source projects, might also provide the feedback mechanism for spontaneous self-organization through path-dependent patterns of collaboration around maintainers.
We thus identified in Linux the complementary forces of complexity at the edge of chaos. Under the terms of the GPL, Linux is open, yet restrictively so. And under the terms of the reputation game, the community is at once exploratory and exploiting. Without global control from the top, the Linux project has achieved a fine balance between evolution and self-organization, order and disorder. Stuart Kauffman argues that complex systems come to the edge of chaos through the same processes that guide Darwinian evolution, namely, natural selection. Systems must evolve toward the right amount of evolvability; systems that do not find the sweet spot simply die out. But once a system is at the edge of chaos, we are bound to see surprises. Linux, like the eye, is one such system that has come to dazzle us all.
Of course, Linux proved dazzling insofar as it set the precedent as perhaps the most successful open-source project of its scale. Its uniqueness in the short but rich history of the software industry thus warrants comparison with other engineering projects of its rank.
One obvious comparison is Microsoft. In the Cathedral-Bazaar dichotomy, Microsoft perhaps represents the most prominent case for the Cathedral. From the early days of "An Open Letter to Hobbyists," in which Bill Gates argued that copying software without payment is nothing but theft and a major deterrent to a blooming software industry, Microsoft has built its empire upon the principles of tight control and rational optimization. Our discussion focused on the improbable development of Linux, while largely avoiding comparison with the corporate model of software engineering. But the comparison is inevitable if we are to show that Linux is not just a historical accident, but that the Bazaar is a reliable and generally applicable model for the coming digital age.
Another curious case study is the BSD Unix systems. Like Linux, the original BSD Unix was distributed with source code and was improved throughout the 1970s and 1980s in the hands of avid programmers at the University of California. To this day, its advocates continue to hail its technical excellence over Linux in certain areas. Nonetheless, its presence has been increasingly overshadowed by Linux in what perhaps suggests a case of positive feedback similar to the VCR market. What, then, tipped the scale for Linux? And what role, if any, has BSD played in the development of Linux?
As intriguing as these case studies may be, they must await further research. The scope of the questions they raise is well beyond what our data and analysis allow.
About the Author
Ko Kuwabara graduated summa cum laude in sociology from Cornell University in January 2000, where he studied collective action and evolutionary sociology with Michael Macy. He is currently working as a research assistant to Paul Resnick at the School of Information at the University of Michigan on NSF-funded research on online reputation systems. He is eagerly waiting for notification from graduate schools for Fall 2000.
This paper was an honors thesis in sociology at Cornell University, under the direction of Professor Michael W. Macy. The initial draft was completed in December 1999.
I am truly indebted to Professor Macy for his guidance and encouragement throughout my research. His wit and passion for sociology will surely continue to inspire me as I begin my graduate studies.
A special acknowledgement to Cristian Estan for his technical support, and my first Linux.
I would also like to thank the following individuals who took their time to answer my endless questions with patience and to share their personal experiences as Linux enthusiasts. Their responses were invaluable. They are, of course, not responsible for my idiosyncratic interpretations of their ideas. They are Andrea Arcangeli, Anthony Barbachan, Michael Ballbach, Steve Bergman, Jim Bray, Christopher Brown, Albert D. Cahalan, Michael Elizabeth Chastain, Meino Christian, Alan Cox, Eugene Crosser, Daniel Egger, Jim Gettys, John Gibson, Nathaniel Graham, Mathew Harrell, Andre Hedrick, Richard B. Johnson, Trevor Johnson, Christoph Lameter, Greg Lee, Jon Lewis, Mark Lord, Gerhard Mack, Edward S. Marshall, Raul Miller, Robert Jeffrey Morriss, Gabriel Paubert, Eric Princen, Tracy Reed, Garst R. Reese, Mike Ricketts, Douglas Ridgway, Rik van Riel, Daniel Ryde, Mark Spencer, Manfred Spraul, Mark Stacey, Stephen Suson, Theodore Tso, Stephen C. Tweedie, Geert Uytterhoeven, Khimenko Victor, Tim Waugh, Riley Williams, and Oliver Xymoron.
Finally, a warm hug to my family, Cary, and my friends for all the support you have given me.
7. Today, the popular usage of "hacker" denotes a person, often antisocial and excessively compulsive, involved in mischief or criminal activities using computers. As Hannemyr notes, however, "The 'original' hackers were computer professionals who, in the mid-sixties, adopted the word 'hack' as a synonym for computer work, and particularly for computer work executed with a certain level of craftsmanship. They subsequently started to apply the noun 'hacker' to particularly skilled computer workers who took pride in their work and found joy in doing so" [Hannemyr, 1999, "Considering Hacking Constructive"; also Steven Levy, Hackers]. This is the usage of the term adopted by Raymond, and by the author.
146. math-www.uni-paderborn.de/~axel/bliss/kai_message.txt (29 October 1999).
147. math-www.uni-paderborn.de/~axel/bliss/bliss.txt (29 October 1999).
Anonymous interview with Linus Torvalds, May 1999. www.kt.opensrc.org/interviews/ti19990528_fb.html retrieved 14 June 1999.
Robert Axelrod, 1997. Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton, N.J.: Princeton University Press.
Richard Barbrook, 1998. "The Hi-Tech Gift Economy," First Monday, volume 3, number 12 (December), at firstmonday.org/issues/issue3_12/barbrook/index.html http://dx.doi.org/10.5210/fm.v3i12.631
Susan Blackmore, 1999. The Meme Machine. Oxford: Oxford University Press.
Zack Brown, "Kernel Traffic," at www.kt.opensrc.org
Byron Burkhalter, 1999. "Reading Race Online: Discovering racial identity in Usenet discussions," In: Marc A. Smith and Peter Kollock (editors). Communities in Cyberspace. London: Routledge, pp. 60-75.
John L. Casti, 1996. Would-Be Worlds: How Simulation is Changing the Frontiers of Science. New York: Wiley.
Richard Dawkins, 1991. "Viruses of the Mind," at www.physics.wisc.edu/~shalizi/Dawkins/viruses-of-the-mind.html
Richard Dawkins, 1987. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design. New York: Norton.
Richard Dawkins, 1976. "Memes: the new replicators," In: The Selfish Gene. New York: Oxford University Press, pp. 203-15.
Chris DiBona, Sam Ockman, and Mark Stone, 1999. "Introduction," In: Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly & Associates, pp. 1-17.
Chris Donohue, bbs.msnbc.com/bbs/msnbc-transcripts/posts/qk/276.asp, 25 February 1999.
William H. Durham, 1991. Coevolution: Genes, Culture, and Human Diversity. Stanford, Calif.: Stanford University Press.
Emile Durkheim, 1984. The Division of Labor in Society. W.D Halls (translator). New York: The Free Press.
Jon Elster, 1989. Nuts and Bolts for the Social Sciences. Cambridge: Cambridge University Press.
Jonathan Eunice, "Beyond the Cathedral, Beyond the Bazaar," at www.illuminata.com/public/content/cathedral/intro.htm, retrieved 17 December 1998.
Sigmund Freud, 1959. Group Psychology and the Analysis of the Ego. James Strachey (translator). New York: Norton.
Bill Gates, 1976. "An Open Letter to Hobbyists," (February 3), at www.ispchannel.colm/~xgz/gates_letter, retrieved 6 July 1999.
Rishab Aiyer Ghosh, 1998. "Cooking pot markets: an economic model for the trade in free goods and services on the Internet," First Monday, volume 3, number 3 (March), at firstmonday.org/issues/issue3_3/ghosh/index.html
Alvin W. Gouldner, 1982. The Future of Intellectuals and the Rise of the New Class: A Frame of Reference, Theses, Conjectures, Arguments, and an Historical Perspective on the Role of Intellectuals and Intelligentsia in the International Class Contest of the Modern Era. New York: Oxford University Press.
Galen Gruman and David Orenstein, 1998. "Meet Linus Torvalds," at www.computerworld.com/home/features.nsf/all/980817linus, retrieved 17 February 1999.
Gisle Hannemyr, 1999. "Technology and Pleasure: Considering Hacking Constructive," First Monday, volume 4, number 2 (February), at firstmonday.org/issues/issue4_2/gisle/index.html http://dx.doi.org/10.5210/fm.v4i2.647
Michael Hechter, 1987. Principles of Group Solidarity. Berkeley: University of California Press.
Douglas D. Heckathorn, 1996. "The Dynamics and Dilemmas of Collective Action," American Sociological Review, volume 61, number 2 (April), pp. 250-277. http://dx.doi.org/10.2307/2096334
Francis Heylighen. "Evolution of Memes on the Network: From Chain-Letters to the Global Brain," pespmc1.vub.ac.be/papers/Memesis.html, retrieved 13 January 1999.
John Holland, 1995. Hidden Order: How Adaptation Builds Complexity. Reading, Mass.: Perseus.
Michael E. Kanell, 1999. "April 1999 Interview with Linus Torvalds," at ww.austin360.com/technology/stories/1999/04/19qanda.html, retrieved 14 June 1999.
Stuart A. Kauffman, 1995. At Home in the Universe: The Search for Laws of Self-Organization and Complexity. Oxford: Oxford University Press.
Kevin Kelly, 1994. Out of Control: The New Biology of Machines, Social Systems, and the Economic World. Reading, Mass.: Addison-Wesley.
Steven Levy, 1985. Hackers: Heroes of the Computer Revolution. New York: Dell.
Linux-kernel mailing list FAQ, www.tux.org/lkml/
Linux Weekly News, 1998. "LWN interviews Alan Cox," lwn.net/1999/features/ACInterview/, retrieved 14 June 1999.
Michael Macy, forthcoming. "From Factors to Actors: The Third Wave in Computational Social Science."
Michael Macy, 1993. "Social Learning and the Structure of Collective Action," Advances in Group Processes, volume 10, pp. 1-10.
Gerald Marwell and Pamela Oliver, 1993. The Critical Mass in Collective Action: A Micro-Social Theory. Cambridge: Cambridge University Press.
Glyn Moody, 1997. "The Greatest OS That (N)ever was," Wired, volume 5, number 8 (August), pp. 122-125, 154-156, 158, 164, and at www.wired.com/wired/5.08/linux.html, retrieved 11 January 1999.
Bruce Perens, 1999. "The Open Source Definition," In: Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly & Associates, pp. 171-188.
Eric S. Raymond, 1998a. "The Cathedral and the Bazaar," First Monday, volume 3, number 3 (March), at firstmonday.org/issues/issue3_3/raymond/index.html, and at www.tuxedo.org/~esr/writings/cathedral-bazaar, retrieved 17 December 1999.
Eric S. Raymond, 1998b. "Homesteading the Noosphere," First Monday, volume 3, number 10 (October), at firstmonday.org/issues/issue3_10/raymond/index.html, and at www.tuxedo.org/~esr/writings/homesteading, retrieved 17 December 1999.
Mitchel Resnick, 1994. Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. Cambridge, Mass.: MIT Press.
Herbert Simon, 1996. The Sciences of the Artificial. 3rd edition. Cambridge, Mass.: MIT Press.
David Sims, 1999. "Interview: Eric Raymond, Software Developer," TechWeb, www.techweb.com/internet/profile/eraymond/interview, retrieved 25 February 1999.
Brian Skyrms, 1996. Evolution of the Social Contract. Cambridge: Cambridge University Press.
Hans-Cees Speel, 1996. "Memetics: On a conceptual framework for cultural evolution," at www.sepa.tudelt.nl/webstaf/hanss/einst.htm, retrieved 13 January 1999.
Donald Stabile, 1984. Prophets of Order: The Rise of the New Class and Socialism in America. Boston: South End Press.
Richard Stallman, 1999. "The GNU Operating System and the Free Software Movement," In: Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly & Associates, pp. 53-70.
Linus Torvalds, 1999a. "The Linux Edge," In: Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O'Reilly & Associates, pp. 101-111.
Linus Torvalds, 1999b. "Linux History," Linux International, at www.li.org/li/linuxhistory.shtml, retrieved 6 July 1999.
M. Mitchell Waldrop, 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster.
Hiroo Yamagata, 1997. "The Pragmatist of Free Software: Linus Torvalds Interview," at tlug.linux.or.jp/linus.html, retrieved 2 February 1999.
Paper received 30 January 2000; accepted 18 February 2000.
Copyright ©2000, First Monday
Linux: A Bazaar at the Edge of Chaos by Ko Kuwabara
First Monday, volume 5, number 3 (March 2000),