Many thinkers tackle issues of economic sustainability for the creative commons by analysing them under present conditions. In contrast, few ask what a society where many create for many would look like. The main drawback of a focus on direct economic sustainability is that it may lead one to ignore the present development of societal exchange patterns that are only indirectly coupled with the economy, but that outline possible paths of development. This paper analyses the possible structure of a many-to-many commons-based information society from a variety of interdependent viewpoints, each associated with models and quantitative indicators. The paper discusses the distribution of attention and reputation over sets of works; the number of works in a given media; and the degree of symmetry between creation and reception of contents.
It concludes that many of the present models and estimates are biased by the present economic conditions of media services, and that some commonly accepted laws are mistaken regarding diversity in a many-to-many information society. It discusses, in this light, policy issues regarding how to make creative commons sustainable.
Experience teaches us to distrust her lessons.
— John Barth, The Last Voyage of Somebody the Sailor.
The existence of sustainable creative commons carries a number of promises [1]. Particularly prominent among these promises are:
- the diversity of creators, of creations and of attention,
- the possibility for individuals and groups to move across a continuum of positions, ranging from simply accessing contents to becoming critical receptors, prescriptors, practitioners and amateur producers, up to being recognised as professional producers.
There is evidence that these promises are not wild dreams. More than 10 percent of French individuals (up to 82% of individuals in the 15-24 age category) have published a blog at some time, and more than 4 percent presently do so (Zilbertin, 2006). The proportion is similar for activities such as Internet-based exchange of digital photographs. Though this access to production is still limited to some segments of the population and extremely unequally distributed over the planet, it is already so massive that it can no longer be depicted as a niche activity.
Underlying the promises listed above is the idea of a more symmetrical, less sharply divided, and culturally diverse information society. In many previous cases, beautiful promises have proved unreachable, or even sometimes harmful, because one had not sufficiently considered transition paths from the present situation. It is thus quite natural for most people to look at how sustainable creative commons could develop from the present conditions of the creation, distribution and promotion of works.
However, this reasonable approach may prove a dead end if we fail to understand the key properties of what we are trying to reach, or if we explore only paths that will lock us into trajectories that are incompatible with these target properties. To avoid such mistakes, this paper proposes a critical review of what is actually happening in the present creative commons. It pays special attention to types of contents and communities where the influence of the search for present-day business models has been relatively moderate.
Let's consider how much attention (number of accesses, readings, viewings, listenings; we will speak of attention for short) works receive. One can model the attention received by a given work i as:

attention(i) = N × M × p_i

In this formula, N is the number of works in a given universe, M is the mean of the attention over all works, and p_i is the probability that one access is to work i. This paper is entirely focused on discussing N, M and the shape of the probability distribution p of which the p_i are a realization.
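As a minimal illustration of this model (the function and example values below are mine, not the author's):

```python
# Minimal sketch of the attention model above: the total attention in a
# universe of works is N * M, and work i receives the share p_i of it.
# Function name and example values are illustrative, not from the paper.

def attention_per_work(N, M, p):
    """Expected attention a_i = N * M * p_i for each work i.

    N: number of works, M: mean attention per work,
    p: probability distribution over works (the p_i sum to 1).
    """
    assert abs(sum(p) - 1.0) < 1e-9, "p must be a probability distribution"
    total = N * M
    return [total * p_i for p_i in p]

# Four works, mean attention 10, uneven popularity: the per-work
# attentions sum back to N * M = 40.
shares = attention_per_work(4, 10, [0.4, 0.3, 0.2, 0.1])
```

The whole discussion that follows is about which shapes of p are plausible, and what N and M can be expected to be.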
To give the reader a hint of what this leads to, let's consider the fact that some works receive much more attention than others, which is a property of the shape of p. This arises partly from natural properties of how we access and use information, for reasons ranging from a very essential scientific work being referenced by many others to the pleasure of sharing a common cultural experience. It is also in part a built-in property of our information access infrastructure: for example, linking, or the way search engines rank query results, will automatically amplify differences in popularity and attention. This is so true that many information communities have devised schemes to limit this amplification effect by making random works visible [2], to ensure that attention is not abusively concentrated on hits.
However, the concentration of popularity and attention can take many different forms, the key difference being how much attention less popular contents receive. The promises of a many-to-many continuum information society do not depend upon eradicating hits. They depend upon making sure that the wider variety of works is sufficiently visible and receives in practice enough attention, so that novelty, originality, quality or whatever other property one judges essential can be served. One will see below that there are major differences in this respect between the present-day publishing and broadcast models and the present-day or desirable properties of information commons. It will also be claimed that finding paths towards sustainable information commons requires innovation in the coupling between the monetary-based economy and the development of information commons. From this perspective, the apparently easier paths are not necessarily the most sustainable ones.
What power laws won't tell you
In the past few years, wisdom had it that the distribution of attention to works made accessible on the Internet follows laws similar to those that hold in situations such as borrowing books from a library, using words from a language, or the wealth of rich people. According to this wisdom, among a set of N works, the probability p(k) that one access is to the k-th most popular work follows a power law, or more precisely a Zipf law [3] of parameter a close to 1. That is, in mathematical terms:

p(k) = C / k^a, where C = 1 / (1/1^a + 1/2^a + ... + 1/N^a) normalises the probabilities so that they sum to 1.
In more mundane terms, this would mean not only that a very small number of works receive considerably more attention than others, but also that by far the largest share of accesses goes to these very few popular works, the less popular works receiving an insignificant share of attention. A detailed account of this type of analysis can be found in Adamic and Huberman (2002).
Figure 1: Share of accesses, under Zipf's law, to the 95% less popular works among 10,000, as a function of parameter a.
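The following sketch (mine, not the author's code) makes the definition concrete, by normalising a Zipf law over N works and computing the share of attention that goes to the less popular works, as plotted in Figure 1:

```python
# Sketch of a normalised Zipf law of parameter a over N ranked works,
# and of the share of attention that goes to the less popular fraction
# of them. Not the author's code; names are illustrative.

def zipf_probs(N, a):
    """p(k) proportional to 1 / k**a, normalised to sum to 1."""
    weights = [1.0 / k ** a for k in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def share_of_less_popular(N, a, fraction=0.95):
    """Share of total attention received by the `fraction` least
    popular works (e.g. the 95% below the top 5%)."""
    p = zipf_probs(N, a)
    n_top = int(round((1 - fraction) * N))  # most popular works excluded
    return sum(p[n_top:])

# For N = 10000: a = 0 gives 0.95 (uniform attention); a = 1 gives
# roughly 0.3; a = 2 leaves the 95% less popular works almost nothing.
```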
Discussing access to blogs on the basis of links, reputed Internet commentators such as Clay Shirky (2003) went a step further by claiming that this would be even more the case with time [4] (in mathematical terms, that the characteristic parameter a would move closer to 1 or beyond it). One will see that:
- this is false in the tentative creative commons of our times; and,
- the error comes from using analysis techniques that underestimate differences between the classical publishing world and information commons.
To get a feeling for what can be wrong with the hypothesis of a Zipf's law with parameter close to 1, let's look at a graph (Figure 1) of the share of attention it predicts the 95% less popular works will receive. Zipf's law is a mathematical construct that must be handled with care: for instance, it is hard to normalise properly, since the distribution is undefined when N (the number of works) tends to infinity and a is smaller than or equal to 1. For the time being, let's assume that N = 10,000.
If attention indeed follows a Zipf's law, and the attention received by the 95% less popular works can range from 95% for a = 0 down to close to 0% for a = 2, then estimating a, and comparing values of a for various domains and situations, should be a priority.
Does attention in information commons follow a Zipf's law at all, and which one?
One reason why there are relatively few studies that rigorously test the Zipf's law hypothesis and reliably estimate its parameter is that doing so is far from trivial. Ordinary least squares fitting techniques give biased results (Nishiyama, et al., 2004). However, this bias decreases with sample size, and it can be corrected using adequate techniques. Goldstein, et al. (2004) have proposed a maximum likelihood fitting technique, which unfortunately is not applicable when the parameter a < 1. They also provide a solution to the main difficulty: testing the goodness of fit (one cannot rely on sample size for this). The conditions for applying standard goodness-of-fit tests are clearly not met in a ranked distribution, so a Kolmogorov-Smirnov test on the cumulative distribution is to be preferred, which requires specific tables (Goldstein, et al., 2004).
The following technique is applied in this paper:

1. Computing an initial biased estimate of the best fitting Zipf's law, using the leastSquaresFit function of scientific Python applied to a two-parameter Zipf's law.

2. Refining this estimate by minimising the Kolmogorov-Smirnov (KS) distance between the cumulative observed and estimated distributions (refining the estimate in steps of 0.01 on the parameter). Greater precision could be obtained at the same computational cost by using a dichotomic (bisection) search.

3. Testing goodness of fit through a KS test on the cumulative observed and estimated distributions. Important caveat: the tables provided in Goldstein, et al. (2004) were generated for random values of the parameter ranging from 1.5 to 4. I use them for a different estimation technique and for values of the parameter outside this range. Though conclusions on the goodness of fit are drawn in this paper only in cases where the values of the KS distance are significantly greater than the thresholds in the table of Goldstein, et al. (2004), they should be considered provisional, pending generation of a table specifically tailored to values of the law parameter ranging from 0 to 1.
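The fitting procedure above can be sketched in pure Python (my reconstruction, not the author's code: the initial least-squares estimate is replaced here by a plain grid scan, and the significance tables for the goodness-of-fit test are omitted):

```python
# Simplified sketch of the refinement step: scan the Zipf parameter in
# steps of 0.01 and keep the value minimising the Kolmogorov-Smirnov
# (KS) distance between cumulative distributions. Not the author's code.

def zipf_probs(N, a):
    """Normalised Zipf law of parameter a over N ranked works."""
    weights = [1.0 / k ** a for k in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def ks_distance(p, q):
    """Maximum absolute difference between the cumulative distributions."""
    d = cp = cq = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        d = max(d, abs(cp - cq))
    return d

def fit_zipf_by_ks(counts, a_min=0.0, a_max=2.0, step=0.01):
    """Return (best_a, best_ks_distance) for the observed ranked access
    counts. A bisection search would converge faster than this scan."""
    counts = sorted(counts, reverse=True)
    total = float(sum(counts))
    observed = [c / total for c in counts]
    N = len(counts)
    best_a, best_d = a_min, float("inf")
    for i in range(int(round((a_max - a_min) / step)) + 1):
        a = a_min + i * step
        d = ks_distance(observed, zipf_probs(N, a))
        if d < best_d:
            best_a, best_d = a, d
    return best_a, best_d

# Round trip: counts drawn exactly from a Zipf law with a = 0.6 are
# fitted back to a value within one grid step of 0.6.
```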
Finally, when one does not have data on the full distribution, but only data of the form "p% of the works receive T% of the attention" (with 1 < p < 30), the parameter of a tentative Zipf's law can be estimated quite reliably using curves such as the one presented in Figure 1, with an adapted value for 1-p. This will be particularly useful for comparisons between information commons and other forms of cultural distribution, for which general attention data is seldom available.
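Such an estimate from aggregate "top p% receives T%" data can also be computed directly rather than read off a curve (my illustration, not the author's code; note that a catalogue size N must be assumed when only percentages are known):

```python
# Sketch of estimating the Zipf parameter from aggregate data of the
# form "the top p% of works receive T% of the attention", by scanning
# a and matching the predicted top share. Not the author's code.

def top_share(N, a, top_fraction):
    """Share of attention predicted for the top fraction of N works
    under a Zipf law of parameter a."""
    weights = [1.0 / k ** a for k in range(1, N + 1)]
    n_top = max(1, int(round(top_fraction * N)))
    return sum(weights[:n_top]) / sum(weights)

def estimate_a(N, top_fraction, observed_share,
               a_min=0.0, a_max=3.0, step=0.01):
    """Return the a whose predicted top share is closest to the observed
    one. The top share grows with a, so the match is unambiguous."""
    best_a, best_err = a_min, float("inf")
    for i in range(int(round((a_max - a_min) / step)) + 1):
        a = a_min + i * step
        err = abs(top_share(N, a, top_fraction) - observed_share)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

# Round trip: the top-5% share computed for a = 1.2 over 1,000 works
# is inverted back to an estimate within one grid step of 1.2.
```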
The distribution of attention to works in information commons
Information commons introduce a key change compared to traditional publishing models, which are based on the a priori selection of a limited number of titles. One can see this change as sampling a much wider universe. Let's take a look at two relatively small information commons communities dealing respectively with books and music. These are interesting examples because books and music take time to read or listen to (and even to browse through prior to making a download decision). Thus one should find here the usual factor traditionally invoked to justify distributions of the "Zipf's law with parameter close to 1" type, that is, the existence of a scarce resource that prevents a more equal distribution while making the highest attention exponentially rarer. Another interesting element is that these information communities do not use advertising as a business model: In Libro Veritas (http://www.inlibroveritas.net) has a business model based on writer-paid publishing of paper versions of books, while Musique Libre (http://www.musique-libre.org) is funded by members and donors. This deserves mention because some advertising business models introduce biases in the distribution of attention. Musique Libre is still predominantly a pro-am community, where a large share of visitors are musicians themselves, while In Libro Veritas has both writers and pure reader visitors.
Let's first look at In Libro Veritas. There are around 3,600 books accessible online. The number of readings [5] of books on the site ranges from 3,000 to 6,000 per week. The managers of the site have provided me with the full ranked number of readings per book in a week (unfortunately one with relatively low activity: only 2,987 readings, with 931 books having at least one reading). Does the distribution appear to follow a Zipf's law? The best fitting Zipf's law is for a = 0.62. Comparison with partial data available for another week with more traffic shows stability in this estimate. One cannot conclude regarding the Zipf's law hypothesis, due to the contribution of discretisation noise when the average number of accesses per work is low.
Let's now look at the Musique Libre community. There too, the community organisers have provided me with exhaustive anonymised data on the number of downloads and streamed listenings per work. As of 13 April 2006, 5,565 titles are online under a variety of free and Creative Commons licenses, and 4,824,499 downloads or streamed listenings have taken place over a little more than a year. With such a high level of access (an average of 867 per work), the Zipf's law hypothesis can be tested precisely. To remove possible bias from the fact that some works have been accessible for a longer period, I tested the hypothesis with various normalisation strategies to remove the impact of the number of days each work has been accessible. The best Zipf's law fit is obtained in all cases with 0.50 < a < 0.52. Under all of them, the Zipf's law hypothesis can be rejected with a very low probability of error (P < 0.001).
When Internet analysts stated that the distribution of access to works on the Internet followed a Zipf's law, they probably didn't have a true mathematical fit in mind. It is likely that they meant that the distribution resembled a Zipf's law. Well, it does ... at a distance. However, as illustrated in Figure 2, the moderately popular titles (ranks 350 to 2500) receive more attention than they would under the best fitting Zipf's law.
Does it matter that we have found distributions with increased attention to intermediate ranks, and such low values for the potential Zipf's law parameter? Yes, it does. Let's compare these figures with a similar estimate for the sales of music titles published on compact disc in France. In the last known year, 4% of the references on sale generated 90% of the sales (Moreau, et al., 2006). This incredibly concentrated distribution corresponds to a Zipf's law with parameter a = 1.32.
Attention diversity indicators
Diversity depends as much on the number N of works (diversity of offerings) as on the diversity of attention for a given value of N. In practice, studies of the diversity of offerings can only be done at the level of a full media or sub-media (for instance, a genre).
Publishing, like any given specialised information community, only subsamples the full media. However, they do not do so in the same manner. For instance, the number of CD titles published in France in a year by the four major companies that control distribution has been halved over the last three known years, down from 3,314 to 1,611 [6]. Open information communities of course expand the diversity of offerings by suppressing the a priori selection of accessible works. As we will see below, they do not do so at the expense of the diversity of attention, nor at the expense of the capability to detect and recognise contents of high relative interest (for a given community).
Figure 2: Detail of the best fitting Zipf's law for ranked accesses to Musique Libre works (normalised with respect to the number of days they have been online).
In addition to monitoring the diversity of offerings (which is what most people have in mind when speaking of cultural diversity), it may be useful to define directly usable indicators of the diversity of attention given to works in a set. This can be done using

D = 1 - Gini

where Gini is the classical Gini indicator [7] computed (or estimated) over the set of titles. The value of this attention diversity indicator for Musique Libre is around 0.56. This value can be compared to 0.11 for CD music publishing in France (estimated from the Zipf's law equivalent). It is beyond doubt that there is much more diversity of attention in a commons-based community than in traditional publishing based on the a priori selection of titles.
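A minimal sketch of computing this indicator from raw per-work access counts (my illustration, not the author's code):

```python
# Sketch of the attention diversity indicator D = 1 - Gini over a set
# of per-work access counts. Not the author's code.

def gini(values):
    """Classical Gini indicator: 0 for perfectly equal values,
    close to 1 when one item concentrates almost everything."""
    xs = sorted(values)
    n = len(xs)
    total = float(sum(xs))
    # Standard formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def attention_diversity(access_counts):
    """1 - Gini: 1 when attention is spread evenly over works,
    near 0 when it all goes to a handful of hits."""
    return 1.0 - gini(access_counts)

# Uniform attention gives diversity 1.0; a single hit taking all the
# accesses among 100 works gives diversity close to 0.
```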
This can be further confirmed by comparing the diversity indicator for streamed listening on Musique Libre with the diversity indicator for downloads on the same platform. As can be expected, users listen to music prior to making download decisions, and make these decisions based on some form of quality assessment. Due to the consistency in the quality assessment or the taste of the community, the diversity indicator for downloads is only 0.42, instead of 0.57 for streamed listening. But this value remains considerably higher than the diversity of attention to published CDs or other publication media using a priori editorial selection, control of distribution channels and concentrated promotion means.
What about truly large-scale communities?
Let's take a look at the largest blog community in France: Skyblog (http://www.skyblog.com). According to data posted on the site, in the period of 1 to 25 April, the 100 top blogs received 8,768,000 visits, which is approximately 10% of all visits for only 0.0021739% of the blogs. Unfortunately, this data is insufficient to conclude with precision on the distribution of attention in this community. Some existing studies of large blog sites, using links as a proxy, have concluded that 80% of the attention goes to the top 20% of blogs, which corresponds to a Zipf's law with parameter a = 0.94. But the use of links as a proxy for access raises many doubts. My present hypothesis is that, despite a higher concentration of attention on a few tens or hundreds of top-popular blogs, mostly due to the self-promotion of hits connected to the search for advertising income, the share of attention that goes to moderately popular works is still much higher than for traditional publishing.
How much symmetry is possible or desirable in a many-to-many information society is a multi-faceted issue. It can be discussed at the level of the information infrastructure (networks, centralisation or decentralisation of information and processing) as well as regarding the balance between creation and reception of contents. This paper first discusses a few indicators of the fitness of the information infrastructure for sustaining creative commons. However, these indicators are quite hard to estimate and do not necessarily bring deep insight into what would make creative commons sustainable. For that, one needs to reflect directly on the symmetry between creation and the attention given to the creations of others.
Symmetry in the information infrastructure
Symmetry in the information infrastructure has several dimensions. The symmetry of protocols, or the fact that the design of devices is open to creative activities by users, constitute critical conditions of possibility for a many-to-many information society. The end-to-end properties of the IP protocol, arising from the fact that it is principally symmetrical (it assumes that each node is eligible to emit or receive), non-deterministic, and agnostic with regard to what the transported bits represent, were key to its widespread usage for the creation and distribution of information and creative works. These properties are endangered by changes in standards and regulation, but remain as of today a relatively solid asset.
In the practical deployment of networks, in particular for last-mile connectivity, the growth of broadband was accompanied by an increased asymmetry between available upload and download bandwidth. We have moved from symmetrical connectivity for 56 Kbps modems to a 1:8, 1:16 or even greater asymmetry for recent broadband offers. Some defenders of the artificial organisation of scarcity (Bomsel and Le Blanc, 2004) went as far as proposing to reinforce this trend by taxing upload bandwidth. Leaving aside these extreme proposals, the natural asymmetry in upload and download bandwidth depends on factors that are hard to separate: the balance between creation and reception, which is the object of the next sections, and the extent to which P2P is a predominant platform for the exchange and distribution of contents. The issue is further confused by the competition between business models that are based on centralising only metadata and those that try to convince users to have their content hosted directly on their servers.
The situation is also complex regarding devices. There is competition between general purpose IT devices and the search for specialised devices that are more adapted to a given usage or situation (listening to music or viewing films, reading, mobility, etc.). This competition is here to stay, and results in practice in a coexistence between general purpose devices and some specialised devices, with synergies between the two, as exemplified by MP3 players. However, hidden behind what could be a mere diversification process for IT, there is a trend towards specialised devices being restricted in what can be produced from them. Up to now, users have rejected IT devices that put them in pure reception mode, and have established creative usages even for types of devices that were predominantly designed for pure reception or synchronous communication. This is exemplified by SMS, photography and film shooting on third-generation mobile phones. So when users are given a say, there is hope. It is mostly the inclusion of DRMs in devices that risks hindering the usage of IT devices for creation and distribution purposes.
One could use some quantitative information infrastructure indicators to monitor the situation on all these fronts, for instance:
- the share of traffic to and from end-users that is transported end-to-end using normal or privileged status, and the differences in quality of service between both situations,
- the degree of asymmetry between download and upload connectivity for broadband subscribers,
- the share of works that are made available free of DRM or other technological usage restrictions,
- the share of end-user created contents that is stored in various devices.
Symmetry between creation and reception of works
In a way, it is paradoxical to raise the issue of symmetry between creation and reception when one has identified that, in a many-to-many information society, the distinction between the two is blurred and a continuum of positions and activities develops. However, the question remains of a balance between the effort to produce and obtain the attention of others and the attention given to others. Skeptics often suggest that the prospect of a many-to-many information society will meet the obstacle of insufficient attention to the productions of others. They coined the term expressivism to depict a world in which all would talk but nobody would listen to the others. In reaction, information commoners appropriated the term and gave it a positive meaning, stressing that the individual is not a preexisting entity but is constituted in the process of expression and exchange with others (Allard, 2005).
The trade-off between producing expressive works for the attention of others and giving attention to the works of others is a complex process, mixing various drives and constraints. It is far beyond the scope of this paper to address the tangle of factors that drive a particular individual at a given time towards creating or experiencing the creations of others. Adopting a more modest objective, there are two major ways to understand the aggregate results of such processes: one can look at the input (how much time people put into each of the two activities), which we will do in the next section, or look at the output: our number M, the mean attention works receive (for instance, the average number of times texts were read or musical recordings were listened to). We have to be careful when interpreting the meaning of a given value of M, since it is dependent on the nature of works and their usage.
French blogs provide an excellent test case for analysing symmetry issues, due to their huge number. According to the already quoted entry by Sandra Albertolli (2005), the ratio of the number of unique site visitors in a month to the number of active blogs ranges from 1 to 12.5 according to platforms. This surprisingly wide range is partially explained by the varying definitions of "active", and biases connected to the search for audience figures that have an impact on advertising income certainly also play a role. For sites such as Skyblog, there are 1.5 comments per blog entry. One cannot derive precise estimates of M for blog entries from the available data, but it is clear that this number is inevitably low with respect to our usual mental definition of an audience for works that are made public. This is an intrinsic property of a many-to-many information society, and we must learn to tune our expectations to it. A value of 5 for M in a given media is perfectly compatible with healthy information commons.
A comment on time budget constraints
According to data published by Médiamétrie-eStat for March 2006, the average visitor time in a month for blog sites is between 5 and 10 hours. This figure counts both time spent editing blog entries and time spent accessing or commenting. There is informal data suggesting that the average time spent on each activity could be similar for intensive bloggers, though real surveys will be needed to confirm whether this is the case. There is a self-balancing process at work in each media regarding time budgets for creation and attention. Photography and video (both media that require full attention in production and reception) would deserve detailed studies in this respect. A possible conjecture is that:
M ∝ T_production / T_access

(across media, M is proportional to the ratio of the time needed for producing a work to the time needed for accessing it to a significant extent)
Back to sustainability issues
As a conclusion, let's review a number of strategies regarding the coupling between the growth of an information community and the monetary economy.
Carrier and service businesses associated with specific works or authors
Some types of media (in particular books) make it possible to associate free non-commercial exchange of electronic versions of works with commercial sales of added-value physical carriers. This has been used to fund information community service providers (such as In Libro Veritas) as well as by individual writers. Such schemes present the advantage of creating a type of synergy that is particularly favourable to moderately popular works (O'Reilly, 2002), and can possibly prevent a reduction of attention diversity when the community grows.
For other media, for which carrier-based associated sales are not a sustainable model, at least with the present-day carriers, services are of course the major possible business model associated with specific works. These services are of a different form than those associated with software. They provide a direct added-value experience connected to a work, either because of the presence or performance of the artist (concerts, visual arts performances) or because of a high-quality perceptive environment (film theatres). There is uncertainty about the scalability of such models, and even more about the return channels for information community providers. For-pay services can also be marketed in the information sphere when they are associated with tangible connectivity requirements or human-delivered services. An example is the pro-level services that guarantee a higher level of upload service on a community such as Flickr.
The advertising lure
Due to the limits of, or uncertainty surrounding, the business models that are directly associated with individual works, it is not surprising that service providers for information communities, and to a lesser extent individual creators, have used advertising as a way to directly cash in on the attention time generated by a community. Advertising does not generate immediately perceptible transaction costs for the end-user, though there are obvious costs for the quality of perception, often associated with intrusive profiling. Historically, media have struggled to find the appropriate nature and degree of advertising funding. Recent examples such as television have shown that an early adoption of advertising-based models can shape the whole future of the medium, for the worse.
Apart from the directly perceptible effects of advertising, one concern here is its impact on the diversity of attention. In general, advertising models are associated with an interest (for media providers) in concentrating attention so as to extract value from it. Players such as Google have put much ingenuity into designing schemes such as AdSense that allow moderately accessed contents to generate advertising revenues. However, the level of generated income is very much dependent on the fit between the words used in the page and the semantic domains used by advertisers. This can be seen as an opaque form of product placement.
In addition, there are strong doubts about the scalability of advertising. Advertising is characterised by brutal displacements of spending from one medium to another, but by long-term stability in terms of share of GDP (1 to 2.3% depending on countries; see Galbi (2001) for a detailed treatment of the U.S. and U.K.). How much this lack of scalability is or is not a problem depends on the type of coupling between the monetary economy and information commons activities.
If, as I have advocated, one sees the information ecosystem as an essential non-monetarized sphere that will represent a growing share of all human activities, the schemes that are used to fund mediating organisations and to sustain contributors must be compatible with ecosystem quality. The fact that attention time is scarce makes it marketable, but it does not mean that it should be marketed, nor that it can be done efficiently from an economic viewpoint (Aigrain, 1997). A human ecology approach to attention and a reinvention of the original concept of publicity (as in making public) are promising alternative approaches, if they can be combined with sources of sustainable funding other than advertising.
Direct community financial contribution
Voluntary funding of communities and contributors by members of a community is a scheme that should not be disregarded. Cheaper and easier-to-deploy online payment platforms are facilitating its use, and it does not suffer from the same flaws as micropayments (Shirky, 2000). This scheme has a positive synergy with the growth of fair trade and other movements that give citizens (producers, consumers and prosumers) back control over the organisation of the economy.
Indirect fee- or tax-based funding
Mutualisation of the funding of a public good is often achieved through tax or fee collection. However, the application of taxation and fees to funding the information commons raises fears of bureaucracy, undue complexity and lack of responsiveness to public preferences in the allocation of the collected resources. How can we reconcile the fact that these fears have real grounds with the fact that the birth of the information commons was often made possible by public money? By taking into account that not all forms of usage of taxation or public money are equivalent. Peer-managed allocation of scientific funds is very different from the management of collected funds by collecting societies for creative works, for instance. An unemployment insurance for artists can play a key and unplanned role in sustaining creativity that is freely allocated by individuals. Innovative schemes such as the competitive intermediaries proposed by James Love plan to make it possible for tax or fee payers to allocate part of the collected resources to intermediaries of their choice.
There are many paths to addressing creative commons sustainability, provided that they empower users to move across the continuum, enable them to recognise and encourage quality, and respect and enhance the diversity of attention.
About the author
Dr. Philippe Aigrain is the founder and CEO of Sopinspace, Society for Public Information Spaces, a small company providing free software solutions for public debate and collaboration over the Internet. He acts at the international and national levels for the promotion and sustainable development of the information commons. He is the author of Cause commune : l'information entre bien commun et propriété (Paris: Fayard, 2005; see also http://www.causecommune.org), a contribution to the political philosophy of the information commons and intellectual rights, and one of the first books by a mainstream publisher in France to be distributed under a Creative Commons license. He has authored around a hundred papers on technological, mathematical, sociological and economic issues.
Additional research material related to this paper is downloadable from the author's site at http://www.debatpublic.net/Members/paigrain/pha-FM10-source-images-code-datasets.zip/download.
1. For a more detailed treatment of these promises and obstacles to their fulfilment see Aigrain (2005a) and Aigrain (2005b).
2. Most sites also display the most recently posted works.
3. Wikipedia. Zipf's law, at http://en.wikipedia.org/wiki/Zipf%27s_law.
4. With his usual sagacity, Clay Shirky also noted that the super-popular blogs would, through their sheer popularity, lose their conversational nature and thus stop being blogs and start resembling broadcast works.
5. A reading is counted when at least two different pages of a book are accessed beyond the table of contents. The average reading time is 15 minutes. Access by bots is discounted.
6. Syndicat National de l'Édition Phonographique. Reproduced in Le Monde (17 February 2006).
7. Gini indicators have several flaws: in particular, they give a very strong weight to low values in the tail of the ranked distribution. When Gini indicators are used to measure inequality of income, this property is useful, as it expresses our dislike for a society that keeps some people in extreme poverty. However, it is probably less serious that a few works receive very little attention, because they are in practice inaccessible or of very little interest to others. One might try to devise adapted indicators that correct this effect.
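The tail-weighting effect described in this note can be checked numerically. The following sketch uses illustrative figures (not data from the paper) to show how a long tail of near-zero attention values drives the Gini coefficient up even when attention is shared equally among the popular works:

```python
# Gini coefficient over a ranked attention distribution, using the
# standard formula G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n on sorted
# values. The example distribution is illustrative, not empirical.

def gini(values):
    """Gini coefficient of a list of non-negative values."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

head = [100] * 10  # ten works with equal, substantial attention
tail = [1] * 90    # ninety works receiving almost no attention

print(round(gini(head), 3))         # 0.0 -- equality among the head alone
print(round(gini(head + tail), 3))  # 0.817 -- the tail dominates the index
```

Adding a barely read tail moves the indicator from perfect equality to a value above 0.8, which illustrates why a Gini measure tuned for income inequality may overstate the problem of diversity of attention.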
8. Some aspects of the IPv6 protocol.
9. The end of network neutrality in the U.S., DRM-related legislation.
10. Skyblog considers a blog active when its writer has logged in the past six months or a comment was posted in the same period.
12. See for instance this discussion thread: http://www.flickr.com/groups/35468158437@N01/discuss/72057594092233199/.
13. To be discussed at the TACD Conference on New relations between creative individuals and communities, consumers and citizens, June 2006; see http://www.tacd.org/docs/?id=296.
Lada A. Adamic and Bernardo A. Huberman, 2002. Zipf's law and the Internet, Glottometrics, volume 3, pp. 143-150, and at http://www.hpl.hp.com/research/idl/papers/ranking/adamicglottometrics.pdf.
Philippe Aigrain, 2005a. Capabilities in the information era, paper presented at the Conference on the Politics and Ideology of Intellectual Property, TACD, Brussels (March), at http://www.debatpublic.net/Members/paigrain/texts/TACD200306.pdf.
Philippe Aigrain, 2005b. Reaching out: Cultural and social challenges for the information commons, keynote speech at the Creative Commons Italia 2005 Conference, Torino, Italy, at http://www.creativecommons.it/ccit2005/Materiale/Aigrain_CCIT2005.pdf.
Philippe Aigrain, 1997. Attention, media, value and economics, First Monday, volume 2, number 9 (September), at http://www.firstmonday.org/issues/issue2_9/aigrain/. http://dx.doi.org/10.5210/fm.v2i9.549
Sandra Albertolli, 2005. 12 millions de lecteurs de blogs en France, on s'en fout, non ?, (20 December), at http://www.heaven.fr/archives/2005/12/12-millions-de-lecteurs-de-blogs-en-france-on-sen-fout-non/.
Laurence Allard, 2005. Express yourself 2.0 ! Blogs, podcasts, fansubbing, mashups... : de quelques agrégats technoculturels à l'âge de l'expressivisme généralisé, at http://www.freescape.eu.org//biblio/article.php3?id_article=233.
Olivier Bomsel and Gilles Le Blanc, 2004. Nouvelle économie des contenus, nouvelle utopie, Libération (17 February).
Douglas A. Galbi, 2001. Some economics of personal activity and implications for the digital economy, First Monday, volume 6, number 7 (July), at http://firstmonday.org/issues/issue6_7/galbi/. http://dx.doi.org/10.5210/fm.v6i7.870
Michel L. Goldstein, Steven A. Morris, and Gary G. Yen, 2004. Problems with fitting to the power-law distribution, European Physical Journal B: Condensed Matter, volume 41, number 2 (September), pp. 255-258; preprint at http://arxiv.org/abs/cond-mat/0402322.
François Moreau, Marc Bourreau, and Michel Gensollen, 2006. Quel avenir pour la distribution numérique des oeuvres culturelles ?, InternetActu.net (29 March), at http://www.internetactu.net/?p=6401.
Y. Nishiyama, S. Osada, and K. Morimune, 2004. Estimation and testing for rank size rule regression under Pareto distribution, Proceedings of the International Environmental Modelling and Software Society iEMSs 2004 International Conference (June), at http://www.iemss.org/iemss2004/pdf/econometric/nishesti.pdf.
Tim OReilly, 2002. Piracy is progressive taxation, and other thoughts on the evolution of online distribution (November), at http://www.openp2p.com/pub/a/p2p/2002/12/11/piracy.html.
Clay Shirky, 2003. Power laws, weblogs, and inequality, at http://www.shirky.com/writings/powerlaw_weblog.html.
Clay Shirky, 2000. The case against micropayments, at http://www.openp2p.com/pub/a/p2p/2000/12/19/micropayments.html.
Wikipedia. Zipf's law, at http://en.wikipedia.org/wiki/Zipf%27s_law.
Olivier Zilbertin, 2006. Un Français sur dix a créé son blog sur Internet, Le Monde (4 January).
Paper received 1 May 2006; accepted 17 May 2006.
Copyright ©2006, First Monday.
Copyright ©2006, Philippe Aigrain.
Diversity, attention and symmetry in a many-to-many information society by Philippe Aigrain.
First Monday, volume 11, number 6 (June 2006).