
Cosmopolitical composition of distributed architectures by Dominique Boullier



Abstract
This paper tells the story of several distributed architectures “from within” and from personal experience. Using a “cosmopolitical compass”, we describe four types of architectures — including P2P — that are currently being debated, and find four further types within P2P itself, from centralized P2P to encrypted P2P. We argue that this approach helps unveil the plurality of choices needed to obtain a network that works, and the work of composition required for this purpose (as opposed to any “purist” view of distribution). It also helps us understand the agency of these networks, which distribute competencies as much as they are themselves distributed.

Contents

Introduction
Minitel: Against distributed architectures ... or not?
BitTorrent: The experience of distribution where the whole is lost
Metaverse: Maintaining a persistent virtual world requires abandoning purely distributed architectures
Bitcoin: Moral requirements make the difference
What did we learn from these experiences?
The distributed and the whole: How to share a common view and to maintain the sense of a socio-technical structure
Conclusion

 


 

Introduction

Distributed architectures are not merely technical designs; they encapsulate political choices (Boullier, 2012). This is what we learned from the sociology of innovation (Akrich, et al., 2006), and from a tradition of research originating with Lessig (1999) and his famous “code is law” motto.

However, these distributed architectures are not so much “political” in the sense derived from expressions such as “distribution means democratization”. When using the concept of “democratization”, we are moving from one social world (or topic, or order of worth; see Boltanski and Thévenot, 2006) to another one without the required caution.

First, the extension of contributions, one of the core meanings of democratization, is not so clearly demonstrated in these kinds of architectures. And second, it would restrict democratization to a question of numbers (which it is, in part) and thereby sidestep the debate, the controversy about the design itself, that lies at the core of the democratic principle. Whatever choice of architecture is made, there are always many different ways to implement it, and this is what I will try to demonstrate, using a “cosmopolitical” framework (Boullier, 2015), in which the variety of assemblages is always preserved for a thorough political debate.

Placing the emphasis on the political stakes of architectures, and especially of distributed ones, challenges the trend towards a rather basic “efficiency” rationale that dismisses democratization discussions in order to frame distributed architectures within a strictly functional view. This move is familiar to modernists, since Westerners are trained to see themselves as the heirs of a scientific revolution, starting with Francis Bacon at the turn of the seventeenth century, based on purification (facts against values, sciences against politics). Sometimes this modern faith in a linear Progress even produces “zealots” of distributed architectures whose claims do not withstand thorough examination, even on the efficiency front. These militants tend to ignore the very work of composition that is always required to make real systems work, as Francesca Musiani (2015) clearly demonstrated in the case of distributed architectures.

This is why discussing a cosmopolitical vision of distributed architectures may become very tricky, or at least challenging, because it requires acknowledging both positions — the militant one and the exclusively technical, purist one — as part of the composition process, within a larger framework that gives room to the innovation people carry out in real settings.

Some experiences drawn from my encounters with these “architectures of the third kind” will shed light on the complex task of cosmopolitical composition, and they will demonstrate the powerful political potential of these architectures against the centralized turn that networks and services took in the last 15 years.

 

++++++++++

Minitel: Against distributed architectures ... or not?

My first experience of a distributed architecture can be traced back to the Minitel, during the 1980s. Or not? Of course, the Minitel is exactly the contrary of what a distributed architecture should be. But this is why it is interesting: it was designed during a controversial period when distributed architectures were already available, i.e., the Internet. Minitel was developed by the French telecommunications administration at the beginning of the 1980s, and relied on a videotex format of encoding and display, which was acceptable for the copper land lines that equipped the telephone network worldwide. The terminals did not have any computing capacity, and were distributed for free to the general public in France (while similar systems were developed in England and Germany). We must recall the early days of the Internet, whose principles and architecture were designed in the 1970s by Cerf, Kahn and Pouzin, among others. The Internet architecture was indeed a distributed one, as Paul Baran envisioned it. The X.25 protocol was its challenger at that time, developed and imposed by PTTs all over the world (against private networks as well, such as CompuServe, AOL and IBM Prodigy). At the end of the 1970s and during the 1980s, with the generous funding of governments and PTTs, the X.25 protocol gained ground, and this resulted in the implementation of videotex in France and elsewhere, where all the connectivity (based on circuit switching, not packet switching) was handled by switching centers called PAVI (Point d’accès vidéotex in French, the equivalent of a point of presence, POP).

Here is the first lesson to be learned: distributed architectures emerge in a conflictual environment, already populated and controversial (Yaneva, 2012), and they always challenge well-organized centralized powers of various kinds. And the winner is not known in advance, although when looking back at these times, it seems obvious. The social, legal and technical components of the situation are not a mere context, since some of them may become very powerful depending on the assemblage in the making.

Minitel was very successful and profitable at the time, more so than any Internet service not dependent on advertising. This was because customers were charged according to the duration of their connection to services developed by private companies, which in turn received revenues from the PTT administration according to their audience. For a very strange and paradoxical reason, Minitel was repurposed (Rogers, 2013) by users through a hack of a service protocol (at Gretel, in Strasbourg). This hack later became the famous messaging services, the “messageries” (including the “messageries roses”), which generated a frenzy among French users, who spent huge amounts of money ... and lust on them (and paved the way for the dating services with which we are familiar today) (Boullier, 1989). Could this be considered a first step towards distributed networks, since users connected to one another through their terminals and did not ask for any super node to manage services? Unfortunately, this was not the case at all, since the self-organized services quickly disappeared and were redesigned to become very profitable; from a technical standpoint, they always needed a switching center in order to operate.

Second lesson to be learned: users’ ability to design self-centered services — the kind of “participatory design” we talked about previously — should not be confused with the distributed architecture as such. For specific groups, e.g., for those engaged in activism, there is often some room to repurpose even centralized networks.

 

++++++++++

BitTorrent: The experience of distribution where the whole is lost

In the early 2000s, my first experience of BitTorrent was one of ‘visual revelation’ of some aspects of the underlying distributed architecture. The interface displayed a dashboard that allowed the user to follow the download of the chunks of the requested file. This information worked both ways, also showing how much of the local resources were being used, because the basic feature of BitTorrent is that downloaders cannot help but instantly serve as seeds for others. The dashboard is fascinating due to the completely unstructured appearance of the file, broken down into chunks that arrive in apparently random order. It may create a lot of uncertainty, because nothing indicates the rationale behind the way the file was split.

On the contrary, the interfaces that display a single continuous bar extending during the download of a file are misleading. It is not a continuous process, and a failure in the last second of the download does not mean that a large part of the file is already stored in usable form, as Norman (1988) pointed out about this kind of feedback. However, this deceptive display of a very structured process has the great comforting power of hiding the messy arrangement of the distribution process. By comparison, the BitTorrent display provokes some kind of ‘phenomenological’ experience, in which the taken-for-granted status of a file gets dissolved into small chunks coming from various seeds. Indeed, this experience seems a good introduction to what packets are all about, a key to the understanding of distributed architectures, even though it is not cognitively easy to grasp.
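To make this contrast concrete, here is a minimal sketch in Python of the two displays: a continuous bar that only gives a percentage, and a BitTorrent-style piece map that exposes the unordered arrival of chunks. The piece count and the number of pieces received are invented for the purpose of illustration; no real client works with these exact numbers or names.

    import random

    # Illustrative assumption: a 32-piece file, 20 pieces received so far;
    # real piece counts come from the file size and piece length in the metainfo.
    NUM_PIECES = 32
    have = [False] * NUM_PIECES
    for index in random.sample(range(NUM_PIECES), k=20):  # pieces arrive in no order
        have[index] = True

    # Display 1: the comforting continuous bar -- a single percentage that says
    # nothing about which pieces are actually present.
    completed = sum(have)
    print("[" + "#" * completed + "-" * (NUM_PIECES - completed) + "]",
          f"{completed / NUM_PIECES:.0%}")

    # Display 2: a BitTorrent-style piece map -- the same state shown honestly:
    # chunks are scattered and the file does not yet exist as a contiguous whole.
    print("".join("#" if p else "." for p in have))

Both displays are computed from exactly the same state; only the second one lets the user experience the distribution instead of hiding it.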

This is the third lesson: it helps us understand that there is no continuity any more in the “packets world”, and even less in the world of distributed architectures. The “whole” (Latour, et al., 2012) that we would like to take for granted, in this case the file or the content, does not exist as such. Distributed architectures require a clear understanding of the uncertain status of files, created by the protocols themselves. But this uncertainty and the variable geometry of entities become very useful in preparing the composition work that we shall advocate here.

This is true from the computer science point of view as well. A file is not a very clear entity. Packets are easy to define, but the assemblage of packets, the phenomenological “whole” of the file, is no longer guaranteed once it has been divided and computed into many parts without any apparent logical order — a situation provoked by the “best effort” rule of telecommunications networks. This is the least modernist rule ever: no excellence but, in a very opportunistic mode, only the best effort, that is, neither excellence nor full reliability, but a rather approximate and probabilistic use of the distribution properties of the network to reconstitute these wholes (files, content) in the end.
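A short sketch of that reconstitution, under the simplifying assumption of small, fixed-size pieces identified by their index (real protocols use far larger pieces and also verify each one against a hash taken from the metainfo):

    import hashlib
    import random

    PIECE_SIZE = 4   # bytes; an illustrative assumption, real pieces are far larger
    original = b"the whole is always smaller than its parts"
    num_pieces = (len(original) + PIECE_SIZE - 1) // PIECE_SIZE
    pieces = {i: original[i * PIECE_SIZE:(i + 1) * PIECE_SIZE] for i in range(num_pieces)}

    # "Best effort": pieces arrive in no guaranteed order, from different peers.
    arrival = list(pieces)
    random.shuffle(arrival)

    received = {}
    for i in arrival:
        received[i] = pieces[i]        # store each chunk as it happens to arrive

    # The whole only exists at the end, once every index is present and each
    # chunk has been put back at its own offset.
    reassembled = b"".join(received[i] for i in range(num_pieces))
    assert reassembled == original
    print(hashlib.sha1(reassembled).hexdigest())   # the file is a whole again

The “whole” is thus an achievement of the protocol, produced at the very end, not a property that the content keeps while it travels.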

 

++++++++++

Metaverse: Maintaining a persistent virtual world requires abandoning purely distributed architectures

Some years later, around 2007–2008, Second Life was such a fad that no one could escape inventing his or her own version of a virtual world. However, one project I contributed to, Solipsis, a peer-to-peer metaverse (Keller and Simon, 2003), radically shifted this perspective by getting rid of the server farms that Linden Lab was building in order to maintain the persistence of its world while users moved in and out of it. The project had to find ways of creating and maintaining the same kind of virtual world as Linden Lab’s by using only peer-to-peer connections. Storage was to be handled on every peer’s machine, and controls were to be processed through the protocol and not through any central server that could act as a dispatcher or a controller. The challenge of maintaining this feeling of a persistent world was tremendous. Such a virtual world is always changing due to the actions of the users and, at the same time, is distributed among users’ machines over an unstable network, because users can disconnect at any moment. Of course, this is a common challenge for distributed architectures, but retrieving a file in an index (the common task of downloading networks) is not exactly as critical as producing, in real time, the perceptive effect of a naturally changing world in which a user is immersed.

In order to survive, the whole project became a game of compromises: it could not rely on a purist view of complete distribution for all functions of the network. Ambitious P2P technologies were swapped for a reliable central server for some features of the activity. This is something Francesca Musiani (2015) describes quite well when she lists which features can be distributed in a distributed architecture: index, storage, search, ranking, updates, and so on. Each specific design will produce very different “styles” of distribution, accounting for the sociotechnical composition as much as idioms account for socio-linguistic patterns (Gagnepain, 1990). When speaking of distribution, stakeholders must always be explicit about which features are really distributed. And there is no reason to underestimate the virtue of developers who accept compromises by mixing solutions; indeed, this is the only way to give the network some chance of survival in the innovation process.
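Musiani’s list lends itself to a simple way of making such “styles” explicit. The sketch below is a hypothetical description, not the actual Solipsis design: it merely records, feature by feature, what is distributed among peers and what still relies on a central server.

    from dataclasses import dataclass, field

    @dataclass
    class ArchitectureStyle:
        """Feature-by-feature record of what is distributed and what is not."""
        name: str
        features: dict = field(default_factory=dict)   # feature -> "p2p" or "central"

        def summary(self) -> str:
            p2p = [f for f, mode in self.features.items() if mode == "p2p"]
            central = [f for f, mode in self.features.items() if mode == "central"]
            return (f"{self.name}: distributed = {', '.join(p2p) or 'none'}; "
                    f"centralized = {', '.join(central) or 'none'}")

    # A hypothetical metaverse mix: world state and storage among the peers,
    # but the index, ranking and bootstrapping handled by a reliable central server.
    hybrid_world = ArchitectureStyle(
        name="hypothetical P2P metaverse",
        features={"storage": "p2p", "updates": "p2p", "search": "p2p",
                  "index": "central", "ranking": "central", "bootstrap": "central"},
    )
    print(hybrid_world.summary())

Writing the mix down in this way is precisely what forces stakeholders to say which features are really distributed, rather than invoking “distribution” in general.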

Many other compromises were made in the Solipsis project, and many flaws remained, to the extent that it never made its way to a full implementation. But the developers did their best to push it forward by adopting a non-dogmatic approach.

This is our fourth lesson, challenging discourses about purification and rigid principles. There is no such thing as a “fully” distributed architecture. It might be the case that some domains, fields, services or applications are much more difficult to address with a distributed approach than others. These layers of distribution can be assembled in a much more heterogeneous way depending on the issue one wants to solve. This skill of composition (Latour, 2010) is a prominent feature of cosmopolitical design (Boullier, 2015), and it goes far beyond the weakness often attributed to “compromise”. Distributed architectures design a cosmos and not a “taxis”, i.e., in ancient Greek, a world made of heterogeneous ties and not one of preset categories.

 

++++++++++

Bitcoin: Moral requirements make the difference

Bitcoin may give us a final insight into the political challenges of distribution. In order to become a miner, i.e., to contribute to the blockchain by solving computationally intensive puzzles and to earn revenue from this contribution, I had to install the Bitcoin client on my PC; my contribution to the distributed network started quite simply and immediately. However, the consequence was soon very visible: making the CPU (central processing unit) available to the P2P network created conflicts in the allocation of computing resources and brought other processes to a halt.

This experience taught me that there is no such thing as a win-win effect in which sharing computing resources does not affect other activities. Choices have to be made, and the availability of bitcoins comes at a certain cost. Distributed systems are, indeed, exchange systems with moral requirements, which must be considered in the design of the architecture itself, because not everyone can act as a leecher or free rider in the community. When CPU resources or bandwidth become a currency, there is no longer any central bank governing these resources and producing a “general equivalent”; the situation is akin to barter between stakeholders. This implies, however, that motivations and benefits must exist as well. This explains the concentration of miners who invest in very powerful computing systems dedicated to generating revenue, while the long tail of other miners must solve the resource allocation dilemma I experienced.
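The conflict is easy to reproduce with a deliberately simplified proof-of-work loop (this is not the Bitcoin protocol or its client, and the difficulty and throttle values are invented): an unthrottled loop saturates a CPU core, while the sleep between batches is precisely the kind of allocation choice that has to be designed rather than assumed.

    import hashlib
    import time

    DIFFICULTY_PREFIX = "0000"   # invented difficulty, far easier than Bitcoin's
    THROTTLE = 0.5               # hypothetical knob: fraction of time spent hashing

    def mine(header: bytes, budget_seconds: float):
        """Simplified proof of work: find a nonce whose hash starts with the prefix.

        The sleep between batches hands the CPU back to other processes; an
        unthrottled miner never makes that allocation choice."""
        nonce = 0
        deadline = time.monotonic() + budget_seconds
        while time.monotonic() < deadline:
            start = time.monotonic()
            for _ in range(50_000):                           # one batch of attempts
                digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
                if digest.startswith(DIFFICULTY_PREFIX):
                    return nonce
                nonce += 1
            busy = time.monotonic() - start
            time.sleep(busy * (1 - THROTTLE) / THROTTLE)      # give the CPU back
        return None

    print(mine(b"example block header", budget_seconds=2.0))

Whether such a throttle exists, and who sets it, is exactly the kind of moral requirement built into the architecture rather than left to the goodwill of each peer.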

This is our fifth and final lesson: the design of these architectures still projects a user’s model (Eco, 1985; Norman, 1988), made of concerns, passions and interests (Hirschman, 1977) which can be very different according to the technical choices.

 

++++++++++

What did we learn from these experiences?

  1. Distributed architectures emerge in a technical, commercial and legal space, something we are used to calling “context”, overlooking the fact that it needs to be made explicit each time in order to know whether it has an agency of its own. They cannot avoid being involved in conflicts, controversies, negotiations and so on.

  2. The entities that fill these cosmoses are not as clearly defined as we used to think: a file is not a file, a server is not a server, a network is not a network, and so on. Learning conceptual flexibility and creativity is either a requirement for, or an output of, getting involved in projects based upon distributed architectures.

  3. The distributed model may be useful for some layers of a network, for some domains or services, and must not be considered a universal reference: its own survival relies on its ability to compose with different styles or types of architectures.

  4. Distribution does not automatically trigger contributive behavior: moral requirements have to be assessed and designed, since these architectures need contributions, but there are very different ways to bring them about.

All these lessons could just as well have been extracted from “ecological fieldwork” of the kind conducted by ecologists, provided that we translate some terms into the field of distributed architectures. The emergence pattern revealed in these accounts can become a virtue as such, but it may also generate a feeling of uncertainty that makes the extension of the distribution model more difficult. It would leave social scientists, for instance, adrift in an uncertain world without any reference to structures, laws or explanations (Boullier, 2016).

However, actors themselves produce some equivalent of a common framework in order to maintain some sense of a “whole” (Latour, et al., 2012), i.e., of a socio-technical structure, while all elements become fluid and exchangeable. Schutz (1967) used to say that members of society do a tremendous amount of work “to maintain the sense of social structure” whenever it seems weak or fragile, as in undecided situations of everyday life. Ethnomethodology (Garfinkel, 1967; Cicourel, 1974) became famous, and very challenging, when it described the methods the layman uses to make sense of blurred boundaries, of equivocal statuses or greetings, of failed coherence in situations. Distributed architectures also require these methods to maintain some sense of a “whole” in the Latourian sense, which, in our examples, may be the file to be reconstituted (BitTorrent), the persistent virtual world shared by the peers (Solipsis) or the blockchain validated by the miners (Bitcoin).

 

++++++++++

The distributed and the whole: How to share a common view and to maintain the sense of a socio-technical structure

However, distributed architectures are not so easy to tackle when it comes to generating this kind of “sense of social structure” (Schutz, 1967) or a sense of “imagined community” (Anderson, 1991), which are the basic requirements of political design. Let us observe how difficult it is to share a common visual representation of a distributed architecture, in the case of BitTorrent, simply by comparing the images proposed by Wikipedia in different languages. The English-language page displays the image of a dispatcher, which appears equivalent to any client-server architecture (although computers at the periphery seem to interact).

 

Figure 1: Illustration of the BitTorrent network in the English language Wikipedia.
Source: https://en.wikipedia.org/wiki/BitTorrent#/media/File:BitTorrent_network.svg

 

The Spanish-language page proposes a more user-oriented visualization, because it shows files being partly downloaded and uploaded from the identified seed, but the network still looks like a smoothly executed division of labor, nothing more, with the nodes and the flows set up once and for all.

 

Figure 2: Illustration of the BitTorrent network in the Spanish language Wikipedia.
Source: https://es.wikipedia.org/wiki/BitTorrent#/media/File:Red_bittorrent.jpg

 

The French version is much more challenging and, significantly, uses a dynamic visualization, in which nodes exchange types and parts of data but also stop and resume the exchange in very unpredictable ways, pairing at random and drawing an unconventional “Web”.

 

Figure 3: Dynamic visualization of the BitTorrent network in the French language Wikipedia.
Source: https://fr.wikipedia.org/wiki/BitTorrent_(protocole)#/media/File:Torrentcomp_small.gif

 

This comparison is relevant for pointing out how difficult it is to obtain an account of a distributed architecture from the top, from an overhanging position in which all elements can be assembled into a well-designed and explicit architecture. These modernist visions of networks should be challenged even when they are used to describe a distributed framework. In this specific case, only an insider’s point of view can grasp from experience how the network is “enacted” (Varela, et al., 1991), because no map is available. This would be a digital translation of the “becoming within” imperative I designed (Boullier, 2015) for cosmopolitical purposes, deriving elements of this approach from Haraway (2003) and from Sloterdijk (2005).

From Haraway, we learned that becoming (Deleuze’s “devenir”) is not a matter of facts, of progress, of control, but a capacity to be affected by companion species — which might be animals or cyborgs — to “become with”. A distributed architecture requires that all participants accept being affected by the flows and exchanges among peers, as is the case for the peers who let their machines become a server, a computational resource, or anything else the networked collective experience needs in order to survive.

From Sloterdijk, we learned that the design of “envelopes” (i.e., technical and institutional containers to be collectively experienced from within) is the only way to maintain a climate, a collective climate, from atmosphere to political life. But no one can ever claim to monitor this climate and these envelopes from outside, contrary to the modernist saga (“the modernist is the one who believes he never was within”). They must be maintained from within, and we must accept, in doing so, being affected by the very phenomenon that is being built. Distributed architectures can only be understood from within, from the activity that makes them work, because their versatility remains too difficult to represent in traditional (modern) ways.

The cosmopolitical challenge of “becoming within” encompasses many issues that are intrinsically political, provided we move out of the traditional framework of politics. Technical architectures do politics: they co-produce the conditions and the principles of our common world, either in “bricks and mortar” or in “bits and packets”. But there are many policies competing at the same time to solve the problem at stake for a specific service. Some may try to escape this “becoming within” imperative. This is why some centralization is useful: by becoming the dispatcher, one avoids remaining trapped within the uncertainty loop.

My “cosmopolitical” compass tries to account for these choices and for the political work of “composition” that is at the core of distributed architecture development. This compass uses Latour’s (1993) insights about modernism as a detachment process and Stengers’ (2010–2011) account of the sciences, in which uncertainty gains a status of its own.

Latour explained how the process of modernization is a kind of permanent detachment from nature and from traditions in order to achieve emancipation — an ongoing process whose cost is the loss of ties with the cosmos, with history and so on. Positioning stakeholders and solutions in architecture design along a line of detachment proves useful for understanding the degree of estrangement produced by each specific policy.

Stengers demonstrated how the sciences, especially quantum physics, developed during the twentieth century ways of considering uncertainty as a condition for scientific discovery and no longer as a flaw. Learning from this tolerance for uncertainty in the sciences, the compass describes positions and policies around a variety of issues in terms of their acceptance of — or attraction towards — uncertainty, or their need for stable references of any kind. By positioning every stance in a dispute or a controversy along these axes, the cosmopolitical compass makes it possible to accept the pluralism of positions and attitudes, and reveals the need for a work of composition that cannot discard any of the positions a priori. The cosmopolitical compass for network architectures presented here (Figure 4) clearly tells a story of compositions, each with its flaws and its need for a new design.

 

Figure 4: Cosmopolitical compass for network architectures.

 

The traditional opposition between mainframes, client-server and P2P immediately reveals which one is more prone to experiencing “from within” the versatility of the network. Mainframes and client-server architectures offer guarantees that someone is in the driver’s seat, that some entity (technical, legal or commercial) will be accountable for the flaws, for the whole service. Peer-to-peer architectures cannot offer such a level of cognitive comfort, because the user must be within the network to make it work, without any external dispatcher in control. But the compass is a fractal process, each quadrant needing a new exploration in order to detect the diversity of policies available within each solution.

So let us elaborate a second compass for P2P systems (here limited to the ones that exchange content) and note the design choices that create protection from the network and from the uncertainty of distribution, i.e., centralized P2P such as Napster and hierarchical P2P such as eMule.

 

Figure 5: Cosmopolitical compass for peer-to-peer networks.

 

Any central index, or even a permanent status for seeds, produces these certainties, which means extracting some functions, some layers, from the influence of the distributed architecture. When the principles of distribution are fully applied, as they are in BitTorrent, they create a powerful flow of content, which is why this solution is still the most widely used one. But uncertainty again rears its head in this solution, because only some content is made available, the content that attracts collective attention for a short time, such as blockbuster movies. The distribution process produces some unwanted effects of convergence, of attractiveness focused on certain content, and we must explain why in order to clarify our point about the cost of uncertainty in distributed architectures.

When many peers look for the same content, each of them has no choice but to give access to their computer and to the packets they have already downloaded, and to become a server for other peers, providing more points of access and increasing the speed of the process. Users are attracted by this speed, and the concentration of collective attention on specific (and recent) content increases. The more distributed and reactive the network is, the more attention gets focused on a small amount of ‘successful’ content (which accounts for the complaints of rights owners). The peer looking for a classic, old or non-famous movie will have trouble finding seeds and will eventually drop her search. This would not have been the case with indexed peer-to-peer systems such as Napster or eMule.
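A toy simulation, with entirely invented parameters, of that feedback loop: each completed download of a sought-after file adds a temporary seed, which speeds up the next downloads, while a rarely requested file never accumulates enough seeds to take off.

    def simulate(initial_seeds, requests_per_round, rounds=20, churn=0.3):
        """Toy model with invented parameters: completed downloads depend on the
        number of seeds, every completion adds a temporary seed, and a share of
        seeds churns away after each round."""
        seeds, served = initial_seeds, 0
        for _ in range(rounds):
            completed = min(requests_per_round, seeds * 2)   # speed grows with seeds
            served += completed
            seeds = int((seeds + completed) * (1 - churn))   # some seeds disappear
        return seeds, served

    # Attention concentrates: the sought-after file ends up with hundreds of seeds,
    # while the rarely requested one stays stuck with a couple of them.
    print("blockbuster:", simulate(initial_seeds=10, requests_per_round=100))
    print("old classic:", simulate(initial_seeds=2, requests_per_round=1))

The numbers are arbitrary, but the asymmetry they produce is the point: the very mechanism that makes popular downloads fast is what starves the long tail of content.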

These are topological effects that neutralize some important qualities of distributed networks (with the result that BitTorrent is a non-permanent resource system). Another opportunity is thus created for reinventing the private network with friend-to-friend (F2F) networks, exemplified by WASTE (http://waste.sourceforge.net/). Encryption of all exchanges reintroduces some certainties, some stable “whole”, since all members must be selected by the previous ones. However, the cost is the uncertainty produced by these invisible networks of peers that cannot advertise themselves. This drastically reduces the number of members and, consequently, the available content. This may help us understand how the universal vision of distributed networks, supposedly extensible effortlessly to a large scale, is of a modernist kind and usually subject to many drawbacks. In an architecture such as WASTE, members belong to a selected group; they live “within” this group. It looks like a rather limited world, for sure, but that is where distribution appears at its most “livable”. The existence of a cosmos does not mean the total extension of a network at the largest possible scale, but the careful composition of connections among entities — and this can be arranged in very different ways.
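The admission rule behind such networks can be sketched as follows; this is an illustrative model of the “members select members” principle, not the actual WASTE implementation, and the key handling is drastically simplified.

    import hashlib

    class FriendToFriendPeer:
        """Illustrative F2F admission rule: a connection is accepted only if the
        counterpart's key was vouched for by an existing member. Key handling is
        drastically simplified compared to a real system such as WASTE."""

        def __init__(self, founder_key: bytes):
            self.trusted = {self._fingerprint(founder_key)}

        @staticmethod
        def _fingerprint(public_key: bytes) -> str:
            return hashlib.sha256(public_key).hexdigest()[:16]

        def introduce(self, sponsor_key: bytes, new_key: bytes) -> bool:
            """An existing member (the sponsor) vouches for a newcomer."""
            if self._fingerprint(sponsor_key) not in self.trusted:
                return False
            self.trusted.add(self._fingerprint(new_key))
            return True

        def accept_connection(self, public_key: bytes) -> bool:
            return self._fingerprint(public_key) in self.trusted

    peer = FriendToFriendPeer(founder_key=b"alice-public-key")
    peer.introduce(sponsor_key=b"alice-public-key", new_key=b"bob-public-key")
    print(peer.accept_connection(b"bob-public-key"))        # True: vouched for
    print(peer.accept_connection(b"stranger-public-key"))   # False: invisible to outsiders

The same rule that restores certainty inside the group (only vouched-for keys get in) is the one that makes the network invisible, and therefore small, from the outside.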

This is an exploratory compass, which can be extended to any quadrant; in encrypted darknets, for instance, we would surely find many other differences by applying the compass one more time. No cosmopolitical design of a distributed network can escape the requirement of composition (Latour, 2010), selecting and combining the qualities of other designs. To come back to the example of F2F, users can have a centralized index without any fear of attack, since the network is private and restricted. It will be permanent (as eMule was), thanks to the maintenance efforts distributed among members, and it will offer very high download and upload speeds because of this limitation in numbers. However, since the network loses openness and the reach of numerous members, there is room for an innovation that would compose a “more open network architecture” while still keeping the protection of the community. When attachments (connections) are considered a value (and friend-to-friend networks are designed for that reason), this might limit the available content and therefore push the whole community back towards a very stable store of content, unable to address, for instance, the need for novelty.

 

++++++++++

Conclusion

A cosmopolitical architecture cannot be a “purely” distributed one. Rather, it is one that accounts more accurately for the various compositions of qualities offered by all types of architectures. Such an architecture cannot be achieved from an overhanging position that would use pure optimization techniques. What we have just stated can, of course, be considered a merely “pragmatic” attitude, but it is in fact a call for the demanding work of a real and careful cosmopolitical composition. The personal experiences recollected in the first part of this article should be considered an anthropological method for accounting for these architectures, which require a view from within. Such a method is useful for reconsidering and “resizing” purely technical or purely militant approaches to distributed architectures, and for emphasizing the strong coupling between communities and architectures and the need for a double-sided design of both. End of article

 

About the author

Dominique Boullier is Professor and Head, Social Media Lab, Collège des Humanités, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He is a sociologist of digital networks and urban life as well as a linguist. Throughout his career, he has analyzed — and contributed to — a large variety of digital innovations, from the 1980s Minitel to today’s Big Data. He was the director of many academic labs, user labs, and a company, and after eight years at Sciences Po médialab in Paris with Bruno Latour, he became the head of the Social Media Lab at EPFL (2015). He is currently closely involved in the reinvention of social sciences challenged by Big Data and Machine Learning.
E-mail: dominique [dot] boullier [at] epfl [dot] ch

 

References

Madeleine Akrich, Michel Callon and Bruno Latour, 2006. Sociologie de la traduction: Textes fondateurs. Paris: Presses des Mines.

Benedict Anderson, 1991. Imagined communities: Reflections on the origin and spread of nationalism. Revised and extended edition. London: Verso.

Luc Boltanski and Laurent Thévenot, 2006. On justification: The economies of worth. Translated by Catherine Porter. Princeton, N.J.: Princeton University Press.

Dominique Boullier, 2016. “Big data challenges for the social sciences: From society and opinion to replications,” arXiv.org (18 July), at https://arxiv.org/abs/1607.05034, accessed 10 November 2016.

Dominique Boullier, 2015. “Cosmopolitics: ‘To become within’ — From cosmos to urban life,” In: Albena Yaneva and Alejandro Zaera-Polo (editors). What is cosmopolitical design? Design, nature and the built environment. Farnham: Ashgate Publishing.

Dominique Boullier, 2012. “Preserving diversity in social networks architectures,” In: Françoise Massit-Folléa, Cécile Méadel and Laurence Monnoyer-Smith (editors). Normative experience in Internet politics. Paris: Transvalor-Presses des Mines.

Dominique Boullier, 1989. “Archéologie des messageries,” Réseaux, volume 7, number 38, pp. 9–29.

Aaron Cicourel, 1974. Cognitive sociology: Language and meaning in social interaction. New York: Free Press.

Umberto Eco, 1985. Lector in fabula: Ou, la coopération interprétative dans les textes narratifs. Paris: B. Grasset.

Jean Gagnepain, 1990. Du vouloir dire: Traité d’épistémologie des sciences humaines. Tome 1: Du signe, de l'outil. Paris: Livre & Communication.

Harold Garfinkel, 1967. Studies in ethnomethodology. Englewood Cliffs, N.J.: Prentice-Hall.

Donna Haraway, 2003. The companion species manifesto: Dogs, people, and significant otherness. Chicago: Prickly Paradigm Press.

Albert Hirschman, 1977. The passions and the interests: Political arguments for capitalism before its triumph. Princeton, N.J.: Princeton University Press.

Joaquin Keller and Gwendal Simon, 2003. “Solipsis: A massively multi-participant virtual world,” Proceedings of PDPTA ’03: Conference on parallel and distributed processing techniques, at http://www.irisa.fr/adept/membres/gwendal/pdpta03-kellerSimon.pdf, accessed 10 November 2016.

Bruno Latour, 2010. “Steps toward the writing of a compositionist manifesto,” New Literary History, volume 41, pp. 471–490.

Bruno Latour, 1993. We have never been modern. Translated by Catherine Porter. Cambridge, Mass.: Harvard University Press.

Bruno Latour, Pablo Jensen, Tommaso Venturini, Sébastien Grauwin and Dominique Boullier, 2012. “‘The whole is always smaller than its parts’ —A digital test of Gabriel Tarde’s monads,” British Journal of Sociology, volume 63, number 4, pp. 590–615.
doi: http://dx.doi.org/10.1111/j.1468-4446.2012.01428.x, accessed 10 November 2016.

Lawrence Lessig, 1999. Code and other laws of cyberspace. New York: Basic Books.

Francesca Musiani, 2015. Nains sans géants: Architecture décentralisée et services Internet. Second edition. Paris: Presses des Mines-Transvalor.

Donald Norman, 1988. The psychology of everyday things. New York: Basic Books.

Richard Rogers, 2013. Digital methods. Cambridge, Mass.: MIT Press.

Alfred Schutz, 1967. The phenomenology of the social world. Translated by George Walsh and Frederick Lehnert. Evanston, Ill.: Northwestern University Press.

Peter Sloterdijk, 2005. Sphères: III, Ecumes. Sphérologie plurielle. Paris: Maren Sell.

Isabelle Stengers, 2010–2011. Cosmopolitics. Translated by Robert Bononno. Minneapolis: University of Minnesota Press.

Francisco Varela, Evan Thompson and Eleanor Rosch, 1991. The embodied mind: Cognitive science and human experience. Cambridge, Mass.: MIT Press.

Albena Yaneva, 2012. Mapping controversies in architecture. Farnham: Ashgate Publishing.

 


Editorial history

Received 6 November 2016; accepted 10 November 2016.


Copyright © 2016, Dominique Boullier.

Cosmopolitical composition of distributed architectures
by Dominique Boullier.
First Monday, Volume 21, Number 12 - 5 December 2016
https://firstmonday.org/ojs/index.php/fm/article/download/7128/5662
doi: http://dx.doi.org/10.5210/fm.v21i12.7128