First Monday

Heteromation and its (dis)contents: The invisible division of labor between humans and machines by Hamid Ekbia and Bonnie Nardi



Abstract
The division of labor between humans and computer systems has changed along both technical and human dimensions. Technically, there has been a shift from technologies of automation, the aim of which was to disallow human intervention at nearly all points in the system, to technologies of “heteromation” that push critical tasks to end users as indispensable mediators. As this has happened, the large population of human beings who have been driven out by the first type of technology are drawn back into the computational fold by the second type. Turning artificial intelligence on its head, one technology fills the gap created by the other, but with a vengeance that unsettles established mechanisms of reward, fulfillment, and compensation. In this fashion, replacement of human beings and their irrelevance to technological systems has given way to new “modes of engagement” with remarkable social, economic, and ethical implications. In this paper we provide a historical backdrop for heteromation and explore and explicate some of these displacements through analysis of a number of cases, including Mechanical Turk, the video games Foldit and League of Legends, and social media.

Contents

1. Introduction
2. Historical backdrop
3. Heteromation: The machine calls for help
4. Discussion: A sparse reward space
5. Conclusion: High–frequency low–visibility work

 


 

1. Introduction

The relatively short but vibrant history of computing is marked by a number of distinct phases. Historians of technology usually identify four such phases: the eras of mainframes, personal computing, the Internet, and Web 2.0. Those who study this history from the perspective of human–computer interaction, on the other hand, differentiate various “paradigms” on the basis of the modalities of interaction, the experience of users, or the human cognitive skills that are brought to bear in our dealings with computers (Harrison, et al., 2007). A third perspective comes from the area of information systems, which studies the development of computing from an organizational perspective, examining its impact on organizational practices having to do with innovation, productivity, coordination, knowledge management, and skill development (Zuboff, 1988; Kallinikos, 2010). There is, however, a fourth way of looking at this history — that is, from the perspective of political economy.

1.1. The political economy of computing

The perspective of political economy takes into account the broader socioeconomic drivers of change in the relationship between humans and machines. Rather than focusing on the characteristics of the technology, on individual dealings with machines, or on organizational implications of technology, this perspective views technological change within the purview of sociohistorical developments, socioeconomic systems, legal and regulatory frameworks, environmental impacts, governance structures, and large–scale government policies and agendas. Taking the vantage point of these sweeping historical forces and conditions allows us to consider issues of social welfare, employment, economic rewards and incentives, power and politics, safety and security, and the technologically mediated relationship between life, labor, and leisure. Such concerns populate the Marxist literature, which provides invaluable analyses but is often sidelined for sociopolitical reasons. Legal scholars such as Lawrence Lessig, Pamela Samuelson, Dan Burk, and Jonathan Zittrain have considerable influence, and they work at the technohistorical level, as do commentators like Evgeny Morozov, social scientists such as Langdon Winner, computer scientists such as Rob Kling, and others who examine the social, cultural, and political aspects of computing. We explore wider issues of economy, technology, and culture that we believe should be engaged and developed in our research communities. The ubiquitous penetration of computing into all aspects of contemporary life makes the introduction of such a perspective all the more vital. This paper provides an analysis of the “division of labor” between humans and machines from the perspective of political economy.

1.2. What is heteromation?

We aim to understand the shifting dynamics of work and labor mediated by digital technologies in contemporary capitalist societies. The division of labor between humans and computer systems has significantly, but not quite visibly, changed along both technical and human dimensions. Technically, there has been a shift from technologies of automation (as in banking, retail, and manufacturing), the aim of which was to disallow human intervention at nearly all points in the system, to technologies of what we call heteromation that push critical tasks to end users as indispensable mediators. We contrast heteromation, which creates technical systems that function through the actions of heterogeneous actors, with automation, a paradigm oriented to the actions of machines. Heteromated systems include video games, social media, certain crowdsourced applications, systems of microwork such as Mechanical Turk, personal health records, devices that require intermediation for some users (such as cell phones), and the quantified self inasmuch as insurance companies and others will use the data. As this shift has happened, the large population of human beings who have been driven out by the first type of technology, or never engaged it, are drawn (back) into the computational fold by the second type.

Turning artificial intelligence on its head, heteromated technology fills the gap created by automation, but with a vengeance that unsettles established mechanisms of reward, fulfillment, and compensation. It alters social relations by fashioning humans as computational components, raising remarkable social, economic, and ethical questions. On the one hand, we observe the drawing of cheap labor from the so–called “precariat” — a pool of unemployed, underemployed, or unemployable workers who are compensated well below minimum wage (see Standing, 2011). On the other hand, the same arrangement leverages uncompensated labor from a range of participants, some highly educated, by offering participants totalized engagement — i.e., strong affective rewards (see Ekbia and Nardi, 2012). In this fashion, heteromated systems benefit enterprises as well as participants, who receive affective rewards. In so doing, they hide the asymmetric labor relations in which benefits are significantly greater for enterprises. We emphasize that an essential characteristic of heteromation is that someone (typically an enterprise) benefits from the labor of others.

In previous work on “inverse instrumentality” we observed that “certain large, complex technologies ... move to strategically insert human beings within technological systems in order to allow the systems to operate and function in intended ways” [1]. We analyzed gaps in systems such as video games and personal health records filled by human labor. For example, complex video games typically provide little or no training and documentation. Yet players engage them with zest; the learning gap is filled by players themselves who write guides, produce YouTube videos, populate forums with useful information, and supply multiple forms of in–game informal peer learning (see Nardi, 2010).

Inverse instrumentality accounts for some but not all cases in which we are currently interested. Heteromation, a closely related concept, inherits the core idea of gaps filled by human labor in computational systems that we theorized in inverse instrumentality, but provides a more general formulation in three ways. First, heteromation can apply to smaller technological systems, such as the Foldit game. Second, inverse instrumentality posits that gaps are intentional. But in some systems, designers may not be aware of gaps during design. Third, inverse instrumentality conceives subjects in the system as both “users” in the classic sense, and “subjects bound into the system as necessary functional components” [2]. Heteromation includes systems that may have functionaries who are not “users” (as in Mechanical Turk); that is, subjects who take action within a software system but have no interest in the outcomes produced by their actions within the system. These subjects’ motives are derived from outside the system [3].

1.3. What heteromation is not

The introduction of the neologism “heteromation” is based on the belief that we are facing a novel phenomenon in the organization and regulation of human labor. This phenomenon is partly related to the computational “regime of work,” which has emerged in the current cognitive and cultural environment (Kallinikos and Hasselbladh, 2009), but it goes beyond the formal contexts of work. The phenomenon is also associated with recent displacements in the organization of human activity captured by terms such as “crowdsourcing” and “human computation.” It is, however, broader in its scope and implications than either of these. Crowdsourcing includes both small, repetitive tasks often called microwork and more skilled labor, done at scale and managed through computers. Although many heteromated systems use crowdsourcing, they may also extract value from labor relations that are not mediated computationally, or not within the same computational system. Conversely, some systems employing microwork do not extract value for outside enterprises, and therefore we do not consider them heteromated. This distinction is important: heteromation concerns only arrangements mediated by digital technology in which value can be gained by enterprises (or other outside entities). An example of a system that deploys microwork but is not heteromated is that of André, et al., which collects and analyzes end user preferences for conference paper scheduling. The data collection, which requires users to input their preferences, is conceived as “distribute[d] micro–tasks ... an expert crowd [completes] to aid in the scheduling process” (André, et al., 2013). The labor of reporting preferences benefits only the community itself, resulting in efficiently scheduled paper sessions cognizant of user preferences. No extraction of surplus value for an outside enterprise occurs.

Human computation asserts a functionalist view, largely uninterested in considerations of history, social relations, labor relations, and economy. The functionalist viewpoint does not ask questions such as “Why now?”, “What happens to people when they become components of computational systems?”, and “What are the impacts of heteromated labor on larger communities and on polities?” [4] In our case studies, these questions arose, yet we found little in the human computation/crowdsourcing literature to address them. Outside a few studies such as Irani and Silberman’s analysis of Mechanical Turk (2013), “crowdsourcing” and “human computation” have been normalized in the research community in optimistic evocations of efficiency, smart use of available resources, and the inevitable march of technological progress. We hope the term heteromation will evoke a different, broader, more nuanced, and critical set of concerns.

1.4. Humans and machines: What are they “good at”?

We argue that the turn to heteromation does not derive, as we might expect, from essential qualities of humans and machines — what each is “good at” (whether relatively or absolutely). The list of things machines are good at continually expands, and assertions about the things humans are said to be good at generally consider only whether a human can physically or cognitively accomplish a task, rather than whether the task is morally and ethically defensible or desirable. Our stance counters common assumptions in the literature — for example, Langlois’ assertion that “the quite different cognitive structures of humans and machines (including computers) ... explain and predict the tasks to which each will be most suited” [5]. Drawing on evolutionary psychology, Langlois refers to “the kinds of cognition for which humans have been equipped by biological evolution,” and declares that “... the human machine is also in many respects a response to economic forces. The human brain (and the human being more generally) is an evolved product, and therefore one whose design reflects tradeoffs motivated by resource constraints” [6].

We concur that the human condition is largely shaped by economic forces. Unlike Langlois, who views this reality from an evolutionary and essentialist perspective, however, we see the development of humans and technologies as the outcome of socioeconomic forces operating throughout history. We reject innatist assumptions of the kind Langlois puts forward, and we hope to perturb notions about what is “suitable” for humans by contextualizing our empirical examples within their particular historical socioeconomic conditions. The assumption of a timeless, natural division of labor, in which we divvy up the work for humans and the work for machines according to their respective abilities, deflects attention from the specific conditions under which humans labor and the changing systems of compensation and reward in which they contribute value to the projects of others. We examine heteromation as an outcome of particular socioeconomic conditions, not as the outcome of fixed attributes of humans and machines.

We argue that heteromation is a product of current conditions including: (1) historically high levels of profit in which no realm of human activity is too private, repetitive, or objectifying to escape capital’s grip; (2) the “race to the bottom” of increasing economic disparity resulting in the precariat; (3) a cultural need for high levels of stimulation fed by decades of quality popular entertainment and the generalized anxiety of the neoliberal subject which renders such stimulation particularly desirable; and, (4) feelings of uselessness or futility experienced by growing segments of the population such as the elderly, the chronically ill, the impoverished, and healthy retired people faced with 20–30 years without the social rewards of employment (an unintended outcome of increasing life spans). While these are broad claims, they are supported by diverse literatures in economics, political economy, history, anthropology, and sociology (e.g., Gorz, 1985; Beniger, 1986; Boltanski and Chiapello, 2005; Harvey, 2005; Sennett, 2006; Brynjolfsson and McAfee, 2011; Standing, 2011; Suarez–Villa, 2012).

1.5. Case examples

Our cases examine heteromated systems according to their functionality and reward structure. We categorize systems in terms of who benefits from the heteromated labor relation, whether participant compensation (monetary) is offered, and whether the system produces affective rewards. Beneficiaries are organizations that reap the major benefits of heteromated labor. Participants may benefit from affective rewards.

 

Table 1: Varieties of heteromated systems.
System              | Heteromated functionality | Beneficiaries  | Participant compensation | Affective rewards for participants
--------------------|---------------------------|----------------|--------------------------|-----------------------------------
Mechanical Turk     | repetitive microtasks     | employers      | + (small)                | -/+ (minor)
Foldit              | problem–solving labor     | scientists     | -                        | +
video games         | training                  | game companies | -                        | +
League of Legends   | disciplining players      | Riot Games     | -                        | +
Facebook and Google | input of personal data    | corporations   | -                        | +

 

Table 1 shows heteromated systems in different domains supporting diverse functions. Even this small sample reveals heteromation to be a flexible strategy exhibiting the breadth of application characteristic of automation. In what follows, we provide a historical backdrop for heteromation, and then explore and explicate some of the socioeconomic developments and displacements that have led to its current uptake. Our work has both a theoretical and empirical component. Theoretically, it draws on our previous theory of inverse instrumentality and its relation to systems of control (Ekbia and Nardi, 2012). Empirically, the investigation is grounded in the details of a wide range of heteromated systems emerging in contemporary capitalism.

 

++++++++++

2. Historical backdrop

In the early days of computing, Herbert Simon, whose work spanned such broad areas as economics, psychology, computer science, artificial intelligence, management, and decision science, envisioned the following:

Within the very near future — much less than twenty–five years — we shall have the technical capability of substituting machines for any and all human functions in organizations. [7]

More than a decade later, in the introduction to The new science of management decision, Simon reiterated the vision:

... that computers and automation will contribute to a continuing, but not greatly accelerated, rise in productivity, that full employment will be maintained in the face of that rise, and that mankind will not find life of production and consumption in a more automated world greatly different from what it has been in the past. [8]

This vision was based on the conviction that “in our times computers will be able to perform any cognitive task that a person can perform” [9]. As an economist, Simon was also aware that what would ultimately determine the relationship between humans and computers is not their respective technical capacities but the socioeconomic system that embeds and enables those capacities. Accordingly, Simon discerns three dimensions of disagreement on the impact of computer technology on the processes of management: a technological dimension, a philosophical dimension, and a socioeconomic dimension. Roughly speaking, according to Simon, expert opinions vary according to how small or large an impact they envision. Aligning the first two dimensions, Simon recognizes four possible schools of thought in this respect (see Table 2), and characterizes his own position as “fairly extreme along all dimensions” — namely as a technological radical, an economic conservative, and a philosophic pragmatist [10]. This is a position that Simon maintained, with typical high–flown intellectual doggedness, throughout his career, facts on the ground notwithstanding. In an article published in Interfaces in 1987, for instance, complaining about the “present status of regrettable isolation” between AI and operations research [11], he advocates a “two–heads” approach in which, “joining hands with AI, management science and operations research can aspire to tackle every kind of problem–solving and decision–making task the human mind confronts” [12].

Simon’s vision, self–characterized as technologically radical but socio–economically conservative, has not materialized in the decades since its proclaimed forecast. What has happened instead is that humans are crowded into tasks that put them at the service of machines and the organizations that own and operate them, inverting both heads of Simon’s vision in ways that lend more support to a technologically conservative and socioeconomically radical perspective in Simon’s classification, with the added caveat that computers are still limited in their capabilities despite their amazing computational power. Simon’s exclusive focus on computational power reflects a quantitative view of human cognition that takes brute speed and computing capacity as the measure of intelligence. The other caveat is that the “plentitude of goods and services” that Simon refers to is, indeed, available, but at high financial, emotional, or labor cost to the average user. To see this, we need to examine more closely the shifts that have taken place in the division of labor between humans and machines in the last century or so. To that end, we identify three distinct phases of automation, augmentation, and heteromation, which we briefly examine below.

 

Table 2: Simon’s classification of views on the impact of computers on management.

 

2.1. Automation: The machine takes center stage

In the past, the man has been first; in the future the system must be first.
   — Frederick Taylor, Principles of scientific management (1911).

The idea of automation has a long history, having to do with labor–saving mechanisms that relegate some of the tasks performed by human beings to machines. According to Beniger (1986), the term was first introduced in 1936 by General Motors executive Delmar S. Harder to refer to “the automatic handling of parts between progressive production processes” [13]. By 1948, the term was used in the popular press as a synonym for “automatic control” [14]. Beniger traces the history of automatic control as far back as the third century B.C., when the water clock represented the first instance of feedback control. In modern times, innovations such as the fantail to control the direction of a windmill (1746) and the centrifugal pendulum to control its speed (1787), the Jacquard loom to automate weaving (1801), the thermostat to control temperature (1830), the pneumatic proportional controller (late 1920s), the colorscope for precise color identification (1930), and pneumatic transmitters for industrial process control (mid–1930s) provide examples of early mechanisms of automation. The later innovations were a response to what Beniger calls “the crisis of control,” which derived from the mismatch between the speedy cross–continental flow of material enabled by the steam engine, on the one hand, and the capacity to contain and control the flow, on the other. As a whole, the innovations promised a “control revolution” that transformed “an entire social processing system, from extraction and production to distribution and consumption” [15].
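
The common thread in these early mechanisms is feedback control: the system measures its own output and adjusts its own input without human intervention. As a minimal sketch of the principle (the setpoint, the rates, and the toy room model are our illustrative assumptions, not a description of any historical device), consider the on/off control loop that a thermostat implements:

```python
def thermostat_step(temperature, heater_on, setpoint=20.0, band=0.5):
    """One step of on/off ("bang-bang") feedback control: the measured
    output (temperature) feeds back to switch the input (heat)."""
    if temperature < setpoint - band:
        heater_on = True
    elif temperature > setpoint + band:
        heater_on = False
    # Toy room model with assumed rates: heating warms, idling cools.
    temperature += 0.8 if heater_on else -0.3
    return temperature, heater_on

temperature, heater_on = 15.0, False
for _ in range(40):
    temperature, heater_on = thermostat_step(temperature, heater_on)
print(round(temperature, 1))  # hovers near the 20.0 setpoint, no human in the loop
```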

Although automatic control could be implemented through any number of mechanisms (mechanical, hydraulic, pneumatic, or electronic), wide–ranging automation with the goal of relegating human labor to machines only came about with the advent of modern digital computers in the years after World War II. The computer scientist Dertouzos defined automation as “the process of replacing human tasks by machines’ functions” [16]. The process applied not only to manual labor but, as we saw in Simon’s ideas, it was intended to incorporate significant elements of human cognitive tasks such as data processing, decision–making, organizational management, and so forth. This goal was largely pursued through AI projects that sought to mimic and implement the most abstract forms of human thinking such as solving mathematical problems, proving geometric theorems, and playing chess. There were early eye–catching successes in some areas, generating great enthusiasm for the prospects of computing, and leading AI advocates such as Simon to the conclusion that “[M]an’s comparative advantage in energy production has been greatly reduced in most situations — to the point where he is no longer a significant source of power in our economy” [17]. The failure of AI projects to deliver robust systems that could operate outside of narrowly defined micro–domains, however, led to declining interest in AI in the 1970s — a period referred to in the literature as the “AI Winter” (Crevier, 1993; Ekbia, 2008).

2.2. Augmentation: The machine comes to the rescue

The obstacles encountered by automation, and particularly by AI, can in principle be avoided by an alternative approach that came to be known as “intelligence amplification.” By analogy with “physical power,” the idea was that “intellectual power” could also be amplified through the introduction of “appropriate selection” rules [18]. This idea was taken to its logical conclusion by figures such as J.C.R. Licklider and Douglas Engelbart through the concepts of “man–computer symbiosis” and “augmenting human intellect.” Licklider envisioned a future where “human brains and computing machines will be coupled together very tightly, and ... the resulting partnership will think as no human brain has ever thought” (Licklider, 1960). The goal was to help human beings in “finding solutions to problems that before seemed insolvable, [including] the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers — whether the problem situation exists for twenty minutes or twenty years” [19]. The dream, in other words, was that computers could support and amplify human capacities in almost all areas of professional activity.

The emergence of personal computing changed perceptions and visions regarding the division of labor between humans and machines. In AI, this was accompanied by the development of “expert systems” — technologies intended to provide expert knowledge and support for complex tasks such as medical diagnosis, mass spectrometry, and, of course, management problems. It was during this same period that Simon saw a space for synergy between AI and OR: “The almost simultaneous appearance of microcomputer and personal computers greatly broadened public awareness of computers in general and the possibilities for artificial intelligence in particular” [20]. In reality, however, expert systems and augmented intelligence proved brittle because of their narrow domain–specific knowledge.

 

++++++++++

3. Heteromation: The machine calls for help

The emergence of the Internet, along with the vast array of services and applications promulgated by the World Wide Web, turned the computing tide once more, diffusing digital technology around the globe. The Web 2.0 agenda pursued in the 2000s pushed this diffusion even further, enveloping a much broader set of human activity. Heteromated systems, such as the ones we discuss below, can be largely considered an outcome of these developments.

3.1. Mechanical Turk: Micropayments for microtasks

Mechanical Turk, operated by Amazon.com, is a clearinghouse for employers seeking labor to fulfill microtasks, usually repetitive actions such as tagging images, transcribing snippets of audio, or filling out forms. This repetitive labor generally offers limited compensation, well below minimum wage (see Horton and Chilton, 2010; Irani and Silberman, 2013). Jeff Bezos launched Mechanical Turk in 2006, announcing, “You’ve heard of software–as–a–service. Now this is human–as–a–service” (Bezos, 2006). Irani and Silberman note that Mechanical Turk establishes “workers as a form of infrastructure, rendering employees into reliable sources of computational processing” [21]. Mechanical Turk allows capital to deploy digital technology to efficiently locate and organize underutilized pockets of the labor force. Profits are increased through low wage labor and avoidance of costs such as health and retirement benefits.

In the current economy, labor supplied by human workers is cheaper than comparable automated systems. Certain tasks that humans can perform are not impossible for computers, but would require expensive research and programming labor to be realized. In the long run, it might be more cost–effective for enterprises to automate labor performed by human workers in heteromated systems, but capitalists are driven by near–term profits. For example, systems such as automated audio transcription — built on decades of government–funded research not subject to the same constraints as enterprises — function reasonably well, but, at this time, not as reliably as humans. Under current conditions, hiring people through short–term, benefits–free contracts that typically max out at a few dollars per hour (Horton and Chilton, 2010) is less expensive than investing in research and development. Mechanical Turk fulfills Lyotard’s prophecy that “the temporary contract” would increasingly dominate in employment (as well as other arenas of life) [22]. Lobo, et al. (2012) argue that innovation (which would be required to automate image recognition, etc.) is increasingly expensive. We thus expect the trend to heteromated labor to continue for some time.
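
The arithmetic behind “a few dollars per hour” is straightforward. With hypothetical but representative figures (the per–task rate and the pace below are our illustrative assumptions, not data from Horton and Chilton), an image–tagging worker earns:

\[
\$0.05 \ \text{per task} \times 60 \ \text{tasks per hour} = \$3.00 \ \text{per hour} \;<\; \$7.25 \ \text{per hour},
\]

well under the current U.S. federal minimum wage, with no benefits attached.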

Mechanical Turk would appear to argue for the commonsense idea that we should get machines to do what they are good at, and leave the rest for people. Humans are said to be good at image recognition, for example, and such a task is “easy” for them. But this attribution skirts the reality that repeatedly looking at images in which a person has no interest is not really all that “easy.” In fact, such work seems profoundly alienating. Some workers in Irani and Silberman’s (2013) study reported that microwork helped them “kill time,” a kind of affective reward. If killing time is an affective reward, it is an immiserated one, and the very idea of killing time through repetitive action is itself a symptom of alienation or anxiety. Rather than performing work suited to humans, Mechanical Turk’s microlaborers perform for the token wages deemed suitable for the precariat. Such economic analysis is essential for understanding Mechanical Turk and why it has become a viable business for Amazon. Mechanical Turk has proven commentators such as Langlois wrong; they assume that computers will be better than humans at “routine and well–defined activities” (Langlois, 2003), and thus such tasks will fall to computers. But that is not how things have shaken out at Mechanical Turk where humans get the routine and well–defined tasks [23]. Essentialist dichotomies don’t tell the story of the contemporary division of labor between humans and machines which hinges on political economy and the current swollen labor force of the economically desperate facing chronic predicaments such as disability, responsibility for impaired relatives, lack of marketable job skills, loss of jobs to automation or outsourcing, and underemployment [24].

Irani and Silberman (2013) observe that microlaborers have no representation, regulatory redress, or trade associations. They are at the mercy of employers. Employers are, of course, concerned with profits, and they refuse even to answer workers’ e–mail messages. Irani and Silberman report:

[A] large–scale [employer] explained a logic [we] heard from several Turk employers: “You cannot spend time exchanging email. The time you spent looking at the email costs more than what you paid them. This has to function on autopilot as an algorithmic system ... integrated with your business processes.”

Employers, therefore, must consider employees as functionaries in “an algorithmic system,” forcing the labor relation even further along the path of ruthless objectification than Ford or Taylor could have imagined. Those humans rendered as bits of algorithmic function disappear into relations with oblivious employers “on autopilot.” Workers are largely “invisible,” as Irani and Silberman remark.

3.2. Citizen science: Problem solving by the crowd

We turn now to citizen science projects such as Foldit — a free–to–play, multiplayer video game that harnesses sophisticated human problem solving labor to solve complex protein folding problems. The object of the game is to find “well–folded protein structures” using spatial reasoning and other cognitive skills (Cooper, et al., 2011). The system seeks human labor because computing the “biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space” [25]. Players find the game’s puzzles challenging and stimulating, and they have discovered significant protein structures. Players are not paid, but reap significant affective rewards.
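
The scale of that search space can be conveyed with Levinthal’s classic back–of–the–envelope estimate (the numbers are illustrative and are not drawn from the Foldit papers): if each of the \(n\) amino–acid residues in a protein chain can adopt even three backbone conformations, the number of candidate structures grows as

\[
3^{n}, \qquad \text{so a modest 100-residue protein already admits } 3^{100} \approx 5 \times 10^{47} \text{ conformations,}
\]

far too many for exhaustive search — hence the appeal of human spatial reasoning.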

Foldit is produced by a team of computer scientists and biochemists at the University of Washington. Player labor provides value to the scientists, helping them advance their fields. Capital is not the chief instigator behind heteromation in Foldit — it is scientists who cannot afford to hire programmers to construct complex algorithms to solve protein folding problems. Cooper observed that “Trying to develop an algorithm to capture all the players’ spatial reasoning would be ... complex, and so I think that applying [player] reasoning through a game is a good approach” [26]. It is more cost–effective to engage players to play a game than to pay programmers to develop “complex” algorithms.

But, interestingly, players themselves have turned to automation. As part of play, they automate their own routines, creating algorithms that speed repetitive sequences of actions (Cooper, et al., 2011). Cooper noted, “Some of the work that the players are doing is coming up with algorithms to automate some parts of their strategies” [27]. This development indicates that automation could in fact be very useful in Foldit, and that the heteromated system need not necessarily rely so heavily on human problem solving. There is no natural human–machine division of labor at work; rather, progress hinges on finding someone to develop the automation for free. Rather than deploying human labor because “humans are good at puzzles,” Foldit demonstrates that the economics of automation explain a good deal about when automation comes into play. Now that players (who work for free) are developing the algorithms, Foldit is evolving toward increased automation.
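
In outline, such a player–written routine is a loop that replays a hand–discovered sequence of moves and keeps only the improvements. The sketch below is our illustrative reconstruction in Python (Foldit’s actual recipes are scripted inside the game against its pose object); the toy energy model and all names are hypothetical:

```python
import random

class ToyPose:
    """Stand-in for a protein pose; the random 'energy' model is purely
    illustrative and is not Foldit's scoring function."""
    def __init__(self):
        self.energy = 100.0
        self._checkpoint = 100.0

    def perturb(self):
        # Stand-in for a repetitive move sequence (e.g., shake, then wiggle).
        self.energy += random.uniform(-2.0, 1.0)

    def save(self):
        self._checkpoint = self.energy

    def restore(self):
        self.energy = self._checkpoint

def run_recipe(pose, n_iterations=200):
    """Replay the move sequence repeatedly, keeping only improvements:
    the automation-by-players pattern Cooper, et al. (2011) describe."""
    best = pose.energy
    for _ in range(n_iterations):
        pose.perturb()
        if pose.energy < best:   # lower energy = better fold
            best = pose.energy
            pose.save()
        else:
            pose.restore()       # undo moves that made things worse
    return best

print(run_recipe(ToyPose()))     # the score improves with no further human clicks
```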

Mechanical Turk has proved its cost effectiveness within the context of capital’s demands. Are systems such as Foldit exemplars for future for–profit systems, or will they be relegated to subsidized university environments? The answer is not clear. Generating affective labor by designing entertaining games is far more difficult than paying a small wage for work acknowledged at the outset to be boring. Foldit is constructed on the Rosetta system developed by the research community (and hence subsidized), and is available for free. A small number of graduate and undergraduate students do most of the Foldit programming, providing a base of low cost/free labor [28]. Its profit–making potential (or that of a comparable system) seems low.

But, on the other hand, Foldit could be a bellwether of a “technical turn” in heteromation, defeating the increasingly high cost of innovation which Lobo, et al. (2012) have documented. Perhaps heteromation will move toward sophisticated technical work, as the Foldit players have begun doing, looping us back toward automation. Recent Foldit research stresses “maximiz[ing] [collaborative] engagement” (Cooper, et al., 2010), and providing tools for designing, programming, and sharing algorithms (Cooper, et al., 2011). These developments signal that harnessing affective labor by promoting sociality and supporting challenging technical work might be a path to fostering automation at low cost.

3.3. Video games: An exercise in self–management

Imagine you are a video game company and receive tens of thousands of aggrieved e–mail messages every day from your player base of 67 million players worldwide. This was the case in 2011 for Riot Games, one of the most successful video game companies in the world. Most League of Legends (LoL) players are young men and boys, the segment of the population most likely to act outside conventional social norms, or even to be unaware of social norms. To play League of Legends, players are computer–matched in small teams with others they generally do not know to engage in short games of about 30 minutes. This constellation of demographic, social, and ludic factors affords an environment in which players often exhibit what Riot Games calls “toxic behavior”: using crude or hostile language, deliberately sabotaging matches, and displaying negative attitudes (in an environment that is supposed to be fun). The e–mail messages contained complaints about other players’ toxic behavior, which many players declared was ruining their game. Because multiplayer games demand the presence of other players as functioning parts of the system, if players leave the game, there is no game. Not surprisingly, a heteromated system that brings people together in online spaces for shared activity creates some social problems, as has been true since the earliest days of the Internet (see, e.g., Kiesler, 1984). People are readily induced to participate online, but they are not always immediately docile and compliant. Game companies must find ways to govern the unruly, or their heteromated systems will fail.

In games such as World of Warcraft, bad behavior is managed by “game masters” to whom players can report the use of racist language, inappropriate commercial ventures intruded into the game, and so on (see Nardi, 2010). But game masters must be paid professional wages (even though, as customer support, such workers are on the lower end of the scale). Riot Games’ innovative solution to the problem of behavior regulation was to create a double layer of heteromated labor drawing on its own players to regulate other players. Players were asked to participate in the “Tribunal” in which they would report and judge other players (see Kou and Nardi, 2014; 2013). Players readily agreed to participate because they wanted to improve their own gaming experience.

The heteromated labor of the Tribunal is organized into a report phase followed by a judgment phase. Players report disruptive players immediately after a match. If a player has been reported frequently, a case is prepared by the computerized system. Players can log into the Tribunal and judge cases. Players will not (usually) be judging cases they reported, but whatever comes up in the docket at the time they log in. The Tribunal allows players to judge only if they meet certain criteria showing that they themselves are experienced, well–behaved players. The players read through the actual chat logs in which the reported behavior took place, and several players judge each case. Judges choose “Punish” or “Pardon” based on their assessment of the case chat logs. If the majority of judges vote to punish, the Tribunal assigns an account suspension to the reported player (possibly ending in a complete ban from the game in the case of persistently toxic behavior).
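
The decision logic just described can be summarized in a few lines. The following is a minimal sketch of the two phases; the report threshold, function names, and data structures are our illustrative assumptions, not Riot Games’ implementation:

```python
from collections import Counter

def case_is_prepared(report_count, threshold=5):
    """Report phase: a case is prepared only for frequently reported players.
    `threshold` is hypothetical; the system's actual value is not public."""
    return report_count >= threshold

def tribunal_verdict(votes):
    """Judgment phase: eligible players read the case chat logs and vote
    'punish' or 'pardon'; a majority of 'punish' votes suspends the account."""
    tally = Counter(votes)
    return "punish" if tally["punish"] > tally["pardon"] else "pardon"

# Example: a frequently reported player's case goes before five judges.
if case_is_prepared(report_count=12):
    votes = ["punish", "punish", "pardon", "punish", "pardon"]
    print(tribunal_verdict(votes))  # -> "punish": an account suspension is assigned
```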

Riot Games considers the Tribunal successful, and has maintained it since its inception in 2011 (Kou and Nardi, 2014). The Tribunal demonstrates that heteromated labor can be a designed solution to unanticipated problems of large computational systems. A designer we interviewed said, “One of the core philosophies of the Tribunal is to engage and collaborate with the community to try to solve player behavior together” (see Kou and Nardi, 2014). While paid agents of the corporation could intervene when players behave badly (as in World of Warcraft), drawing free labor from players serves both to reduce costs and to bind players even more closely to the game through the increased investment of their active participation.

3.4. Facebook and Google: User–generated content

Social media sites such as Facebook and search engines such as Google provide another prominent case of heteromation. This is where the flow of data on the Internet is largely concentrated, along with gaming and, to a lesser degree, science (www.alexa.com/topsites). These same sites, it turns out, also represent a large proportion of the flow of capital in the so–called information economy. Facebook, for instance, increased its ad revenue from US$300 million in 2008 to US$4.27 billion in 2012. These trends reinforce the World Economic Forum’s (2011) recognition of data as a new “asset class” and the notion of data as the “new oil.” The caveat is that data, unlike oil, is not a natural resource, which means that its economic value cannot derive from what economists call “rent.” What is the source of the value, then?

This question is at the center of an ongoing debate that involves commentators from a broad spectrum of social and political perspectives, including those who highlight the role of financial market mechanisms such as branding and valuation (Arvidsson and Colleoni, 2012), time–based exploitation of free labor (Fuchs, 2014), or exploitation based on the immobility–inducing mechanisms of network capitalism (Ekbia, in press). Despite their differences, the views of many of these commentators, including those from the right end of the political spectrum, converge on a single source: “users.” According to the VP for Research of the technology consulting firm Gartner, Inc., for instance, “Facebook’s nearly one billion users have become the largest unpaid workforce in history” (Laney, 2012). From December 2008 to December 2013, the number of users on Facebook went from 140 million to more than one billion. During this same period of time, Facebook’s revenue rose by about 1,300 percent. Facebook’s filing with the U.S. Securities and Exchange Commission in February 2012 indicated that “the increase in ads delivered was driven primarily by user growth” [29].
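
The growth claim is easy to check against the ad revenue figures quoted above (the reporting periods differ by a year, so this is only an approximation):

\[
\frac{\$4.27\ \text{billion} - \$0.30\ \text{billion}}{\$0.30\ \text{billion}} \approx 13.2,
\]

an increase of roughly 1,300 percent, tracking the roughly sevenfold growth in users over the same years.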

Our own view is that wealth is created through “expectant organizing” — a kind of organization that is structured with built–in holes and gaps that are intended to be bridged and filled through the activities of end users (Ekbia and Nardi, 2012). Rather than simply leaving structural holes for users to fill, new social media are empty containers (black structural holes, if you wish; what is Facebook without its membership?) that pull in and appropriate user content in a piecemeal process that renders the contribution unrecognizable. Companies such as Google that run on a different business model of targeted advertisement also exploit user activities for part of their operations. Beyond advertising income and the “search cost” that users take up in acquiring information, users contribute to the creation of wealth in other ways as well. Completely Automated Public Turing Tests to Tell Computers and Humans Apart, or CAPTCHAs, for instance, have allowed Google to deal with the problem of non–standard fonts and formats. This mechanism was originally developed to distinguish between humans and bots (intelligent software agents), with the intention of preventing the latter from posing as human users. Now, through an innovative inversion, the same mechanism is put to use to reap benefit from user activity. To gain entry into a system as real humans, users have to correctly recognize distorted strings of alphanumeric characters and reenter them on the screen — a process that involves some cognitive and manual labor on the part of the user, slowing them down as well. More recently, Google has embarked on a project called reCAPTCHA, in which this labor is used to parse a scanned image of a word from a book (for Google Books’ digitization initiative) or a photographed image of a street name or traffic sign (coming from the Street View imagery used on Google Maps) (Gizmodo, 2012).
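
The inversion that reCAPTCHA performs can be sketched as a pairing of one word whose transcription is already known (the test of humanity) with one unknown word from a scanned image (the harvested labor). The code below is our illustrative sketch, not Google’s implementation; the names and the two–answer agreement rule are assumptions:

```python
# Illustrative sketch of the reCAPTCHA-style inversion described above.
known_words = {"challenge_17": "overlooks"}   # distorted words with known answers
unknown_words = {"challenge_17": []}          # transcriptions harvested from users

def verify_and_harvest(challenge_id, known_answer, unknown_answer):
    """Admit the user if the known word is solved; keep their free labor."""
    if known_answer.lower() != known_words[challenge_id]:
        return False   # failed the humanity test: no entry, no usable labor
    # The user has proven humanity; their reading of the *unknown* word
    # (a scanned book word or street sign) is recorded as transcription data.
    unknown_words[challenge_id].append(unknown_answer.lower())
    return True

def accepted_transcription(challenge_id, min_agreement=2):
    """Once enough independent users agree, treat the word as transcribed."""
    answers = unknown_words[challenge_id]
    for candidate in set(answers):
        if answers.count(candidate) >= min_agreement:
            return candidate
    return None

verify_and_harvest("challenge_17", "overlooks", "mephitic")
verify_and_harvest("challenge_17", "overlooks", "mephitic")
print(accepted_transcription("challenge_17"))  # -> "mephitic", transcribed for free
```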

 

++++++++++

4. Discussion: A sparse reward space

The cases of heteromation we discuss illustrate the variety of situations in which human labor, skill, and affect are brought to bear in order to make a broad array of technological systems work. The invisible and sometimes trivialized contribution of humans to these systems takes different shapes, from the taxing and repetitive microtasks of Mechanical Turk to problem solving in Foldit, user training and behavior regulation in video games, and the generation of content by the users of social media, search engines, and other Web outlets. This diversity of forms provides novel examples of “creative destruction” that are emblematic of the capitalist system (Schumpeter, 1942).

The mechanisms of human engagement with these systems range from rote microtasks to cognitive and skill–based labor to labor that is affective and stimulating, representing different modes of objectification of the human subject as well as the incorporation of earlier technical mechanisms. With fiendish ingenuity, heteromated systems are designed so as to subsume earlier mechanisms of automation and augmentation in complex and composite assemblages. Augmented by the new layers of information provided through electronic gadgets (e.g., cell phones, GPS, activity and health monitors), and supported by a fast global communication infrastructure, heteromated labor can be made available 24/7 at almost any point on the globe. The automatic capture, delivery, and analysis of data created by this augmented labor, pushed back to it in explicit form (ads, commercials, e–mail messages, etc.) or in more implicit and indirect fashion (marketing and political campaigns, business and government surveillance), generate a self–sustaining cycle of engagement that is almost impossible to break away from. The capacity of software systems to insert human subjects, deriving labor from them as evidenced in our empirical examples, speaks to the systems’ growing regulative power (Kallinikos, 2010).

Finally, the cases also vary in terms of the type and magnitude of the reward that they provide to heteromated laborers. The space defined by these reward structures is complex and multidimensional, but it can be represented on a two–dimensional space of material and non–material rewards (see Figure 1). Material rewards include monetary compensation, as well as any form of convenience or facility provided to the human participant that reduces a relevant “transaction cost” — e.g., cost of skill learning (as in video games), cost of communication (on Facebook), cost of information access (on Google). Non–material rewards, on the other hand, often take the form of affective returns, a key function of which is to maintain sustained engagement with the system (see Terranova, 2003).

What is remarkable about the distribution of cases in this abstract space is the paucity of rewards on the material dimension, which directly speaks to a major displacement in the capitalist economy and its mechanisms of profit–making. These displacements have an immediate manifestation in the form of an increasingly polarized economy, where income disparity between the upper and lower social strata is growing constantly. Examples such as Facebook and Google discussed above illustrate the correlation between the number of “users” and the amount of wealth created for major high–tech corporations. Turning the problem of free riding on its head, these developments introduce a novel phenomenon, where instead of costly resources being under–produced because they can be cheaply duplicated, user data generated at almost no cost are overproduced, giving rise to vast amounts of wealth concentrated in the hands of proprietors of technology platforms.

Celebratory accounts of “participatory culture,” “peer production,” and the like valorize labor relations in which enterprises extract free or low–cost labor for their own benefit (e.g., Jenkins, 2006). Such accounts are perhaps too narrowly pointed at the short–term affective rewards of the heteromated labor on which peer production depends. Taking a wider, longer view, the economic displacements of heteromation show a trend toward “diminishing returns” on human labor. Economists have traditionally focused on how returns on capital investment might follow a diminishing path, but the same now applies more strongly to labor. With little or no contribution coming from heteromated labor to Social Security, Medicare, unemployment insurance, and other future–oriented investments, these protective resources will be steadily and surely depleted. Deeper in its long–term implications than simple wage contraction, this trend takes away the last residues of security (e.g., jobs, savings, pensions), essentially foreclosing the future for a large portion of the population. We understand the deeply gratifying affective rewards of many heteromated systems and have engaged them ourselves (e.g., Nardi, 2010), but it is also necessary to consider the socioeconomic trajectory that unpaid labor or poorly paid microwork entails for our collective future.

 

Figure 1: The distribution of rewards in heteromated space.

 

 

++++++++++

5. Conclusion: High–frequency low–visibility work

The relationship between humans and computers has cognitive, technical, and social dimensions, but it also has a strong economic dimension that is often lost in discussions of technology. In their urge to counter the alleged economic reductionism of Marxist theory, social analysts have gone too far in the opposite direction, ignoring the decisive role of economic forces in driving technological change. Although drivers of change cannot be conceptually reduced to any single dimension, it would be preposterous to dismiss economic drivers altogether. In its pursuit to commodify all aspects of human life, capitalism has found in digital technologies an effective and omnipresent instrument. In fact, there is a close parallel between the development of computer technology and that of economic thinking, as epitomized in the work (and character) of Herbert Simon.

Simon’s key contribution to the field of economics is his concept of “bounded rationality” — the idea that, rather than optimizing the outcomes of their actions, humans satisfice by making good–enough realistic decisions. Simon’s purpose in introducing this concept was “to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist” [30]. While this agenda promised a major correction to the (neo–)classical economic models of human action, it remained committed to some of the fundamental premises of those models. In particular, the model of bounded rationality adhered to methodological individualism, according to which social phenomena are understood as mere aggregations of loosely coupled individuals [31]. In fact, Simon sought to ground this classical idea in psychology and evolutionary biology, endowing it with more “scientific” legitimacy, as we saw in Langlois’ commentary earlier, and in Simon’s own words: “Acceptance of the narrow view that economics is concerned only with the aggregative phenomena of political economy defines away a whole rich domain of rational human behavior as inappropriate for economic research” [32]. By psychologizing economics, Simon sought to depoliticize it, taking human concerns out of it.

One example of this is Simon’s credo that “nature loves hierarchies” [33]. This credo influenced different areas of his work, including AI, where he simulated the mid–range level of “physical symbols,” as opposed to the neuronal structures of the brain or the behavioral patterns of individuals. In his work on organizations he also focused on middle layers of the hierarchy, dismissing the “high frequency modes” of the lower layers as irrelevant to the overall dynamics of the system [34]. These “high–frequency modes” of activity apply even more accurately to the repetitive work of microworkers, gamers, users of social media, and many others engaging digital technologies these days. By putting these groups at the margins of his theoretical interest, Simon conceptually erased these layers, making their roles invisible.

Our aim in introducing the concept of “heteromation” is partly to correct this state of affairs — that is, to make visible the contributions of those whose labor is not acknowledged, rewarded, and conceptualized in discussions of technology or economy. This acknowledgment is all the more critical given the broad and expanding range of such contributions in the heavily computerized environments of contemporary societies, where widespread attempts are made by the successors of Herbert Simon, Frederick Taylor, Gary Becker, and others of their ilk to marginalize humans and bring the powerful but needy machine back to center stage.

 

About the authors

Hamid Ekbia is Associate Professor of Information Science, Cognitive Science, and International Studies in the Department of Information and Library Science, School of Informatics and Computing, Indiana University.
E–mail: hekbia [at] indiana [dot] edu

Bonnie Nardi is Professor in the Department of Informatics in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine.
E–mail: nardi [at] uci [dot] edu

 

Acknowledgements

We would like to thank Daniel Pargman and Six Silberman for their careful reading of an earlier version of the paper, and for their helpful suggestions.

 

Notes

1. Ekbia and Nardi, 2012, p. 157.

2. Nardi, 2010, p. 169.

3. The deflection of motive toward an object other than the one the subject is working on is specified in activity theory which holds that the motivating objective of activity can be different than the object toward which activity is oriented (Kaptelinin and Nardi, 2006).

4. A new journal, Human Computation, may open the field to wider questions in the future.

5. Langlois, 2003, p. 167, emphasis added.

6. Langlois, 2003, p. 178.

7. Simon, 1960, p. 22.

8. Simon, 1977, pp. 6–7; emphasis added.

9. Ibid.

10. Simon, 1977, p. 6.

11. OR (operations research) is an area of management science, which, according to Simon, deals with “the application of optimization techniques to the solution of complex problems that can be expressed in real numbers” (Simon, 1987, p. 10). Simon finds this definition too narrow but useful because of its emphasis on formal mathematical models and optimization, which he contrasts with AI methods of “heuristic search.” These methods, according to Simon, are more applicable to complex problems that defy optimization and quantification because they “involve large knowledge bases, ... incorporate the discovery and design of alternative choices, and admit ill–specified goals and constraints” (Simon, 1987, p. 11).

12. Simon, 1987, p. 15.

13. Beniger, 1986, p. 295. According to other sources, the word “automation” came into vogue in the late nineteenth century, when the Strand magazine first used the term in 1890 (http://www.pearcedesign.com/ahof/timeline_06.html).

14. Beniger, 1986, p. 295.

15. Beniger, 1986, p. 219.

16. Dertouzos, 1979, p. 38.

17. Simon, 1960, p. 31.

18. Ashby, 1956, p. 272.

19. Engelbart, 1962, Introduction.

20. Simon, 1987, p. 12.

21. Irani and Silberman, 2013, p. 17.

22. Lyotard, 1984, p. 66.

23. Langlois makes an economic argument about the “human machine” suggesting that the economics of hunter–gatherer times shaped who we are today: “What may be less obvious is that the human machine is also in many respects a response to economic forces. The human brain (and the human being more generally) is an evolved product, and therefore one whose design reflects tradeoffs motivated by resource constraints.” Current, specific socioeconomic conditions do not pertain to the “evolved product.”

24. For example, in most states the minimum wage for tipped employees is US$2.13 per hour. But many tipped workers cannot expect to consistently bring this wage up to the current federally mandated minimum wage of US$7.25 per hour (itself a grim throwback to Dickensian morality).

25. Cooper, et al., 2010, p. 756.

26. Cooper, personal communication, 2014.

27. Op cit.

28. Cooper, personal communication, 2014.

29. Facebook, 2012, p. 50; Andrejevic, forthcoming.

30. Simon, 1955, p. 99.

31. Mirowski, 2002, p. 470.

32. Simon, 1978, p. 346.

33. Simon, 1981; cf., Mirowski, 2002, p. 465.

34. Simon, 1973, pp. 10–11.

 

References

P. André, H. Zhang, J. Kim, L. Chilton, S. Dow, and R. Miller, 2013. “Community clustering: Leveraging an academic crowd to form coherent conference sessions,” First AAAI Conference on Human Computation and Crowdsourcing, at http://www.aaai.org/ocs/index.php/HCOMP/HCOMP13/paper/view/7512, accessed 18 May 2014.

M. Andrejevic, forthcoming. “Personal data: Blindspot of the ‘affective law of value’?” Information Society.

A. Arvidsson and E. Colleoni, 2012. “Value in informational capitalism and on the Internet,” Information Society, volume 28, number 3, pp. 135–150.
doi: http://dx.doi.org/10.1080/01972243.2012.669449, accessed 18 May 2014.

W.R. Ashby, 1956. An introduction to cybernetics. London: Chapman & Hall.

J. Beniger, 1986. The control revolution: Technological and economic origins of the information society. Cambridge, Mass.: Harvard University Press.

J. Bezos, 2006. “Opening keynote,” MIT Emerging Technologies Conference (27 September), at http://video.mit.edu/watch/openingkeynote-and-keynote-interview-with-jeff-bezos-9197/, accessed 18 May 2014.

L. Boltanski and È. Chiapello, 2005. The new spirit of capitalism. Translated by G. Elliott. New York: Verso.

E. Brynjolfsson and A. McAfee, 2011. Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Lexington, Mass.: Digital Frontier Press.

S. Cooper, F. Khatib, I. Makedon, H. Lu, J. Barbero, D. Baker, J. Fogarty, Z. Popović, and Foldit players, 2011. “Analysis of social gameplay macros in the Foldit cookbook,” FDG ’11: Proceedings of the Sixth International Conference on Foundations of Digital Games, pp. 9–14.
doi: http://dx.doi.org/10.1145/2159365.2159367, accessed 18 May 2014.

S. Cooper, F. Khatib, A. Treuille, J. Barbero, J. Lee, M. Beenen, A. Leaver–Fay, D. Baker, Z. Popović, and Foldit players, 2010. “Predicting protein structures with a multiplayer online game,” Nature, volume 466, number 7307, pp. 756–760.
doi: http://dx.doi.org/10.1038/nature09304, accessed 18 May 2014.

D. Crevier, 1993. AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.

M. Dertouzos, 1979. “Individualized automation,” In: M. Dertouzos and J. Moses (editors). The computer age: A twenty–year view. Cambridge, Mass.: MIT Press, pp. 38–55.

H. Ekbia, in press. “The political economy of exploitation in a networked world,” Information Society.

H. Ekbia, 2008. Artificial dreams: The quest for non–biological intelligence. New York: Cambridge University Press.

H. Ekbia and B. Nardi, 2012. “Inverse instrumentality: How technologies objectify patients and players,” In: P. Leonardi, B. Nardi, and J. Kallinikos (editors). Materiality and organizing: Social interaction in a technological world. Oxford: Oxford University Press, pp. 157–176.

D. Engelbart, 1962. “Augmenting human intellect: A conceptual framework, Section D: Regenerative feature,” Summary Report, AFOSR–3233 (October). Menlo Park, Calif.: Stanford Research Institute, and at http://sloan.stanford.edu/mousesite/EngelbartPapers/B5_F18_ConceptFrameworkInd.html, accessed 18 May 2014.

Facebook, 2012. “Quarterly report pursuant to section 13 or 15(d) of the Securities Exchange Act of 1934,” at http://www.sec.gov/Archives/edgar/data/1326801/000119312512325997/d371464d10q.htm, accessed 18 May 2014.

C. Fuchs, 2014. “Digital prosumption labour on social media in the context of the capitalist regime of time,” Time & Society, volume 23, number 1, pp. 97–123.
doi: http://dx.doi.org/10.1177/0961463X13502117, accessed 18 May 2014.

Gizmodo, 2012. “Google finally puts CAPTCHAs to good use” (29 March), at http://gizmodo.com/5897661/google-finally-puts-captchas-to-good-use, accessed 17 June 2013.

A. Gorz, 1985. Paths to paradise: On the liberation from work. Boston: South End Press.

S. Harrison, D. Tatar, and P. Sengers, 2007. “The three paradigms of HCI,” CHI 2007, at http://people.cs.vt.edu/~srh/Downloads/TheThreeParadigmsofHCI.pdf, accessed 18 May 2014.

D. Harvey, 2005. A brief history of neoliberalism. Oxford: Oxford University Press.

J. Horton and L. Chilton, 2010. “The labor economics of paid crowdsourcing,” EC ’10: Proceedings of the 11th ACM Conference on Electronic Commerce, pp. 209–218.
doi: http://dx.doi.org/10.1145/1807342.1807376, accessed 18 May 2014.

L. Irani and M.S. Silberman, 2013. “Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk,” CHI ’13: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 611–620.
doi: http://dx.doi.org/10.1145/2470654.2470742, accessed 18 May 2014.

H. Jenkins, 2006. Convergence culture: Where old and new media collide. New York: New York University Press.

J. Kallinikos, 2010. Governing through technology: Information artefacts and social practice. London: Palgrave Macmillan.

J. Kallinikos and H. Hasselbladh, 2009. “Work, control and computation: Rethinking the legacy of neo–institutionalism,” Institutions and Ideology. Research in the Sociology of Organizations, volume 27, pp. 257–282.
doi: http://dx.doi.org/10.1108/S0733-558X(2009)0000027012, accessed 18 May 2014.

V. Kaptelinin and B. Nardi, 2006. Acting with technology: Activity theory and interaction design. Cambridge, Mass.: MIT Press.

S. Kiesler, J. Siegel, and T.W. McGuire, 1984. “Social psychological aspects of computer–mediated communication,” American Psychologist, volume 39, number 10, pp. 1,123–1,134.

Y. Kou and B. Nardi, 2014. “Governance in League of Legends: A hybrid system,” Proceedings, Foundations of Digital Games, and at http://fdg2014.org/papers/fdg2014_paper_16.pdf, accessed 18 May 2014.

Y. Kou and B. Nardi, 2013. “Regulating anti–social behavior on the Internet: The example of League of Legends,” iConference 2013 Proceedings, pp. 616–622.

R. Langlois, 2003. “Cognitive comparative advantage and the organization of work: Lessons from Herbert Simon’s vision of the future,” Journal of Economic Psychology, volume 24, number 2, pp. 167–187.
doi: http://dx.doi.org/10.1016/S0167-4870(02)00201-5, accessed 18 May 2014.

J.C.R. Licklider, 1960. “Man–computer symbiosis,” IRE Transactions on Human Factors in Electronics, volume HFE–1, pp. 4–11, and at http://groups.csail.mit.edu/medg/people/psz/Licklider.html, accessed 18 May 2014.

J. Lobo, J. Tainter, and D. Strumsky, 2012. “Productivity of invention,” In: W.S. Bainbridge (editor). Leadership in science and technology: A reference handbook. Los Angeles: Sage, pp. 289–297.

J.–F. Lyotard, 1984. The postmodern condition: A report on knowledge. Translation from the French by G. Bennington and B. Massumi. Manchester: Manchester University Press.

P. Mirowski, 2002. Machine dreams: Economics becomes a cyborg science. Cambridge: Cambridge University Press.

B. Nardi, 2010. My life as a night elf priest: An anthropological account of World of Warcraft. Ann Arbor: University of Michigan Press.

J. Schumpeter, 1942. Capitalism, socialism, and democracy. New York: Harper & Brothers.

R. Sennett, 2006. The culture of the new capitalism. New Haven, Conn.: Yale University Press.

H.A. Simon, 1987. “Two heads are better than one: The collaboration between AI and OR,” Interfaces, volume 17, number 4, pp. 8–15.
doi: http://dx.doi.org/10.1287/inte.17.4.8, accessed 18 May 2014.

H.A. Simon, 1981. The sciences of the artificial. Second edition, revised and enlarged. Cambridge, Mass.: MIT Press.

H.A. Simon, 1978. “Rational decision–making in business organizations,” Nobel Memorial Lecture (8 December), at http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/1978/simon-lecture.pdf, accessed 18 May 2014.

H.A. Simon, 1973. “The organization of complex systems,” In: H.H. Pattee (editor). Hierarchy theory: The challenge of complex systems. New York: Braziller, pp. 3–27.

H.A. Simon, 1960. The new science of management decision. New York: Harper.

H.A. Simon, 1955. “A behavioral model of rational choice,” Quarterly Journal of Economics, volume 69, number 1, pp. 99–118.
doi: http://dx.doi.org/10.2307/1884852, accessed 18 May 2014.

G. Standing, 2011. The precariat: The new dangerous class. London: Bloomsbury Academic.

T. Terranova, 2003. “Free labor: Producing culture for the digital economy,” Electronic Book Review (20 June), at http://www.electronicbookreview.com/thread/technocapitalism/voluntary, accessed 18 May 2014.

World Economic Forum, 2011. “Unlocking the value of personal data: From collection to usage,” at http://www3.weforum.org/docs/WEF_IT_UnlockingValuePersonalData_CollectionUsage_Report_2013.pdf, accessed 18 May 2014.

S. Zuboff, 1988. In the age of the smart machine: The future of work and power. New York: Basic Books.

 


Editorial history

Received 23 April 2014; accepted 22 May 2014.


Copyright © 2014, First Monday.
Copyright © 2014, Hamid Ekbia and Bonnie Nardi.

Heteromation and its (dis)contents: The invisible division of labor between humans and machines
by Hamid Ekbia and Bonnie Nardi.
First Monday, Volume 19, Number 6 - 2 June 2014
https://firstmonday.org/ojs/index.php/fm/article/download/5331/4090
doi: http://dx.doi.org/10.5210/fm.v19i6.5331