Critical analysis of interactive media with software affordances
by Matthew X. Curinga



Abstract
There is a long–standing and unsettled debate surrounding the ways that technology influences society. There is strong scholarship supporting the social construction perspective, which argues that the effects of technology are wholly socially and politically determined. This paper argues that the social constructivist position needs to be expanded if it is to be useful for more than observing the ways technologies are designed and used. We need to develop better ways to talk about software, computer hardware, and networks, so that we can describe the social interpretations of these systems while accounting for their unique characteristics. We suggest using software affordances as a way to understand the semantics of software as interactive systems. Using Facebook privacy concerns as a case study, we argue that software affordances offer a useful lens for considering the social and political implications of interactive software systems, providing us with more analytical tools to interpret, and not just describe, new technologies.

Contents

Introduction
Studying software
Software as material
Software as text
Re–connecting the social and the material with affordances
Affordances are relational
Affordances are discrete
Facebook privacy: Coercive defaults

 


 

Introduction

When journalist Nicholas Carr (2008) wonders, “Is Google making us stupid?”, we are pressed to find adequate critical tools to provide satisfying answers. Carr worries that “quick clicks on hyperlinks” and reliance on “intellectual technologies” are reducing our collective capacity to focus on deep problems and difficult texts. Carr posits a direct correlation between the technological design of the Internet and cognitive (biological) and social changes.

Never has a communications system played so many roles in our lives — or exerted such broad influence over our thoughts — as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. [1]

The social construction of technology framework — one of the dominant and most developed critical approaches to technology studies — argues that we should dismiss Carr’s view of technology, contending that:

technology is constructed through human interpretation. Just as ‘facts’ neither speak for themselves nor exist independently of some agency which constructs them, technologies neither speak for themselves nor exist independently of human interpretation. [2]

The social constructivist school resists “essentialist” understandings of technology, arguing that there is not one essential or natural understanding of a given technology. Carr’s argument runs counter to this notion: he claims that “the Net seems to be (...) chipping away [his] capacity for concentration and contemplation” [3], while the sequential pages of printed books promote deep reading and intense reflection. But Google (or Twitter or Instagram or Snapchat) cannot make us stupid. Carr is left opposing one communications technology against another, implying that, if we only read books instead of text messages, we would be better equipped to tackle pressing problems or live more fulfilling lives. By refusing to dig deeper, to investigate the myriad effects that using the Internet has on diverse groups, or to uncover the various and conflicting interests that have formed the Web into what it is today, Carr risks mistaking the effect for the cause and generalizing where no generality can be made. A criticism of Carr’s thesis would be that Google and hyperlinks are a symptom of cultural changes that predate widespread use of the Internet. In fact, there are reasonable arguments that the technology of hypertext specifically supports deep reading because it challenges readers to fully explore topics, to dig deeper into areas they don’t understand, and to exert greater control over their reading (e.g., Dwight and Garrison, 2003).

The social constructivist dismissal of Carr’s argument is compelling, but it feels restrictive. Under what regime can we be critical of technology or make a claim that a technology works counter to the common interest? How can we understand the Internet and interactive software in a way that is compatible with the social constructivist approach without losing our ability to make normative claims and take collective action? Donna Haraway formulates this challenge as follows:

“our” problem, is how to have simultaneously an account of radical historical contingency for all knowledge claims and knowing subjects, a critical practice for recognizing our own “semiotic technologies” for making meanings, and a no–nonsense commitment to faithful accounts of a “real” world, one that can be partially shared and that is friendly to earthwide projects of finite freedom, adequate material abundance, modest meaning in suffering, and limited happiness. [4]

Without abandoning the position that technologies are socially constructed, this paper explores the “‘real’ world” of software by considering both sides of the screen: what is happening inside the computer and the digital network, and how it is interpreted and shaped by the people interacting with it. We believe we can overcome some of the inadequacies of the social constructivist framework, especially in providing political analysis of interactive media, by building an approach on the claim that software affordances provide an important bridge between the semantic and material worlds of interactive software: affordances represent possible performative interactions between the user and the world, mediated by digital computers. From this bridge, we can connect the computational and algorithmic side to the social inputs and consequences. We articulate these ideas in the context of analyzing the privacy and sharing affordances of Facebook.

 

++++++++++

Studying software

Melvin Kranzberg, historian of technology, writes that “All history is relevant, but the history of technology is the most relevant” [5], because “technology is a very human activity” [6]. Tools and technology are inextricably human; purposeful making of tools has shaped humanity throughout our history — not because the technologies determined the course of history per se, but because technology is one of the fundamental expressions of human cultural activity. Technologies reflect artistic beliefs, political ambitions, philosophical concerns, and more. Studying technology in context helps us to uncover these ideas; studying technology today often means studying digital technology. Software, in particular, emerges as a rich area for study, but one fraught with challenges. David Berry writes, “Software is a tangle, a knot, which ties together the physical and the ephemeral, the material and the ethereal, into a multilinear ensemble that can be controlled and directed” [7]. As we untangle the knot of software, we gain insight into technology as a point of interaction between the social and material world.

According to Paul Dourish, “Every piece of software reflects an uncountable number of philosophical commitments and perspective” [8]. The emerging field of software studies, which “aims to map a rich seam of conjunctions in which the speed and rationality, or slowness and irrationality, of computation meets with its ostensible outside (users, culture, aesthetics)” [9], can provide critical tools to investigate these philosophical commitments. Several approaches to studying the materiality of software have been developed. Manovich (2001) suggests that we turn to computer science to study software; that we should employ terms and categories such as “database” and “interface” in our analysis. When Kittler (1995) declares, “there is no software,” though, we are reminded of the immateriality of software. This leads Adrian Mackenzie to consider code as material and practice, resisting the tendency to cast software programs as monolithic, writing, “It is tempting to say there is no ‘program’ as such, only programmings or codings.” [10] Wendy Chun (2005) is even more wary of attributing too much agency to software, as she shows how source code can become fetishized, obscuring both the physical operation of digital technologies and the human activities that surround them.

Accordingly, to make critical statements about software, we must account for its dual nature as a communications channel and as a material force with the ability to effect change in the real world. The structure of code matters. No matter how urgent my message, I cannot post it directly to the billion people who use Facebook. To get my message out, I must work within the boundaries and channels established by the software and hardware, or choose a different path altogether. From this perspective, complex technological systems, like social media, express an ideology of what is important, what is “normal”, and what is not. We can interpret software by closely looking at which actions it highlights and which it discourages. The only reason that I can’t send my message to a billion people on Facebook, at the click of a button, is that the designers of Facebook decided that I can’t. This (lack of) design feature is entirely social and political. Technically, it would be trivial to enable a broadcast feature: a simple “broadcast” checkbox on status updates would suffice. Facebook certainly has the technical capacity to serve the messages. In all honesty, though, adding such a feature would most likely degrade Facebook in a matter of hours (or faster) to the point where it is unusable. The spam, ads, and puerile junk would overwhelm any pleasure or utility people find in Facebook. Again, though, this is a social and political problem, not a technical one.

 

++++++++++

Software as material

Software’s materiality surfaces when it presents us with “a problem of agency, as a problem of who or what does what to whom or what” [11]. With software, the “who” includes designers and programmers as well as end users; the “what” includes other software and hardware systems that read and interact with the software. The idea that technology design affects human agency has roots in Internet and software development culture. Programmer and entrepreneur Mitch Kapor famously claimed that “architecture is politics,” writing, “the structure of a network itself, more than the regulations which govern its use, significantly determines what people can and cannot do” [12]. Kapor’s aphorism captures a current understanding of how the Internet can spread (or inhibit) freedom, and in turn influence social, economic, and political relationships in our society. For Kapor, “freedom, participation, creativity, and openness are better fostered by a decentralized but coordinated architecture” [13].

Kapor expresses a contemporary form of technological determinism, the belief:

(1) that the technical base of a society is the fundamental condition affecting all patterns of social existence and (2) that changes in technology are the single most important source of change in society. [14]

Constitutional legal scholar, digital rights activist, and founder of the Creative Commons movement Lawrence Lessig also espouses this view of architecture in his writing on digital culture. Lessig argues that legal power is enhanced by software architecture, because technology can “supplement” laws’ control and restrict political speech and free markets [15]. In Code: Version 2.0, Lessig (2006) expands on the peculiar danger software architectures pose to liberal society. He writes,

We can build, or architect, or code cyberspace to protect values that we believe are fundamental. Or we can build, or architect, or code cyberspace to allow those values to disappear. There is no middle ground. There is no choice that does not include some kind of building. Code is never found; it is only ever made, and only ever made by us. [16]

Lessig is clear that the way we build or “code” our technology has real consequences for society and behavior. His code either protects or undermines social values, but the same code is not equally suited to either purpose. Once it has been architected, it either enhances creativity and cultural exchange, or enhances the wealth and power of corporate copyright holders.

This hybrid, or “weak technical determinism,” [17] evident in the world of digital technology, predates Silicon Valley. The Marxist tradition of historical materialism is concerned with configurations of technology that reify unequal power relations and tend towards exploitation. Marx and Engels observed swift and harsh changes as machines were introduced into the production process. Marx (1887) noted the transiency of the benefits created by mechanized labor when he wrote, “machinery, considered alone, shortens the hours of labour, but, when in the service of capital, lengthens them” [18]. According to Marx and Engels, the nature of the machine’s influence depends on the interpreter’s frame of reference. From the worker’s perspective, machinery should provide more of the “means of subsistence,” while reducing necessary labor time. From the capitalist’s perspective, mechanization allows the same amount of labor time to provide greater surplus and profit.

Lukács (1973) elucidates the Marxist position, claiming that “technique is the consummation” of the political economy of modern capitalism, “not its initial cause” [19]. Technology cannot be considered a neutral player, because it is employed socially and politically, becoming a powerful ideological and political tool. Once established, a technology reciprocally accelerates the social conditions that brought it into existence.

 

++++++++++

Software as text

This attention to framing, context, and perspective forms the central basis of the social constructivist view that technology is constructed through human interpretation. Since “facts never speak for themselves,” technology, like other texts, must be interpreted to have any meaning. The interpretivist view of technology brings new tools to bear in studying software. If technology is completely open to interpretation, political meaning shifts from the technical architecture to the cultural narrative framing the artifact. As in literature, “strong” readings are supported by the text and connect with the participants in the discussion. Accordingly, we must learn how to read the text of technology.

Grint and Woolgar (1997) advance a strong argument against technological determinism and essentialism, and systematically show that many instances of essentialism are in fact interpretations. They highlight how technologies hold contradictory meanings for different audiences, and how the unintended and unexpected uses of technologies are the norm rather than the exception in studies of technology. Looking at Marx and Engels’ view of mechanization, for example, they show that both the capitalist and Marxist views of industrialization are interpretations: machines can never be considered alone, so whether they are labor–saving or profit–generating devices has nothing to do with the actual technology.

Grint and Woolgar (1997) bump against some of the limits of constructivism when they confront the most difficult case, asking, “What’s social about being shot?” [20]. It turns out that many things are, including: who is doing the shooting and being shot, what type of gun and ammunition are used, the medical treatment the wounded receive, the intensity of the pain experienced, and even the border between life and death. Unfortunately, they make their case so thoroughly, casting doubt even on death as the final arbiter, that they lose sight of the other half of the question. What’s not social about being shot? In doing so, they fail to meet their stated goal, “to avoid (...) the impression that either the technical or the social has a discrete impact” [21], and they do little to address Haraway’s problem of embracing a social constructivist perspective without denying the real world. Technology influences human behavior through the interaction of social and technical factors. The single–minded focus on technology as a text, “constructed through human interpretation,” has unfortunate consequences for political and social theories of technology. In some cases, the social constructivist method leads proponents to become unwitting apologists for unequal power, or shifts the focus of political struggle from core issues to the margins.

Consider, as an example of this, the argument that youth media researcher danah boyd makes concerning privacy in Facebook and other social media. She claims we should move away from polemics about legal and technical “control” of information and towards “a model that focuses on usage and interpretation” [22]. The strength of her claim rests on the observation that, in a digitally networked world, nothing short of total control of our data would be sufficient. Small lapses in privacy can and do lead to data being irrevocably shared with the entire world, a point manifested in the recent “Heartbleed” OpenSSL vulnerability. boyd concludes that post hoc meaning–making is more central to privacy concerns than software configurations, technical architecture, policy, or law.

Drawing on her ethnographic studies of youth and social media, she observes:

[youth] use pronouns and in–jokes, cultural references and implicit links to unmediated events to share encoded messages that are for all intents and purposes wholly inaccessible to outsiders. I call this practice “social steganography.” Only those who are in the know have the necessary information to look for and interpret the information provided. [23]

boyd herself notes that this social version of security through obscurity does not guard against data mining and algorithmic analysis, but she clearly locates the battle over privacy in the social, not the technical, realm. Her reading of privacy and social networks results from a strong social constructivist perspective. The design of the software and the actions that users take are much less meaningful than the way they are understood by the community of users.

Legal scholar and Software Freedom Law Center founder Eben Moglen (2010) offers an alternative view in his “Freedom in the cloud” lecture. Moglen contends that the most effective way to improve security and privacy is to move away from corporate–owned “free” software services, a.k.a. “the cloud.” When we use services like Gmail and Facebook, according to Moglen, we get “spying for free.” Moglen proposes a new hardware and software solution that he calls the “freedom box,” a device that allows peer–to–peer social networking, communication, backups, and so on, effectively cutting out the corporate middleman and providing greater security and privacy through a technically distributed network.

It would be a mistake to condemn Moglen for offering a technical solution to a political problem. Moglen points to the material structure of hard drives, networking devices, data networks, and software to ask us to re–interpret the property relationship between people, their data, and the mediating digital network. To pursue his “freedom box,” we need some new technology, but more importantly, we need to open our eyes to new interpretations of the technology we already have. It is boyd who presupposes that personal communication will be mediated by corporate entities. The singular focus on context and meaning–making precludes substantive, structural changes. Ironically, it is the strong social constructivist reading that casts the technology (of social networks in this case) as intransient. Social constructivism has made us aware that the ways we understand technologies and their implications are always contingent. The movement reminds anyone who studies technology to be wary of essentialism and fetishism. Without further development or careful application, though, this theory of technology has significant shortcomings.

 

++++++++++

Re–connecting the social and the material with affordances

In her essay, “On software, or the persistence of visual knowledge,” Wendy Chun (2005) explores the relation between software and ideology. Software is ideological when it obscures the material world with the imaginary, as when the “desktop” and “folder” metaphors in a user interface replace silicon and transistors. The “user” knows that there is no desktop, but chooses the “reality” presented by the software. Chun believes that software also opens avenues for critique. Software can reveal ideology, where, “unveiling depends on our own actions, on us manipulating objects in order to see, on us thinking like object–oriented programmers” [24]. We encounter the possibility for critique when we are made aware of the dissonance between the idealized world of software and the material reality of how software is designed and how computers and networks operate. Alexander Galloway (2012), extending Chun in his discussion of ideology and software, writes,

Riven to the core, software is split between language and machine, even if the machinic is primary. And, more importantly, there is a process of mystification or distancing at work which ensures that the linguistic and the machinic are most definitely the same thing. [25]

Software and ideology, then, must be studied through both the linguistic and technical sides of software. In particular, we want to study software systems as sites of action and performance.

Software affordances, briefly, are elements of software systems that allow users to take action. Software affordances offer a point of entry that bridges the real and contingent world of software, giving us terms and idioms we can use to describe how software operates. Mapping affordances helps us determine the contours of software as functional and communicative systems. Borrowing from the field of social semiotics, we must understand the “register” of technology before we can interpret it.

We are never selecting with complete freedom from all the resources of our linguistic system. If we were, there would be no communication; we understand each other only because we are able to make predictions, subconscious guesses about what the other person is going to say. [26]

When we interpret a literary text, we do so within certain boundaries. By its nature, an interpretation is a conversation, an act of communication between two or more parties. To be understood, we must find points of commonality. These shared points form a loose perimeter of meaning that can be communicated between participants, in a given context. Often, the most powerful interpretations push the boundaries of these rules, but these interpretations are powerful precisely because they oppose the rules, not because they are unrelated. I could argue that Moby Dick is a Christian allegory or a treatise on race and class relations in a globalized economy. I might be able to support a far–out thesis that it’s “really” about human interaction with extraterrestrial life. I cannot (meaningfully) say that Moby Dick is a ham sandwich and that I ate it for lunch. At least, not if I want to participate in the game of interpretation. The rules of literary interpretation do not support it.

Technical affordances — opportunities to use tools to change our environment — contribute to the semiotic system that allows us to build interpretations of the digitally mediated spaces created with software. When we account for affordances, we build more powerful interpretations of technologies. Affordances are part of the register of a technological system. Hutchby (2001) attempts to redeem the social constructivist perspective in a way that is very useful for this paper. He introduces affordances as an important concept for understanding technology.

Rather than restricting the analytic gaze to the construction of accounts and representations of the technology, we need to pay more attention to the material substratum which underpins the very possibility of different courses of action in relation to an artefact; and which frames the practices through which technologies come to be involved in the weave of ordinary conduct. [27]

As Hutchby argues, no technical artifact is a tabula rasa open to any interpretation. Affordances offer insight into the range of actions that we can take within a given software system.

Affordance theory provides a robust framework for understanding the capabilities of software because it considers several interesting, inter–related components:

  • user goals and capacities
  • designer/system goals
  • design features
  • user perception and understanding
  • system potentials, whether actualized or not

Affordances describe the actions something makes available to a particular agent. A chair affords sitting and stairs afford climbing. Before we can usefully employ affordance theory to analyze software, though, we need to piece together a coherent theory from a literature that is often ambiguous or contradictory (Chemero, 2003; McGrenere and Ho, 2000; Kaptelinin and Nardi, 2012).

As developed by Gibson (1979), affordances were used in psychology and ecology to describe how animals interact with their environment in terms of available actions. Affordances exist in the relationship between an actor and the environment (Chemero, 2003). Thus, the same environment holds different affordances for different actors. Affordances imply “that the physical attributes of the thing to be acted upon are compatible with those of the actor” [28], and that the action is relevant and significant to the actor. Stairs do not afford climbing to a person in a wheelchair.

Norman (1988) broke with Gibson’s original understanding of an affordance by emphasizing the perception of affordances. Affordances, in the study of human–computer interaction (HCI), are commonly understood in Norman’s terms — an affordance is a design element, perceived by the user, that enables an action. The tie to relevance and perceivability distinguishes affordances from the full range of actions available with a tool or object. Norman’s account of British Rail’s passenger shelters [29] captures this distinction. The glass panels in British Rail’s shelters were constantly smashed by vandals. When British Rail changed the glass panels to plywood, vandals covered them with graffiti. The key here is that the plywood was not stronger than the glass and could have been smashed just as easily. Both materials can be smashed, but glass affords smashing where wood does not. The actors, i.e., the vandals, did not perceive wood as smashable, but as writable.

In terms of HCI, affordances are important for several reasons. A system’s affordances are the keys to the actions that users can perform. Because of their immediacy in the user’s perception, they also tell the story of what the software is suitable for. Affordances help build a conceptual model [30] of what the software does and how we should use it. In more recent work, Norman (2007) concedes that affordances are part of a semiotic system of communication between the designer and the user. In this sense, affordances also carry the ideological views of the system’s designers, abstracting the assemblage of material functions of a system through a coherent interface.

The following principles express the understanding of affordances that guide and motivate the analysis of social software in this paper. While affordance theory applies to a range of environments, from understanding animals in the natural world to humans and their relation to architectural design, our concern is with the affordances between software and human actors, so our principles stem from this more specific case.

 

++++++++++

Affordances are relational

The affordances of a software system are not embedded in the design, nor do they reside solely in the practices of the user. Rather, they describe the relation between the user and the system. To be an affordance, the relation must be both significant and accessible to the user. As in the British Rail example, the culture, needs, and desires of the user determine a feature’s value as an affordance. Further, affordances vary in intensity; they do not describe a binary relationship [31]. Following this, the same software feature affords more power to some users than others, depending on the background of the user, or, depending on context, offers different power to the same user.

Design elements and affordances are not the same thing

Initially, when Norman (1988) introduced affordances to the study of design, he conflated design features and affordances. Later work on affordances and design (Gaver, 1991; Norman, 1999; McGrenere and Ho, 2000) shows the utility of distinguishing the design feature from the affordance. This distinction is important for affordances to be useful for analyzing the design of software. Accordingly, some affordances are hidden and some are apparent. Software can also contain “misinformation” — the apparent existence of an affordance that is not there. Such misinformation may be intentional or result from poor design. We must also keep in mind that not all affordances are designed, i.e., they are not necessarily purposefully placed by designers and acted upon by users, but follow a much less linear path, full of unexpected actions.

Not all affordances are good

Typically, we assume that affordances are useful: they afford something the user wants. Maier, et al. (2009) remind us that not all affordances are beneficial to the user. In their example, windows afford seeing outside and letting in natural light, but also defenestration. Maier’s system of affordance analysis assigns either positive or negative values to affordances. This system is not very nuanced and runs counter to our principle that affordances are relational and vary in strength. Even defenestration represents a positive affordance, at least for the protagonists in many Hollywood movies who use open windows to dispose of their foes.

Affordances can be complex

Gaver (1991) critically expands the understanding of affordances in design by adding several layers of complexity. While early work was primarily concerned with the physical world, software’s built environment demands a slightly different conceptualization of an affordance. Specifically, Gaver indicates that affordances can be sequential, occurring consecutively in time, and nested, being grouped in space. Design, then, can take advantage of affordances, even if the final outcome of a sequential or nested performance is not apparent. In terms of computer interfaces, the initial design element may afford “clicking”, but this may lead to another affordance, such as displaying a different web page to read.

Affordances exist at a level of abstraction above the physical world

Much of the theory of affordances explicitly opposes the idea that affordances are interpreted. This literature connects affordances to a kind of perceptual functionalism arrived at through evolution. In this way, affordances represent an unmediated perception between the actor and the world. Norman, in particular, argues against the idea that affordances can be learned. For him, there are almost no affordances in computer software interfaces: “the real, physical manipulation of objects (...) is where the power of real and perceived affordances lies” [32]. According to Norman, the typical computer interface only affords clicking the mouse button and pressing keys on the keyboard. Such a limited view of affordances — as relating to direct physical manipulation — greatly reduces the utility of affordance theory in terms of software design and interactive media.

Luckily, Norman does not have the final word on affordance theory. Gaver, for one, believes that software interfaces contain what might be considered virtual affordances. In his example, an interface element that resembles a raised button indicates a click–affordance, but does not signify a drag–affordance. There is also support for the claim that affordances in computer systems develop through learning and experience. McGrenere and Ho argue that the Unix command–line program vi affords text–editing, even though it “gives the user no visual information about whether text entry is possible” [33]. In fact, vi is all but completely devoid of any perceivable clues that it affords text entry. Given our requirement for relevance to the user, it affords text entry only for users who have learned the idiosyncrasies of modal text editing.

 

Figure 1: vi text editor.

 

Maier, et al.’s consideration of affordances in terms of architectural design directly counters Norman’s dismissal of cultural norms and conventions as (signifiers of) affordances. The authors specifically write that, “meaning may be considered as another category of affordance” [34], using the example that marble floors afford support, but also can be seen as a sign of wealth, affording the appearance of luxury to a room. Meaning here is both relational to the individual and culturally relevant — there is nothing physically inherent in marble that affords the meaning of wealth.

Incorporating an affordance of meaning is critical to their view of architectural design. They recall Gibson’s initial conception that human tools and artifacts are created to change our affordances. Thus, the role of architectural design is

neither creating artifacts to do certain things, as a functional view of design would hold, nor creating artifacts solely on the basis of creating a beautiful form, but rather to create artifacts that can be used and that have meaning. [35]

Kaptelinin and Nardi (2012) similarly argue for a regrounding of affordance theory when it comes to human–computer interaction. They advocate a Vygotskian socio–cultural approach to understanding affordances.

According to the approach, the most characteristic feature of human beings, differentiating them from other animals, is that their activities and minds are mediated by culturally developed tools, including technology. [36]

Likewise, Manovich (2001), who does not speak specifically of affordances, claims “cultural interfaces” are a key feature of new media. Cultural interfaces draw on norms established in earlier media such as film and print. Manovich notes that, “One of the main principles of modern HCI is the consistency principle. It dictates that menus, icons, dialogue boxes, and other interface elements should be the same in different applications.” [37] The established grammars and metaphors of software interfaces allow software to convey semantics. As conventions become cultural interfaces, they take on the characteristics of affordances, and make up an important component of the language of new media. Rather than relegating affordances to the cell of unmediated perception, we argue that human affordances rely on the cultural context of the actor. We also must commit to an understanding of affordances as a mediation between the real world and the actor if we want to use them in a manner compatible with social constructivist readings of technology.

 

++++++++++

Affordances are discrete

We remain sympathetic to Norman’s concern that, interpreted too broadly and used too loosely, affordances lose all meaning. Take, for example, the claim that “blogging entails typing and editing posts, which are not affordances, but which enable the affordances of idea sharing and interaction.” [38] The implication here — that “sharing” and “interaction” are affordances — substitutes an interpretation for the affordance. It also encompasses a direct and naive technological determinism that obscures rather than illuminates the ways that blogging may or may not enable sharing and interaction.

In the end, affordances operate at a more immediate level than rational understanding. They are tied to our perceptual system and our learned experience, connecting with our innate knowledge. If the affordance is perceived, we conceive of it immediately, without having to parse the alternatives or potential consequences. In this sense, affordances relate to Dourish’s phenomenological understanding of HCI. Affordances are ready–to–hand, as, “the technology itself disappears from our immediate concerns. We are caught up in the performance of the work” [39]. Accordingly, affordances do not cover every contingency or possible action. They account for the possible actions a specific actor can take, without requiring extensive planning or alterations to the environment (i.e., creation of new affordances). Affordances are limited in scope and directly tied to the contexts in which they are employed. While they can be learned, and develop over time, they are most relevant and powerful in the hands of experts or when the learned meaning is deeply embedded in the social practices of the user.

 

++++++++++

Facebook privacy: Coercive defaults

We have already, briefly, looked at the different ways two prominent intellectuals and technologists, danah boyd and Eben Moglen, view privacy in networked software. boyd, in all but entirely disregarding the design of the software, fails to account for the ways that software structures the interactions of many users. Moglen’s rhetorical critique of Facebook and “the cloud” casts too wide a net, does too little to connect his interpretation to the material and social practices of Facebook, and cedes too much agency to the technology. To elaborate our understanding of affordances as a means of analysis for interactive software, let’s return to the scene of the crime: the 2010 changes to Facebook that focused greater public scrutiny on privacy in social media.

In 2010 Facebook made some important and controversial changes to its service, including altering its default privacy settings governing how user–contributed information is shared. Default settings in software have a framing effect, which is particularly influential for users of new media. By definition, most people do not have a depth of experience with new media. They are less certain, in this domain, what to expect or how to evaluate the consequences of their choices. Further, design choices can be misread as technical necessities (Chun, 2005). If you are not an expert in ICT, it is difficult to see which choices are intentional and which are necessitated by technical concerns. Default settings in software are legitimated as either the choices of technical experts or as the choices that most people want. In terms of software, evidence supports the stickiness of initial/default settings (Kesan and Shah, 2006).
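
The framing power of a default is easy to see in code. The sketch below is a minimal, hypothetical illustration in Python (invented names, not Facebook’s actual implementation): any setting the user never touches silently resolves to the most permissive audience, so users who never open the privacy page “choose” public sharing without ever acting.

    # Hypothetical sketch of a coercive default: untouched settings resolve
    # to the most permissive audience.
    DEFAULT_PRIVACY = {
        "status_photos_posts": "everyone",   # the 2010 default: fully public
        "family_relationships": "everyone",
        "email_address": "friends_only",     # a few fields stayed restricted
    }

    def effective_visibility(user_settings: dict, item: str) -> str:
        """Return the audience for an item, falling back to the platform default."""
        return user_settings.get(item, DEFAULT_PRIVACY.get(item, "everyone"))

    # A user who never visits the settings page shares posts with everyone:
    print(effective_visibility({}, "status_photos_posts"))    # -> "everyone"
    # Opting out requires an explicit, per-item action:
    print(effective_visibility({"status_photos_posts": "friends_only"},
                               "status_photos_posts"))        # -> "friends_only"

The point of the sketch is that inaction has consequences: the design, not the user, supplies the value that governs sharing.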

The significant change Facebook made to its defaults in 2010 opened almost all user posts to the public. Some personal information (such as e–mail and street address) remained private. Facebook CEO Mark Zuckerberg defended these changes by arguing that, “the default is social” (Schonfeld, 2010).

 

Figure 2: Facebook privacy defaults, 2010.

 

Zuckerberg is correct, of course, that the result of the new defaults ensured that more Facebook users would share more information. It is almost certainly true, as well, that many of them share more information than they intend.

The organization of these settings is misleading, as well. The very first item on the list, “My status, photos, and posts”, encompasses just about everything that a user adds to Facebook. “Family and relationships” includes the most unique and valuable data Facebook contains: the social graph of a user’s friends, along with meta–information describing these social relationships. Facebook’s leadership argued that confusion over privacy settings stems from its desire to offer fine–grained controls. Examination of the affordances offered in the privacy settings shown above contradicts this claim of fine–grained control. The new affordances lump the vast majority of a user’s content into two radio buttons, defaulting to the most permissive sharing. The rest of the “fine–grained” controls cover single pieces of information (e.g., address or birthday) that are not central to the experience of using Facebook.

Why would Facebook use the power of defaults to steer its users into sharing more information than they might be comfortable with? As a social networking site, there is little value to the user in sharing photos, videos, status updates, etc. with people they do not know. The ostensible “point” of Facebook is to help people keep in touch with their friends, family, and acquaintances (i.e., their social network). There are other, and better, venues for publishing to the world, including Twitter, YouTube, and various blog services (e.g., Tumblr or WordPress.com). So, the changes do little to improve the core service for the billion users of the site. The changes, instead, increase the value of the user base for Facebook and its advertisers. Facebook makes its money from Internet advertising. Facebook respects its users’ privacy to the point that it does not directly share private information with advertisers. Like Google and other online advertising companies, Facebook’s ads are valuable because they are demographically targeted. Facebook already has a rich data set to mine for targeting ads, whether this information is shared publicly or not.

The changes to the default privacy settings need to be seen in relation to other changes released at the same time. Facebook made a major push to increase the number of connections to “pages.” Pages look like regular user profiles, but actually belong to businesses, organizations, sports teams, celebrities, etc. With the May 2010 changes, pages became a much more prominent feature of the Facebook network when Facebook removed one affordance and replaced it with another. Previously, users were able to list (in text) some of their favorite books, movies, television shows, and music on their profile page. This unstructured data was converted to links, creating a hypertext affordance of clicking on the text in a user’s profile to see other people who had listed the same items. For example, I could declare Moby Dick as one of my favorite books just by typing in the words. On my profile, Moby Dick would link to a search of other users who listed Moby Dick as a favorite book. This feature was entirely removed, to be replaced by the affordance to become the “fan” of a “page.” By default, all of the old text links were automatically converted to links to pages. Users had to specifically opt out to avoid this conversion.

Additionally, Facebook changed the marker of its affordance to connect to a page. Previously, to connect to a page, users were presented with a link that said, “become a fan of”. Facebook replaced this link with the simple “like” button. This deceptive move built on an existing affordance and Facebook convention. Prior to (and after) the 2010 changes, the like button indicated a well–known affordance. Let’s call it the like–post affordance. Like–post allows a user to indicate that they like something a friend has posted. The like–post affordance increments the “like count” of a post, and associates the clicker’s name as someone who liked the post. Here, like–post is a lightweight way to indicate that you read a post and support the poster (in some way), without incurring the time and cost of writing a full comment of your own.

When Facebook added the like–page affordance, they chose to use one design element to signal two different affordances. Clicking the like–page affordance does not increment the like count of a friend’s post, but adds the clicker to the list of fans of the page, and adds the page to the clicker’s public profile. A third, correlated, change to Facebook’s privacy settings made all page connections fully public. This setting could not be changed.
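
To see how one design element can carry two affordances with very different consequences, consider the following minimal, hypothetical sketch in Python (invented names and data structures, not Facebook’s code). The same “like” gesture either increments a friend’s post counter or publishes a permanent, public connection to a page.

    # Hypothetical sketch: one "Like" control signals two different affordances.
    profiles = {"alice": {"public_connections": []}}

    def like(user: str, target: dict) -> None:
        """Dispatch a single UI gesture to whichever affordance the target supports."""
        if target["type"] == "post":
            like_post(user, target)
        elif target["type"] == "page":
            like_page(user, target)

    def like_post(user: str, post: dict) -> None:
        # Mostly private: bump a counter visible to the poster's friends.
        post["like_count"] += 1
        post["liked_by"].append(user)

    def like_page(user: str, page: dict) -> None:
        # Fully public: join the page's fan list and publish the connection
        # on the user's profile (not changeable under the 2010 settings).
        page["fans"].append(user)
        profiles[user]["public_connections"].append(page["name"])

    friend_post = {"type": "post", "like_count": 0, "liked_by": []}
    brand_page = {"type": "page", "name": "Walmart", "fans": []}
    like("alice", friend_post)   # semi-private consequence
    like("alice", brand_page)    # public, advertiser-visible consequence

The interface shows a single button in both cases; the difference in audience lives entirely in the branch the software takes.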

In–site banner ads drive Facebook’s revenue. Facebook also makes it clear that page administrators are a key audience for purchasing ads. In its instructions on how to create a page, the final step encouraged administrators to “Find New Fans” by purchasing Facebook ads.

To review the changes implemented in 2010:

  1. New privacy settings retro– and pro–actively made more user information public, by default;
  2. Facebook made certain information, including pages, irrevocably public;
  3. Existing profile information was converted from unstructured text to connections to pages;
  4. The “like” button and link changed from signaling only a mostly private affordance (like–post) to also signaling a fully public one (like–page);
  5. Facebook sells ad space to page administrators; and,
  6. Pages become more valuable because they have more fans.

In the final analysis, the new defaults coupled with new affordances predictably lead to more fully public user information and more connections to pages. Facebook, true to its policy, does not reveal private information to advertisers. Instead, it re–designed its system to make as much information public as possible, even when semi–public sharing would sufficiently meet the needs of users. Page administrators receive the list of Facebook users who “like” their page. The more public information available about these users, the more valuable this list becomes, and the more valuable Facebook’s ads become, since buying ads is the most efficient way to encourage more users to like a page.

The changes to the privacy policy initially required users to navigate more than 50 privacy buttons with 170 different options (Bilton, 2010). Facing stiff criticism, Facebook developers have continuously revised the interface to make it more straightforward. Controlling privacy on Facebook remains a challenge, however. For instance, even users who choose “friends only” may not realize that their friends can re–share this information, or that the decision to allow friends to re–share restricted content is itself a design choice governed by affordances.

 

Figure 3: Facebook friends sharing through apps, 2014.

 

Figure 3 shows the default settings (current as of June 2014) for how your friends can share information with third–party app developers. These settings have seen only minor changes between 2010 and 2014. By default, Facebook friends can share almost everything with the apps installed on their accounts. You will not know when, if, or with whom your information is shared, or which friend may have shared it. Having information shared with third parties through friends is particularly dangerous because the third party sites do not have as much to lose as Facebook. Facebook is a publicly traded, multi–billion dollar corporation that relies on its reputation for continued market dominance. Third parties — often developers of frivolous games, quizzes, and polls — have much less to lose and are more likely to play fast and loose with the data they collect. By decoupling the regular content–sharing affordances from the app–sharing affordances, Facebook’s software raises the cost of switching away from the default “public” settings.
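
A minimal, hypothetical sketch (invented field names, not Facebook’s API) shows how such a default permission check might work: unless the owner of the data has explicitly unchecked a category, an app installed by any one of their friends can read it, and the owner is never notified.

    # Hypothetical sketch of the friends-can-share-through-apps default.
    APP_SHARING_DEFAULTS = {
        "birthday": True,
        "hometown": True,
        "photos": True,
        "religious_views": True,   # the defaults skew toward sharing
    }

    def app_can_read(owner_overrides: dict, category: str) -> bool:
        """True if an app installed by a friend may read this category of data."""
        return owner_overrides.get(category, APP_SHARING_DEFAULTS.get(category, True))

    print(app_can_read({}, "photos"))                  # -> True (default)
    print(app_can_read({"photos": False}, "photos"))   # -> False (explicit opt-out)

Because the check runs from the friend’s account, the data owner’s preference enters the system only at the moment of an explicit opt–out.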

In addition to examining the designed affordances encountered while using Facebook, we should also consider which affordances have been omitted. The like–page affordance publicly connects a user to a page. Pages — brands, celebrities, businesses — benefit by learning about and advertising to consumers of their products and services. Facebook users benefit, too, from a social filter that might alert them to new opportunities. In terms of the user experience, though, the obvious corollary to the like–page affordance would be a dislike–page affordance, which would allow users to share with their friends the products, corporations, brands, bands, and celebrities they do not like. Affordances do not strictly determine user behavior and meaning on Facebook. Despite the missing dislike affordance, there are numerous Facebook protest pages that humorously and powerfully take advantage of the social network to effectively express “dislikes.” However, as designed, the Facebook software, with its given affordances, becomes an ally of corporate pages. There is no affordance which offers a systematic and lightweight way for users to voice their displeasure with a company. At the time of this writing, the official Walmart page has 34 million likes [40], while the “Wal–Mart Sucks!” page has only 8,700 [41].

There is no bright line that separates “good” software from “bad.” Accounts of the social construction of technology relentlessly point to social, material, and historical context to inform our understanding of technologies. Software, for all its immateriality, must be regarded as part of this context. In the case of Facebook, for example, the stakes are high: its affordances affect half a billion daily users, and we ignore them at our own peril. Software, always an abstraction to some degree, becomes ideological when design elements, such as affordances, mask underlying tensions and conflict. In particular, the inclusion of corporations into Facebook’s social graph, combined with the unidirectional like–page affordance, elevates businesses and brands to the same status as people, while denying that people can have (in the world of Facebook’s software) anything but a positive relationship with a corporation. When we point to the decisions to include, omit, and alter affordances such as these, we highlight the antagonism between Facebook and its users. Paying proper attention to how software works can add credence to arguments like Moglen’s, that we should neither sue for incremental improvements to privacy controls, nor satisfy ourselves with hacking and re–interpreting the tools we are given. Once we articulate a vocabulary to describe how software affects our world, we can fully face political questions of how software is designed and how we might design tools that are more equitable. We open ourselves to radically different political arrangements between users and the networked technologies that touch almost every aspect of our lives in the twenty–first century.

 

About the author

Matthew X. Curinga is a software developer and digital media researcher. He is Assistant Professor and Director of the graduate program in Educational Technology at the Ruth S. Ammon School of Education at Adelphi University in New York. His interests include educational philosophy, political theory, and the study of networked and interactive software systems. He draws on his extensive work as a software developer and educational media designer to study digital media. He holds a B.A. in English literature from Colby College, M.A. in Computing and Education, and Ed.D. in Instructional Technology and Media from Teachers College, Columbia University.
Web: http://matt.curinga.com.
E–mail: mcuringa [at] adelphi [dot] edu

 

Notes

1. Emphasis added; Carr, 2008, at http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/2/, accessed 29 August 2014.

2. Grint and Woolgar, 1997, p. 10.

3. Carr, 2008, at http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/, accessed 29 August 2014.

4. Haraway, 1988, p. 579.

5. Kranzberg, 1986, p. 553.

6. Kranzberg, 1986, p. 557.

7. Berry, 2011, p. 3.

8. Dourish, 2004, p. viii.

9. Fuller, 2008, p. 5.

10. Mackenzie, 2006, p. 5.

11. Mackenzie, 2006, p. 7.

12. Kapor, 2006, at http://blog.kapor.com/index9cd7.html?p=29, accessed 29 August 2014.

13. Ibid.

14. Winner, 1977, p. 76.

15. Lessig, 2004, p. 169.

16. Lessig, 2006, p. 6.

17. Winner, 1980, p. 30.

18. Marx, 1887, Capital I, Part IV, § 6; see https://www.marxists.org/archive/marx/works/1867-c1/, accessed 29 August 2014.

19. Lukács, 1973, p. 56.

20. Grint and Woolgar, 1997, pp. 141–163.

21. Grint and Woolgar, 1997, p. 25.

22. boyd, 2012, p. 349.

23. Ibid.

24. Chun, 2005, p. 42.

25. Galloway, 2012, p. 73.

26. Halliday and Hasan, 1985, p. 40.

27. Hutchby, 2001, p. 450.

28. Gaver, 1991, p. 81.

29. Norman, 1988, p. 9.

30. Norman, 1988, pp. 12–13.

31. McGrenere and Ho, 2000, p. 182.

32. Norman, 1999, p. 41.

33. McGrenere and Ho, 2000, p. 185.

34. Maier, et al., 2009, p. 403.

35. Maier, et al., 2009, p. 404.

36. Kaptelinin and Nardi, 2012, p. 971.

37. Manovich, 2001, p. 91.

38. McLoughlin and Lee, 2007, p. 666.

39. Dourish, 2004, p. 109.

40. https://www.facebook.com/walmart, accessed 19 June 2014.

41. https://www.facebook.com/pages/Wal-Mart-Sucks/132353516805608, accessed 19 June 2014.

 

References

D.M. Berry, 2011. The philosophy of software: Code and mediation in the digital age. New York: Palgrave Macmillan.

N. Bilton, 2010. “Price of Facebook privacy? Start clicking,” New York Times (12 May), at http://www.nytimes.com/2010/05/13/technology/personaltech/13basics.html, accessed 29 August 2014.

d. boyd, 2012. “Networked privacy,” Surveillance & Society, volume 10, numbers 3–4, pp. 348–350, and at http://library.queensu.ca/ojs/index.php/surveillance-and-society/article/view/networked/networked, accessed 29 August 2014.

N. Carr, 2008. “Is Google making us stupid?” Atlantic (August), at http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/, accessed 29 August 2014.

A. Chemero, 2003. “An outline of a theory of affordances,” Ecological Psychology, volume 15, number 2, pp. 181–195.
doi: http://dx.doi.org/10.1207/S15326969ECO1502_5, accessed 29 August 2014.

W.H.K. Chun, 2005. “On software, or the persistence of visual knowledge,” Grey Room, number 18, pp. 26–51.
doi: http://dx.doi.org/10.1162/1526381043320741, accessed 29 August 2014.

P. Dourish, 2001. Where the action is: The foundations of embodied interaction. Cambridge, Mass.: MIT Press.

J. Dwight and J. Garrison, 2003. “A manifesto for instructional technology: Hyperpedagogy,” Teachers College Record, volume 105, number 5, pp. 628–699.

M. Fuller (editor), 2008. Software studies: A lexicon. Cambridge, Mass.: MIT Press.

A.R. Galloway, 2012. The interface effect. Cambridge: Polity.

W.W. Gaver, 1991. “Technology affordances,” CHI ’91: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 79–84.
doi: http://dx.doi.org/10.1145/108844.108856, accessed 29 August 2014.

J.J. Gibson, 1979. The ecological approach to visual perception. Boston: Houghton Mifflin.

K. Grint and S. Woolgar, 1997. The machine at work: Technology, work, and organization. Malden, Mass.: Blackwell.

I. Hutchby, 2001. “Technologies, texts and affordances,” Sociology, volume 35, number 2, pp. 441–456.
doi: http://dx.doi.org/10.1017/S0038038501000219, accessed 29 August 2014.

M. Kapor, 2006. “Architecture is politics (and politics is architecture),” Mitch Kapor’s Blog (23 April), at http://blog.kapor.com/index9cd7.html?p=29, accessed 29 August 2014.

V. Kaptelinin and B. Nardi, 2012. “Affordances in HCI: Toward a mediated action perspective,” CHI ’12: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 967–976.
doi: http://dx.doi.org/10.1145/2207676.2208541, accessed 29 August 2014.

J.P. Kesan and R.C. Shah, 2006. “Setting software defaults: Perspectives from law, computer science and behavioral economics,” Notre Dame Law Review, volume 82, pp. 583–634.

F. Kittler, 1995. “There is no software,” CTheory.net, at http://www.realtechsupport.org/UB/MC/Kittler_NoSoftware_1995.pdf, accessed 29 August 2014.

M. Kranzberg, 1986. “Technology and history: ‘Kranzberg’s laws’,” Technology and Culture, volume 27, number 3, pp. 544–560.
doi: http://dx.doi.org/10.2307/3105385, accessed 29 August 2014.

L. Lessig, 2006. Code: Version 2.0. Second edition. New York: Basic Books.

L. Lessig, 2004. Free culture: How big media uses technology and the law to lock down culture and control creativity. New York: Penguin Press; version at http://www.free-culture.cc/freeculture.pdf, accessed 29 August 2014.

A. Mackenzie, 2006. Cutting code: Software and sociality. New York: Peter Lang.

J.R.A. Maier, G.M. Fadel, and D.G. Battisto, 2009. “An affordance–based approach to architectural theory, design, and practice,” Design Studies, volume 30, number 4, pp. 393–414.
doi: http://dx.doi.org/10.1016/j.destud.2009.01.002, accessed 29 August 2014.

J. McGrenere and W. Ho, 2000. “Affordances: Clarifying and evolving a concept,” Graphics Interface 2000, pp. 179–186, and at http://www.graphicsinterface.org/proceedings/2000/177/, accessed 29 August 2014.

C. McLoughlin and M. Lee, 2007. “Social software and participatory learning: Pedagogical choices with technology affordances in the Web 2.0 era,” Australasian Society for Computers in Learning in Tertiary Education (ASCILITE) Singapore 2007; version at http://dlc-ubc.ca/dlc2_wp/educ500/files/2011/07/mcloughlin.pdf, accessed 29 August 2014.

E. Moglen, 2010. “Freedom in the cloud: Software freedom, privacy, and security for Web 2.0 and cloud computing,” at http://www.softwarefreedom.org/events/2010/isoc-ny/FreedomInTheCloud-transcript.html, accessed 29 August 2014.

D. Norman, 2007. The design of future things. New York: Basic Books.

D. Norman, 1999. “Affordance, conventions, and design,” Interactions, volume 6, number 3, pp. 38–43.
doi: http://dx.doi.org/10.1145/301153.301168, accessed 29 August 2014.

D. Norman, 1988. The psychology of everyday things. New York: Basic Books.

E. Schonfeld, 2010. “Zuckerberg: ‘We are building a Web where the default is social’,” TechCrunch (21 April), at http://techcrunch.com/2010/04/21/zuckerbergs-buildin-web-default-social/, accessed 29 August 2014.

L. Winner, 1980. “Do artifacts have politics?” Daedalus, volume 109, number 1, pp. 121–136.

L. Winner, 1977. Autonomous technology: Technics–out–of–control as a theme in political thought. Cambridge, Mass.: MIT Press.

 


Editorial history

Received 5 June 2013; revised 19 June 2014; accepted 26 August 2014.


Creative Commons License
This paper is licensed under a Creative Commons Attribution–NoDerivatives 4.0 International License.

Critical analysis of interactive media with software affordances
by Matthew X. Curinga.
First Monday, Volume 19, Number 9 - 1 September 2014
http://firstmonday.org/ojs/index.php/fm/article/view/4757/4116
doi: http://dx.doi.org/10.5210/fm.v19i9.4757




