First Monday

Beyond accessibility: Design ethics, edge users, and the role of active proxies in unwinding the spiral of exclusion by Julian Kilker



Abstract
The ethics and responsibilities of technology companies are under increased scrutiny over their power to design the User Experiences (UXs) embedded in their products. Researchers advocating a Rawlsian “just and fair” design process have suggested a “veil of ignorance” thought experiment in which designers adopt the standpoint of unspecified hypothetical users to ensure that they do not privilege their own perspectives at the expense of others. This article examines the inclusion and exclusion of such standpoints through the lens of edge users—a term based on extreme “edge cases” in which systems are more likely to break down. Edge users are particularly marginalized and subject to a spiral of exclusion: when Internet and Web resources are designed without regard for them, their ability to research and voice their experiences is further limited. Active proxies, those already helping or standing in for marginalized users, can be enlisted as design allies to develop a deeper understanding of such edge groups and contexts. Design ethics, in short, needs to move beyond making technologies accessible to all people, to making all types of people accessible to designers.

Contents

Introduction
Ethical design
Edge users
Proxies and active proxies
Conclusion

 


 

Introduction

When companies create new systems, whose experiences and concerns do they consider? How do they gain access to those experiences? How can designers treat marginalized people more fairly? These are key ethical questions of our age, in which it is important to consider how technologies are shaped, what guides that shaping, and the relationship between technology shapers and technology users. These questions are particularly relevant for the information and communication technologies at the core of our modern lives. We can draw a line from the small technical challenges many of us face to the major challenges people face every day as they struggle to cope with constant change in digital technology design.

This paper explores how designers have incorporated user perspectives, and the practical role that active proxies who assist users can play in ethical user experience (UX) design. UX, the planning and implementation of how people interact with services and technologies, often via the Internet and Web, is an extension and conceptual descendant of earlier disciplines studying how people and technology can interact more effectively; these include the human factors, user interface, and human-computer interaction fields. In this paper, I use the term “designers” to refer collectively to the many people — in management, technical design, engineering, testing, and marketing, among others — responsible for planning and implementing design decisions. The goals and ethics of these people may differ, and they might sometimes disagree. They also have differing levels of power to influence how products are shaped. But together these designers rely on information about users for their design goals, and together they produce systems that can be analyzed in terms of overall UXs.

In contrast to core (targeted) and mainstream (general) users, “edge” users — a term based on rare and extreme “edge cases” in which systems are more likely to break down — face the challenge of being acknowledged in design processes either directly as participants or indirectly as imagined users. To explore this topic, first I describe the ethical challenges of technology design. Second, I explore evolving user involvement and ethical implications as design practices transition from observational to participatory to crowdsourcing approaches. Finally, based on the history of excluding edge users, I propose designers access edge-user experiences via “active proxies” who are personally familiar with the user contexts.

 

++++++++++

Ethical design

Historically, technology design has shifted between designing for users and with users. Each approach involves gathering information to inform design decisions, but importantly in different ways. In 1999, computer scientists Penny Duquenoy and Harold Thimbleby proposed adapting John Rawls’ (1971) carefully-reasoned Theory of justice, which focused on politics, to the ethical context of designing human-computer interactions, swapping politics for interactive systems, institutional builders for software and hardware designers, and contracts for design characteristics. They argued that interactive “systems become embedded within the users’ world, and constrain what those users can and cannot do. Such systems are social institutions, not enforced by law or convention (as Rawls conceives it) but enforced by design” [1] and that “creating systems for other people to use, which is the concern of [human-computer interaction], can be conceived as an act of justice” [2].

To achieve the ethical principles of liberty (against discrimination) and equality (same chance of success) among the multiple parties in technology design, the authors adopt Rawls’ concept of “veil of ignorance” in which designers are encouraged to perceive the world without their preconceptions and “adopt the standpoint of unspecified [emphasis in original] potential users” [3]. In short, those creating a technological system should imagine themselves from the standpoint of other users—not themselves—and preferably as a variety of possible users. Rawls modestly admits the “veil of ignorance is so natural a condition that something like it must have occurred to many” [4], but he explores it in depth as a technique for ensuring justice through fairness.

Duquenoy and Thimbleby (1999) identify several shortcomings of their proposal — most notably, that mass production could not target small groups, that the “veil of ignorance” cannot realistically imagine all possible users, and that it might not be desirable or possible to design for all users. But these shortcomings are moderated in our software-focused, digitally distributed world. Systems can now be (and are) designed and reconfigured for many groups of users and contexts through localization, internationalization, accessibility standards, and responsive design processes. Widespread Internet connectivity enables the repeated revision and distribution of design decisions based on voluntary and automated data collection from many user contexts. For example, analytic tools such as FullStory track detailed interactions between users and Web services, including mouse movements, and analyze interactions for evidence of “pain points” such as “rage clicks ... moments of excessive clicking or tapping by users on your web site (or app)” (Owings, 2019) that can then be rapidly addressed.
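
To make the “rage click” heuristic concrete, the following is a minimal sketch of how such a pain-point signal might be computed from a client-side event log; the event format, the two-second window, and the five-click threshold are illustrative assumptions, not FullStory’s actual implementation.

```python
# Hypothetical sketch: flagging "rage clicks" in a client-side event log.
# The event format, 2-second window, and 5-click threshold are assumptions.
from dataclasses import dataclass

@dataclass
class ClickEvent:
    timestamp: float   # seconds since page load
    element_id: str    # identifier of the clicked element

def find_rage_clicks(events, window=2.0, threshold=5):
    """Return element ids receiving `threshold` or more clicks within any
    `window`-second span, a possible UX pain point."""
    flagged = set()
    events = sorted(events, key=lambda e: e.timestamp)
    for i, start in enumerate(events):
        burst = [e for e in events[i:]
                 if e.element_id == start.element_id
                 and e.timestamp - start.timestamp <= window]
        if len(burst) >= threshold:
            flagged.add(start.element_id)
    return flagged

clicks = [ClickEvent(t, "submit-button") for t in (1.0, 1.2, 1.4, 1.6, 1.9)]
print(find_rage_clicks(clicks))  # {'submit-button'}
```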

It is worth revisiting and extending Duquenoy and Thimbleby’s application of Rawlsian ethics in light of the current ethical practices in technology design and the efforts to better understand how designers and users engage with the design of digital systems (Shilton, 2018); these concerns are amplified by rushed design cycles, pressures to include commercially lucrative but sometimes deceptive “nudges” in UX design, and digital distribution that insulates consumer contexts from designers.

This is a complex problem because of the deep integration of digital technologies in daily life: as we move from home to vehicle to work, we interact with a wide variety of user interfaces for domestic, transportation, entertainment, financial, and communication technologies, resulting in a holistic “compositional experience” that “escapes the boundaries of design practice” [5].

This compositional UX varies by individual and context, but usually combines the experiences of using multiple, overlaid software applications and Web sites on smart phone or computer hardware, themselves embedded within interactions with the physical world. The systems include household appliances; transportation, media, and navigation interfaces; automatic teller and cashier machines; landline and mobile telephones; and essential institutional services, including education, healthcare, human resources, and taxation. Even people minimizing their interactions with digital systems are likely to depend on multiple user interfaces, each of which changes periodically.

Brief history of the user in UX design

The user experiences comprising our daily technological life are inconsistent and confusing — and, intentionally or not, their design counters established research and ethical guidelines. Ergonomics and human factors research emerged from early twentieth century concerns about efficiency and mass marketing of consumer products, the effective operation of military equipment and, later, critical safety infrastructure such as nuclear power plant and aircraft control panels.

In 1949, Alphonse Chapanis and two Johns Hopkins colleagues noted in a compilation of lectures that has become the foundational human factors text, Applied experimental psychology, that the recent “war needed, and produced, many complex machines, and it taxed the resources of both the designer and operator in making them practical for human use” [6]. Researchers, they wrote, addressed this problem from many disciplinary perspectives — biomechanics, psychoacoustics, engineering psychology, and systems analysis, among others — and incorporated methods from time-and-motion studies, personnel selection, and experimental psychology. Looking back at the human factors field this text inspired, Alphonse Chapanis defined it as one with specific values:

[Human factors] discovers and applies information about human abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable and effective human use. (Chapanis, 1985)

The notions of who is being designed for (summarized as “users”), what tasks these people are performing (summarized as “usage scenarios”), and how designers can access both are persistent questions whose answering strategies have evolved historically from early human factors to current UX practice, and also as projects move from initial conceptualization to polished products for a general, heterogeneous public. Designers’ ability to learn about users and scenarios shifts from often informal observations of prototypes used by their own developers, to researching specified users and tasks (as in the human factors field, which studied controlled institutional settings), to participatory design methods with targeted groups, to gathering field data from the general public through networked analytics and problem reports.

At each stage, designers risk prioritizing the user perspectives that are easiest to access, just as social-psychological research has been biased towards easily accessible western, educated, industrialized, rich and democratic research participants (Henrich, et al., 2010). Edge users, who are far from technology centers, poor, or less accessible physically, socially, or technologically, are more likely to be excluded in design planning and participation. Omission from early design processes potentially leads to a “spiral of exclusion” in which such users are further marginalized when design participation and representation takes place using technologies that already exclude them.

During initial design stages, “dogfooding” (using one’s own product) and “bootstrapping” (starting small and rapidly building up technologies based on earlier experiences) both place insiders at the center of the design process. Bootstrapping is associated with the development of personal augmentation computer systems based on the needs of skilled knowledge workers, as seen in Doug Engelbart’s NLS at the Stanford Research Institute (SRI) in the late 1960s (Bardini, 2000) and the Xerox Alto workstation in the early 1970s. In 1968, J.C.R. Licklider and Robert Taylor, both with psychology training and directors of ARPA’s Information Processing Techniques Office (1962–1964 and 1965–1969, respectively), and responsible for funding Engelbart’s work, the nascent ARPANET, and, arguably, modern computer science, wrote enthusiastically about their experiences using networked computers for collaboration. But they were already concerned about access to these emerging technologies: The impact for society, they wrote, “will be good or bad, depending mainly on the question: Will ‘to be on line’ be a privilege or a right? If only a favored segment of the population gets a chance to enjoy the advantage of ‘intelligence amplification,’ the network may exaggerate the discontinuity in the spectrum of intellectual opportunity” [7].

In 1978, early Apple employee (employee number 66) Bruce Tognazzini wrote the original interface guidelines for the Apple II, a relatively inexpensive personal computer that bridged the hobbyist and general consumer markets, bringing in a wider and more diverse range of computer users. (Unlike early human factors contexts, in which institutions selected who was capable of using their technologies, the domestication of computers empowered consumers to evaluate the computers they were considering purchasing in terms of user friendliness, and encouraged companies to accommodate a broader range of users.)

Tognazzini’s documentation serves as the foundation for UX guidelines at Apple, and indirectly, at other companies. He revised his original guidelines as the Human interface guidelines in 1985 after the Apple Lisa (released in 1983) and the Apple Macintosh (released in 1984) popularized user interfaces for non-technical consumers. The guidelines’ core concepts survive in print (e.g., Apple Computer [2005, 1992]), in presentations at the Apple Worldwide Developers Conference (Stern, 2017), and in Microsoft’s Fluent Design System and Google’s Material Design.

Norman and Draper’s (1986) User centered system design introduced generalizable, user-empowering interaction concepts grounded in the cognitive sciences, such as mapping, feedback, discoverability, and affordances; Norman joined Apple from the University of California San Diego around the time of the book’s publication, and later popularized these concepts in the mass market book The psychology of everyday things (Norman, 1988). The thread linking these research, commercial, and popular efforts is that user interfaces should be simple to use and prioritize individual users, and the target users considered were increasingly heterogeneous as the scope expanded from elite researchers (NLS system), to information workers (Alto and Lisa), to the general public (Macintosh).

In Apple’s seminal UX resources, key sections describe designing systems to provide “feedback and communication ... consistency ... what you see is what you get ... forgiveness [of errors]” [8]; in other words, designers should treat users as they would like to be treated by other people and systems, as in Rawls’ ethical framework, as well as broaden the scope of users to include “worldwide compatibility” and “universal accessibility” [9].

Focusing on user perspectives was not an abstract goal: In contrast to other computers of the era, designers reserved a relatively high proportion of the Apple Macintosh’s processor capacity for managing the UX rather than for application software. Macintosh programmers relied on standard user interface elements provided by Apple, and software code was kept separate from the resources (images and text) it used so each could be modified separately and by specialized teams (Guterl, 1984). (This allowed translators, for example, to “localize” software for international markets more efficiently and effectively.) According to very early Apple employee (employee number 8) Chris Espinosa, the Macintosh development team engaged in an “intense and almost religious argument about the purity of the systems design versus the user’s freedom to configure the system” [10], and focused efforts on optimizing even rare tasks such as disk formatting, with the view that the time saved would be multiplied by millions of eventual users. The involvement of nontechnical staff in Macintosh design meetings led, in retrospect, to “glaringly evident things that only somebody completely ignorant [of the technical challenges] could come up with” [11], such as the importance of high-quality audio in the product’s design.

Apple’s (1992) Human interface guidelines emphasize relying on actual or representative users in design:

[I]dentifying and understanding your target audience are among the most important first steps ... . It’s useful to create scenarios that describe a typical day in the life of a person you think uses the type of product you’re designing. Involve users throughout the design process ... observe them working ... [use] people who fit your audience description to test your prototypes ... [l]isten to their feedback and try to address their needs. [12]

Under the heading “accessibility,” the guidelines note that the “computer should be accessible to everyone who chooses to use it. ... Users will undoubtedly vary in their ages, styles, and abilities. They may also have physical and cognitive limitations” [13]. This is parallel to architect Ronald Mace’s popularization of Universal Design in the 1970s as the “design of products and environments to be usable to the greatest extent possible by people of all ages and abilities” [14].

Apple’s updated 2005 guidelines echo these recommendations, adding a paragraph cautioning designers to “remember that you are not designing the program for yourself. It is not your needs or your usage patterns that you are designing for, but those of your (potential) customer” [15]. The challenge is to get as close as possible to users in order to understand them: Designers interact with representative or actual users, using talk-aloud and observational methods, among other user-testing techniques. Apple added a detailed “Guidelines for conducting user observations” section that starts:

If time and budget permit, consider working with a professional usability testing facility ... . If this is not feasible, try to allow a cross-section of colleagues within your company to use a prototype of your product ... . some testing is far better than no testing. [16]

Newell, et al. (2011) recommend that “designers need to develop a real empathy with their user groups” [17] through the use of ethnographic (profiles and scenarios) and theatrical (scripted) techniques. Similarly, the “design for all” and “inclusive design” approaches (Waller, et al., 2015) advocate developing personas based on user observations. Each of these “just and fair” design approaches needs to gather sufficient contextual information about edge users for the “veil of ignorance” thought experiment to be meaningful, and each faces challenges of sampling and validity.

Ethics discounted

Despite the valorization of socially responsible, value-sensitive, and reflective design practices seen in industry guidelines, among researchers, and by the wider design community, recent digital product development is dominated by a “move fast and break things” approach that, in practice, discounts these practices. As consumer technologies grow more dependent on software and network access, their characteristics become more software-like: released incomplete, inadequately tested, and frequently revised.

The shift from desktop computers to small touch-based mobile devices has been an opportunity for companies to jettison familiar interface elements and guidelines for new, difficult-to-discover features such as the hamburger menu icon, varied swipe actions, and the obscure “shake to undo” action. Commercial pressures prioritize UX designs that monetize user interactions. This shift from human-centered to technologically- and commercially-deterministic agendas, and the associated reimagining of design goals and ethics has been attributed to the increased influence of neoliberal economics, venture capital, and libertarianism in Silicon Valley (Barbrook and Cameron, 1996; O’Mara, 2019).

The decline of user-centered goals and values in this new design era is popularly summarized by one phrase that avoids referencing humans or the humane. Initially coined as a company motto by Facebook CEO Mark Zuckerberg to prioritize development speed and encourage experimentation by discounting consequences, “move fast and break things” was emblazoned on company walls and widely adopted by technology companies. Industry watchers viewed its replacement by “move fast with stable infrastructure” at the company’s 2014 F8 conference as a sign of the company’s and the CEO’s maturation (Statt, 2014).

Yet Zuckerberg’s use of “things” and “infrastructure” continues the focus on technical rather than social consequences of technology design. Even those who themselves were deeply engaged in the breakneck development of technologies sounded the alarm: danah boyd, MIT Media Lab graduate and alum of its “demo or die” culture (revised in 2014 by former director Joi Ito to “deploy or die”), has noted that “‘move fast and break things’ is an abomination if your goal is to create a healthy society” (boyd, 2019). The emphasis on prioritizing speed over quality deemphasizes important ethical implications: What gets ignored or broken, and who is affected?

Agreeing upon and implementing ethical standards in software fields has long been problematic (Zwerman, 1999). Researchers found the Association of Computing Machinery’s code of ethics ineffective when they asked software developers and software engineering graduate students to evaluate realistic ethical scenarios (McNamara, et al., 2018). Because software is critical to the operation of essential technologies that include medical, infrastructure, social and transportation systems, such ethical discounting not only influences these systems, but also undercuts the ethical codes in other fields related to these technologies. The depth of the problem is made clear by Anthony Levandowski, the man responsible for developing Google’s self-driving vehicles, saying “If it is your job to advance technology, safety cannot [emphasis in original] be your No. 1 concern” [18]. (Levandowski was one of Google’s most-valued employees until he faced 33 counts of theft and attempted theft of trade secrets.)

The empathy of digital technology designers for the users of their products also has a mixed record in recent years. Design ethicist Tristan Harris questions the values embedded in technologies when their Silicon Valley designers are a “bunch of young mostly male, mostly white, 20- to 30-year olds, having studied computer science, technology, engineering, all living within now about 50 miles of San Francisco” (WNYC Studios, 2019), and comprise a “particular culture of people who are not trained to question the systems” (WNYC Studios, 2019). Those who influence technology now may also empathize less with users than their predecessors did: a meta-analysis of American college student surveys from 1979 to 2009 found a pronounced overall decline since 2000 in both empathic concern (“other-oriented feelings of sympathy for the misfortunes of others”) and perspective taking (a measurement of “people’s tendencies to imagine other people’s points of view”) [19].

The experience of programming simple systems might bias designers against the users of such systems. In a particularly revealing passage from a collection of essays about her programming career, Ellen Ullman writes, “Underlying every user-friendly interface is a terrific human contempt ... . In the designer’s mind, gradually, over months and years, there is created a vision of the user as imbecile ... No good, crash-resistant system can be built except if it’s done for an idiot” [20]. More worryingly, efforts to encourage empathy may backfire, as when the C.E.O. of a startup required Anna Wiener and other new employees to provide tech support for a few days to build empathy. The “engineers and salespeople tossed off replies to customer inquiries and rolled their eyes at developers who did not understand our product. ... It wasn’t exactly that [engineers] harbored contempt for our users; they just didn’t need to think about them.” [21].

The prevalence and sophistication of “dark patterns” in UX design — when “designers use their knowledge of human behavior (e.g., psychology) and the desires of end users to implement deceptive functionality that is not in the user’s best interest” [22] — is further evidence of the need for ethical oversight in technology design. Gray, et al. (2018) identified five major techniques that take advantage of users: nagging, obstruction, sneaking, interface interference, and forced action [23].

As early as 1993, Tognazzini suggested designers should examine the principles of stage magic to develop an ethics of productive, acceptable deception in their own UX design after finding “book[s] on the principles of magic ... clearly delineate the basic principles and techniques that support graphical user interfaces” [24]. For example, people might appreciate a computer chess opponent delaying its moves for a realistic human pause of several seconds, rather than responding at a demoralizing actual computational speed of a few milliseconds.
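
As a minimal sketch of this kind of benign deception, the snippet below pauses before revealing a chess reply so the move feels considered rather than instant; the `engine` object and the two-to-five-second pause are hypothetical choices for illustration, not a documented implementation.

```python
# Illustrative sketch of "benign deception": pausing before revealing a
# computer chess move so the reply feels considered rather than instant.
# The `engine` object and the 2-5 second pause are hypothetical choices.
import random
import time

def respond_with_human_pause(engine, board, min_s=2.0, max_s=5.0):
    move = engine.best_move(board)             # computed in milliseconds
    time.sleep(random.uniform(min_s, max_s))   # humanlike pause before replying
    return move
```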

Instead, the deceptive techniques listed above are widely implemented on current commercial Web sites, and an ecosystem of third-party services provides turnkey UX dark designs (Mathur, et al., 2019). As with aggressive marketing techniques, these dark design patterns allow for plausible deniability of the designers’ intention, and call for personal responsibility on the part of the user. Understanding, avoiding, and recovering from manipulative UXs is within the capabilities of mainstream users (often just barely), but the design characteristics of these patterns make them more likely to entrap edge users. The commercial incentives leading to these design patterns have not only degraded UXs for all users, but also further marginalized the experiences of the more vulnerable users who are less able to push back.

 

++++++++++

Edge users

In contrast to accessible mainstream users, edge users are people whose characteristics and contexts companies consider unusual, extreme, or irrelevant in the design process. Because of efficiency pressures, stereotyping, disempowerment, or social invisibility, edge users are conceptually (and often physically) far from the centers of design, and their contexts and needs are considered outliers in design processes. Edge users are often not available for user-centered, participatory, standards-based, and crowdsourcing design approaches. Historically marginalized groups are often invisible or discounted in design: these include the elderly (in many domains), women (e.g., in bathroom logistics and pocket fashion), left-handed people (e.g., in tool design), people of color (e.g., in standards for imaging systems), and people who have low income or low literacy, or are linguistically isolated, ill, or disabled (physically and/or cognitively).

Because software and hardware designers now often assume persistent and inexpensive Internet access, people in rural or remote regions can also be considered edge users: they rely on such connections to authenticate access, download large updates, and reach the online media needed simply to learn, operate, and troubleshoot their technologies. Even high-status professional users such as doctors can become edge users if their perspectives are devalued as part of a shift in organizational priorities, or if they remove themselves from the design process (Gawande, 2018). While such high-status users may be excluded from initial designs, they are capable of expressing themselves in alternate forums, as many doctors did in response to laudatory coverage of Epic, a prominent health records system provider (Kelly, 2018).

In technical contexts, combining edge conditions leads to even more challenging corner cases; this is not uncommon given the reality of compositional experiences in which multiple technical systems are in use (Dourish, 2018). Likewise, edge user conditions can overlap: It is not unusual for the elderly, for example, to experience social isolation and physiological and cognitive challenges. Edge group members may also be difficult to access if they cannot communicate clearly or give informed consent—or may not have been involved in the purchase of the product in question (Newell, et al., 2011).

Companies and users manage the boundaries of who is included or excluded in design goals through a complex combination of strategies, ranging from rationalizations of design constraints and commercial priorities (used by companies to shrink the boundaries) to user-group activism and legal maneuvers (used by advocates to expand them). Universal design poses a challenge to efficiency-driven design, which is rhetorically justified by the 80/20 rule, an oversimplification based on the Pareto principle in management economics. This popularization is employed to rationalize resource allocation and, consequently, what (and who) to ignore during design processes. In business, 80 percent of clients are said to come from 20 percent of the effort, while the remaining clients take an inordinate effort; in software, 80 percent of the code is written in 20 percent of the time, while fixing 20 percent of the bugs addresses 80 percent of the problems.

Thus, the 80/20 rule is employed to define the edges, rhetorically justifying the exclusion of user personas and profiles. The rule gives license to simplify one’s user base: The effort to engage the most accessible 80 percent of users is deemed reasonable, while the remaining 20 percent, who would take an inordinate amount of effort, are treated as marginal in the interests of efficiency. When a similar 20 percent of users is disregarded across most of the technologies comprising the daily “compositional experience,” those users are effectively marginalized in a digitally dependent society.
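
A back-of-the-envelope calculation, offered here only as an illustration with assumed numbers, shows how per-product 80/20 decisions compound across a compositional experience of many products, and why it matters whether the disregarded 20 percent is the same group each time.

```python
# Assumed numbers for illustration only: how per-product 80/20 decisions
# compound across the many products in a daily "compositional experience."
excluded_share = 0.20   # share of users disregarded by each product
products = 10           # products a person depends on daily

# If each product disregards a different, independent 20 percent of users,
# the share excluded from at least one product is:
any_exclusion = 1 - (1 - excluded_share) ** products
print(f"Excluded somewhere:  {any_exclusion:.0%}")   # about 89%

# If the *same* 20 percent is disregarded everywhere, that fixed group is
# shut out of all ten products at once: the spiral of exclusion.
print(f"Excluded everywhere: {excluded_share:.0%}")  # 20% of users
```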

Boundary negotiations are valuable because users at the edge often serve as “canaries in the coal mine”: Edge users’ difficulty using a system, or their absence from it, often provides an early warning of problems that might affect others. Companies, which usually focus on market segments, have only recently started to research edge users, according to Smart Design CEO Davin Stowell. He attributes this to a cultural clash between marketing and design perspectives:

Oftentimes, we would have clients coming to describe their target buyer. ... We’d listen politely and say, “Well, that’s great, but we don’t care about that person.” What we really need is to look at the extremes. We want to know who’s at the one end of the scale. Maybe the weakest or the person with arthritis or the worst-case scenario or someone who’s having breakfast in the morning and is under a lot of stress, real-world situations. At the other end, we may want to know the tallest person or the athlete or the strongest or the fastest person, because if we understand what the extremes are, the middle will take care of itself. [25]

Stowell attributes the widespread popularity of the soft-handled OXO Good Grips kitchen tools to this design approach. Similarly, when Apple released an updated voice control system for the Macintosh in 2019, Apple’s Senior Director of Global Accessibility Policy and Initiatives Sarah Herrlinger made the case that “we want to learn how people use [voice control], and how other individuals might use it and then see how that goes. ... it’s one of those things that when you build for the margins, you actually make a better product for the masses” (Velazco, 2019). Microsoft has also made a concerted effort to address and publicize accessibility with initiatives such as the Xbox Adaptive Controller, featuring young gamers with limited mobility in the inspirational 2019 Super Bowl commercial “When everybody plays, we all win” (Microsoft, 2019).

Improving the experience for edge users such as the elderly and the differently abled often benefits a much larger set of users — as has been noted in architecture and transportation contexts, as well as specific media standards including video, Web, and print. Closed captioning, for example, was originally intended to serve deaf and hearing-impaired users, but now serves a wide range of unexpected functions including displaying comprehensible video content in public spaces, enabling computational access to program content (Ellcessor, 2012), and providing language learning opportunities.

Because designers rely on feedback methods facilitated by digital distribution, such as A/B testing and crowdsourced error reports and feature requests, who is able to participate in them influences how these systems are optimized. The “spiral of exclusion” further marginalizes user participation and representation when networked media and communication resources are used for troubleshooting, providing feedback on, and crowdsourcing UXs.

Case in point: In 1999, when Duquenoy and Thimbleby adapted Rawls’ work for UX design, the participatory “Web 2.0” era was just beginning; its emphasis was on “delivering software as a continually-updated service that gets better the more people use it ... and creating network effects through an ‘architecture of participation’” [26]. While such iterative, crowdsourced design processes involve more user perspectives, they differentially advantage those people who can participate early and repeatedly over those who are not technically or socially networked. Unconnected people cannot contribute meaningfully to design discussions, and their absence is easy to overlook.

Documenting edges: Identifying “programmer falsehoods”

The “falsehoods programmers believe” meta-list (Deldycke, 2019) is a notable example of knowledgeable, networked users identifying edge cases in which UX has failed. This falsehoods genre was prompted by a 2010 blog post entitled “Your last name contains invalid characters” (Graham-Cumming, 2010), in which a Web site’s error message “Our system is unable to process last names that contain non-letters, please replace them with spaces” prompted Graham-Cumming to explore programmer assumptions. Additional lists of name (McKenzie, 2010) and time assumptions (Sussman, 2010) rapidly followed, with Sussman acknowledging a wide range of contributions from BoingBoing, Hacker News, Reddit, MetaFilter, and Twitter.

More contributors identified inaccurate assumptions in the arts (“Music can be written down”), business (“A product has a [fixed] price”), dates and times (“An hour will never occur twice in a single day”), languages (“Short words in English are short in other languages”), geography (“Places have only one official name”), human identity (“People’s names do not change”), phone numbers (“An individual has a phone number”), and postal addresses (“An address will start with, or at least include, a building number”). While such edge cases are discrete and identifiable, taken as a whole, the range of these cases indicates the challenge of developing systems based on a designer’s personal or local knowledge.
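
To make the stakes concrete, here is a minimal sketch of the kind of validation rule these falsehoods produce; the regular expression is hypothetical, but it mirrors the “invalid characters” rejection that prompted Graham-Cumming’s post, and it rejects perfectly legitimate names.

```python
# A hypothetical validator illustrating the "falsehood" that names contain
# only unaccented ASCII letters; it mirrors the "invalid characters" error
# that prompted Graham-Cumming's post.
import re

NAIVE_NAME = re.compile(r"^[A-Za-z]+$")

def naive_is_valid(last_name: str) -> bool:
    return bool(NAIVE_NAME.match(last_name))

# Legitimate last names that the naive rule rejects:
for name in ["O'Brien", "García Márquez", "van der Berg", "李"]:
    print(f"{name!r}: {naive_is_valid(name)}")   # all False
```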

These edge-case lists remind designers (and programmers in this case) of their responsibility to inform themselves about the wide range of possible users, to better appreciate the limitations of their own experiences, and to understand how apparently minor, reasonable decisions based on these experiences can limit others in different contexts.

The power to resist the edge: The case of Adobe Lightroom

Besides documenting edge cases through individual users and examples, technologies can automatically collect and report data about usage scenarios as they are used. Because most users do not know about (or forget about) the monitoring, companies regard this approach as providing higher-quality “naturalistic” data on user activities than other methods. Despite monitoring’s sophistication and popularity, it can neither capture the user’s context nor predict how unmonitored users might use the system.

The problems with Adobe’s 2015 update of its Lightroom software demonstrate the drawbacks of relying on analytics to understand users. Lightroom, a powerful image organization and manipulation tool, had undergone six major revisions since its introduction in 2007 to handle the increasing capabilities of digital cameras and demands of photographers. For years the complex import interface had caused confusion for new users, although many photographers learned to use it efficiently. The new version simplified this process, but changes to the familiar UX resulted in a torrent of complaints on Adobe product forums.

Adobe’s product representative admitted the update was “handled poorly ... We made decisions on sensible defaults and placed many of the controls behind a settings panel. At the same time we removed some of our very low usage features to further reduce complexity and improve quality” (Hogarty, 2015). “Low usage,” it turned out, was determined by Adobe using analytics software that many professionals had turned off for perceived performance and privacy reasons, effectively making them invisible to the designers.
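
A toy calculation with assumed numbers (not Adobe’s data) shows how opt-out telemetry can make a feature look “low usage” when the users who rely on it most are also the most likely to disable analytics.

```python
# Assumed numbers, not Adobe's data: opt-out telemetry understates a
# feature's real usage when heavy users disable analytics most often.
pros, hobbyists = 20_000, 80_000              # hypothetical user base
pro_uses, hobbyist_uses = 0.90, 0.10          # share who use the feature
pro_reports, hobbyist_reports = 0.30, 0.90    # share with telemetry enabled

true_usage = (pros * pro_uses + hobbyists * hobbyist_uses) / (pros + hobbyists)

reporting_users = pros * pro_reports + hobbyists * hobbyist_reports
measured_usage = (pros * pro_reports * pro_uses +
                  hobbyists * hobbyist_reports * hobbyist_uses) / reporting_users

print(f"True usage:     {true_usage:.0%}")      # 26%
print(f"Measured usage: {measured_usage:.0%}")  # 16%, reads as "low usage"
```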

By refusing analytics, Lightroom’s core users — professional photographers — had turned themselves into edge users, their concerns overlooked in the design process and key steps in their workflow changed. In this case, the overlooked users were able to reassert their centrality because they could effectively use product forums as a feedback channel. Adobe’s forums support working professionals and, in contrast to many consumer companies, are relatively responsive to user concerns and suggestions.

To design ethically using the “veil of ignorance” approach, designers require a broad range of user personas, including edge users. Both the “programmer falsehoods” lists and the Lightroom analytics episode demonstrate that informed people can correct and advocate for overlooked users. But how can designers access edge users who are relatively invisible, and who cannot advocate for themselves? They can use active proxies — those already helping or standing in for edge users — as design allies to develop a deeper understanding of marginalized users and their contexts.

 

++++++++++

Proxies and active proxies

“Proxy,” commonly understood as a person appointed to act in place of another, has a secondary meaning in computing: a system that “enables the indirect exchange of data on a network” (Oxford English Dictionary, 2019). The first meaning of proxy is usually employed in design contexts in order to learn about inaccessible users and scenarios when designers cannot directly engage with them. It is the second meaning, of an active intermediary, which I distinguish here by the phrase “active proxy,” that has untapped potential for technology design.

Software designers turn to the first type of proxy when “we cannot get as many users as we want to represent different perspectives of the product, [then] we need to resort to user proxies, who may not be users themselves but are on a project to help represent users” [27]. In a corporate setting, such proxies may be supervisors (representing staff), salespeople (representing customers), and domain experts, all of whom pose problems of inadequate knowledge about, conflicting goals regarding, and oversimplification of actual user contexts. Groups of actual users can also serve as proxies for a wider set of inaccessible users, but in all cases developers are “responsible for understanding how different types of user proxies will think about the system being built and how their backgrounds might influence ... interactions” [28].

While proxies who merely represent absent users run the risk of oversimplifying and misrepresenting those users’ experiences and needs, active proxies with direct experience assisting edge users can provide detailed contextual information about both the edge user and the assistance circumstance. Using a digital inequality perspective, researchers have studied how active proxies assist people whose “everyday lives and circumstances ... prevent them from making the best use of Internet technologies and/or gain from engagement with such technology” [29]. For the purposes of understanding edge users and the process of assistance, active proxies are akin to “surrogates” (Selwyn, et al., 2005), “intermediaries,” and “local experts” (Stewart, 2007), who go online to assist others and search for information, “surrogate seekers” (Cutrona, et al., 2015) who look for health information, and unbiased “navigators” (Pollitz, et al., 2018) helping with insurance coverage.

These proxies include those with “‘strong ties’ such as family members, [caretakers] and close friends, as well as ‘weaker ties’ such as neighbours, work colleagues, and technology-related community actors (e.g., librarians, local IT vendors)” [30]. The active proxy mediates between the edge user and the technological world that challenges them. Thus, the ideal proxy is digitally literate and capable of explaining technological strategies to users, and user contexts to technologists. They can search for solutions to problems using appropriate terms. They must also be trustworthy: For reasons similar to, and as a consequence of, their marginalization in design processes, edge users such as the elderly (Ellin, 2019) and the blind and deaf (Brockman, 2019) are particularly vulnerable to exploitation. Ideal proxies are often children, as seen in the context of immigrant children serving as “language brokers” (Shen and Dennis, 2019) when translating and interpreting for other family members.

Active proxies become familiar with explaining — or resolving — UX challenges on a case-by-case basis for edge users; however, they are less commonly explored as sources of information for UX designers about these users’ experiences. In their dual roles as assistants to edge users and participants in troubleshooting experiences, active proxies provide a valuable and relatively untapped opportunity to make the needs of edge users more visible, and potentially improve UX for others as well. Thus, UX designers would benefit from addressing the edge user and the proxy together as a dyad: Active proxies can be cultivated by designers as information resources about how the UX can be improved for the edge user, for the proxy, and for both of them working together.

Design practice struggles to address the broader compositional experiences people face, as Dourish (2018) has noted. Active proxy experiences give designers more details about edge-user contexts that contribute to an informed veil-of-ignorance process. Meanwhile, the experience of such proxies themselves gives designers specific scenarios that can inform design goals. To mimic capability loss and help fully abled designers empathize with disabled users, designers can already use physical simulators such as MIT AgeLab’s AGNES suit and software simulators that degrade audio clips and images (Clarkson, et al., 2007).

Google’s Lighthouse tool tests Web sites for accessibility and best design practices in many ways, including simulating slow network connections and small mobile screens. Designers could use additional information from active proxies to immerse themselves in even more realistic edge-user contexts using capability simulators to introduce user input errors (random text and pointer movements), replace commands and icons arbitrarily, reduce network bandwidth, change familiar visual cues, and increase the aggressiveness of persuasive design patterns.
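
As a sketch of what such a capability simulator might look like in practice, the snippet below perturbs pointer targets and typed text during testing; the tremor radius and error rates are illustrative assumptions, not values from any published toolkit.

```python
# Illustrative capability simulator: perturb pointer targets and typed text
# so designers experience a UI under degraded motor control. The tremor
# radius and error rates are assumptions, not values from a published toolkit.
import random

def jitter_pointer(x, y, tremor_px=15):
    """Offset a pointer target to simulate hand tremor."""
    return (x + random.randint(-tremor_px, tremor_px),
            y + random.randint(-tremor_px, tremor_px))

def corrupt_typing(text, error_rate=0.08):
    """Randomly drop or double characters to simulate mistyped input."""
    out = []
    for ch in text:
        r = random.random()
        if r < error_rate / 2:
            continue                 # dropped keystroke
        out.append(ch)
        if r > 1 - error_rate / 2:
            out.append(ch)           # doubled keystroke
    return "".join(out)

print(jitter_pointer(200, 340))
print(corrupt_typing("Confirm my appointment for Tuesday"))
```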

Detailed edge-user scenarios also provide valuable supporting information for a devil’s-advocate design process, in which a member of a planning team is tasked with countering groupthink by representing worst-case usage scenarios. The goal is to prompt revision of a proposed design, or, if that is not feasible, at least appreciation for its potential consequences. Israel’s Directorate of Military Intelligence famously has such a “devil’s advocate office,” which “regularly criticizes products coming from the analysis and production divisions, and writes opinion papers that counter these departments’ assessments. The staff in the devil’s-advocate office is made up of extremely experienced and talented officers who are known to have a creative, ‘outside the box’ way of thinking” [31]. This is in contrast with the “design thinking” approach popular in contemporary technology development, which encourages “divergent” thinking during the ideation stage, in which a diverse group of participants also “think outside the box” to generate ideas, and then combines and sorts those ideas, effectively eliminating some through mutual agreement about priorities.

The “design thinking” approach considers the devil’s-advocate role unproductive and constraining (Kelley with Littman, 2005). (The subtitle of the book by Kelley and Littman is “Strategies for Beating the Devil’s Advocate and Driving Creativity Throughout Your Organization.”) Design sprints, touted to “solve big problems ... in just five days” (Knapp, et al., 2016), tightly constrain the number of people in the process and incorporate power differentials in which a “decider” or facilitator encourages closure. Although contrarian viewpoints are noted in these approaches, whether edge perspectives are included depends on the diversity and authority of perspectives the participants bring to the group, and their willingness to present them in the face of time and social-cohesion pressures. Collaboration with edge users via their proxies can provide details for such perspectives.

In sum, engaging active proxies to access previously “invisible” edge users benefits three parties: users, designers, and the proxies themselves. Through proxies, broader user contexts become available, and information about and from edge users is brought closer to the center of design processes. Designers benefit from access to richer scenario data for their ethical standpoints, for internal institutional negotiation, and for reflection beyond the immediate design context — as soft tool grips, closed captions, and accessibility initiatives have shown. Finally, active proxies are able to provide details about their assistance challenges that can lead to better UX for proxies in general.

 

++++++++++

Conclusion

Design processes that overlook the experiences and needs of edge users result in systems that further isolate and disempower them. As the examples in this paper demonstrate, many of us can easily be (or already are) affected by edge scenarios. Design changes, even minor ones, force users to relearn how to operate in their digital environments, how to identify cues, and how to react when something appears not to function properly.

Because modern software is tightly integrated, upgrading part of a system — such as an operating system — leads to many problems and forced updates that are challenging for mainstream users, and even more so for edge users. Even dedicated core users, such as those in the Adobe Lightroom example, can be overwhelmed by change. But unlike edge users, core users and insiders can more easily voice their opinions, as when they are frustrated with Apple’s frequent software updates and flawed design processes (Shayer, 2019), out-of-date documentation (Hudson, 2017), and unresponsive bug reporting (Dunn, 2019).

If ethical UX design is an inclusive process that attempts to treat all users — actual and potential — fairly, design in practice is an ongoing conversation between designers and select users, in which design features are implemented, tolerated or resisted, revised, and updated. Society has been grappling with the consequences of unjust technology design, and technologists with the ethical dilemmas of persuasive (dark) patterns, inclusion and diversity, and their design responsibilities. What is appropriate for designers to consider regularly shifts, and the principles and motivations behind these shifts are often ill-defined. As seen with dark design patterns, change often appears to be made for presumed commercial gain rather than in accordance with — and sometimes in spite of — ethical design guidelines. At other times, software and Web site redesigns are “change for the sake of change,” more appropriate to fast fashion than to what has become essential societal infrastructure.

The technical features supporting, taking advantage of, or excluding users are the product of design processes that themselves have been shaped by previous experience, commercial incentives, and access to responsive users and their data and feedback. These processes can also be shaped by using ethical principles. Rawls’ Theory of justice is an excellent reminder that who participates in design “conversations” — and with what details and authority — varies considerably. Excluding groups from this conversation, or overlooking their exclusion, has serious implications for the design of technical systems and society.

To advocate for edge users, proxies must advocate for their own needs; to adequately serve edge users, designers must accommodate and learn from their proxies. To make designs more accessible, design processes should include the user experiences of both edge users and their active proxies. Design ethics, in short, needs to address the spiral of exclusion and move from the worthy goal of making technologies accessible to all people to making more people — even those at the edge — accessible to designers. End of article

 

About the author

Julian Kilker is an associate professor in the Greenspun School of Journalism and Media Studies at the University of Nevada, Las Vegas. His work focuses on media technologies and innovation.
E-mail: julian [dot] kilker [at] unlv [dot] edu

 

Acknowledgements

Thank you to Greg Miller, Yvonne Houy, and Stephen Bates for feedback during this project. This work is supported in part by a sabbatical provided by the University of Nevada, Las Vegas.

 

Notes

1. Duquenoy and Thimbleby, 1999, p. 282.

2. Duquenoy and Thimbleby, 1999, p. 285.

3. Duquenoy and Thimbleby, 1999, p. 281.

4. Rawls, 1971, p. 118.

5. Dourish, 2018, p. 4.

6. Chapanis, et al., 1949, p. v.

7. Licklider and Taylor, 1968, p. 31.

8. Apple Computer, 1992, pp. 3–4.

9. Apple Computer, 1992, p. 4.

10. Guterl, 1984, p. 41.

11. Ibid.

12. Apple Computer, 1992, pp. 13–14.

13. Apple Computer, 1992, p. 15.

14. Story, et al., 1998, p. 2.

15. Emphasis added; Apple Computer, 2005, p. 26.

16. Apple Computer, 2005, p. 27.

17. Newell, et al., 2011, p. 235.

18. Duhigg, 2018, p. 55.

19. Konrath, et al., 2011, p. 181.

20. Ullman, 2017, p. 16.

21. Wiener, 2019, p. 59.

22. Gray, et al., 2018, p. 1.

23. Gray, et al., 2018, p. 5.

24. Tognazzini, 1993, p. 355.

25. Hustwit, 2015, p. 359.

26. O’Reilly, 2005, p. 17.

27. Cohn, 2004, p. 55.

28. Cohn, 2004, p. 65.

29. Selwyn, et al., 2016, p. 6.

30. Selwyn, et al., 2005, pp. 6–7.

31. Kuperwasser, 2007, p. 4.

 

References

Apple Computer, Inc., 2005. Apple human interface guidelines. Cupertino, Calif.: Apple Computer, Inc.

Apple Computer, Inc., 1992. Macintosh human interface guidelines. Reading, Mass.: Addison-Wesley.

Richard Barbrook and Andy Cameron, 1996. “The Californian ideology,” Science as Culture, volume 6, number 1, pp. 44–72.
doi: https://doi.org/10.1080/09505439609526455, accessed 12 May 2020.

Thierry Bardini, 2000. Bootstrapping: Douglas Engelbart, coevolution, and the origins of personal computing. Stanford, Calif.: Stanford University Press.

danah boyd, 2019. “Facing the great reckoning head-on,” Medium (13 September), at https://onezero.medium.com/facing-the-great-reckoning-head-on-8fe434e10630, accessed 17 February 2020.

Joshua Brockman, 2019. “At banks and fund firms, access is too often denied, blind and deaf investors say,” New York Times (5 July), at https://www.nytimes.com/2019/07/05/business/retirement-planning-disabled-deaf-blind.html, accessed 3 February 2020.

Alphonse Chapanis, 1985. “Some reflections on progress,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 29, number 1, pp. 1–8.
doi: https://doi.org/10.1177/154193128502900102, accessed 12 May 2020.

Alphonse Chapanis, Wendell R. Garner, and Clifford T. Morgan, 1949. Applied experimental psychology: Human factors in engineering design. New York: Wiley; version at http://hdl.handle.net/2027/mdp.39015006056660, accessed 19 February 2020.

John Clarkson, Roger Coleman, Ian Hosking, and Sam Waller (editors), 2007. Inclusive design toolkit. Cambridge: Cambridge Engineering Design Centre, Department of Engineering, University of Cambridge, and at https://www-edc.eng.cam.ac.uk/downloads/idtoolkit.pdf, accessed 12 May 2020.

Mike Cohn, 2004. User stories applied: For agile software development. Boston, Mass.: Addison-Wesley.

Sarah L. Cutrona, Kathleen M. Mazor, Sana N. Vieux, Tana M. Luger, Julie E. Volkman, and Lila J. Finney Rutten, 2015. “Health information-seeking on behalf of others: Characteristics of ‘surrogate seekers’,” Journal of Cancer Education, volume 30, number 1, pp. 12–19.
doi: https://doi.org/10.1007/s13187-014-0701-3, accessed 12 May 2020.

Kevin Deldycke, 2019. “Awesome falsehood: A curated list of awesome falsehoods programmers believe in,” at https://github.com/kdeldycke/awesome-falsehood, accessed 3 February 2020.

Paul Dourish, 2018. “The allure and the paucity of design: Cultures of design and design in culture,” Human–Computer Interaction (22 May).
doi: https://doi.org/10.1080/07370024.2018.1469410, accessed 12 May 2020.

Charles Duhigg, 2018. “Did Uber steal Google’s intellectual property?” New Yorker (15 October), pp. 50–61, and at https://www.newyorker.com/magazine/2018/10/22/did-uber-steal-googles-intellectual-property, accessed 12 May 2020.

Corbin Dunn, 2019. “The sad state of logging bugs for Apple” (9 March), at https://www.corbinstreehouse.com/blog/2019/03/the-sad-state-of-logging-bugs-for-apple/, accessed 13 February 2020.

Penny Duquenoy and Harold Thimbleby, 1999. “Justice and design,” INTERACT’99, pp. 281–286.

Elizabeth Ellcessor, 2012. “Captions on, off, on TV, online: Accessibility and search engine optimization in online closed captioning,” Television & New Media, volume 13, number 4, pp. 329–352.
doi: https://doi.org/10.1177/1527476411425251, accessed 12 May 2020.

Abby Ellin, 2019. “Scammers look for vulnerability, and find it in older people,” New York Times (12 September), at https://www.nytimes.com/2019/09/12/business/retirement/scams-elderly-retirement.html, accessed 13 February 2020.

Atul Gawande, 2018. “Why doctors hate their computers,” New Yorker (5 November), at https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers, accessed 20 February 2020.

John Graham-Cumming, 2010. “Your last name contains invalid characters” (17 June), at http://blog.jgc.org/2010/06/your-last-name-contains-invalid.html, accessed 3 February 2020.

Colin M. Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs, 2018. “The dark (patterns) side of UX design,” CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, paper number 534, pp. 1–14.
doi: https://doi.org/10.1145/3173574.3174108, accessed 3 February 2020.

Fred Guterl, 1984. “Design case history: Apple’s Macintosh: A small team of little-known designers, challenged to produce a low-cost, exceptionally easy-to-use personal computer, turns out a technical milestone,” IEEE Spectrum, volume 21, number 12, pp. 34–43.
doi: https://doi.org/10.1109/MSPEC.1984.6370374, accessed 3 February 2020.

Joseph Henrich, Steven J. Heine, and Ara Norenzayan, 2010. “The weirdest people in the world?” Behavioral and Brain Sciences, volume 33, numbers 2–3, pp. 61–83.
doi: https://doi.org/10.1017/S0140525X0999152X, accessed 12 May 2020.

Tom Hogarty, 2015. “Lightroom 6.2 release update and apology,” Lightroom Journal (9 October), at http://blogs.adobe.com/lightroomjournal/2015/10/lightroom-6-2-release-update-and-apology.html, accessed 12 May 2020.

Paul Hudson, 2017. “Apple, can we please talk about your documentation?” Hacking with Swift (23 November), at https://www.hackingwithswift.com/articles/42/apple-can-we-please-talk-about-your-documentation, accessed 12 May 2020.

Gary Hustwit (editor), 2015. “Interview with Davin Stowell [New York, September 22, 2008],” In: Gary Hustwit (editor). Helvetica/objectified/urbanized: The complete interviews, London: Versions Publishing, and at https://www.hustwit.com/interviewsbook, accessed 12 May 2020.

Tom Kelley with Jonathan Littman, 2005. The ten faces of innovation: IDEO’s strategies for beating the devil’s advocate & driving creativity throughout your organization. New York: Currency/Doubleday.

Kate Kelly, 2018. “Willy Wonka and the medical software factory,” New York Times (20 December), at https://www.nytimes.com/2018/12/20/business/epic-systems-campus-verona-wisconsin.html, accessed 20 February 2020.

Jake Knapp, with John Zeratsky and Braden Kowitz, 2016. Sprint: How to solve big problems and test new ideas in just five days. New York: Simon & Schuster.

Sara Konrath, Edward H. O’Brien, and Courtney Hsing, 2011. “Changes in dispositional empathy in American college students over time: A meta-analysis,” Personality and Social Psychology Review, volume 15, number 2, pp. 180–198.
doi: https://doi.org/10.1177/1088868310377395, accessed 12 May 2020.

Yosef Kuperwasser, 2007. “Lessons from Israel’s intelligence reforms,” Brookings Institution, Saban Center for Middle East Policy, Analysis Paper, number 14, at https://www.brookings.edu/wp-content/uploads/2016/06/10_intelligence_kuperwasser.pdf, accessed 12 May 2020.

J.C.R. Licklider and Robert Taylor, 1968. “The computer as a communication device,” Science and Technology (April), pp. 21–31; version at https://web.stanford.edu/dept/SUL/library/extra4/sloan/mousesite/Secondary/Licklider.pdf, accessed 12 May 2020.

Arunesh Mathur, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019. “Dark patterns at scale: Findings from a crawl of 11K shopping Websites,” Proceedings of the ACM on Human-Computer Interaction, article number 81.
doi: https://doi.org/10.1145/3359183, accessed 12 May 2020.

Patrick McKenzie, 2010. “Falsehoods programmers believe about names” (17 June), at https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/, accessed 3 February 2020.

Andrew McNamara, Justin Smith, and Emerson Murphy-Hill, 2018. “Does ACM’s code of ethics change ethical decision making in software development?” Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 729–733.
doi: https://doi.org/10.1145/3236024.3264833, accessed 12 May 2020.

Microsoft, 2019. “Xbox adaptive controller,” at https://www.xbox.com/en-US/accessories/controllers/xbox-adaptive-controller, accessed 9 February 2020.

A.F. Newell, P. Gregor, M. Morgan, G. Pullin, and C. Macaulay, 2011. “User-sensitive inclusive design,” Universal Access in the Information Society, volume 10, number 3, pp. 235–243.
doi: https://doi.org/10.1007/s10209-010-0203-y, accessed 12 May 2020.

Donald A. Norman, 1988. The psychology of everyday things. New York: Basic Books.

Donald A. Norman and Stephen W. Draper (editors), 1986. User centered system design: New perspectives on human-computer interaction. Hillsdale, N.J.: L. Erlbaum Associates.

Margaret O’Mara, 2019. The code: Silicon Valley and the remaking of America. New York: Penguin Press.

Tim O’Reilly, 2005. “What is Web 2.0: Design patterns and business models for the next generation of software” (30 September), at https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html, accessed 12 May 2020.

Justin Owings, 2019. “What drives rage clicks? How users signal struggle, frustration online” (11 December), at https://blog.fullstory.com/rage-clicks-turn-analytics-into-actionable-insights/, accessed 18 October 2019.

Oxford English Dictionary, 2019. “Proxy,” at http://www.oed.com/view/Entry/153573, accessed 8 November 2019.

Karen Pollitz, Jennifer Tolbert, and Maria Diaz, 2018. “Data note: Further reductions in navigator funding for federal marketplace states,” Henry J. Kaiser Family Foundation (13 November), at https://www.kff.org/private-insurance/issue-brief/data-note-further-reductions-in-navigator-funding-for-federal-marketplace-states/, accessed 23 October 2019.

John Rawls, 1971. A theory of justice. Cambridge, Mass.: Belknap Press of Harvard University Press.

Neil Selwyn, Stephen Gorard, and John Furlong, 2005. “Whose Internet is it anyway? Exploring adults’ (non)use of the Internet in everyday life,” European Journal of Communication, volume 20, number 1, pp. 5–26.
doi: https://doi.org/10.1177/0267323105049631, accessed 12 May 2020.

Neil Selwyn, Nicola Johnson, Selena Nemorin, and Elizabeth Knight, 2016. “Going online on behalf of others: An investigation of ‘proxy’ Internet consumers,” Australian Communications Consumer Action Network, at https://accan.org.au/grants/completed-grants/1067-proxy-internet-consumers, accessed 6 January 2020.

David Shayer, 2019. “Six reasons why iOS 13 and Catalina are so buggy,” TidBITS (21 October), at https://tidbits.com/2019/10/21/six-reasons-why-ios-13-and-catalina-are-so-buggy/, accessed 28 October 2019.

Jillian J. Shen and Jessica M. Dennis, 2019. “The family context of language brokering among Latino/a young adults,” Journal of Social and Personal Relationships, volume 36, number 1, pp. 131–152.
doi: https://doi.org/10.1177/0265407517721379, accessed 12 May 2020.

Katie Shilton, 2018. “Values and ethics in human-computer interaction,” Foundations and Trends in Human–Computer Interaction, volume 12, number 2, pp. 107–171.
doi: https://doi.org/10.1561/1100000073, accessed 12 May 2020.

Nick Statt, 2014. “Zuckerberg: ‘Move fast and break things’ isn’t how Facebook operates anymore,” CNET (30 April), at https://www.cnet.com/news/zuckerberg-move-fast-and-break-things-isnt-how-we-operate-anymore/, accessed 24 October 2019.

Mike Stern, 2017. “Essential design principles,” Apple Worldwide Developers Conference, at https://developer.apple.com/videos/play/wwdc2017/802/, accessed 5 January 2020.

James Stewart, 2007. “Local experts in the domestication of information and communication technologies,” Information, Communication & Society, volume 10, number 4, pp. 547–569.
doi: https://doi.org/10.1080/13691180701560093, accessed 12 May 2020.

Molly Follette Story, James L. Mueller, and Ronald L. Mace, 1998. “The universal design file: Designing for people of all ages and abilities,” Center for Universal Design, North Carolina State University, at https://projects.ncsu.edu/ncsu/design/cud/pubs_p/pudfiletoc.htm, accessed 15 November 2019.

Noah Sussman, 2010. “Falsehoods programmers believe about time,” Infinite Undo, at https://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time, accessed 3 February 2020.

Bruce Tognazzini, 1993. “Principles, techniques, and ethics of stage magic and their application to human interface design,” CHI ’93: Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, pp. 355–362.
doi: https://doi.org/10.1145/169059.169284, accessed 12 May 2020.

Ellen Ullman, 2017. Life in code: A personal history of technology. New York: Farrar, Straus and Giroux.

Chris Velazco, 2019. “Apple’s Voice Control is important for accessibility, and you,” Engadget (19 June), at https://www.engadget.com/2019/06/19/apple-voice-control-disability-accessibility/, accessed 9 December 2019.

Sam Waller, Mike Bradley, Ian Hosking, and P. John Clarkson, 2015. “Making the case for inclusive design,” Applied Ergonomics, volume 46, part B, pp. 297–303.
doi: https://doi.org/10.1016/j.apergo.2013.03.012, accessed 12 May 2020.

Anna Wiener, 2019. “Four years in startups,” New Yorker (23 September), pp. 56–62, and at https://www.newyorker.com/magazine/2019/09/30/four-years-in-startups, accessed 12 May 2020.

WNYC Studios, 2019. “A history of persuasion, part 3,” The Stakes (13 August), at https://www.wnycstudios.org/podcasts/the-stakes/episodes/the-stakes-history-persuasion-part-3, accessed 17 February 2020.

W.L. Zwerman, 1999. “Profession/occupation without a history,” IEEE Annals of the History of Computing, volume 21, number 1, pp. 66–70.
doi: https://doi.org/10.1109/85.759371, accessed 12 May 2020.

 


Editorial history

Received 10 March 2020; accepted 15 April 2020.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Beyond accessibility: Design ethics, edge users, and the role of active proxies in unwinding the spiral of exclusion
by Julian Kilker.
First Monday, Volume 25, Number 6 - 1 June 2020
https://firstmonday.org/ojs/index.php/fm/article/download/10572/9475
doi: http://dx.doi.org/10.5210/fm.v25i6.10572