
The effectiveness of crowdsourcing public participation in a planning context by Daren C. Brabham



Abstract
Governments increasingly turn to the Internet to facilitate public participation activities, part of a recent push toward transparency, accountability, and citizen involvement in decision–making. These activities take many forms, and one specific form, the crowdsourcing model, is examined here for its effectiveness as a public participation method. In 2009, the Next Stop Design project was launched to test the crowdsourcing model in an online public participation experiment for bus stop shelter design. Drawing on the ideals of online democratic deliberation, this study interviewed 23 Next Stop Design participants via instant messenger about their perceptions of the project as an effective public participation activity. Findings suggest that crowdsourcing is a promising online public participation method that may complement off–line methods.

Contents

Introduction
Literature review
Next Stop Design
Method
Results and discussion
Conclusions

 


 

Introduction

In late 2009, the Next Stop Design (http://www.nextstopdesign.com) project was launched to test the crowdsourcing model in a public participation context for transit planning. Crowdsourcing is an online, distributed problem solving and production model largely in use for business. It leverages the collective intelligence of online communities by soliciting ideas and solutions for an organization from these communities through the form of an open call. Next Stop Design was an online competition where users submitted bus stop shelter designs and voted on the designs of peers to determine a best design. The project provided an opportunity to assess the effectiveness of crowdsourcing as an alternative public participation method for government administration. Online public participation alternatives are increasingly the focus of governments seeking to overcome deficiencies in traditional public participation methods, and the specific online method of crowdsourcing is a promising ideation tool for participatory decision–making.

On its face, Next Stop Design was a bigger success for its sponsoring organizations than anticipated. Web site analytics indicated that the project drew a large amount of site traffic from all over the globe. User registration data suggested that most of Next Stop Design’s 3,187 registered participants were regular bus riders and that more than two–thirds had never before engaged in a public participation process for city planning. The project brought in new voices and more voices in an engaging, interactive public participation process.

To test its effectiveness as a public participation tool, however, participants were interviewed to determine their perceptions of the project. In all, 23 Next Stop Design participants were interviewed via instant messenger for this study. Findings presented here suggest that Next Stop Design was perceived to be a generally effective online deliberative democratic process, with perceived weaknesses concerning the facilitation of the project through public voting and the equality of participants on the site in light of apparent voting fraud in the competition. Interestingly, too, participants offered constructive feedback during the interviews to improve the process as a whole, suggesting that this implementation of crowdsourcing to improve public participation was valuable both at the level of the problem at hand (bus stop shelter design) and at a meta–level of continuing to improve the online participation tool in general. I argue that governments can and should leverage the collective intelligence of online communities for the public good, and crowdsourcing offers a way to do just that.

 

++++++++++

Literature review

As an application of deliberative democratic theory in practice, traditional public participation programs in urban planning seek to cultivate citizen input and produce public decisions agreeable to all stakeholders. The tenets of deliberation and democracy are rooted in a very long history of political thought, “traced to Dewey and Arendt and then further back to Rousseau and even Aristotle” [1]. Yet it was the term “deliberative democracy,” which appeared in the early 1980s (Bessette, 1980), that touched off an intense scholarly interest in the concept, an interest that continues just as vibrantly today.

Many versions of deliberative democratic theory have emerged in the recent proliferation of research on the topic, and there have been review articles dedicated to organizing these diverse interpretations of the theory in recent years (Bächtiger, et al., 2010; Bohman, 1998; Chambers, 2003; Freeman, 2000; Thompson, 2008). Synthesizing the contributions of the “main advocates of deliberative democracy (Habermas, Rawls, Joshua Cohen, Michelman, Sunstein, Gutmann, Thompson, and Estlund, among others),” Freeman (2000) provides a neat, if lengthy, definition of “the primary features of the political ideal of deliberative democracy” that emphasizes the “freedom, independence, and equal civic status” of informed citizens deliberating on “measures conducive to the common good” [2].

Deliberative democracy is a normative theory that aims to prescribe ideals for which democratic practice can strive. It runs counter to individualistic conceptions of democratic participation and focuses more on “talk–centric” forms of participation emphasizing the common good, vibrant discussion among equals, and consensus [3]. Rather than aggregating the votes of self–interested individuals to declare a majority opinion, deliberative democracy strives for discussion among individuals about common interests. Voting can accompany effective deliberative process as a way to define an outcome of a deliberation, but the difference between deliberation–informed voting and self–interested, non–deliberative voting is that individuals consider these common interests in the former kind. Or, as Chambers (2003) notes, voting “is given a more complex and richer interpretation in the deliberative model than in the aggregative model” [4].

Mansbridge, et al. (2010) extend this discussion, and ultimately blur this distinction, by acknowledging that all individuals, no matter their commitment to the common good, are inherently self–interested to a degree. But they argue that the presence of specific individuals’ personal interests in a deliberation is necessary in the first place to map the contours of the public good. Also, in the end, if a vote or other non–deliberative mechanism occurs after a considered deliberation, at the very least, minority parties in a conflict will more likely accept the outcome as legitimate than they would in an aggregative voting scheme involving no prior deliberation.

Theoretical development in deliberative democracy has outpaced empirical research, and there is some claim that theories and empirical research about the subject are disconnected (Nino, 1996; Thompson, 2008). The existing empirical findings are mixed, but some core trends are emerging from a variety of studies on the practice, potential, and efficacy of deliberation. Delli Carpini, et al. (2004) enumerate these trends in the empirical literature:

  • “enough Americans engage in public talk” and seem equipped and willing to engage in democratic deliberation;

  • democratic deliberation “can lead to some of the individual and collective benefits postulated by democratic theorists”;

  • the “Internet may prove a useful tool in increasing [deliberation’s] use by and utility for citizens”; and,

  • “the impact of deliberation and other forms of discursive politics is highly context dependent.” [5]

These first two trends indicate that there may be motivation among citizens to accept democratic deliberation and that deliberation may actually improve upon other forms of democratic participation. The third trend above hints at the potential for new media technologies to enlarge the capacities of deliberation, which is the thrust of this study. The fourth trend reminds us that there is no perfect recipe for deliberative democracy, and that each instance requiring public input must be individually considered according to its specific parameters. Landwehr (2010) thinks understanding the “context conditions for successful and democratic deliberation ... remains the most important challenge for deliberative theory and deliberative politics” [6]. This point also speaks volumes to the practical importance of designing and moderating deliberative spaces and opportunities according to a given issue and citizenry. Some scholars in recent years have even begun to propose practical implementations, blueprints, and policies for deliberative democracy, following the trajectory of theory and empirical research. Leib (2004), for instance, proposes an entirely new fourth branch of government in the U.S. — a popular branch — that would bring panels of citizens into deliberative engagements to craft policy and respond to administrative needs.

I turn now to a specific context where deliberative democratic principles are attempted in practice: public participation programs for urban planning. A robust body of planning literature has acknowledged the benefits of public participation in planning processes (e.g., Creighton, 2005; Forester, 2006; Hou and Kinoshita, 2007; Pimbert and Wakeford, 2001). At most, public participation can be seen as a logical extension of the democratic process in more local, direct, deliberative ways (Pimbert and Wakeford, 2001). And at the very least, involving citizens in the planning process helps ensure a plan that will be more widely accepted by its future users (Brody, et al., 2003; Burby, 2003; Miraftab, 2003). Public participation programs in urban planning also value non–expert or non–mainstream knowledge brought into the creative problem solving process of planning. Participation is the act of creating new knowledge, contributing new perspectives to the planning process, and diffusing knowledge to others in the process (Hanna, 2000). Van Herzele (2004) found that inclusion of non–expert knowledge was beneficial to the planning process in general, since the perspectives of individuals outside of the professional bubble of urban planning can (re)discover creative solutions that could work in a specific local context. Local knowledge and non–expert knowledge add the perspective of the future user of a designed space and the insights about environment and place that the planning discipline might never have approached or might have already forgotten (Burby, 2003; Laurian, 2003).

Though they have generally served us well in the past, traditional public participation programs, which include workshops, town hall meetings, hearings, and design charrettes, encounter a number of hindrances. These hindrances include the difficulty of enlarging the process to include a diverse representation of citizens (Beebeejaun, 2006); minimizing the influence facilitators may have on participants due to their personal facilitation style (Carp, 2004); managing the intimidating presence and influence of vocal, powerful special interest groups at meetings (Hibbard and Lurie, 2000); and, accounting for the willingness of some participants to contribute at all due to interpersonal dynamics, identity politics, or intimidation of peers “shouting down” other citizens in meetings (Campbell and Marshall, 2000; Hou and Kinoshita, 2007; Innes, et al., 2007; Urbina, 2009).

No public participation method is perfect, but when we consider the medium of the Internet, for instance, where anonymity for users is available and where body language, identity politics, and interpersonal power dynamics are absent or changed, we can begin to ameliorate some of the common pitfalls of traditional public participation programs. Brabham (2009b) makes the case for online public participation methods, and specifically crowdsourcing, as a way to improve upon traditional methods.

The Internet enables a kind of networked, creative thinking through its hypertext structures. Other aspects of the Internet that make it an ideal medium for facilitating creative participation include its temporal flexibility, reach, anonymity, interactivity, and its ability to carry every other form of mediated content. The Internet is an instant communications platform, where messages, and thus idea exchange, can travel so fast along its channels that the medium works in effect to virtually erase the issue of time, accelerating creative development. Furthermore, the Internet has a more or less global reach, or at least it can have a thoroughly global reach. This means that communication can take place between people in different places rapidly. Coupled with the virtual erasure of time, this global character of the Internet works to also erase space. Carey (1989) was among the first to ponder the cultural transformations and the societal capabilities of communications technologies unmoored from time and space, noting that inventions like the telegraph that accomplished this erasure worked to unite nations in common cultural visioning.

In contrast to the speed of the Internet is the fact that the Internet is at the same time an asynchronous mode. That is, online bulletin board systems and similar applications enable users to post commentary and ideas to a virtual “location” at one point in time, and though the speed of the Internet tends to make users hasty in their online posts, asynchrony allows other users to engage those thoughts at much later points in time in measured deliberation. Much like the leaving and taking of notes on a bulletin board in a town square, the Internet can foster a sense of ongoing dialogue between members of a community without those members having to be present at the same time (Ostwald, 2000). This capability of the Internet is already being realized in some urban planning projects, as posting podcasts and meeting minutes on planning project Web sites is an exploitation of the Internet’s asynchrony and virtual permanence, particularly if these kinds of project Web sites are coexistent with online bulletin board systems.

Planning decisions are not about the will of the simple majority. They are about the ways in which communities provide qualitative commentary on how they want to see their future built environment. In an online context, individuals make qualitative input available primarily through online bulletin board systems and other modes of asynchronous communication. Ideally, according to the principles of deliberative democracy, individuals incorporate discussion and exchange as they develop a series of individual solutions to contribute to a commons. The asynchronous nature of the Internet is important for this development. Taken together, the speed and asynchrony of the Internet make for a temporal flexibility, the medium conforming to the needs and uses of the particular user, converging different speeds and usage patterns together in a collaborative project online that may be either synchronous (“real time”) or asynchronous.

Furthermore, the Internet is an anonymous medium. Users are able to develop their own online identities largely on their own terms, or they can choose to remain anonymous entirely. In a chat room or bulletin board system, for example, people can develop whole new personas or design entirely differently–bodied avatars to represent themselves and their interests. In line with much of the scholarly literature on nonverbal communication, Campbell and Marshall’s (2000) discovery that people’s body language, positioning in the space of a room, and small talk work to “script” the ensuing power dynamics of a planning meeting is relevant here. In an online environment, people are free to contribute to online discussions and the vetting of ideas without the burden of non–verbal politics. That is to say nothing of the very real power inequities at play with embodied forms of difference, such as race, gender, and (dis)ability, inequities buttressed many times over by empirical research in communication, sociology, health, psychology, and other disciplines. The medium of the Internet can work to liberate people from the constraints of identity politics and performative posturing by endowing users with the possibility for anonymity in participatory functions (Sotarauta, 2001). They can become, as Suler (2004) claims, “disinhibited” and expressive.

Finally, the Internet is an interactive technology and a site of convergence, where all other forms of media can be utilized. Rather than the simple transmission mode of information native to “older” forms of media (e.g., television, radio, newspaper) and much policy, the Internet encourages ongoing cocreation of new ideas. Content on the Internet is generated through a mix of bottom–up (content from the people) and top–down (content from policy–makers, businesses, and media organizations) processes, as opposed to solely a top–down model. To some, the Internet has many shortcomings, including the ways in which the Internet may alienate us from our neighbors interpersonally and the ways some companies seek to position Internet users as consumers ripe for profit (Bugeja, 2005; Putnam, 2000). In a “Web 2.0” era of increased content creation, though, Internet users are becoming particularly savvy at broadcasting their own ideas, uncovering buried information, and remixing previous ideas and content into new, innovative forms. Internet users are potentially problem solvers, are potentially creative. We should turn to the Internet to transform the public participation process, to enlarge our narrow perspective on how citizens actually participate in democracies today (Mack, 2004).

What may be the most promising about the Internet in a democratic sense is that not only does the Internet foster communication and collaboration among citizens, but if designed properly, a deliberative process online can be directed in such a way as to leverage collective intelligence from many participants for the purpose of solving a distinct problem. Terranova (2004) writes that the Internet is an ideal technology for distributed thinking because the Internet is “not simply a specific medium but a kind of active implementation of a design technique able to deal with the openness of systems” [7]. Fischer (2002) echoes this sentiment. That is, online deliberative processes need not just be “talk–centric,” but they can be purposefully designed to solve problems presented by the state. It is in this spirit that Next Stop Design was created and that the crowdsourcing model in particular was chosen for the project.

Crowdsourcing is an online, distributed problem solving and production model used largely by online businesses since 2000 (Brabham, 2008; Howe, 2008, 2006). The success of the crowdsourcing model depends on the assumption that online communities have “collective intelligence” (Lévy, 1997) or “crowd wisdom” (Surowiecki, 2004), and empirical research supports this assumption. Page (2007) found that problem solving processes benefit from cognitively diverse communities, even communities of non–experts, and Terwiesch and Xu (2008) found that “ideation problems” dealing with the generation of unique, creative ideas are well–suited to broadcasting to an online community for solving. Of the four types of crowdsourcing, the “peer–vetted creative production approach” is the best suited to online public participation processes (Brabham, 2012a; Friedland and Brabham, 2009). In this approach to crowdsourcing, an organization issues a challenge to an online community. Individuals in this community may then submit designs or solutions to address the challenge, and individuals are also able to vet the submissions of peers. Notable examples of this approach include t–shirt company Threadless.com (http://www.threadless.com) and user–generated advertising contests, such as Doritos’ Crash the Super Bowl contest (Brabham, 2009a). The logic of this approach is that by opening up the creative phase of a designed product to a potentially vast network of Internet users, some superior ideas will exist among the flood of submissions. Further still, the peer vetting process will simultaneously identify the best ideas and collapse the market research process into an instance of firm–consumer co–creation. It is a system where a “good” solution is also the popular solution, and a solution the market will support, not unlike the outcomes sought in public participation programs.

Experiments in online deliberation have focused on replicating, supplementing, or even replacing the function of face–to–face democratic governance in a mediated arrangement. The understanding in these online deliberation ventures is that everyday citizens feel alienated from the process of democracy (and indeed also from their elected representatives), and the Internet may motivate citizens and bring their input back into the system (Macintosh, 2006). The features of the Internet are instructive here: the reach of the Internet allows more and far–flung citizens to engage in the democratic process; citizens can come and go in the process at their own convenience and participate at their own pace due to the temporal flexibility of the Internet; anonymity afforded by the medium may encourage citizens to express their opinions freely and without fear of retribution; and, the interactive and media–converged platform of the Internet allows for rich, cognitively engaging contributions to the process of democracy.

Despite the promise of radically transforming governance with new technologies, though, Francis McDonough (as cited in Noveck, 2003) notes that there have so far been just “six generally accepted phases of e–government:

  1. providing information;
  2. providing online forms;
  3. accepting completed online forms;
  4. handling single transactions;
  5. handling multiple, integrated transactions; and,
  6. developing intergovernmental projects that require the restructuring of the government to allow the delivery of new integrated services” [8].

In other words, the potential for the Internet to turn citizens into creative collaborators with government rather than just users of government services online has mostly not yet been realized.

An anthology was published in late 2009 that developed out of a series of conferences on online deliberation (Davies and Gangadharan, 2009). Its chapters, from leading thinkers in the field, come together in much the same way literature on deliberative democracy in general has come together over the years. That is, the contents of this anthology offer a collection of mixed results, context–specific lessons learned from online deliberation, and idealistic visions for online deliberation’s future. Perhaps the most valuable take–away section of the anthology — and indeed this claim could be applied to much of the body of literature on online deliberation — is the part focused on the design of online deliberation tools. Attempting to synthesize the whole of the literature on online deliberation, Gangadharan (2009) notes that

online deliberation can be understood as a sociotechnical system that is coordinated or managed by a government institution, news outlet, civil society organization, corporation, educational body, or other institution (or set of institutions). Apart from the question of who manages such a project or endeavor, this level of online deliberation entails choices about the goals of deliberation, the software used to achieve those goals, the platforms that host the online deliberation experience, the modality of the user experience, the way in which participants are recruited, the types of participants being targeted, the context and scale of the user experience, the evaluation of deliberative goals, and the economics and managerial style of the deliberative endeavor. [9]

Importantly, then, online deliberation systems and tools are hierarchically managed, have goals for participants, and are sociotechnical arrangements, meaning there is an emphasis on the ways humans and technologies work together toward these goals (as opposed to technologies operating autonomously or artificially and as opposed to humans merely using technologies to facilitate typical face–to–face processes). Fundamentally, online deliberative practices weave deliberative democratic principles together with the collaborative and communicative capabilities of new media technologies under the direction and authority of government institutions. Importantly, all of this is possible through the design of online deliberation tools from the outset (Noveck, 2003).

Deliberative democracy enjoys “subjective legitimacy” that “consists of the generalized belief of the population in the moral justifiability of the government and its directives” [10]. In other words, the views of the people participating in a deliberative democracy work to legitimize a regime’s design and function through continued participation. Thus, to appraise a deliberative democratic tool or process, one could gather input from citizens and compare attitudes about the process against a set of normative ideal features of deliberative democracy.

Legal scholar Beth Simone Noveck wrote in 2003 of the need to design truly deliberative spaces in cyberspace. Her call for better processes and systems to improve governance and democracy in this article was later echoed in a book–length treatment (Noveck, 2009), and her authority on the topic of e–democracy and her direction of the Peer–to–Patent project landed her an appointment in the Obama Administration as Deputy Chief Technology Officer for Open Government. Her principles for ideal online democratic process are enumerated here as a functional heuristic and provide the concepts on which I base this study.

I use Noveck’s list of ideals for online deliberative democracy in this study, rather than those of another scholar, for three reasons. First, Noveck’s reflections on ideal online deliberative democratic process in light of her involvement with the successful public crowdsourcing project Peer–to–Patent and her role in President Obama’s groundbreaking initiatives in government technology, participation, and transparency make her a leading voice on this issue. Second, she positions her research on deliberative democracy from the standpoint of a lawyer interested in issues of new media technology in governance, and her practical activities in deploying online deliberative democratic tools have informed her theoretical contributions. Her work is that of an active critical media designer, not just a philosopher, so her distillation of ideal features reflects issues of both the theory and practice of online deliberative democracy. Third, and most importantly, her ideal features of deliberative democracy assemble the major topics from all of the current literature. Chambers (2003), Freeman (2000), Mansbridge, et al. (2010), Delli Carpini, et al. (2004), and other scholars each write about one or more of the ideal features of deliberative democracy, but Noveck’s (2003) work is arguably the best recent single summary of the literature. She even points out that “[t]hough many theorists extol [deliberative democracy’s] virtues, rarely do commentators define what it actually is and what features comprise a deliberative process,” and then proceeds with defining these “building blocks of deliberation [to] allow us to construct participatory processes” [11]. Her comprehensive list of ideals is reflective of the major trends in deliberative democratic theory and functions as an effective heuristic with which to appraise the effectiveness of an online deliberative democratic tool or process.

The reason we have yet to realize the full potential of new media technologies for democratic processes, according to Noveck (2003), is that “[t]he spaces we inhabit in cyberspace currently are constructed around the goals of commerce” and that these “[v]alue choices translate into design choices” [12]. That is, because the Internet is a largely privatized technology and many users use the Internet largely for e–commerce and transactional activities, we have come to conceive of this technology as only capable of facilitating those activities and have designed e–government services in that image. To achieve any sort of effective deliberation online, and thus an effective democratic process online, Noveck (2003) offers 11 ideal features. According to Noveck (2003), these processes must be designed to be:

  • Accessible — “the space in which [deliberation] occurs — whether physical or virtual — has to be available to as wide a range of participants as possible”;

  • Free of censorship — “the space needs to safeguard freedom of thought and expression”;

  • Autonomous — “the process must not treat [participants] as passive recipients of information, but as active participants in a public process”;

  • Accountable and relevant — “members of a community engage with one another in accountable and reasoned public discourse” and “cannot be anonymous to one another”;

  • Transparent — “the structure and rules of the space must be public so that citizens know who owns and controls the space, whether monitoring is taking place, and the origin of any information contributed to the discussion”;

  • Equal and responsive — “[i]n the constructed space, all participants must be equal players with like opportunities for access and voice” and “[t]he architecture cannot privilege one group over another”;

  • Pluralistic — “[r]ules or technology can be enlisted to regulate the space for deliberation” so that “viewpoints representing a broad spectrum are clearly expressed”;

  • Inclusive — “[e]ach participant must at least have the chance to be heard. Yet at the same time, a deliberative forum must be inclusive and open to all members of the relevant community; it cannot be [both] exclusionary and democratic”;

  • Informed — “deliberative dialogue cannot be divorced from information, and participants must have access to a wide variety of viewpoints in order to make effective and educated decisions”;

  • Public — a dialogue “must be open, accessible, and explicitly dedicated to the interests of the group, rather than any individual or particular interest group”; and,

  • Facilitated — some mechanism for “[m]oderation is essential to managing the work of groups or teams online or off” and “[t]he only way to manage the competing voices of a large number of participants is to facilitate the dialogue, highlighting what is productive and suppressing what is destructive” [13].

Understanding if individuals perceive these 11 ideals manifesting through a given online participation process, then, is to understand how closely a designed system aligns with an ideal online deliberative democratic process. Or, put simply, this degree of alignment may indicate how effective such an online process is in terms of democracy and public participation and whether such a process is “subjectively legitimate” (Nino, 1996). This study operationalizes these ideals in a series of interviews with participants from Next Stop Design in order to gauge whether participants viewed the project as an effective online participation process.

 

++++++++++

Next Stop Design

With funding from the U.S. Federal Transit Administration (FTA) and in cooperation with the Utah Transit Authority (UTA), the Next Stop Design Web site was launched 5 June 2009 (see Figure 1). On the site, participants could register a free account by completing a registration process that included questions about past bus ridership, past public participation in urban planning issues, and demographic information. Once registered, participants could submit designs for a bus stop shelter for a Salt Lake City bus stop. Registered participants could also comment and cast a 1–5 point vote for each submitted design in the competition (see Figure 2). At the end of the project on 25 September 2009, the three designs with the highest average scores were declared the winning designs (see Figure 3).
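For readers curious about the mechanics, the winner–selection rule described above reduces to averaging each design’s 1–5 votes and taking the three highest averages. The following Python sketch illustrates that logic only; the vote records and design names are made up, and this is not the code that ran on the site.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical vote records: (voter_id, design_id, score), where score is 1-5.
    votes = [
        ("u1", "design_a", 5),
        ("u2", "design_a", 4),
        ("u1", "design_b", 4),
        ("u3", "design_c", 4),
        ("u2", "design_c", 3),
    ]

    def rank_designs(votes, top_n=3):
        """Average each design's 1-5 scores and return the top_n (design, average) pairs."""
        scores = defaultdict(list)
        for voter_id, design_id, score in votes:
            scores[design_id].append(score)
        averages = {design: mean(s) for design, s in scores.items()}
        return sorted(averages.items(), key=lambda item: item[1], reverse=True)[:top_n]

    print(rank_designs(votes))  # [('design_a', 4.5), ('design_b', 4), ('design_c', 3.5)]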

 

Figure 1: Screen shot of the home page of NextStopDesign.com (http://www.nextstopdesign.com).

 

 

Figure 2: Screen shot of a rating page for a user–submitted bus stop shelter design.

 

 

Figure 3: Top three winning designs: Top: First place — “Folding Bus Stop.” Middle: Second place — “Stop to Move.” Bottom: Third place — “Smart Stop.”

 

Google Analytics scripts were appended to each page of the Next Stop Design Web site to track basic traffic and user data on the site, such as the number of visitors, pages viewed, and geographic location of visitors. Table 1 describes Next Stop Design’s basic site traffic data based on Google Analytics scripts, and Table 2 describes Next Stop Design’s registered users based on information gathered during the registration process.

 

Table 1: Basic traffic data from Next Stop Design.

Basic site traffic
Site visits: 29,855
Page views: 316,141 (10.6 pages viewed per visit)

Site visits by geography
Countries/territories visiting: 127
Top countries in terms of visitors:
  U.S. (16,045 visits; 53.7% of all visits)
  U.K. (1,920; 6.4%)
  India (1,174; 3.9%)
  Greece (991; 3.3%)
  Canada (897; 3.0%)
U.S. states visiting: 50 (plus D.C.)
Top U.S. states in terms of visitors:
  New York (3,379 visits; 21% of all U.S. visits)
  California (2,245; 14.0%)
  Utah (1,250; 7.8%)
  Texas (778; 4.8%)
  Louisiana (745; 4.6%)
Cities in Utah visiting: 29
Top Utah cities in terms of visitors:
  Salt Lake City (716 visits; 57.3% of all Utah visits)
  Midvale (380; 30.4%)
  Orem (20; 1.6%)
  Provo (20; 1.6%)
  Logan (17; 1.4%)

 

 

Table 2: Registered user data from Next Stop Design.

Registered users, designs, and votes
Registered users: 3,187
Bus stop designs submitted: 260
Total votes cast in the contest: 15,276
Fraudulent votes cast: 4,218 (27.6% of all votes)
Legitimate votes cast: 11,058

Registered users by location
Registered users from the U.S.: 1,448 (45.4% of all registered users)
Registered users from Utah: 52 (3.6% of U.S. registered users)

Registered users by race and age
Racial/ethnic breakdown of U.S. registered users:
  White, not Hispanic (935 users; 64.6% of U.S. registered users)
  Prefer not to disclose (225; 15.5%)
  Hispanic or Latino (129; 8.9%)
  Asian or Pacific Islander (73; 5.0%)
  Multiracial (42; 2.9%)
  Black or African American (21; 1.5%)
  Other race/ethnicity (15; 1.0%)
Racial/ethnic breakdown of Utah registered users:
  White, not Hispanic (43 users; 82.7% of Utah registered users)
  Prefer not to disclose (4; 7.7%)
  Asian or Pacific Islander (2; 3.8%)
  Hispanic or Latino (1; 1.9%)
  Multiracial (1; 1.9%)
  Other race/ethnicity (1; 1.9%)
Range of ages of users: 13–85 years old
Age breakdown of registered users:
  <20 years old (9% of all registered users)
  20–29 years old (49%)
  30–39 years old (21%)
  >40 years old (21%)

Registered users by ridership and prior public participation experience
Rode the bus “more than once a week” or “every day”: 48%
Rode the bus at least once a week: 57%
Had never attended a public participation meeting before: 68.5%
 

Cheating in the contest was a major concern. It was discovered that fully 27.6 percent of all votes cast in the competition were a result of a handful of users who had created several dummy accounts. A rigorous method for determining fraudulent accounts and votes, which included examining voting patterns and geographic locations of IP addresses, led to the deletion of those accounts and votes to arrive at a legitimate ranking of winners in the competition. The large number of participants and the rampant cheating were a bit surprising, since there was no form of compensation offered to the winners; winning the competition did not guarantee construction of the design by the UTA (and UTA still has no plan to build any of the designs), nor was there a cash prize. The only reward offered by the competition to the winners was acknowledgment of their win in the form of a press release to media and announcement on the Web site itself.
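The article does not detail the screening procedure beyond examining voting patterns and the geographic locations of IP addresses. As a minimal sketch of one heuristic of that kind, the Python snippet below flags clusters of accounts that share an IP address and whose votes all target a single design. Every field name, account, and threshold here is a hypothetical assumption for illustration, not the project’s actual method.

    from collections import defaultdict

    # Hypothetical account records: account_id -> (registration_ip, {design_id: score}).
    accounts = {
        "a1": ("198.51.100.7", {"design_42": 5}),
        "a2": ("198.51.100.7", {"design_42": 5}),
        "a3": ("198.51.100.7", {"design_42": 5}),
        "a4": ("203.0.113.9", {"design_42": 4, "design_17": 3, "design_8": 5}),
    }

    def flag_suspect_accounts(accounts, min_cluster_size=3):
        """Flag accounts sharing an IP whose votes all target the same single design."""
        by_ip = defaultdict(list)
        for account_id, (ip, votes) in accounts.items():
            by_ip[ip].append((account_id, votes))
        suspects = set()
        for ip, group in by_ip.items():
            if len(group) < min_cluster_size:
                continue
            # Suspicious if every account in the cluster voted for exactly one, identical design.
            targets = {tuple(sorted(votes)) for _, votes in group}
            if len(targets) == 1 and len(next(iter(targets))) == 1:
                suspects.update(account_id for account_id, _ in group)
        return suspects

    print(flag_suspect_accounts(accounts))  # expected: {'a1', 'a2', 'a3'} (order may vary)

In practice, flags like this would only prompt a closer manual review of the accounts in question rather than automatic deletion.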

 

++++++++++

Method

Studying an online community like the one at Next Stop Design requires awareness of community norms, and studies should be conducted by researchers with substantial experience participating in myriad online community forms. It is important that online studies do not burden participants (Kaye and Johnson, 1999) and that they do not violate an online community’s expectations for topical relevance (Swoboda, et al., 1997) or its sense of privacy, tact, or politeness (Wright, 2005). A researcher familiar with the conventions of online communities is likely better suited to study these phenomena than a novice.

Conducting interviews via instant messenger (IM) programs was the most appropriate way to study the Next Stop Design community. Research that “explores an Internet–based activity such as ... online community” ought to be conducted online, since “research participants are already comfortable with online interactions” [14]. Online interviewing methods have begun to receive thorough scholarly treatment (e.g., Al–Saggaf and Williamson, 2004; Davis, et al., 2004; Kazmer and Xie, 2008; Lange, 2008; Mann and Stewart, 2000; O’Connor and Madge, 2001; Opdenakker, 2006; Stieger and Reips, 2008). IM interviewing

allows synchronous and semi-private interaction and can automatically record the interaction text. The ad hoc conversational nature of IM interviews lets them resemble oral interviews. As a result, developing emergent probes in IM interviews can be easier than in email. [15]

As Kazmer and Xie (2008) acknowledge, IM interviewing allows for essentially perfect transcription, since all interactions can be stored in logs and entire IM windows can be saved as HTML files. Additionally, this logging produces a data bank that is already clean, organized, and digital, making computer–aided analytic methods simple to execute. Critics of mediated interviews worry that affective data may be lost that may have existed in face–to–face interviewing. This is partially true, because non–verbal cues, facial expressions, and tone of voice are lost in the mediated environment. However, Kazmer and Xie (2008) note that participants are still able to express themselves in IM interviews, but that this expression occurs through online written conventions, such as emoticons, font changes, italics, bolding, and other methods [16]. In fact, the anonymous veil afforded by the Internet could even encourage participants to feel less inhibited and express themselves more honestly, emotionally, and directly (Suler, 2004).

Participants were recruited from those who indicated a willingness to be contacted for a follow–up interview during the registration process on the Next Stop Design Web site. Out of 3,187 registered users, 950 indicated they were willing to be contacted for a follow–up interview, or 29.8 percent. Understanding how the diverse participants at Next Stop Design, some who submitted designs and some who just voted, perceived the effectiveness of the project in terms of public participation required a sample of interviewees that captured this breadth of participation. This necessitated a quota sample for interviewing. To gather a fair variety of interviewees, the quota was proportional such that the actual interviewed sample more or less reflected the makeup of registered users.

The proportional quotas I used to determine my sample from the population of registered Next Stop Design users willing to be interviewed (N = 950) related to the scope of my study. That is, participants’ perceptions of Next Stop Design as an effective online deliberation tool are likely affected by their level of involvement on the site. Participants who submitted several designs, participants who won the competition, and participants who merely registered and cast a few votes on the site may all perceive the project differently. Thus, to address this study, my quota included a proportional mix of more involved and less involved participants. One’s previous experience in public participation meetings and one’s bus ridership also factor into perceptions, so a proportional quota was also sought here. Across the sample, I sought proportional representation of registered users based on demographic factors. Despite the use of quota sampling to ensure a representative sample of participants on the site, however, it should be noted as a limitation in this study that only those willing to be contacted for an interview in the first place ended up participating in the study. Self–selection bias ought to be of greater concern to qualitative researchers even though they do not aim for generalizability in their work, but developing the quota frame for the selection of participants in this study was a step toward ameliorating some of this bias (Collier and Mahoney, 1996). This qualitative study, ultimately, was about deriving meaning and richness from those who participated in the Next Stop Design project, not about generalizing their perceptions to all contexts.
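To make the arithmetic behind a proportional quota concrete, the sketch below scales each stratum’s share of the registered–user population to a target number of interviews. The strata shown are illustrative assumptions rather than the study’s actual sampling frame (one stratum borrows the 68.5 percent figure reported above), and the 20–interview target anticipates the figure discussed later in this section.

    def proportional_quota(population_shares, target_interviews=20):
        """Scale each stratum's population share to a whole-number interview quota."""
        # Rounding can make quotas sum to slightly more or less than the target,
        # in which case the counts would be adjusted by hand.
        return {stratum: round(share * target_interviews)
                for stratum, share in population_shares.items()}

    # Illustrative strata drawn from the registration data reported above.
    shares = {"never_attended_meeting": 0.685, "attended_meeting": 0.315}
    print(proportional_quota(shares))  # {'never_attended_meeting': 14, 'attended_meeting': 6}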

Participants were sent an e–mail message to the e–mail address they supplied during registration in order to schedule an interview via an IM program of their choosing. Interviews remained true to the conventions of instant messaging, with an informal and courteous tone. Interview questions operationalizing concepts relating to Noveck’s (2003) 11 ideal features for online deliberative democratic processes were asked of participants. A general opening question asked if participants thought the project was effective, and their initial answer helped open this discussion and direct me to more detailed follow–up questions stemming from the ideal features. The Appendix shows the full slate of interview questions used in the study.

There was not a rigid plan for the order or wording of questions. The interviews were semistructured and proceeded like comfortable conversations where themes emerged through questions and specific probes, rather than like a survey with multiple–choice responses. I identified broad themes on the fly as a way to direct my line of questioning to more specific probes relating to Noveck’s (2003) 11 ideal features.

The IM interviews produced an automatically time–stamped transcript collection. Data analysis was an ongoing process, and I made marginal notes and other commentary in the digital transcripts after completing each interview. In a process similar to what Miles and Huberman (1994) and Lindlof and Taylor (2002) describe, these notes were helpful in developing emergent codes, which I then distilled into broad themes. Guest, et al. (2006) found that theme “saturation” in qualitative interviewing typically occurs within the first 12 interviews and that metathemes appear as early as the first six interviews, so new themes are unlikely to emerge after the first dozen or so interviews; in this study, no new themes were discovered after the thirteenth interview. I anticipated that 20 interviews would capture all themes for this study, and my proportional quota sample was initially built around this 20–interview target. A total of 23 interviews were eventually conducted in the study.
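One schematic way to picture the saturation logic just described is to track, interview by interview, how many emergent codes have not appeared in any earlier transcript; saturation is reached once that count stays at zero. The sketch below is illustrative only, with invented theme labels, and is not the coding procedure used in the study.

    def new_themes_per_interview(coded_interviews):
        """For each interview, count codes not seen in any earlier interview."""
        seen = set()
        counts = []
        for codes in coded_interviews:
            fresh = set(codes) - seen
            counts.append(len(fresh))
            seen |= fresh
        return counts

    # Hypothetical emergent codes assigned to the first five interviews.
    coded = [
        {"accessibility", "ease_of_use"},
        {"accessibility", "voting_fraud"},
        {"flaming", "voting_fraud"},
        {"accessibility"},
        {"transparency"},
    ]
    print(new_themes_per_interview(coded))  # [2, 1, 1, 0, 1]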

 

++++++++++

Results and discussion

Table 3 describes the 23 participants interviewed. Notably, only four of the participants were women. This is most likely attributable to the large numbers of architects participating in the competition and the relatively low numbers of women in the architecture profession. For instance, a 2009 study by the National Architectural Accrediting Board found that 41 percent of graduates of architecture programs are women; an American Institute of Architects (AIA) study found only 20 percent of licensed architects were women; and another AIA report found that only 27 percent of staff in architectural firms in the U.S. were women (Gregory, 2009).

 

Table 3: Basic information about interview subjects.
Note: * Letters have been used to represent participant names to protect individuals’ identities.
† Participant’s age at the time of the competition’s closing on 25 September 2009.
‡ Cities with state abbreviations are U.S. cities. Countries are noted otherwise.

Participant* | Sex | Age† | Hometown‡ | Public transit use frequency | Ever attended traditional meeting? | Number of designs submitted | Number of votes cast
A | M | 47 | Dublin, Ireland | > weekly | Yes | 1 | 1
B | F | 52 | Olympia, WA | Every day | No | 0 | 2
C | M | 25 | Istanbul, Turkey | Every day | No | 1 | 12
D | M | 33 | Santa Rosa, CA | Every day | Yes | 2 | 8
E | M | 26 | Ft. Worth, TX | Not used in past year | No | 0 | 1
F | M | 30 | Beijing, China | Every day | No | 0 | 48
G | M | 34 | Riverton, UT | Yearly | No | 0 | 0
H | M | 23 | Salisbury, MD | > weekly | No | 1 | 36
I | F | 36 | Baton Rouge, LA | Yearly | Yes | 0 | 3
J | M | 23 | Kansas City, MO | Monthly | No | 0 | 0
K | M | 21 | Honeoye Falls, NY | Monthly | No | 0 | 11
L | F | 26 | Buenos Aires, Argentina | Every day | No | 0 | 1
M | M | 34 | Brooklyn, NY | Every day | No | 1 | 37
N | M | 39 | San José de Guanipa, Venezuela | Monthly | Yes | 3 | 6
O | M | 28 | Delmar, NY | Every day | No | 0 | 12
P | M | 23 | West Jordan, UT | > weekly | Yes | 1 | 0
Q | M | 30 | Los Angeles, CA | Monthly | No | 1 | 2
R | M | 37 | Bhopal, India | Yearly | No | 2 | 1
S | F | 21 | Elkton, MD | Yearly | No | 0 | 1
T | M | 40 | Thessaloniki, Greece | Weekly | No | 0 | 1
U | M | 19 | Visoko, Bosnia and Herzegovina | > weekly | No | 1 | 0
V | M | 54 | Ardmore, PA | > weekly | Yes | 0 | 4
W | M | 29 | Thessaloniki, Greece | > weekly | No | 1 | 71
 

The 23 interviews were conducted over the course of 16.5 hours in March and April 2010 and generated a corpus of 83 pages of single–spaced transcripts for analysis. For the purposes of this study, all interview transcript excerpts are presented without correction [17].

According to Noveck (2003), the 11 ideal features of online deliberative democratic processes are in play when a project is accessible, free of censorship, autonomous, accountable and relevant, transparent, equal and responsive, pluralistic, inclusive, informed, public, and facilitated. Interviews with Next Stop Design participants revealed that most of these features could describe the Next Stop Design project in a positive light, though the project seemed most deficient in terms of its facilitation, how participants perceived the competition as equal and responsive, and the accountability and relevance of peers on the site. Most of these perceived deficiencies stemmed from two major issues: 1) perceptions of cheating and “popularity contests” and 2) flaming in comments on individual design submissions. I explore the 11 themes in logical groupings below.

Accessible and informed

The accessibility of the site was the most frequently mentioned feature about Next Stop Design. Generally, comments relating to this feature were positive, with many participants praising the site’s design and ease of use. The following comments illustrate this praise.

Participant H: I think the scope was quite clearly defined, and the goal was clearly laid out.
 
Participant J: The site was easily accessible which added to its effectiveness.
 
Participant R: the web site was pleasently lucid, handy, easy.

Some participants mixed praise for the accessibility of the site with some suggestions for improvement, however.

Participant Q: The website was easy to use
 Very user friendly
 The only downside I would say was that the requirement of the registration of the voters
 Which I understand it is necessary for the records
 But takes 5 minutes which may discourage people to vote.
 
Participant T: I wish only the images were a little bigger
So anyone could see more clear the general idea of the projects.

If accessibility is the idea that an online deliberative space should “be available to as wide a range of participants as possible” [18], then Participants Q and T point out that the site may have placed a burden on some participants. Participant Q’s concern is that the registration form, which was required to do virtually anything participatory on the site, may have been a bit long and demanding, especially for someone less familiar with English or someone who has little time to invest in new Web sites. Participant T’s concern is that images representing design submissions needed to be bigger, something with which older people or those who have difficulty seeing might struggle. Images could be clicked, which then enlarged them one by one on the screen. It is unclear whether Participant T was aware of this enlarging function, which suggests the function should have been displayed more prominently on the site. Aside from these two comments, nearly every participant explicitly mentioned that the site was easy to use and they had no problem navigating the different functions of the competition.

The issue of accessibility, however, is called into question a bit due to various “digital divisions,” inequalities concerning Internet access (Birdsall and Birdsall, 2005; Fox, 2005; Jones and Fox, 2009; Warschauer, 2002). Because Next Stop Design was an Internet–based project, it was not accessible to a significant number of Americans and an even larger number of individuals worldwide. As an online deliberative democratic process, it is effective in terms of accessibility, but because the medium of the Internet itself excludes some would–be participants, it cannot truly be called an overall accessible tool for deliberative democracy. This is why crowdsourcing, no matter how well executed, should never supplant traditional public participation methods. Rather, crowdsourcing should complement traditional methods.

Feeling informed in one’s interaction with an online deliberative process is a concept related to accessibility. Again, most participants indicated a level of satisfaction regarding the availability of information on the site.

Participant L: it was pretty easy to get the information you were looking for
 
Participant N: the information available it seemed to me appropriate to the objectives, for me it was a contest of ideas despite the fact that some proposals were conceptual and others were more developed, in general terms that information allowed the development of the idea requested.
 
Participant W: The site was really easy to use, easy to upload and to vote as well. The rules were clear as well.

However, just like with the issue of accessibility, two participants expressed a mixture of praise and criticism regarding the quality of information on the site.

Participant C: Hmm. Yes i can say that it was easy to use. But i also think that the datas about the zone should be more easy to access.
 
Participant H: I think the scope was clearly defined, and the goal was clearly laid out. I think that more information could have been given as far as the use/ intended multiples uses for the project though. Give more examples sites could have been a benefit as well. Overall the information was good though.

Further conversation with Participant C revealed that he wanted more information about the actual terrain the proposed bus stop was to be built upon, and Participant H would have liked more information describing the importance of the specific bus stop in the entire transit system, the amount of pedestrian traffic, ridership, and so on.

Generally speaking, the Web site performed well in the opinion of participants regarding accessibility and quality of information, but there is certainly room for improvement. What is most remarkable about the interview process with participants, and which can be seen in the comments from those participants offering criticism, is that they came forward with constructive suggestions for improvement. That is, participants interviewed for this study were quick to identify what was lacking, pointed out the flaws, but then also offered very helpful tips for making a future project better. This constructive criticism is apparent in transcript excerpts throughout the remainder of this study, and it demonstrates that participants saw themselves as active agents in the business of the site, both at the specific level of the competition and at the meta–level of the concept of the study. This quality, autonomy, is discussed in the next section.

Autonomous, free of censorship, and transparent

I discuss the features of autonomy, lack of censorship, and transparency together because these concepts connect neatly. Online deliberative processes are autonomous when participants are treated as “active participants in a public process” rather than “passive recipients of information” [19]. Lack of censorship as a concept is fairly self–explanatory, and transparency means that participants are aware of the rules governing the deliberative process, whether there is monitoring on the site, and what entity stands to benefit from the process. I connect these three concepts in a single discussion because they all deal with participant agency — the ability for participants to act freely, express themselves, and understand their relationship to the governing authorities over the project. Interview data indicated that autonomy and lack of censorship were realized through the Next Stop Design competition, but transparency was only mentioned by a single participant.

Several participants acknowledged their role as active agents in a public process rather than merely observers, and many participants spoke positively about this empowerment.

Participant W: I believe it is very important to activate citizens. More that, the more ides the better. At the moment in Athens Greece there is a competition about a bench. Anyone can participate. What s better than having the citizens of your city designing for you, not just the professionals, Everyone!
 
Participant G: I like it to participate even it wont make a difference.
it feels good to have my voice be heard.

Participant G expressed mixed feelings regarding autonomy. On the one hand, he enjoyed the ability to make himself heard through the competition, but he expressed doubt about his ability to affect the process in a meaningful way.

In addition to making his voice heard, Participant G also noted that he did not perceive any censorship on the site. In fact, he even stated that censorship rarely had a place in such a public project, except in cases of obscenity.

Participant G: I feel the comments shouldnt be censored unless they are being extremely vulgar.

Other participants felt similarly about valuing free speech on the site.

Participant L: I do like the idea of expressing freely ... It is rewarding for the one participating and for those who aren’t.
 
Participant H: The presence of any censorship was difficult to sense.

There was no software architecture in place to prevent certain forms of speech on the site. Participants were free to write all manner of words on the site without the interference of profanity filters, and no participants were censored by the Next Stop Design team based on anything they produced for the site. During the competition, the team received no complaints or requests to take down offensive comments from other participants, though it became evident during these interviews that some participants found flaming commentary annoying and not helpful. I am unaware of ways the site censored anyone, and participants seemed satisfied in this regard.

Participant Q indicated that knowing the competition was determined by popular vote rather than by a jury of experts was a “freeing” experience for him. He elaborated on this and was thus the only person interviewed for this study who touched on the issue of transparency in any real way.

Participant Q: I entered this because everytime I run for compeitions I know that there are no limiting factors like real clients
I knew this was a research program which provides more freedom.

No other participants seemed to outright indicate that they were aware Next Stop Design was primarily a research program, and no one explicitly acknowledged that the FTA, or the U.S. government in general, was funding the project. Most participants were aware that the bus stop was to be built for Salt Lake City, but despite the prominence of FTA and UTA logos on the Web site, only Participant Q seemed to address the issue. Users who visit sites to participate in the public business of a government entity ought to know what is going on. I am concerned by the lack of participants who seemed aware of Next Stop Design’s larger purpose as an experiment in public participation for FTA. However, given the many logos, the registration process and waiver, and the “About” page on the site, I am unsure what more could have been done to educate participants about the greater purpose of the project. It is my feeling that the majority of participants saw the project as a bus stop design competition and nothing more.

Pluralistic and inclusive

A pluralistic online deliberative space guarantees that “viewpoints representing a broad spectrum are clearly expressed” [20], and an inclusive space means that there is opportunity for all viewpoints to be heard. The two concepts are very similar, but pluralism emphasizes the importance of entertaining differing opinions while inclusivity simply invites diverse opinions. Architecturally, the Web site was open to anyone with an Internet connection capable of loading it; no barriers were put in place through technological means or rules to deny anyone the chance to participate. Indeed, the immense international involvement in the project suggests that people worldwide saw the site as a place where their opinions and designs were welcome. Yet there was one known respect in which the site was exclusive: it was U.S.–centric.

Many participants commented on the diversity of opinions represented on the site, and the following excerpts illustrate this sentiment.

Participant K: Everything I saw ranged from minimalist and simple to outrageous and unrealistic. I thought it was good to find a nice array of ideas and themes.
 
Participant W: My general view is that the competition had almost everything, amateur to professional design, and that was very interesting. More than that it was obvious that the projects came from different design backgrounds and the multicultural of the competition was the strongest point.

Some participants saw the variety of design styles as an indicator of diversity on the site, some pointed to the international involvement, and some cited the mix of amateurs and professional designers and architects. Still another participant mentioned that the project brief, the description of the parameters of the bus stop design competition, was inclusive from the start.

Participant B: Lots of good consideration given to the need for inclusiveness in the project brief, too, BTW.
My husband uses a wheelchair, and was one of the submissions particularly addressing that issue of access, so we paid attention to such things more than other folks might have.

A list of design considerations was included in the project brief on the site, among them an emphasis on making the eventual bus stop accessible to people with disabilities. While such a design consideration is mandated by U.S. law, for Participant B and her husband it was a welcome nod toward inclusivity in the brief and may have helped other participants acknowledge diverse abilities in their designs.

However, despite the few mentions of inclusivity in interviews and the Web analytics and registration data suggesting international interest in the site, it is important to remember the U.S. slant of the project and how this made the project somewhat exclusive from the outset. The Next Stop Design Web site was in English, and the registration process included a question about race/ethnicity with distinctly American categorical constructions modeled on the U.S. Census. These and other U.S.–centric markers likely worked to exclude at least some visitors to the site. These interview data, unfortunately, cannot account for those missing voices, and I temper the claim of inclusivity accordingly. Future studies would surely need to be designed from the beginning to be internationally inclusive and multilingual. Though the success of public participation programs may be culture–specific (Abram and Cowell, 2004; Alfasi, 2003), this does not mean that findings from the present study would not apply in other cultural contexts. Rather, what I am suggesting is that the wide–reaching tool of the Internet complicates the targeting of specific groups in online participation contexts, and so crowdsourcing sites should be carefully designed to be globally inclusive (Brabham, 2012b).

Accountable and relevant and public

Noveck (2003) acknowledges that her ideal of accountability and relevance is the most “controversial value–choice and one that is surely not appropriate for all purposes” because it means that individuals ought not be anonymous to one another [21]. Suler (2004) suggests that anonymity online makes individuals feel “disinhibited,” for better or worse, and in the best case for an online public participation process, this could mean that a shy individual or one who might fear retaliation for speaking out would at last find a way to contribute to the good of the common project. But Noveck (2003) asserts that participants “must express themselves publicly as members of the community of dialogue” and be accountable for their contributions in order to have productive deliberation [22]. This is a point of tension and one that manifests in the architecture of an online space. Siding with the productive potential of online disinhibition, the Next Stop Design team built the site to allow users to remain anonymous to one another during the competition. In terms of accountability and relevance, then, participants’ perceptions seemed mixed; some valued the ability to remain anonymous, and some thought anonymity destroyed accountability, encouraged rampant flaming, and made the competition not worth participating in any further. First, here is some of the more positive commentary on this issue:

Participant G: I felt they were accountable.
As far as I could tell there were no hidden agendas.
 
Participant I: I think the anonymity probably works well for those who are timid about their drawings or images.

Participant O, however, doubted whether anonymity was really available to participants in the first place, since he suspected many people on the site came to the site to vote for a friend’s submission. Still, he trusted the voting system to level out any effects of exposure on the site and bring the deserving design to the top of the heap. In other words, Participant O thought the voting mechanism itself allowed for accountability, even if individuals were technically anonymous to one another on the site. Another participant explicitly addressed this veil of anonymity and what happens when one steps out from behind it.

Participant M: I felt anonymous as long as I didn’t post comments. However, comments were linked to registered users (if I remember correctly) so my comments about other designers’ (who may have been super–sensitive to critique) submittals I believe led to some malicious backlash for my submittal.

Again, participants who noticed flaws in the design of the site were ready to offer suggestions for improvement, as Participant D demonstrates below.

Participant D: One of my commenters specifically mentioned that he was posting designs under one account while commenting under another one to keep safe from “retaliation”.
An unfortunate side effect of anonymous accounts.
I am a fan of less anonymity when it comes to something publicly funded with a potential for actual construction. User profiles with voting records would go a long way in that direction. Also having tiered accounts ala Amazon’s “verified” status for people willing to submit to a higher standard of openness about their identity with the organizer, but still remain anonymous to the public.
And when I am feeling medival, a public outing of cheaters.

Participant D’s suggestion to follow Amazon’s lead with accountability is astute. Johnson, et al. (2004) explored the notion of accountability in Internet governance and suggested that reputational rating systems allow individuals the opportunity to make peers accountable to one another while still remaining anonymous. As is the case at Amazon, as well as eBay, some news sites, and many other popular sites, an architecture exists that allows peers to assign ratings to one another’s anonymous screen names based on the value that individual brings to the commons in some way. At eBay, for instance, reputable sellers encourage their buyers to leave feedback about the transaction, and the more positive feedback a seller accumulates over numerous transactions, the more prestigious that user’s accompanying star icon becomes. And depending on the site, various reputational icons may become keys that allow certain behaviors and functions on the site, keeping less reputable individuals from engaging in some of the more visible and important business of a community until they have earned their stripes. If a community recognizes these kinds of merit badges and reputational seals in the form of peer–voted icons on screen, then a functional system exists that holds individuals accountable to one another without requiring them to disclose their identity. This is a distinction between authentication and identification (Johnson, et al., 2004). Unfortunately, Next Stop Design was not built to allow this kind of system. Had it included such a system, presumably the project would have fared better when appraised against Noveck’s (2003) 11 ideals for online deliberative democracy.
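
To make this architectural idea more concrete, the brief sketch below (written in Python purely for illustration; Next Stop Design was not built this way, and the class name, rating values, and vote threshold are hypothetical) shows how a peer–reputation ledger might authenticate participants’ trustworthiness without identifying them, gating the more consequential site functions behind earned reputation.

```python
# Hypothetical sketch (not the actual Next Stop Design codebase): a minimal
# peer-reputation ledger in the spirit of Johnson, et al.'s (2004) distinction
# between authentication and identification. Screen names stay pseudonymous,
# but accumulated peer ratings unlock higher-visibility actions.

from collections import defaultdict

class ReputationLedger:
    def __init__(self, vote_threshold=5):
        # Assumed number of net positive peer ratings required before a
        # pseudonymous account may cast competition votes.
        self.vote_threshold = vote_threshold
        self.ratings = defaultdict(list)  # screen_name -> list of +1/-1 ratings

    def rate(self, rater, target, value):
        # Peers rate one another's contributions; identities remain screen names.
        if value not in (-1, 1):
            raise ValueError("rating must be +1 or -1")
        if rater == target:
            raise ValueError("self-rating is not allowed")
        self.ratings[target].append(value)

    def score(self, screen_name):
        # Net reputation attached to a screen name.
        return sum(self.ratings[screen_name])

    def may_vote(self, screen_name):
        # Accountability without identification: only accounts that have earned
        # enough peer trust unlock the more consequential site functions.
        return self.score(screen_name) >= self.vote_threshold


if __name__ == "__main__":
    ledger = ReputationLedger(vote_threshold=2)
    for peer in ("userA", "userB", "userC"):
        ledger.rate(peer, "designer42", +1)
    print(ledger.score("designer42"), ledger.may_vote("designer42"))  # 3 True
```

In a scheme like this, the screen name remains a pseudonym, but the peer ratings attached to it carry the accountability that Noveck’s ideal demands.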

Perhaps in part because participants were anonymous, some perceived the process as not remaining true to the ideal of being public, which is a focus on collective interests rather than individual interests or the interests of specific groups. Participant E felt the process was hijacked by individuals who recruited friends to inflate their votes, resulting in a special interest group of sorts. At least six other participants expressed similar opinions.

Participant E: this falls into the black hole of the facebook genre. The positive/negative people always tend to be influence by others and if you want someone’s true opinion the comments will prob sway that statistically in one direction or the other. But, then again you might want that, kinda gives a middle school edge to the thing. Vote by mob.
Does Bob get 200 of his friends that are part of his soccer to team vote on him and what are their background?
 
Participant V: People with a lot of contacts get their friends to vote and whoever has the biggest network gets the biggest number of votes.

On the one hand, it could be argued that special interest groups have always dominated traditional public meetings, coordinating large showings at hearings and commissioning experts to intimidate everyday citizens (Hibbard and Lurie, 2000). Next Stop Design, then, simply replicated this less–than–ideal situation online in this competition. On the other hand, it could be argued that something public and for the common good is really just something that a plurality of special interests agrees to. In other words, the distinctions between special interests, influence among personal networks, and the commons may be artificial or irrelevant in the scope of public involvement. What is the difference between politicking to win influence and backing the most influential idea? And could anything truly be done in a technological/architectural sense to protect against this in an online space? I am inclined to think the ideal of being public in an online deliberative space is not something that can ever truly be guaranteed by the moderator or architect of a Web site. This ideal seems to reach beyond the control of a designer of a deliberative space and seeks to dictate ideal behaviors for citizens involved in the process. On a practical level, though, whether this ideal was achieved in the Next Stop Design case was doubtful in the eyes of participants interviewed for this study.

Facilitated and equal and responsive

Perhaps the most controversial feature of Next Stop Design was the voting mechanism. The Next Stop Design team identified cheating in the competition early on, and a methodology involving IP addresses, geo–locations, voting patterns, and other means was developed to identify these abusers. Twenty individuals were found to have been responsible for dozens of junk e–mail accounts and multiple registrations, which the site’s “terms of use” prohibited. These individuals manually cast thousands of votes for specific designs and against all other designs. In all, these 20 cheaters accounted for 27.6 percent of all votes in the competition. Once identified, the fraudulent accounts were deleted, along with their voting histories.
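
The team’s actual detection methodology is not reproduced here, but the sketch below (Python, purely illustrative; the function name, thresholds, and vote format are my own assumptions) shows the general kind of heuristic such an approach might use, flagging clusters of accounts that share an IP address and vote almost exclusively for one design and against the rest.

```python
# Hypothetical sketch only: the Next Stop Design team's real fraud-detection
# methodology combined IP addresses, geo-locations, voting patterns, and other
# means. This illustrates one simple heuristic of that general kind.

from collections import defaultdict

def flag_suspicious_accounts(votes, min_shared_accounts=3, min_bias=0.95):
    """votes: list of dicts with keys 'account', 'ip', 'design', 'value' (+1/-1).
    Flags accounts that (a) share an IP address with several other accounts and
    (b) vote almost exclusively up for one design and down for everything else."""
    accounts_by_ip = defaultdict(set)
    upvotes = defaultdict(lambda: defaultdict(int))   # account -> design -> count
    totals = defaultdict(int)

    for v in votes:
        accounts_by_ip[v["ip"]].add(v["account"])
        totals[v["account"]] += 1
        if v["value"] > 0:
            upvotes[v["account"]][v["design"]] += 1

    flagged = set()
    for ip, accounts in accounts_by_ip.items():
        if len(accounts) < min_shared_accounts:
            continue
        for account in accounts:
            favourite = max(upvotes[account].values(), default=0)
            downvotes = totals[account] - sum(upvotes[account].values())
            # Share of this account's activity that boosts a single design or
            # pushes every other design down.
            bias = (favourite + downvotes) / totals[account]
            if bias >= min_bias:
                flagged.add(account)
    return flagged
```

A real implementation would of course weigh several such signals together, as the team reports doing, before deleting any account or vote history.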

The problem with this voting manipulation throughout the competition, and the various rounds of vote purging that happened, is that participants on the site perceived that the site had been compromised and were at times unsure whether the Next Stop Design team was doing anything about it. This caused a real problem for the legitimacy of the project as a whole, and these incidents reflect poorly on the ideal concepts of facilitation and equality and responsiveness. Because the team did not actively moderate content on the site, the voting and commenting mechanisms alone became the point of facilitation in the process. And because of the questionable voting patterns, many participants felt that some individuals were receiving unequal treatment and exerting unequal influence in the process.

Participant S: I thought that it was a good way to try and vote on some ones piece but I am sure people didn’t vote fairly or voted many times.
 
Participant D: I was excited at the beginning, but lost interest later on as it became obvious that people were gaming the ratings, but was not yet obvious that you would be moderating the effects of the intentional low–rating of competing designs.
 
Participant F: i think there was a leak with the voting process
it allowed ‘trollers’ a chance to swing things an unfair way.

In an effort to be transparent, details about the cheating, how cheaters were identified, and what actions were taken to delete fraudulent votes were posted to the front page of the Web site at the conclusion of the competition, accessible through a link labeled “If you’re curious: why the decrease in rating counts.” It is unclear from the interviews whether participants knew about these voting adjustments. When I mentioned during interviews the action the team had taken, participants often seemed as though they were just finding out about the fixes that had been implemented.

Still other participants perceived inequality among users on the site along different lines. Some, as discussed above, expressed concern about the influence of individuals who could call on large networks of friends to vote for their designs. But inequality was also perceived in terms of how designs were presented on the site. The design rating gallery during the competition featured the most recently added designs first, at the top of the page; older designs were pushed toward the bottom. Though users had the ability to sort design submissions according to a number of criteria, the more designs that were submitted, the less likely it seemed that a new visitor to the site would take the time to explore all of them, especially the older designs toward the bottom. Also, the longer a good design stayed on the site, the more time it had to attract negative feedback. Because of this, some interview participants perceived designs submitted later in the competition to have an advantage.

Participant K: it seemed to me that the earlier a project was submitted (and especially if it was a competitive design) the more it was voted down by competing users so because of that, it was more advantageous to submit in the last two weeks. And i feel like a few of the top 5 projects were around that time and didn’t have as much of a chance to be voted to the same level of scrutiny, whether it was legitimate or just an attempt to increase your own chances.
 
Participant B: You almost hope for a randomizer to bring ideas to the front page for a revisit once in a while ...
it’s like the ‘most emailed’ or ‘most viewed’ box on a newspaper’s home page.
They sometimes get all that airplay just because everyone clicks on that box!

In step with other constructive criticisms received during these interviews, Participant B offered a good solution for future competitions like Next Stop Design. A randomizer would ensure that all designs, no matter when they were submitted, would get an equal chance to be seen in the flood of designs. Or, as a few participants suggested, there could be distinct phases between submitting designs and voting — two rounds so that users had to see the whole collection of designs before casting votes.
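
As a rough illustration of Participant B’s suggestion, the sketch below (Python; the function, field names, and number of featured slots are hypothetical, not drawn from the actual site) rotates a few randomly resurfaced older designs to the top of an otherwise newest–first gallery.

```python
# Hypothetical sketch of a gallery "randomizer": older submissions periodically
# resurface at the top of the page instead of sinking permanently to the bottom.

import random

def gallery_order(designs, featured_slots=5, seed=None):
    """Order a design gallery newest-first, but reserve the top slots for a
    random sample of older designs so that early submissions get seen again.
    designs: list of dicts with 'id' and 'submitted_at' (larger = newer)."""
    rng = random.Random(seed)
    by_recency = sorted(designs, key=lambda d: d["submitted_at"], reverse=True)
    older = by_recency[featured_slots:]
    resurfaced = rng.sample(older, k=min(featured_slots, len(older)))
    resurfaced_ids = {d["id"] for d in resurfaced}
    rest = [d for d in by_recency if d["id"] not in resurfaced_ids]
    return resurfaced + rest

# Example: even the earliest submission has a chance of returning to the front page.
designs = [{"id": i, "submitted_at": i} for i in range(1, 21)]
print([d["id"] for d in gallery_order(designs, featured_slots=3, seed=42)][:6])
```

Either approach, random rotation or distinct submission and voting phases, would have given every design comparable exposure regardless of when it was submitted.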

 

++++++++++

Conclusions

Most of Noveck’s (2003) 11 ideals for online deliberative democracy seemed to be realized through the Next Stop Design project, according to interviews with participants; however, there is plenty of room for improvement. Participants mostly perceived the site to be accessible, informed, autonomous, free of censorship, pluralistic, and inclusive. In terms of transparency, participants mostly did not seem very aware of the overall research goals of the project and did not specifically mention the organizations that stood to benefit. With voting irregularities, especially, the efforts by the Next Stop Design team to remain transparent about its fraud detection policies seemed to go unnoticed. This does not necessarily mean the project was not sufficiently transparent, just that participants did not seem to take note of what was being made transparent for them by the project team. Concerns over both flamers on the site and cheating in the voting process by a handful of individuals called the ideals of equality and responsiveness, accountability and relevance, facilitation, and being public into question for participants interviewed for the study. That users could remain anonymous on the site encouraged, in some participants’ opinions, unproductive, selfish, and mean–spirited comments on designs by some users. Perceptions about voting fraud affected many participants’ faith in the quality of the facilitation mechanisms. And the fact of open voting in the competition paired with perceived rapid fluctuations in the rankings led many participants to assume that the competition was more a “middle school” popularity contest between large networks of friends than a serious design competition. All of this called into question the perception that the project was truly being executed with the common good in mind.

It is fair to say that Next Stop Design was far from perfect in living up to Noveck’s (2003) ideals for online deliberative democracy, but it is clear to me from discussions with participants that the project held some promise as a public participation tool. Where the project was deficient in these ideal features, too, participants were quick to recommend fixes. On this last point, it may be that participants were effectively engaged in a meta–participation process to refine the tool itself, even if they did not fare well in the bus stop design competition. This drive to participate in the refinement of the online deliberative tool may be the most remarkable outcome of this study.

Next Stop Design demonstrated that crowdsourcing is an imperfect solution to the problem of public participation in governance because not all of the ideals of online deliberative democracy can be easily realized through the medium of the Internet. But crowdsourcing, like any online tool, is also imperfect for public participation because not everyone has access to the Internet in the first place. Though Internet access and use have become more widespread in the U.S. over the past decade, Internet access is still not ubiquitous and digital divisions persist across racial/ethnic, socioeconomic, and other lines (see, e.g., Rainie, 2010). Mobile technologies are promising vehicles for online public participation in governance issues (Goolsby, 2010; Johnson, 2011; Nash, 2010; Okolloh, 2009), particularly because cell phone use, mobile Internet use, and cell phone app use are now greater in the U.S. among African Americans and English–speaking Latinos than among whites (Smith, 2010). However, though mobile technology may ultimately bridge the digital divide in terms of race/ethnicity and socioeconomic status, complete access in a population through personal ownership of technological devices seems unlikely. Public Internet access centers and Internet training programs, a logical solution to the problem of access (Hayden and Ball–Rokeach, 2007; Schmitz, et al., 1995), are also imperfect approaches that come saddled with their own challenges (Pierce, 2006; Warschauer, 2002). Because crowdsourcing struggles both to align fully with the ideals of online deliberative democracy and to overcome the persistent reality of the digital divide, public administrators should view it only as an online tool complementary to traditional public participation programs.

By complementing traditional public participation programs, such as workshops and town hall meetings, crowdsourcing can bring more and different voices to governance through different methods people may find more comfortable or convenient. As a hypothetical example of how crowdsourcing and traditional methods may complement each other, let us consider the issue of transforming a city’s neglected warehouse district. Perhaps the city planning commission could collect public feedback, off–line and online, for ideas concerning the district’s future. Then, the planning commission could distill the ideas into a set of agenda items for discussion both in a town hall meeting and through an online discussion board. Once a general direction is determined (e.g., to transform the district into a mixed–use residential and commercial neighborhood), a crowdsourcing competition could be launched to solicit design ideas and feedback on the look and feel of the neighborhood, in much the same vein as Next Stop Design. A series of design charrettes — intensive face–to–face design workshops — and open meetings could be held parallel to the crowdsourcing competition to generate additional designs from citizens who are unable to participate online or who prefer face–to–face participation methods. The crowdsourcing process and the design charrettes and meetings would generate a slate of finalist designs, and the public and/or planning commission could select the best design for the district. In this hypothetical example, crowdsourcing and traditional public participation methods would complement each other, allowing several points of entry into the democratic process for citizens, which would likely produce a larger, more robust public conversation about city design and urban reimagining. In addition to urban planning, this multi–pronged process would reasonably work in other governance contexts, such as the crafting of public policy, the creation and selection of public art installations, and the planning of public festivals.

The Next Stop Design project contributes empirical data regarding the perspective of citizens participating in a creative, interactive, online public participation tool. By gathering participants’ input about this crowdsourcing experiment, the Next Stop Design team was able to assess the “subjective legitimacy” (Nino, 1996) of the crowdsourcing model as a public participation tool, to discover through their participation in the project whether it was an effective, democratic, justified process in public planning. We are in need of practical examples and scholarly research on the effectiveness of new media tools in governance, especially cutting–edge models that move beyond the conception of e–governance as a form of simple service delivery online. As technology increases in sophistication and access, public administrators must move beyond thinking of e–government and online public participation as being able to pay parking tickets online and send city officials e–mail messages. There is certainly promise in the crowdsourcing model to bring together the creative energies of a citizenry to bear on a given problem, to actively engage difficult issues of policy and design in forms complementary to traditional methods. Governments ought to continue to experiment with crowdsourcing — and other social media tools — to improve the reach and quality of public participation programs. These real–world tests, accompanied by academic studies, will help to refine the new media tools needed to enrich the public participation experience going forward.

Finally, the Next Stop Design project demonstrated that, even when a new media tool does not satisfy citizens’ needs or meet expectations, citizens are willing and able to improve the tool itself. A recurring theme in this study was that participants followed their critique of the project with constructive and often detailed feedback about how it could be improved. This finding should encourage public administrators to think of e–government projects as processes rather than products, as agile, iterative refinements of participation tools rather than one–time, large–scale roll–outs of media solutions. Technologies should evolve as public debate evolves, and gathering input on how to improve new media tools from citizens is as important for the improvement of online public participation methods as solving the problem at the center of the debate itself. End of article

 

About the author

Daren C. Brabham, Ph.D., is an assistant professor in the School of Journalism & Mass Communication at the University of North Carolina at Chapel Hill, where he teaches and conducts research on public relations and new media. He was among the first to publish research on crowdsourcing, and his work has appeared in Convergence; First Monday; Planning Theory; Information, Communication & Society; Journal of Applied Communication Research; and the Participatory Cultures Handbook. His book, Crowdsourcing, will be published by MIT Press in spring 2013.
Web: http://www.darenbrabham.com
E–mail: daren [dot] brabham [at] unc [dot] edu

 

Notes

1. Bohman, 1998, p. 400.

2. Freeman, 2000, p. 382.

3. Chambers, 2003, p. 308.

4. Op.cit.

5. Delli Carpini, et al., 2004, p. 336.

6. Landwehr, 2010, p. 120.

7. Terranova, 2004, p. 3.

8. Noveck, 2003, p. 46.

9. Gangadharan, 2009, pp. 340–341.

10. Nino, 1996, p. 8.

11. Noveck, 2003, p. 12.

12. Noveck, 2003, p. 11.

13. Noveck, 2003, pp. 12–17.

14. Kazmer and Xie, 2008, pp. 257–258.

15. Kazmer and Xie, 2008, p. 259.

16. Kazmer and Xie, 2008, pp. 272–273.

17. Transcript excerpts are presented in this paper as true as possible to the expressive capabilities available to participants (including capitalization); the limitations of mediated synchronous communication; and the limitations of the specific IM program used for interviewing. There are a number of grammatical, spelling, punctuation, and capitalization errors present in the transcript excerpts, and these have been maintained to remain true to the participants’ words. Each discrete message is displayed in its own line of text. Use of brackets in the manuscript indicates either unimportant commentary omitted by the author or the participant’s intended edits that I compiled for ease of reading. In the latter case, for example, if a participant types “shortwhiel” in one line, but immediately follows it up with “short while” in another message, it is understood that the participant intended to correct his or her spelling error in the previous message, per the norms of IM conversations. In a case such as this, “[short while]” is included in place of the initial instance of misspelling. In other words, interview transcript excerpts are mostly unedited in terms of mechanics and style, and they contain many mechanical errors that were present in the raw transcripts.

18. Noveck, 2003, p. 12.

19. Noveck, 2003, p. 13.

20. Noveck, 2003, p. 15.

21. Noveck, 2003, p. 14.

22. Op.cit.

 

References

Simone Abram and Richard Cowell, 2004. “Learning policy: The contextual curtain and conceptual barriers,” European Planning Studies, volume 12, number 2, pp. 209–228. http://dx.doi.org/10.1080/0965431042000183941

Nurit Alfasi, 2003. “Is public participation making urban planning more democratic? The Israeli experience,” Planning Theory & Practice, volume 4, number 2, pp. 185–202. http://dx.doi.org/10.1080/14649350307979

Yeslam Al–Saggaf and Kirsty Williamson, 2004. “Online communities in Saudi Arabia: Evaluating the impact on culture through online semi–structured interviews,” Forum: Qualitative Social Research, volume 5, number 3, article 24, at http://www.qualitative-research.net/index.php/fqs/article/view/564, accessed 30 August 2012.

André Bächtiger, Simon Niemeyer, Michael Neblo, Marco R. Steenbergen, and Jürg Steiner, 2010. “Disentangling diversity in deliberative democracy: Competing theories, their blind spots and complementaries,” Journal of Political Philosophy, volume 18, number 1, pp. 32–63. http://dx.doi.org/10.1111/j.1467-9760.2009.00342.x

Yasminah Beebeejaun, 2006. “The participation trap: The limitations of participation for ethnic and racial groups,” International Planning Studies, volume 11, number 1, pp. 3–18. http://dx.doi.org/10.1080/13563470600935008

Joseph M. Bessette, 1980. “Deliberative democracy: The majority principle in republican government,” In: Robert A. Goldwin and William A. Schambra (editors). How democratic is the Constitution? Washington, D.C.: American Enterprise Institute, pp. 102–116.

Stephanie A. Birdsall and William F. Birdsall, 2005. “Geography matters: Mapping human development and digital access,” First Monday, volume 10, number 10, at http://firstmonday.org/article/view/1281/1201, accessed 30 August 2012.

James Bohman, 1998. “Survey article: The coming of age of deliberative democracy,” Journal of Political Philosophy, volume 6, number 4, pp. 400–425. http://dx.doi.org/10.1111/1467-9760.00061

Daren C. Brabham, 2012a. “Crowdsourcing: A model for leveraging online communities,” In: Aaron Delwiche and Jennifer Jacobs Henderson (editors). The Participatory Cultures Handbook. New York: Routledge, pp. 120–129.

Daren C. Brabham, 2012b. “Managing unexpected publics online: The challenge of targeting specific groups with the wide–reaching tool of the Internet,” International Journal of Communication, volume 6, pp. 1,139–1,158, at http://ijoc.org/ojs/index.php/ijoc/article/view/1542/, accessed 30 August 2012.

Daren C. Brabham, 2009a. “Crowdsourced advertising: How we outperform Madison Avenue,” Flow, volume 9, number 10, at http://flowtv.org/2009/04/crowdsourced-advertising-how-we-outperform-madison-avenuedaren-c-brabham-university-of-utah/, accessed 30 August 2012.

Daren C. Brabham, 2009b. “Crowdsourcing the public participation process for planning projects,” Planning Theory, volume 8, number 3, pp. 242–262. http://dx.doi.org/10.1177/1473095209104824

Daren C. Brabham, 2008. “Crowdsourcing as a model for problem solving: An introduction and cases,” Convergence, volume 14, number 1, pp. 75–90.

Samuel D. Brody, David R. Godschalk, and Raymond J. Burby, 2003. “Mandating citizen participation in plan making: Six strategic planning choices,” Journal of the American Planning Association, volume 69, number 3, pp. 245–264. http://dx.doi.org/10.1080/01944360308978018

Michael Bugeja, 2005. Interpersonal divide: The search for community in a technological age. New York: Oxford University Press.

Raymond J. Burby, 2003. “Making plans that matter: Citizen involvement and government action,” Journal of the American Planning Association, volume 69, number 1, pp. 33–49. http://dx.doi.org/10.1080/01944360308976292

Heather Campbell and Robert Marshall, 2000. “Public involvement and planning: Looking beyond the one to the many,” International Planning Studies, volume 5, number 3, pp. 321–344. http://dx.doi.org/10.1080/713672862

James W. Carey, 1989. Communication as culture: Essays on media and society. Boston: Unwin Hyman.

Jana Carp, 2004. “Wit, style, and substance: How planners shape public participation,” Journal of Planning Education and Research, volume 23, number 3, pp. 242–254. http://dx.doi.org/10.1177/0739456X03261283

Simone Chambers, 2003. “Deliberative democratic theory,” Annual Review of Political Science, volume 6, pp. 307–326. http://dx.doi.org/10.1146/annurev.polisci.6.121901.085538

David Collier and James Mahoney, 1996. “Insights and pitfalls: Selection bias in qualitative research,” World Politics, volume 49, number 1, pp. 56–91. http://dx.doi.org/10.1353/wp.1996.0023

James L. Creighton, 2005. The public participation handbook: Making better decisions through citizen involvement. San Francisco: Jossey–Bass.

Todd Davies and Seeta Peña Gangadharan (editors), 2009. Online deliberation: Design, research, and practice. Stanford, Calif.: Center for the Study of Language and Information, Stanford University.

M. Davis, G. Bolding, G. Hart, L. Sherr, and J. Elford, 2004. “Reflecting on the experience of interviewing online: Perspectives from the Internet and HIV study in London,” AIDS Care, volume 16, number 8, pp. 944–952. http://dx.doi.org/10.1080/09540120412331292499

Michael X. Delli Carpini, Fay Lomax Cook, and Lawrence R. Jacobs, 2004. “Public deliberation, discursive participation, and citizen engagement: A review of the empirical literature,” Annual Review of Political Science, volume 7, pp. 315–344. http://dx.doi.org/10.1146/annurev.polisci.7.121003.091630

Gerhard Fischer, 2002. “Beyond ‘couch potatoes’: From consumers to designers and active contributors,” First Monday, volume 7, number 12, at http://firstmonday.org/article/view/1010/931, accessed 30 August 2012.

John Forester, 2006. “Making participation work when interests conflict: Moving from facilitating dialogue and moderating debate to mediating negotiations,” Journal of the American Planning Association, volume 72, number 4, pp. 447–456. http://dx.doi.org/10.1080/01944360608976765

Susannah Fox, 2005. “Digital divisions,” Pew Internet & American Life Project (5 October), at http://www.pewinternet.org/Reports/2005/Digital-Divisions.aspx, accessed 30 August 2012.

Samuel Freeman, 2000. “Deliberative democracy: A sympathetic comment,” Philosophy & Public Affairs, volume 29, number 4, pp. 371–418. http://dx.doi.org/10.1111/j.1088-4963.2000.00371.x

Noah S. Friedland and Daren C. Brabham, 2009. Leveraging communities of experts to improve the effectiveness of large–scale research efforts (white paper). Renton, Wash.: Friedland Group.

Seeta Peña Gangadharan, 2009. “Understanding diversity in the field of online deliberation,” In: Todd Davies and Seeta Peña Gangadharan (editors). Online deliberation: Design, research, and practice. Stanford, Calif.: Center for the Study of Language and Information, Stanford University, pp. 329–347.

Rebecca Goolsby, 2010. “Social media as crisis platform: The future of community maps/crisis maps,” ACM Transactions on Intelligent Systems and Technology, volume 1, number 1, article number 7.

Alexis Gregory, 2009. “Calling all women: Finding the forgotten architect,” AIA Archiblog (Weblog; 12 November), at http://blog.aia.org/aiarchitect/2009/11/calling_all_women_finding_the.html, accessed 30 August 2012.

Greg Guest, Arwen Bunce, and Laura Johnson, 2006. “How many interviews are enough? An experiment with data saturation and variability,” Field Methods, volume 18, number 1, pp. 59–82. http://dx.doi.org/10.1177/1525822X05279903

Kevin S. Hanna, 2000. “The paradox of participation and the hidden role of information: A case study,” Journal of the American Planning Association, volume 66, number 4, pp. 398–410. http://dx.doi.org/10.1080/01944360008976123

Craig Hayden and Sandra J. Ball–Rokeach, 2007. “Maintaining the digital hub: Locating the community technology center in a communication infrastructure,” New Media & Society, volume 9, number 2, pp. 235–257. http://dx.doi.org/10.1177/1461444807075002

Michael Hibbard and Susan Lurie, 2000. “Saving land but losing ground: Challenges to community planning in the era of participation,” Journal of Planning Education and Research, volume 20, number 2, pp. 187–195. http://dx.doi.org/10.1177/0739456X0002000205

Jeffrey Hou and Isami Kinoshita, 2007. “Bridging community differences through informal processes: Reexamining participatory planning in Seattle and Matsudo,” Journal of Planning Education and Research, volume 26, number 3, pp. 301–314. http://dx.doi.org/10.1177/0739456X06297858

Jeff Howe, 2008. Crowdsourcing: Why the power of the crowd is driving the future of business. New York: Crown Business.

Jeff Howe, 2006. “The rise of crowdsourcing,” Wired, volume 14, number 6, at http://www.wired.com/wired/archive/14.06/crowds.html, accessed 30 August 2012.

Judith E. Innes, Sarah Connick, and David Booher, 2007. “Informality as a planning strategy: Collaborative water management in the CALFED Bay–Delta program,” Journal of the American Planning Association, volume 73, number 2, pp. 195–210. http://dx.doi.org/10.1080/01944360708976153

Anne Johnson, 2011. “City: SeeClickFix has good first month,” WRAL.com (17 February), at http://www.wral.com/news/news_briefs/story/9128944, accessed 30 August 2012.

David R. Johnson, Susan P. Crawford, and John G. Palfrey, Jr., 2004. “The accountable Internet: Peer production of Internet governance,” Virginia Journal of Law and Technology, volume 9, number 3, article 9, at http://www.vjolt.net/vol9/issue3/v9i3_a09-Palfrey.pdf, accessed 30 August 2012.

Sydney Jones and Susannah Fox, 2009. “Generations online in 2009,” Pew Internet & American Life Project (28 January), at http://pewinternet.org/Reports/2009/Generations-Online-in-2009.aspx, accessed 30 August 2012.

Barbara K. Kaye and Thomas J. Johnson, 1999. “Research methodology: Taming the cyber frontier: Techniques for improving online surveys,” Social Science Computer Review, volume 17, number 3, pp. 323–337. http://dx.doi.org/10.1177/089443939901700307

Michelle M. Kazmer and Bo Xie, 2008. “Qualitative interviewing in Internet studies: Playing with the media, playing with the method,” Information, Communication & Society, volume 11, number 2, pp. 257–278. http://dx.doi.org/10.1080/13691180801946333

Claudia Landwehr, 2010. “Discourse and coordination: Modes of interaction and their roles in political decision–making,” Journal of Political Philosophy, volume 18, number 1, pp. 101–122. http://dx.doi.org/10.1111/j.1467-9760.2009.00350.x

Patricia G. Lange, 2008. “Interruptions and intertasking in distributed knowledge work,” NAPA Bulletin, volume 30, number 1, pp. 128–147. http://dx.doi.org/10.1111/j.1556-4797.2008.00024.x

Lucie Laurian, 2003. “A prerequisite for participation: Environmental knowledge and what residents know about local toxic sites,” Journal of Planning Education and Research, volume 22, number 3, pp. 257–269. http://dx.doi.org/10.1177/0739456X02250316

Ethan J. Leib, 2004. Deliberative democracy in America: A proposal for a popular branch of government. University Park: Pennsylvania State University Press.

Pierre Lévy, 1997. Collective intelligence: Mankind’s emerging world in cyberspace. Translated by Robert Bononno. New York: Plenum Trade.

Thomas R. Lindlof and Bryan C. Taylor, 2002. Qualitative communication research methods. Second edition. Thousand Oaks, Calif.: Sage.

Ann Macintosh, 2006. “eParticipation in policy–making: The research and the challenges,” In: Paul Cunningham and Miriam Cunningham (editors). Exploiting the knowledge economy: Issues, applications and case studies. Amsterdam: IOS Press, pp. 364–369.

Timothy C. Mack, 2004. “Internet communities: The future of politics?” Futures Research Quarterly, volume 20, number 1, pp. 61–77.

Chris Mann and Fiona Stewart, 2000. Internet communication and qualitative research: A handbook for researching online. Thousand Oaks, Calif.: Sage.

Jane Mansbridge, with James Bohman, Simone Chambers, David Estlund, Andreas Føllesdal, Archon Fung, Cristina Lafont, Bernard Manin, and José Luis Martí, 2010. “The place of self–interest and the role of power in deliberative democracy,” Journal of Political Philosophy, volume 18, number 1, pp. 64–100. http://dx.doi.org/10.1111/j.1467-9760.2009.00344.x

Matthew B. Miles and A. Michael Huberman, 1994. Qualitative data analysis: An expanded sourcebook. Second edition. Thousand Oaks, Calif.: Sage.

Faranak Miraftab, 2003. “The perils of participatory discourse: Housing policy in postapartheid South Africa,” Journal of Planning Education and Research, volume 22, number 3, pp. 226–239. http://dx.doi.org/10.1177/0739456X02250305

Andrew Nash, 2010. “Web 2.0 applications for collaborative transport planning,” In: Manfred Schrenk, Vasily V. Popovich, and Peter Zeile (editors). RealCorp 2010 Proceedings (Vienna, Austria), pp. 917–928, and at http://www.corp.at/archive/CORP2010_86.pdf, accessed 30 August 2012.

Carlos Santiago Nino, 1996. The Constitution of deliberative democracy. New Haven, Conn.: Yale University Press.

Beth Simone Noveck, 2009. Wiki government: How technology can make government better, democracy stronger, and citizens more powerful. Washington, D.C.: Brookings Institution Press.

Beth Simone Noveck, 2003. “Designing deliberative democracy in cyberspace: The role of the cyber–lawyer,” Boston University Journal of Science and Technology Law, volume 9, number 1, pp. 1–91.

Henrietta O’Connor and Clare Madge, 2001. “Cyber–mothers: Online synchronous interviewing using conferencing software,” Sociological Research Online, volume 5, number 4, at http://www.socresonline.org.uk/5/4/oconnor.html, accessed 30 August 2012.

Ory Okolloh, 2009. “Ushahidi, or ‘testimony’: Web 2.0 tools for crowdsourcing crisis information,” Participatory Learning and Action, volume 59, number 1, pp. 65–70.

Raymond Opdenakker, 2006. “Advantages and disadvantages of four interview techniques in qualitative research,” Forum: Qualitative Social Research, volume 7, number 4, article 11, at http://www.qualitative-research.net/index.php/fqs/article/viewArticle/175/391, accessed 30 August 2012.

Michael Ostwald, 2000. “Virtual urban futures,” In: David Bell and Barbara M. Kennedy (editors). The cybercultures reader. New York: Routledge, pp. 658–675.

Scott E. Page, 2007. The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton, N.J.: Princeton University Press.

Joy Y. Pierce, 2006. “Communication unplugged: A qualitative analysis of the digital divide,” unpublished doctoral dissertation, University of Illinois at Urbana–Champaign.

Michael Pimbert and Tom Wakeford, 2001. “Overview: Deliberative democracy and citizen empowerment,” PLA Notes, volume 40, pp. 23–28.

Robert Putnam, 2000. Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.

Lee Rainie, 2010. “Internet, broadband, and cell phone statistics,” Pew Internet & American Life Project (5 January), at http://www.pewinternet.org/Reports/2010/Internet-broadband-and-cell-phone-statistics.aspx, accessed 30 August 2012.

Joseph Schmitz, Everett M. Rogers, Ken Phillips, and Donald Paschal, 1995. “The Public Electronic Network (PEN) and the homeless in Santa Monica,” Journal of Applied Communication Research, volume 23, number 1, pp. 26–43. http://dx.doi.org/10.1080/00909889509365412

Aaron Smith, 2010. “Mobile access 2010,” Pew Internet & American Life Project (7 July), at http://www.pewinternet.org/Reports/2010/Mobile-Access-2010.aspx, accessed 30 August 2012.

Markku Sotarauta, 2001. “Network management and information systems in promotion of urban economic development: Some reflections from CityWeb of Tampere,” European Planning Studies, volume 9, number 6, pp. 693–706. http://dx.doi.org/10.1080/713666507

Stefan Stieger and Ulf-Dietrich Reips, 2008. “Dynamic interviewing program (DIP): Automatic online interviews via the instant messenger ICQ,” CyberPsychology & Behavior, volume 11, number 2, pp. 201–207. http://dx.doi.org/10.1089/cpb.2007.0030

John R. Suler, 2004. “The online disinhibition effect,” CyberPsychology & Behavior, volume 7, number 3, pp. 321–326. http://dx.doi.org/10.1089/1094931041291295

James Surowiecki, 2004. The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. New York: Doubleday.

Walter J. Swoboda, Nikolai Mühlberger, Rolf Weitkunat, and Sebastian Schneeweiß, 1997. “Internet surveys by direct mailing,” Social Science Computer Review, volume 15, number 3, pp. 242–255. http://dx.doi.org/10.1177/089443939701500302

Tiziana Terranova, 2004. Network culture: Politics for the information age. London: Pluto Press.

Christian Terwiesch and Yi Xu, 2008. “Innovation contests, open innovation, and multiagent problem solving,” Management Science, volume 54, number 9, pp. 1,529–1,543.

Dennis F. Thompson, 2008. “Deliberative democratic theory and empirical political science,” Annual Review of Political Science, volume 11, pp. 497–520. http://dx.doi.org/10.1146/annurev.polisci.11.081306.070555

Ian Urbina, 2009. “Beyond beltway, health debate turns hostile,” New York Times (7 August), at http://www.nytimes.com/2009/08/08/us/politics/08townhall.html, accessed 30 August 2012.

Ann Van Herzele, 2004. “Local knowledge in action: Valuing nonprofessional reasoning in the planning process,” Journal of Planning Education and Research, volume 24, number 2, pp. 197–212. http://dx.doi.org/10.1177/0739456X04267723

Mark Warschauer, 2002. “Reconceptualizing the digital divide,” First Monday, volume 7, number 7, at http://firstmonday.org/article/view/967/888, accessed 30 August 2012.

Kevin B. Wright, 2005. “Researching Internet–based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and Web survey services,” Journal of Computer–Mediated Communication, volume 10, number 3, at http://jcmc.indiana.edu/vol10/issue3/wright.html, accessed 30 August 2012.

 

Appendix: Interview guide

Questions to operationalize concepts related to Noveck’s (2003) 11 ideal features:

• Did you think the Next Stop Design project was effective in general? Why or why not? How so?

   ◊ Accessible — Was the site easy to use, access, understand?

   ◊ Free of censorship — Did you feel that you were able to freely express yourself on the site?

   ◊ Autonomous — Did you feel that you were an active participant in the business of the site?

   ◊ Accountable and relevant — Did you feel anonymous on the site or exposed? Did you feel others were anonymous or exposed?

   ◊ Transparent — Did you understand the rules of the competition, what agency benefitted from the competition, and how the voting worked?

   ◊ Equal and responsive — Were participants equal on the site? Have the same influence?

   ◊ Pluralistic — Was there a wide range of ideas presented on the site? Significant amount of diversity of opinion?

   ◊ Inclusive — Did you feel there was a diverse variety of users participating in the project?

   ◊ Informed — Was there enough information available on the site to understand the competition? Did you learn anything by participating?

   ◊ Public — Did you feel as though the project was to benefit the public, a specific community, or a specific individual? Do you think other participants understood this as well?

   ◊ Facilitated — Did you think the voting system was an effective way to present and sort through people’s contributions? Was it fair?

• Do you think a Web site like this would be useful for other government functions? Why or why not? What functions?

 


Editorial history

Received 30 August 2012; accepted 18 November 2012.


Copyright © 2012, First Monday.
Copyright © 2012, Daren C. Brabham.

The effectiveness of crowdsourcing public participation in a planning context
by Daren C. Brabham
First Monday, Volume 17, Number 12 - 3 December 2012
http://firstmonday.org/ojs/index.php/fm/article/view/4225/3377
doi:10.5210/fm.v17i12.4225




