As big data shapes more and more aspects of our lives, a widespread concern is that the interests of the people affected are not sufficiently taken into account. The field of participatory technology assessment (pTA) has attempted to make such processes more democratic by involving laypeople as an additional voice in otherwise rather expert-driven development. This text presents the lessons learned from one such endeavor, namely three citizen conferences on big data held in Germany. The empirical results are presented and critically contextualized within the larger discourse on pTA and big data. In this way, the article sheds light on lay perspectives on big data while also providing insights into the strengths and weaknesses of participatory methods such as citizen conferences.
This article presents results from three German citizen conferences on big data and contextualizes these insights within the critical discourse on such participatory methods. It thus follows two goals: shedding light on lay perspectives on big data and contributing to the methodological and theoretical discussions on participation. For this purpose, we first need to understand the larger theoretical and historical background of involving citizens in the governance of technology, in particular with regard to the relationship between laypersons and experts in democracies.
As our current democracies become increasingly data-driven, they also depend more and more on experts and their decisions at various levels: it is experts who design systems of data collection, processing and analysis. We need them to understand and govern such systems, and at the same time these technologies themselves become a tool for governance. Big data influences which information we get to see, which prices we pay, with whom we connect, and so on. While we do not know exactly how this will evolve in the future, it is evident that technology’s deep impact on our lives is only growing. For democracies this is particularly challenging, as they need to avoid turning into technocracies of “solutionism” (Morozov, 2013) which revolve around the needs of tech companies rather than the concerns of their citizens. Despite the increasing proximity and ubiquity of technology in data-driven democracies, this closeness has not necessarily bridged the gap between experts and laypersons. Instead, technology is often black-boxed (Pasquale, 2015) by hiding its complexity from the user, for example through the “platformization of the Web” (Helmond, 2015). The technical backend usually remains inaccessible to ordinary users (due to intentional barriers raised by the developers or a lack of knowledge on the user side). Democratic societies need to address this knowledge gap by broadening the perspectives beyond the technocratic point of view of experts.
At the same time, successfully governing complex technologies requires substantial expertise. This is particularly challenging for democracies, as expertise follows rather exclusive principles which may conflict with the inclusive ideas of democracies and the way they are organized (Fischer, 2009; Nowotny, 2003). As scholars have argued, “reflexive modernity” (Beck, 1992; Giddens, 1990) faces self-induced risks, which became obvious with the various environmental threats caused by industrialization. In this context, experts are not just needed to tackle the problem; they themselves become part of the problem, raising the question of what their role in governing technologies may be (Bogner and Torgersen, 2005). At the same time, experts often cannot provide sufficient answers to give clear and reliable guidance, and society is left with vast uncertainty.
Since the 1960s, the interdisciplinary field of technology assessment (TA) has emerged to find new perspectives on this increasingly pressing issue and to provide decision-makers with options for action beyond technocratic, expert-based concepts (Grunwald, 2018). One frequently used approach is to involve citizens through participatory technology assessment (pTA) (Joss and Bellucci, 2002). A number of pTA formats have been developed and applied in different contexts on various subjects (Abels and Bora, 2004). They differ in the way they are designed and how they are incorporated into the political system. For example, the number of participants and the way they are invited vary, experts may be present to consult the citizens, different formats for the discussions can be applied (including digitally-mediated “e-participation”), and the procedures may or may not be tied to concrete political decisions. Generally, pTA can be regarded as a “qualitative (scientific) method for determining the attitudes, interests, and patterns of argumentation used by laypersons with regard to complex issues of science and technology policy”. In contrast to, for example, opinion polls, pTA follows a much more discourse-oriented approach in order to unravel the rationality of laypersons. Thus, pTA aims at a deliberative model of democracy, which appears well suited to tackle the concerns around reflexive modernity.
However, both the theoretical concept of pTA and its practical application through various formats have received substantial criticism. Some of the common concerns are:
Lack of representation: As the goal of pTA is to democratize the governance of technology (Nowotny, 2003), the question of who becomes involved, and how, is vital and by no means trivial. This is particularly the case when a procedure is directly tied to a concrete decision: while members of a parliament gain their legitimacy through a democratic election process, this is not the case for citizens involved in pTA. At the same time, the number of participants in deliberative formats is necessarily low, which leads to a dilemma:
“[...] deliberative exercises that provide opportunities for meaningful involvement by civic participants are, by design, exclusive. To realize the procedural criteria for reciprocal and public dialogue, they can only involve a small group of citizens. While such processes indeed may foster free and equal reason-giving among the participants, the legitimacy of the outcomes can still be questioned by the broader community asked to live under them.” 
While this dilemma cannot be fully resolved, some have argued that representation is simply not the goal of pTA but rather “to obtain a layperson’s judgment based on knowledge, simulating informed public opinion”.
Lack of deliberation: Instead of representation, many stress the specific procedural value of pTA which is supposed to be achieved by deliberation:
“In contrast to aggregative models of democracy that have established voting and representation as the procedures for reaching collective decisions, deliberative democrats emphasize the need to justify collective decisions through an open and reasoned dialogue among free and equal citizens [...].” 
However, in practice this ideal is hard to realize: what exactly does it take to create a platform for free and equal reason-based dialogue? Some already see a challenge in the very fact that this form of participation is based on invitations (which may be more or less problematic depending on the organizing institution) and not on citizens’ own initiative:
“This form of participation is not realized as a protest expressing real demands made ‘from below’ but rather as an experiment which is frequently set up as a research project and observed from start to finish by the team of researchers who are present throughout.” 
While “non-invited” participation might be more authentic, pTA attempts to tackle societal issues before they fully unfold. This is obviously problematic for the aim of providing lay narratives that are independent of professionals: experts are needed to identify and articulate potential problems at an early stage, while at the same time the idea of pTA is to emancipate the discourse from the experts’ approach.
Lack of impact: Another frequent concern is the questionable impact of pTA. As these procedures are often not formally institutionalized, their influence “on political decision-making is usually minimal and hardly apparent”. This is particularly problematic with regard to the significant resources required to conduct participatory procedures. Another worry is that the procedures may have an undesired impact, as they can be misused to reach certain political goals.
Some have responded to these concerns by lowering the expectations towards pTA. For example, Lövbrand, et al. have argued that the goal of pTA is simply “[...] to make explicit the plurality of reasons, culturally embedded assumptions and socially contingent knowledge ways that can inform collective action”. Hennen stressed that pTA is (and can only be) just one element of many in the discourse on technology: “It is a deliberative element of the organization of the way in which society treats knowledge, not an Aristotelian point to turn the system upside down”.
Therefore, we can conclude that while pTA may be a step towards tackling the issues related to reflexive modernity, it will not and cannot be their solution. Nevertheless, this specific mode of “civic epistemology” (Jasanoff, 2005) appears to gain new relevance in data-driven governance as expert judgements are increasingly automated through algorithmic decision-making. Concerns have been raised that this may lead to an automation of problematic tendencies such as inequality (Eubanks, 2018) as capitalist ideology is inscribed into algorithms (Mager, 2012). In turn, there have been calls to democratize these systems, e.g., by proposing a more participatory form of “design justice” (Costanza-Chock, 2018) which would include laypeople in the development process. This article discusses how pTA may help to tackle some of the societal challenges posed by big data by drawing on three citizen conferences held in Germany in 2016. The next sections will focus on the methodology and the results of these events before they are critically discussed with regard to the concerns about pTA described above.
As outlined above, there are various formats of pTA which differ, for example, in size, duration, intended outcome and proximity to decision-making. Even within certain formats there are very different approaches to these questions, and to some extent they are all still experimental, although some are fairly established by now. For example, consensus conferences on controversial technological and scientific issues have been conducted on a regular basis in Denmark since the 1980s (Joss and Durant, 1995). However, most participatory procedures are not clearly defined and may vary depending on the specific context.
In the case examined in this article, the goal was to assess lay perspectives on challenges and opportunities related to big data in Germany. The context was a relatively large project on the societal impact of this technology (ABIDA — Assessing big data; see www.abida.de) funded by the German Federal Ministry of Education and Research. Since the project involved expert perspectives on the issue in a number of ways (e.g., an expert poll, disciplinary working groups, in-depth studies and multiple workshops), the emphasis was placed on the view of “ordinary” citizens without particular prior knowledge of the topic. Furthermore, as the project also conducted a representative opinion poll among German citizens, the aim was to create a complementary qualitative perspective on the way laypeople frame the complex issue of big data. For this purpose, three citizen conferences were organized in Germany in 2016. The author of this text was part of the organizing team and moderated some of the sessions of the conferences. The results presented here rest on participant observation and the documentation in a project report (Hügle, 2017).
One of the first methodological challenges of conducting a citizen conference is choosing a location. Citizen attitudes may differ widely based on the specifics of a region or city. For example, there is still a noticeable east-west divide in Germany (Klüsener and Goldstein, 2016). Although citizen conferences are qualitative and their deliberation-oriented format does not allow for true representativeness, a diverse set of participants and opinions appeared desirable in order to gain a comprehensive picture. Therefore, various geographic and demographic factors were considered by studying the Bertelsmann Foundation’s categorization of German municipalities. This resulted in the selection of three cities in very different locations and socio-economic situations:
- Aachen: A city in the northwest of Germany with a rather highly-educated and diverse population and a relatively strong economy.
- Stralsund: Located at the north-eastern tip of Germany, it is a smaller city that is struggling with a decreasing population due to its negative economic situation.
- Kempten: Smaller than the other cities, Kempten is situated close to the Alps at the southern end of Germany in a rather rural and stable socio-economic environment.
To attract citizens with different backgrounds and to avoid self-selection biases as far as possible, a random sample of local adults was invited by letter. In Aachen, 1,500 invitations were sent out; this number was increased to 3,000 for the other two cities due to an unsatisfactory turnout (see results below).
Each conference was held on a single Saturday in order to lower the entry barrier for working citizens. Those who registered received additional information, including a short text which outlined some of the concerns frequently associated with big data. These had previously been identified through a review of scholarly and popular literature and an expert workshop. The topics were framed along three questions:
- Who is responsible — the state or the individual?
- Who is in control? Is our behavior getting manipulated?
- Who decides? Human or computer?
The paper as well as the conferences attempted to break down these complex issues by referring to illustrative examples and aimed to pose questions rather than make statements. For instance, the tracking of consumer behavior was used as an example to highlight positive effects (improved usability through personalization) as well as potential negative ones (with regard to privacy concerns), asking how this may influence individual choice.
Each conference had the same structure: after a welcome and some information on the project, a brief presentation on the societal issues of big data was given as an impulse. The citizens were then divided into three small groups which discussed each topic and rotated using the World Café method, i.e., moderators hosted and guided each topic and briefed participants on what had previously been discussed by the other groups. These moderators were members of the project who were specially trained to enable a free dialogue between all the citizens involved, regardless of their background and knowledge of the issues.
After an open discussion, the focus turned to hopes and fears for the future (the year 2026), and for each topic the groups were asked to develop a “wish list” of agreed suggestions to improve the situation and its expected development. These lists were then presented and discussed with all groups together. They form the primary results of the citizen conferences, which were then further analyzed by the research team. A report was written and published (Hügle, 2017), and the results informed the project’s next steps and its final goal of developing options for action for decision-makers.
Before we look at the content of the results, we need to consider who actually participated. The first thing to notice is the disappointing turnout: overall, only 0.84 percent of the invited citizens came to the events (63 persons in total). A striking result is that doubling the number of invitations (from 1,500 to 3,000) did not lead to a significantly increased turnout. In fact, in the case of Stralsund, participation was even lower: while in Aachen 23 citizens came to the event out of 1,500 invited, in Stralsund only 15 showed up despite the 3,000 invitations (see Table 1). At all three events more male than female citizens participated: overall, 38.10 percent of the participants were female, with 39.13 percent in Aachen, 33.33 percent in Stralsund and 40.00 percent in Kempten.
Table 1: Turnout of citizen conferences.

| | Aachen | Stralsund | Kempten | Total |
| --- | --- | --- | --- | --- |
| Invites | 1,500 | 3,000 | 3,000 | 7,500 |
| Participants | 23 | 15 | 25 | 63 |
| Female | 9 | 5 | 10 | 24 |
| Male | 14 | 10 | 15 | 39 |
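The turnout and gender shares reported above follow directly from the raw counts in Table 1. As a quick arithmetic check (a minimal sketch; the dictionaries below simply transcribe the table), they can be recomputed as follows:

```python
# Raw counts transcribed from Table 1.
invites = {"Aachen": 1500, "Stralsund": 3000, "Kempten": 3000}
participants = {"Aachen": 23, "Stralsund": 15, "Kempten": 25}
female = {"Aachen": 9, "Stralsund": 5, "Kempten": 10}

total_invites = sum(invites.values())            # 7,500
total_participants = sum(participants.values())  # 63

# Overall turnout: participants as a share of all invited citizens.
turnout = 100 * total_participants / total_invites
print(f"Overall turnout: {turnout:.2f}%")  # 0.84%

# Share of female participants, overall and per city.
overall_female = 100 * sum(female.values()) / total_participants
print(f"Female overall: {overall_female:.2f}%")  # 38.10%
for city in invites:
    share = 100 * female[city] / participants[city]
    print(f"Female in {city}: {share:.2f}%")
```

Running this reproduces the figures in the text: 0.84 percent overall turnout and female shares of 39.13, 33.33 and 40.00 percent for Aachen, Stralsund and Kempten, respectively.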
A look at the age groups of the participants (Table 2) shows no striking anomaly: all age groups except the youngest (16–19 years) and the oldest (80 years and older) were represented, at least to some extent, at all three events.
Table 2: Number of participants in age groups.

| Age group (in years) | Aachen | Stralsund | Kempten | Total |
| --- | --- | --- | --- | --- |
| 16–19 | 1 | — | — | 1 |
| 20–29 | 4 | 3 | 5 | 12 |
| 30–39 | 3 | 1 | 6 | 10 |
| 40–49 | 4 | 2 | 3 | 9 |
| 50–59 | 7 | 4 | 4 | 15 |
| 60–69 | 1 | 1 | 5 | 7 |
| 70–79 | 2 | 2 | 2 | 6 |
| 80/older | — | 2 | — | 2 |
| Not specified | 1 | — | — | 1 |
Additional demographic data on the participants was not collected in order to avoid alienating them with an overly “nosy” questionnaire prior to their attendance. The low turnout and the qualitative format of citizen conferences do not allow us to make any claims of representativeness. However, as this data shows, the attendance was mixed in regard to age and gender and at least somewhat diverse.
The following results are based on the citizens’ “wish lists” produced in small group discussions on the three guiding questions at each conference. The first question (Who is responsible — the state or the individual?) pointed to contradictory approaches to the challenges posed by big data that reflect long political traditions: on the one side, there is a perspective which essentially puts personal over governmental responsibility in order to maximize individual freedom. The other side emphasizes the limitations of individual agency (e.g., due to social and environmental circumstances) and transfers more of the burden of choice to a larger collective such as the state. While this question of responsibility has a long and complex history (for an introduction, see Williams, 2018), it has become acutely pressing in the age of big data technology. As our lives become increasingly datafied, we need to ask how much control over and responsibility for these processes should lie in the hands of individuals and where we wish to delegate it. In many big data applications, multiple actors are involved, which leads to complex networks of responsibilities, e.g., in autonomous driving systems (Loh and Loh, 2017). This opens the door to what was asked in question two: Who is in control? Is our behavior getting manipulated? As we delegate decisions to algorithms controlled by other actors, we need to grasp how free our will really is or can be in the future. This is currently intensively debated, especially in the context of nudging — an attempt to use a mixture of technical and psychological tricks to influence a person’s behavior (von Grafenstein, et al., 2018). The third question (Who decides? Human or computer?) addressed the (potential) loss of human agency as the automation of complex processes makes it harder and harder to understand them and their effects.
Although each question focused on a particular aspect, they are all connected through underlying themes such as freedom and autonomy. At the same time, each group moved together from one topic to the next. It is therefore not surprising that the participants’ answers overlapped from topic to topic. A more striking result is the similarity of the citizens’ suggestions from conference to conference despite their diverging demographic, geographic and economic situations. While different in nuance, there were dominant and recurring wishes expressed by the attendees:
In all three conferences, a lack of knowledge about big data (from technical aspects to legal and social questions) in the general population was identified as a crucial problem, and enhanced education was seen as a step towards its solution. There was agreement that this education on various aspects of big data (and digitalization more broadly) should already start in schools and needs to be pushed by the state, e.g., by developing suitable curricula and providing the necessary funding (Aachen), an active use of digital technology in the classroom (Stralsund), establishing a special school subject and the required university education for teachers (Kempten), and fostering a dialogue between parents, students and teachers (Stralsund). Adult education was also regarded as important: in Stralsund, the participants particularly wished to increase the general public’s awareness of the risks related to providing (personal) data by educating employees and senior citizens. Aachen pleaded for an intensified public debate on critical questions, e.g., on scoring methods that may determine a person’s opportunities, for instance for receiving credit. Kempten saw improved education not only as a measure to prevent unwanted consequences of big data such as a loss of autonomy, but also as a path to realizing its positive potential (e.g., a smartification of health, energy, traffic and law enforcement). Besides the state, a strong independent media system that informs the public and educated individuals who treat data carefully were seen as important responsible actors in this context. Stralsund stressed new opportunities for enhancing public knowledge by providing (standardized) access to databases.
In all three conferences, the citizens were in favor of stronger governmental regulation, especially in Aachen and Stralsund. First of all, they unanimously expected the state to provide a clear and reliable legal framework which maximizes big data’s opportunities while minimizing its risks. A number of suggestions were made towards this goal, ranging from vague ideas to highly concrete options. Some of them had actually already been put into law or were being formulated in the European General Data Protection Regulation (GDPR), which was still under development at the time of the citizen conferences.
Aachen as well as Stralsund expressed concerns about cybersecurity. In Aachen, the participants demanded an opt-in solution for data transfers to third parties, together with the possibility of partial consent and an option to withdraw it. Moreover, compulsory yearly reports, including security breaches, were on the wish list. Here, scoring (i.e., the assessment of individuals through big data, e.g., to estimate creditworthiness) was perceived as particularly critical, and citizens asked for a prohibition (or at least limitation) of scoring in certain sensitive areas. Stralsund was also in favor of regulating data transfers to third parties, especially for commercial purposes. Concretely, the citizens called for mandatory security standards (e.g., certifications) and stricter liability for companies in cases of security breaches. Moreover, citizens here were concerned about the persistence of data and wished for an anonymization of metadata, especially in the context of health information. They also demanded a “right to be forgotten”, which could be found on the list in Kempten as well.
Kempten shared concerns about the (opaque) dissemination of data to third parties. Improved data protection legislation and options for sanctioning violations, e.g., through class action suits, were demanded. Moreover, the citizens here suggested improving consumer rights by establishing an additional institution such as a specialized ministry.
Kempten not only saw the state as being in charge of protecting its citizens but also referred to its role as a data collector and (potential) user of big data. In this context, the participants asked for more transparency of such practices by detailing their purpose. Aachen also stressed the state’s obligation to provide information (through anonymized raw data).
Enabling (individual) choice
Another repeatedly expressed wish was to enhance the opportunities for (individual) choice. In part, the strengthening of governance served this purpose, e.g., the suggestion to extend consumer protection, as well as the demand for more education (see above). But there were also numerous suggestions that addressed individuals’ capacity for dealing with big data more specifically. For example, a mandated simplification of terms of service and privacy policies was on the agenda in Aachen as well as in Stralsund, with the intent of creating more transparency for individuals who try to understand a company’s data practices. As briefly mentioned above, the participants in Kempten stressed the importance of individual responsibility and therefore saw education as vital to enable citizens to make informed decisions. More specifically, they expected their fellow citizens to become more aware of their existing rights and to make use of them. For instance, they pointed to the right to request information on personal data as well as the right to delete and correct it. Additionally, Kempten’s participants were in favor of easily accessible options for individual intervention in areas of automated decision-making in order to maintain the capacity for self-determination. Similar to the demands for enhanced consumer rights, the attendees in Stralsund as well as in Aachen called for (neutral) institutional assistance and expertise to help individuals effectively with questions around data protection and other challenges related to big data. Aachen also expressed concerns about excessive market concentration in this area and suggested actively supporting alternatives (e.g., working with open source and other approaches suitable to serve the common good). Stralsund was also in favor of aiding alternatives and emphasized the importance of creating true freedom of choice through such measures. Aachen even stated that there should be no pressure to use big data.
Another approach in this context was to conduct more research that is independent of commercial interests while actively involving citizen perspectives.
Taking a global approach
In all three cities the attendees were keenly aware that the challenges posed by big data reach across borders and therefore require global solutions: Aachen explicitly stressed the limitations of national regulation and aimed at protecting fundamental rights through an international convention. Moreover, a global ethics committee was suggested to enable positive applications (e.g., in medicine) while preventing malpractice. Aachen’s previously mentioned concern regarding market concentration was obviously also based on the observation of globally acting companies, thus requiring action that takes this into account. While the citizens in Kempten recognized some opportunities related to smartification (e.g., of industrial processes), they also saw the need for uniform European standards, for example regarding labor rights and data practices. Moreover, they called for an effective global organization with the power to monitor and enforce such standards. Finally, Stralsund hoped that big data could be used to help solve international conflicts.
All in all, the participants discussed a wide range of topics and developed a diverse set of suggestions. What can we learn from this with regard to the initial thoughts on pTA and the hopes and concerns related to such attempts at involving citizens in the assessment of technologies? This will be discussed in the following concluding section.
Discussion and conclusion
As outlined at the beginning of this text, pTA originally emerged to address a number of challenges connected to reflexive modernity and its self-imposed risks. The involvement of citizens particularly served the purpose of challenging the dominance of the experts’ rationale in order to make technology policies more democratic. At the same time, a number of concerns have been voiced over the decades of experiments with various formats. Our three citizen conferences add to this discussion by relating our empirical example to some of these concerns, namely the lack of representation, the lack of deliberation and the lack of impact of pTA. In doing so, the text continues the debate on participation in general while also providing some insights into the challenges related to the specific topic of big data.
The worries about a lack of representation discussed in the literature proved to be highly relevant. While representation in a statistical sense can never be the goal of a format such as a citizen conference, at least a somewhat diverse set of opinions is required to come to any meaningful results. As our experience shows, recruiting a satisfying number of participants can be a hurdle that should not be underestimated. Despite considerable efforts (namely personally addressing 1,500 and, for the following two events, even 3,000 citizens by mail), the turnout was lower than expected. We can only speculate whether improved invitations could have led to a higher number of participants. The fact that doubling the number of invitations did not lead to a significantly increased turnout appears to indicate that local factors played an important role. For example, there is a relatively large number of educated and tech-savvy citizens living in the city of Aachen (e.g., due to its renowned technical university, the RWTH Aachen) who might be more interested in discussing an abstract topic such as big data than the general public. This may explain why a comparable number of citizens participated although only half the number of invitations were sent out. Since one goal of pTA is to capture voices that otherwise remain unheard, this is a serious conceptual issue. Future organizers of pTA may address it by making an extra effort to target specific underrepresented groups. However, this leads to further methodological challenges, because then the participants would not be selected randomly and the question arises of which groups should be chosen, how, and on what grounds.
Nevertheless, in our case the sample of participants seems to be sufficiently diverse as they came to similar results in all three cities — although self-selection bias might still play a role here (see also below). Thus, they appear to be at least somewhat representative of the way citizens in Germany think about big data.
The emphasis of pTA on deliberation, i.e., a reason-based dialogue which minimizes power struggles, proved to be challenging in practice. This already starts with framing the topic: if it is too wide, it remains too vague to discuss in a meaningful way. Narrowing it down inevitably means limiting it — potentially in a way that reflects the bias of the organizers more than the opinion of the citizens. Thus, choosing the topic and finding a balance between these extremes is not a trivial task. We tried to tackle it by asking questions rather than making statements in our introductory information. This might not have been very satisfactory to those who were interested in answers, but it certainly did help to engage the citizens who participated — at no point did the moderators have to motivate the attendees, and they discussed intensively throughout all events.
Another obstacle to deliberation has to do with the problem of representation: although we invited a random set of people to each conference, this does not mean that the distribution of attendees was random. Certainly, some were driven by a self-selection bias as they came because of their particular interest in the topic. This can be assumed based on the results: while some of the suggestions reflect a certain naiveté that is to be expected when laypeople formulate options for action on a topic they do not know much about, others reveal a fairly sophisticated level of familiarity. For example, several suggestions show knowledge of specific technical or legal issues as they were discussed during the development of the GDPR at the time of the citizen conferences. This points to a general problem of pTA: the categorizations of people as “lay” or “expert” are, of course, social constructs and cannot always be separated in such a binary way. We may have invited participants as citizens, but perhaps they came as experts, as there is generally no clear line between the two in terms of competence. This came as no surprise and was countered by applying certain techniques. For example, we made sure that each participant would be able to voice their opinion by allowing them to write it down on a sheet of paper. Moreover, the moderators were trained to encourage rather shy participants while reducing the impact of the more dominant ones. However, those who are able to illustrate their deep knowledge of a topic in a rhetorically sophisticated way obviously have an advantage over others with less knowledge and weaker language skills. It is hard to say which points on the “wish lists” made it there because of convincing arguments and which prevailed because of an implicit power structure within the group.
However, it is important to stress that no incidents of severe conflict occurred among the participants, and there was a generally positive and welcoming atmosphere (as indicated by the feedback some participants gave the organizers).
Another way of approaching this issue is to invite experts and to introduce them explicitly in this role. Of course, this may also lead to expertise dominating lay opinions. In our particular case, providing suitable experts would have been particularly challenging, as the topic was rather broad and we could not know in advance what kinds of questions would be asked.
Another issue emerging from the consensus-oriented format is that certain rather uncontroversial suggestions inevitably survive the evolution of arguments. For example, it is hardly surprising that the call for more education dominated all three events. While governmental restrictions or a radically neoliberal approach are likely to provoke conflict, it is hard to argue against a suggestion like improved education. What would the results look like if the participants were instead asked to identify where they disagree?
But despite all these challenges, which may even appear like dilemmas, it is also important to emphasize that there were indeed various occasions in which otherwise marginalized voices were heard and found expression in the results. Thus, a certain level of “democratization of expertise” did actually occur, largely because the moderators were explicitly trained to encourage such voices and the format allowed them to express themselves.
As the three conferences were tied neither to specific decisions nor to specific technologies, their impact can hardly be measured. Their primary goal was also not to influence politics but rather to inform the ABIDA project for its subsequent phase. In this regard, there certainly was an impact: The results helped to identify topics that were then addressed in more depth. For example, the citizens’ strong emphasis on education contributed to a study focusing on education on big data. Likewise, the concern about possible behavioral manipulation helped to identify nudging as an important topic for a subsequent in-depth study.
For any organizer of a participatory event, the exact goal needs to be clear and realistic. In our case, it seems plausible to conclude that the desired impact was achieved. In a next step, some of the suggestions will be represented, more or less explicitly, in the project’s final options for action for Germany’s decision-makers. Any impact in the shape of concrete decisions will be difficult to assess, because the basis for such decisions is usually not easily identifiable.
So, was it worth it? That is a matter of perspective. One lesson can certainly be drawn from our citizen conferences: Participation is messy and probably always will be. Some will regard this as an argument against it; others might argue that this is exactly why we need it. Isn’t that what countering technocratic accounts and democratizing expertise is all about? For example, some of the points raised by the citizens were not directly related to big data in the usual experts’ understanding of the term, i.e., data-heavy applications characterized by their large volume, wide variety of data types and high velocity of computing. Instead, the participants partly focused on issues closer to their everyday experiences, such as concerns about children’s media literacy. From an expert’s point of view, one could easily dismiss such inputs as off-topic. But from a broader perspective that is more problem-oriented and less technology-oriented, it is just as easy to understand the relevance of this rationale, as it addresses the individual’s limited ability to grasp the consequences of her actions in a largely digitalized life. A participatory approach may help to keep such concerns and the “bigger picture” in mind, as experts tend to focus on very particular problems framed by their specific fields. However, this does not undermine expert knowledge in any way. In fact, it appears fruitful to combine both types of knowledge. Obviously, the citizens’ suggestions need to be contextualized within the various discussions around them. For instance, the call for simplified terms of service needs to be regarded in light of what is known about the general carelessness of Internet users regarding such documents (see Obar and Oeldorf-Hirsch, 2018). In the end, pTA is another piece in the puzzle of finding ways to live with the challenges we face in a data-driven world.
Such formats will certainly not solve this puzzle but may provide a helpful addition to the expert accounts that often dominate such discussions.
About the author
René König is a sociologist researching at the Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology. He can be found online at http://renekoenig.eu and on Twitter (@r_koenig).
The ABIDA project, which provided the context of the citizen conferences, was funded by the German Federal Ministry of Education and Research, funding code 01IS15016A-F. I would like to thank the entire ABIDA team involved, in particular Anika Hügle, who was the main organizer of the conferences and responsible for their conceptualization as well as their execution and documentation. I am also grateful for the valuable feedback from the editors of this special issue, Payal Arora and Hallam Stevens, the reviewers, and my colleagues Leonhard Hennen and Reinhard Heil.
1. Hennen, 2012, p. 40.
2. Lövbrand, et al., 2011, p. 483.
3. Hennen, 2012, p. 38.
4. Lövbrand, et al., 2011, pp. 475 f.
5. Bogner, 2012, p. 507.
6. Hennen, 2012, p. 32.
7. Hennen, 2012, p. 34.
8. Lövbrand, et al., 2011, p. 479.
9. Hennen, 2012, p. 39.
Gabriele Abels and Alfons Bora, 2004. Demokratische Technikbewertung. Bielefeld: Transcript Verlag.
Ulrich Beck, 1992. Risk society: Towards a new modernity. Translated by Mark Ritter. London: Sage.
Alexander Bogner, 2012. “The paradox of participation experiments,” Science, Technology, & Human Values, volume 37, number 5, pp. 506–527.
doi: https://doi.org/10.1177/0162243911430398, accessed 8 March 2019.
Alexander Bogner and Helge Torgersen (editors), 2005. Wozu Experten? Ambivalenzen der Beziehung von Wissenschaft und Politik. Wiesbaden: VS Verlag für Sozialwissenschaften.
doi: https://doi.org/10.1007/978-3-322-80692-5, accessed 8 March 2019.
Sasha Costanza-Chock, 2018. “Design justice: Towards an intersectional feminist framework for design theory and practice,” Proceedings of the Design Research Society 2018, at https://papers.ssrn.com/abstract=3189696, accessed 17 July 2018.
Virginia Eubanks, 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
Frank Fischer, 2009. Democracy and expertise: Reorienting policy inquiry. New York: Oxford University Press.
Anthony Giddens, 1990. Consequences of modernity. Cambridge: Polity Press.
Max von Grafenstein, Julian Hölzel, Florian Irgmaier and Jörg Pohle, 2018. “Nudging: Regulierung durch Big Data und Verhaltenswissenschaften,” Alexander von Humboldt Institut für Internet und Gesellschaft (30 July), at http://www.abida.de/sites/default/files/ABIDA-Gutachten_Nudging.pdf, accessed 8 March 2019.
Armin Grunwald, 2018. Technology assessment in practice and theory. London: Routledge.
Leonhard Hennen, 2012. “Why do we still need participatory technology assessment?” Poiesis & Praxis, volume 9, numbers 1–2, pp. 27–41.
doi: https://doi.org/10.1007/s10202-012-0122-5, accessed 8 March 2019.
Anika Hügle, 2017. “Big Data — Lösung oder Problem? Dokumentation und Analyse der Bürgerkonferenzen,” Karlsruher Institut für Technologie, Institut für Technikfolgenabschätzung und Systemanalyse (ITAS) (February), at https://www.itm.nrw/wp-content/uploads/ABIDA_Bürgerkonferenzen_Ergebnisbericht.pdf, accessed 8 March 2019.
Sheila Jasanoff, 2005. Designs on nature: Science and democracy in Europe and the United States. Princeton, N.J.: Princeton University Press.
Simon Joss and Sergio Bellucci, 2002. Participatory technology assessment: European perspectives. London: Center for the Study of Democracy.
Simon Joss and John Durant, 1995. Public participation in science: The role of consensus conferences in Europe. London: Science Museum with the support of the European Commission Directorate General XII.
Sebastian Klüsener and Joshua R. Goldstein, 2016. “A long-standing demographic east–west divide in Germany,” Population, Space and Place, volume 22, number 1, pp. 5–22.
doi: https://doi.org/10.1002/psp.1870, accessed 8 March 2019.
Eva Lövbrand, Roger Pielke and Silke Beck, 2011. “A democracy paradox in studies of science and technology,” Science, Technology, & Human Values, volume 36, number 4, pp. 474–496.
doi: https://doi.org/10.1177/0162243910366154, accessed 8 March 2019.
Astrid Mager, 2012. “Algorithmic ideology: How capitalist society shapes search engines,” Information, Communication & Society, volume 15, number 5, pp. 769–787.
doi: https://doi.org/10.1080/1369118X.2012.676056, accessed 8 March 2019.
Evgeny Morozov, 2013. To save everything, click here: The folly of technological solutionism. New York: PublicAffairs.
Helga Nowotny, 2003. “Democratising expertise and socially robust knowledge,” Science and Public Policy, volume 30, number 3, pp. 151–156.
doi: https://doi.org/10.3152/147154303781780461, accessed 8 March 2019.
Jonathan A. Obar and Anne Oeldorf-Hirsch, 2018. “The biggest lie on the Internet: Ignoring the privacy policies and terms of service policies of social networking services,” Information, Communication & Society (3 July).
doi: https://doi.org/10.1080/1369118X.2018.1486870, accessed 8 March 2019.
Frank Pasquale, 2015. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.
Garrath Williams, 2018. “Responsibility,” Internet Encyclopedia of Philosophy, at https://www.iep.utm.edu/responsi/, accessed 7 December 2018.
Received 25 February 2019; accepted 5 March 2019.
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Lay perspectives on big data: Insights from citizen conferences in Germany
by René König.
First Monday, Volume 24, Number 4 - 1 April 2019