First Monday

The anatomy of citizen science projects in information systems by Duong Dang, Teemu Mäenpää, Juho-Pekka Mäkipää, and Tomi Pasanen



Abstract
Citizen science is an emerging approach for conducting research in the field of information systems. It refers to the participation of individuals with various backgrounds in research projects. Because the anatomy of a citizen science project is quite complex, research implementation needs to be matched closely with plans, and the literature shows that managing a long-term citizen science project is an even more complex task. To obtain a coherent understanding of citizen science in the field of information systems, we conducted a systematic literature review on the topic, covering the major information systems journals and conference proceedings. We devised the Episode framework, which consists of four blocks: design of pillars, episodes of CS implementation, adjustment of activities, and post-implementation. The framework emphasizes the division of the project into separate episodes, which are ordered sequentially but may need to run in parallel because of the dynamic nature of a citizen science project, in which participants can join and leave freely. Moreover, in some projects, participants can take on different roles, which complicates project management further.

Contents

1. Introduction
2. Background
3. Methods
4. Findings
5. Discussion
6. Conclusions

 


 

1. Introduction

Information systems (IS) scholars are increasingly using data from beyond organizational boundaries (Crall, et al., 2013; Levy and Germonprez, 2017; Lukyanenko, Wiggins, et al., 2019; Wiggins and Crowston, 2011), such as user-generated content from Facebook, Twitter, and other social media platforms, for their research. Because data is produced by citizens, and quite often consumed by them as well, it is natural for researchers who use such data to draw on citizens to contribute to data collection and collation, and even to help them understand the content of the data and turn it into knowledge (Ulahannan, et al., 2020). While this phenomenon raises several legitimate issues regarding the accuracy, quality, verification, and validation of data, it is believed and has even been shown to some extent that an approach called citizen science (CS) can help scholars address these issues without jeopardizing research agendas (Jackson, et al., 2020; Lukyanenko, et al., 2017; Lukyanenko, Parsons, et al., 2019).

Citizen science refers to a process whereby individuals with various backgrounds participate in research projects. These participants, or citizen scientists, who may not have any professional training, are volunteers who take part in research activities in various phases of the CS project life cycle. They may participate in resource gathering, research question definition, various forms of data collation including categorization or transcription, data analysis, dissemination of results, and evaluation of the success of a project. Quite often CS can also serve as a vehicle to make citizens more scientifically literate, thereby contributing to life-long learning, which is highly valued in our modern society.

Levy and Germonprez (2017) stressed and explained the importance of CS to IS in the future. IS research has traditionally had an identity centred on how IT systems are developed and how individuals, groups, organizations, and markets interact with IT (Sidorova, et al., 2008). Levy and Germonprez, however, addressed the socio-technical roots of IS, which can be expanded within CS to attract audiences and provide relevant societal contributions that consider IS research beyond organizations. Citizen participation in scientific endeavours, not only as participants but also as scientists, is still rare in IS research, yet it offers a promising approach to bridging the gap between scientists and citizens (Weinhardt, et al., 2020). For these reasons, IS scholars have recently become increasingly interested in citizen science (Jackson, et al., 2020; Levy and Germonprez, 2017; Lukyanenko, et al., 2014b; Lukyanenko, Wiggins, et al., 2019; Simperl, et al., 2018; Wright, et al., 2019). However, IS research on citizen science is still fragmented, leaving much room for a greater focus on CS in the IS field (Levy and Germonprez, 2017; Lukyanenko, et al., 2017; Lukyanenko, Wiggins, et al., 2019).

To apply citizen science and to communicate the results to the IS community, there is a need for a consensus on the definition of CS and an understanding of the structures and processes of a CS project, i.e., the anatomy of CS. This motivated us to conduct a literature review on citizen science. We decided to focus only on IS research in order to offer findings on CS that are directly accessible to IS scholars. Thus, our research questions addressed how CS is defined in the IS literature and how CS projects are conducted and managed in IS research.

To answer the research questions, we conducted a systematic literature review. The sources used for the review were eight IS journals, 47 journals recommended by the special interest groups (SIGs) of the Association for Information Systems (AIS), and the proceedings of five major IS conferences.

As is evident, the success of a CS project depends heavily on participation. We therefore devised a synthesized framework that captures the design needs and dynamic nature of a CS project, with a focus on sustaining the motivation and engagement of citizen scientists. The framework, which we call the Episode framework, consists of four blocks: design of pillars, episodes of CS implementation, adjustment of activities, and post-implementation. The design of pillars includes three plans, two of which deal with how to motivate and sustain participation during a project. The episodes of implementation emphasize that creating a network of meaningful episodes in which to participate is central to conducting a CS project. We argue that, besides the fragility of establishing a functional CS project, the process of conducting a CS project is complex and may involve both parallel and sequential episodes, where the output of one episode may be used as input for another.

The paper is organized as follows. The next section gives a brief background of CS in the context of IS research. Section 3 describes our methodology. Our findings are presented in Section 4, and Section 5 presents a discussion of the results. The final section consists of concluding remarks.

 

++++++++++

2. Background

In the IS community, citizen science is now considered part of a movement toward research that is societally impactful as it facilitates research that is carried out by both researchers and people in their everyday lives (Crall, et al., 2013; Levy and Germonprez, 2017). Citizen science also offers a lens through which to examine various aspects of society, such as behaviour, technology, and the environment (Levy and Germonprez, 2017; Lukyanenko, Wiggins, et al., 2019). This has recently led to growing interest in CS among IS scholars (Levy and Germonprez, 2017; Lukyanenko, et al., 2014a; Lukyanenko, Wiggins, et al., 2019). For example, in their discussion of the potential of CS in IS research, Levy and Germonprez (2017) focused on the origins of citizen involvement in science and presented three perspectives of contemporary CS: sociological, natural science, and public policy perspectives. In their view, CS is best viewed within the context of current research activities because it resembles participatory design in many ways. Scholars have also identified close relationships between CS and several other concepts, such as user-generated content, social media, crowdsourcing, and collective intelligence (Awal and Bharadwaj, 2019; Mosier and Smith, 1986).

These multiple views and categorizations of CS reflect the complexity of understanding CS (Halavais, 2013). As a result, several literature reviews on CS have been conducted (e.g., Conrad and Hilchey, 2011; Ebitu, et al., 2021; Lukyanenko, Wiggins, et al., 2019). None of these reviews, however, focused on CS in IS. Nevertheless, Lukyanenko, et al. (2019) discussed information quality (IQ) research opportunities in CS. They used exact-match querying for the phrase “citizen science” indexed in Web of Science, IEEE Xplore, Scopus, and the ACM Digital Library. Lukyanenko, et al. concluded that much of the research on CS is published in non-IS and non-IQ journals and that CS is being actively pursued by scholars who are rarely familiar with IS. Consequently, there is evidently a need for a literature review that maps research into CS in IS in order to understand how the IS community uses CS in research and how CS has been discussed in the literature. We aimed to understand how IS scholars view CS and how they conduct research related to CS in both academic and practical contexts to make relevant societal contributions (Levy and Germonprez, 2017).

 

++++++++++

3. Methods

To fulfil our research aims, we conducted a systematic literature review in the IS field (Paré, et al., 2015; Templier and Paré, 2018; Webster and Watson, 2002). To improve reliability and minimize biases, we followed several techniques used by other researchers (Paré, et al., 2015; Templier and Paré, 2018; Webster and Watson, 2002). The review process consisted of two main steps: selecting studies and analysing data (Paré, et al., 2016). The steps are described in detail below.

3.1. Selecting studies — Step 1

This step involved performing a literature search, evaluating papers, and selecting papers to study. The process is outlined in Figure 1. First, we performed a literature search among the AIS basket of eight IS journals: Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), European Journal of Information Systems (EJIS), Information Systems Journal (ISJ), Journal of the Association for Information Systems (JAIS), Journal of Information Technology (JIT), Journal of Management Information Systems (JMIS), and Journal of Strategic Information Systems (JSIS). We also included the 47 journals recommended by the AIS SIGs (see Appendix A). Finally, we included the proceedings of five major conferences: the International Conference on Information Systems (ICIS); the European Conference on Information Systems (ECIS); the Pacific Asia Conference on Information Systems (PACIS); the Americas Conference on Information Systems (AMCIS); and the Hawaii International Conference on System Sciences (HICSS). Thus, our chosen publication outlets consisted of 60 major journals and conferences in the IS field, and the selected databases are presented in Appendix A. We used the AIS electronic library (AIS eLibrary) and the Web sites or portals of the individual journals and conferences for our literature search, focusing on the title, abstract, and body of papers. Our searches covered papers published up to the end of May 2021.

 

Figure 1: Process of choosing the papers for study.

 

Second, we evaluated papers in the following fashion. Using search engines, we targeted research and empirical papers in the IS field that contained the term “citizen science” in the title, abstract, keywords, or body of the paper. We eliminated literature reviews, editorials, opinions, commentaries, and short papers, because we were more interested in how citizen science projects were conducted, which is typically reported in research and empirical papers.

Third, we selected articles to study in the following manner. Each of us read and assessed papers based on their title, abstract, and keywords. If the title, abstract, and keywords did not provide enough information, we read through the body of the paper. We first assessed papers independently; we then split into two pairs to read and assess each paper. During this process, we focused on whether papers matched our research aims. We faced some challenges when deciding whether to include or eliminate papers; for example, a few papers only used CS tools for their own purposes without any connection to scientific research, making it impossible to use them to answer our research questions. To address these challenges, we held several meetings to discuss and reassess the papers until a common opinion was reached. The selected papers are listed and numbered from 1 to 31 in Appendix B.
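To make the screening step concrete, the following minimal sketch (in Python) illustrates the kind of inclusion and exclusion logic described above; the record fields, example entries, and helper names are hypothetical and are not part of the review protocol reported here.

# Illustrative sketch of the screening step: keep research and empirical papers
# that mention "citizen science" and drop editorials, reviews, and short papers.
# Field names and example records are hypothetical.

papers = [
    {"title": "Motivating volunteers in citizen science", "abstract": "...",
     "type": "research", "body": "citizen science project design ..."},
    {"title": "Editorial: Crowdsourcing issues", "abstract": "...",
     "type": "editorial", "body": "..."},
]

EXCLUDED_TYPES = {"editorial", "opinion", "commentary", "literature review", "short paper"}

def mentions_citizen_science(paper: dict) -> bool:
    """Exact-phrase match over title, abstract, keywords, and body."""
    text = " ".join(str(paper.get(f, "")) for f in ("title", "abstract", "keywords", "body"))
    return "citizen science" in text.lower()

def screen(candidates: list[dict]) -> list[dict]:
    """Apply the exclusion-by-type and phrase-match criteria."""
    return [p for p in candidates
            if p.get("type") not in EXCLUDED_TYPES and mentions_citizen_science(p)]

print([p["title"] for p in screen(papers)])   # only the research paper remains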

3.2. Analysing data — Step 2

We analysed articles in three stages. First, we applied an iterative coding process to all papers, analysed aspects relating to CS, and collected relevant evidence (Paré, et al., 2015; Templier and Paré, 2018; Webster and Watson, 2002). The analysis was guided by a review framework, which consisted of core ideas, terms, theoretical bases, and suggestions for future research (Table 1).

 

Table 1: Review framework.
Dimensions | Main questions
Core idea of the paper | What are the core research questions, scopes, and goals of the paper?
Concepts | How does the study view CS? What concepts, definitions, and characteristics are contained in the paper?
Method | What methodologies, including approaches, data collection and analysis, are used in the paper?
Theories | What theories do the authors use to substantiate their research?
Future research | What limitations and suggestions for future research do the authors identify?
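The review framework in Table 1 can also be read as a coding template applied to every selected paper; the sketch below is a hypothetical illustration of that structure, with invented dimension keys and example annotations.

# Illustrative representation of the review framework (Table 1) as a coding
# template; dimension keys and the example paper entry are hypothetical.

REVIEW_FRAMEWORK = {
    "core_idea": "Core research questions, scopes, and goals of the paper",
    "concepts": "How the study views CS; concepts, definitions, characteristics",
    "method": "Methodologies, data collection, and analysis used",
    "theories": "Theories used to substantiate the research",
    "future_research": "Limitations and suggestions for future research",
}

def blank_coding_sheet(paper_id: int) -> dict:
    """One empty coding sheet per paper; reviewers fill in the answers."""
    return {"paper_id": paper_id, **{dim: None for dim in REVIEW_FRAMEWORK}}

sheet = blank_coding_sheet(paper_id=1)
sheet["concepts"] = "Defines CS as a type of crowdsourcing ..."   # example annotation
print(sorted(k for k, v in sheet.items() if v is None))   # dimensions still to be coded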

 

In the second stage, our aim was to offer new insights into CS in the context of IS. Thus, we went beyond merely mapping or describing current discourses. We extracted core patterns from the data that emerged during the first stage. Specifically, we captured the main patterns and then identified relationships between them. For example, all patterns related to CS concepts and approaches to establishing a CS project were grouped together, as were all patterns related to organizing a CS project. Next, we identified broader patterns from the previous steps. This involved grouping patterns into broader categories, which are presented later in the findings section. Where necessary, we refined those patterns during the analysis process.

In the third stage, we analytically abstracted patterns around the activities of a CS project. We held lengthy discussions on the management process to devise ideas for a framework, such as blocks of activities and their contents. After several rounds of discussions and refinements, we identified four blocks: the design block, episodes of CS implementation block, post-implementation block, and adjustment block. The content of each block came from selected papers (Appendix C). The episode framework is discussed in detail in Section 4.3.

 

++++++++++

4. Findings

In Section 4.1, we present the distribution of selected papers by year and by outlet. In Section 4.2, we collate our findings concerning different views of CS concepts and characteristics. In Section 4.3, we propose an episode framework for conducting and managing a CS project which includes various sequential and parallel steps.

4.1. Distribution of selected papers

Figure 2 shows the number of papers by outlet. Among journals, Computers in Human Behavior has published the most CS papers (nine), and among conferences, the Hawaii International Conference on System Sciences has published six CS papers and the International Conference on Information Systems four.

 

Figure 2: Distribution of selected papers by outlets.
Note: IJHCS = International Journal of Human-Computer Studies; CiHB = Computers in Human Behavior; TFSC = Technological Forecasting and Social Change; First Monday = First Monday; BMC MIDM = BMC Medical Informatics and Decision Making; JAMIA = Journal of the American Medical Informatics Association; MISQ = Management Information Systems Quarterly; JAIS = Journal of the Association for Information Systems; ICIS = International Conference on Information Systems; PACIS = Pacific Asia Conference on Information Systems; HICSS = Hawaii International Conference on System Sciences.

 

Although CS articles have been published in IS outlets since 2011, CS seems to have gained momentum in the past five years, with scholars paying increasing attention to it since 2017 (Figure 3). This indicates that CS is still in its early stages in the field of IS.

 

Figure 3: Distribution of citizen science papers per year.

 

4.2. Concepts related to citizen science

In this section, we discuss the main concepts related to CS that emerged when we analysed data from our selected papers. These include CS, CS participation, CS project, and other closely related concepts.

Citizen science is a relatively new topic in IS. The definitions of citizen science in the literature in general, as well as in the IS literature, vary (Halavais, 2013). We went through the definitions provided for CS in the selected papers and identified two characteristics. First, the definitions differ regarding the scope of participation in CS. For example, participants may take part in data collection and analysis (Huang, et al., 2018), become involved in CS activities in their everyday lives (Levy and Germonprez, 2017), or even participate in writing a scientific publication (Crowston, et al., 2019). Second, although the definitions differ regarding the scope of participation, they refer to the participants in a similar manner. Participants are described as non-professionals or amateur participants (Sprinks, et al., 2017), volunteers (Reed, et al., 2013), or a public audience or citizens (Huang, et al., 2018) who contribute data for scientific research and collaborate with professional scientists.

 

Table 2: Example definitions for citizen science.
Citizen science | Reference
“involves members of the public (non-professional scientists) collaborating with professional scientists to conduct scientific research.” | Silva and Heaton, 2017
“is a type of crowdsourcing in which scientists enlist ordinary people to generate data to be used in scientific research.” | Lukyanenko, et al., 2014b
“is a form of social computation where members of the public are recruited to contribute to scientific investigations.” | Crowston and Prestopnik, 2013
“is research [that relies] on the support of the general public to make progress.” | Harteveld, et al., 2016
“described as research conducted, in whole or in part, by amateur or nonprofessional participants often through crowdsourcing techniques.” | Sprinks, et al., 2017
“involves the general public in research activities that are conducted in collaboration with professional scientists.” | Cappa, et al., 2018
“refers to participation of volunteers in research projects led by professional scientists.” | Palermo, et al., 2017
“refers to partnerships between scientists and the public in scientific research in which data are collected and analyzed in response to a scientific or research-based question.” | Huang, et al., 2018

 

Table 2 shows example definitions of CS extracted from the selected papers. Based on our analysis of the definitions and their context in the literature, we identified common properties of CS and created a characterization of it. The characterization is centred on two properties: individuals (i.e., the persons who take part in CS) and scope (i.e., the scope of individual participation in a given project). Based on these two properties, CS can be characterized as a scientific process in which responsibility for scientific rigour is maintained by the organising scientists, while the main actions and duties are carried out by volunteers with various backgrounds [1].

Based on the characteristics of CS, participation can include citizens’ contributions to analysing (Cappa, et al., 2018) or interpreting data or even to writing a scientific paper (Crowston, et al., 2019) or collaborating in the management of a CS project (Huang, et al., 2018). The characterization is also based on the assumption that a scientific research project involves the participation of scientists in the project. Activities involved in the scientific research project include the collection, categorization, transcription, and analysis of data. We stress that the individual activities are not limited to collecting or analysing data, which is implied in some definitions. Individuals that take part in a CS project may or may not be trained scientists. Citizen science participants are generally called citizen scientists.

In the literature, the term “citizen science project” is described in a way that also provides supporting examples of the characteristics of the term “citizen science”. The literature describes activities that are commonly part of CS projects. For example, Prestopnik, et al. (2017) define CS projects as projects in which “members of the general public are recruited to contribute to scientific investigations.” Moreover, CS research projects often use participants as sensors or data collectors but do not include them in data analysis or in the presentation of research results in the form of scientific reports (Sprinks, et al., 2017). In addition, online or Web-based platforms for data collection are often used for CS projects, which is similar to crowdsourcing (Crowston and Prestopnik, 2013; Sprinks, et al., 2017; Tinati, et al., 2017).

There are concepts, such as crowdsourcing and user-generated content, which are closely related to CS. For example, crowdsourcing specifically refers to a large group of volunteers external to an organization that perform distributed tasks or solve certain problems (Prestopnik and Tang, 2015; Schlagwein and Daneshgar, 2014; Zhou, et al., 2017). Crowdsourcing provides organizations with access to free or low-cost labour for data gathering (Jackson, et al., 2015). Therefore, it has been argued that crowdsourcing is similar to CS. However, the terms differ insofar as CS can be seen as one innovative type of crowdsourcing (Schlagwein and Daneshgar, 2014; Zhou, et al., 2017), where people without any particular prerequisite or preliminary knowledge (Cappa, et al., 2018) participate in scientific processes and generate data for scientific purposes (Lukyanenko, et al., 2014b). When it comes to the difference between crowdsourcing and citizen science, scholars have differing views. For example, Levy and Germonprez (2017) argued that the difference between crowdsourcing and citizen science was that citizen science followed the structure of the scientific process and citizens could intervene in some, or all, stages of that process, while Wiggins and Crowston (2014) suggested that the difference between citizen science and crowdsourcing was not clear, given their overlapping features.

4.3. Episode framework for conducting citizen science projects

We synthesize the current practices involved in conducting a CS project from the selected papers into an Episode framework, which has episodes at its centre (Figure 4). The framework is intended to be used by IS scholars for designing, implementing, and conducting robust CS projects in their research. It provides a mechanism for ensuring that each vital aspect of a CS project is considered. The framework consists of four blocks: the design of pillars block, the episodes of implementation block, the adjustment block, and the post-implementation block. The arrows in Figure 4 denote conceptual moves between the blocks of the framework, which are taken when needed or when the current block ends. We describe the framework below with references; Appendix C provides the lists of references for each block.

 

Figure 4: Episode framework for conducting a citizen science project.

 

4.3.1. Design of pillars

Although CS projects can have different elements and structural relationships (Halavais, 2013), many common features can be identified from the selected articles. Most of the features concern the design of a given project, while other features are connected to the management of a running project. As the design features were analysed further, three categories emerged, which we named approach, participation, and consolidation. The categories form the basic interconnected pillars on which a CS project can be built. We argue that considering all three pillars as a design tool is sufficient to develop a robust project; at the same time, all three pillars need to be considered so that no focal part is ignored. The approach pillar includes plans for the general design of the project, with its structure and goals, while the participation and consolidation pillars describe plans for how participants are to be engaged and how their participation will be supported during the project. The list of pillars and their corresponding references are provided in Appendix C.

The first block in the framework is the design of a CS project as a whole, before the project is actually implemented. The design can be carried out using the approach, participation, and consolidation pillars, all of which are described in the sections below.

Approach pillar: With this pillar researchers can identify the project’s aims, type, intended outcomes, and plans for implementation (cf., Bonney, et al., 2016; Turrini, et al., 2018; Ulahannan, et al., 2020).

The project’s aims may include generating new knowledge, awareness-raising, facilitating in-depth learning, or enabling civic participation by enculturating volunteers in scientific practices and processes (Turrini, et al., 2018; Huang, et al., 2018). The goals and outcomes will depend on the project type and the mode of participation. The latter may involve contracts, contributions, collaboration, co-creation, or collegial working (Shirk, et al., 2012). For example, in the case of a CS education project, the foundations and educational structure should be established in the approach pillar and should be considered in accordance with the primary goal of the project (Price and Lee, 2013).

Once the goals of a project have been set, decisions need to be made regarding its organisation and implementation. Citizen science researchers have argued that reality is often different from the promises made when implementing a CS project. Different project types (e.g., data collection, data processing, and community projects) generate different challenges with respect to choosing appropriate project designs, identifying suitable criteria to measure and evaluate outcomes, and selecting methods to engage new audiences (Bonney, et al., 2016; Crowston, et al., 2019).

In terms of project type, citizen science projects can also take the form of learning labs set up by researchers. These labs enable participants to understand how hypotheses are formulated and how analyses are performed (Harteveld, et al., 2016), as participants in CS projects may not have the skills necessary for hypothesis construction or analysis of results. In a learning lab setting, even novices can learn how to create simple questionnaires and analyse results in a relatively short period. Learning these skills may provide participants with opportunities to participate fully in a CS project from beginning to end (Harteveld, et al., 2016). Moreover, implementation can follow either a process-based or a learning-by-doing approach. In a process-based approach, researchers and participants strictly follow plans. By contrast, a learning-by-doing approach is more flexible and may be adapted to the participants and to the goals of the project (Silva and Heaton, 2017; Lukyanenko, et al., 2014b; Lukyanenko, et al., 2019). For example, participants in a CS project that follows a learning-by-doing approach must develop and mobilize operational and digital skills, a process that depends on existing resources and competencies.

When the number of participants is high, as is likely in a CS project, the plans for implementation need careful consideration because a situation may arise in which participants submit a vast amount of data for evaluation. This challenge can be addressed using automated tools; for instance, an automated scoring system has been used to analyse sociolinguistic and other characteristics of submitted text, as well as the activities of participants (Nagar, et al., 2016).
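As a purely hypothetical illustration of such automated tooling (not the scoring system of Nagar, et al., 2016), the sketch below triages text submissions so that only a subset requires manual evaluation; the features and thresholds are invented for the example.

# Hypothetical sketch: triage text submissions so that only a subset needs
# manual evaluation. The features (length, vocabulary richness) and the
# thresholds are illustrative, not those of any published scoring system.

def needs_manual_review(text: str, min_words: int = 20, min_richness: float = 0.4) -> bool:
    """Flag very short or highly repetitive submissions for manual checking."""
    words = text.lower().split()
    if len(words) < min_words:
        return True
    richness = len(set(words)) / len(words)   # share of distinct words
    return richness < min_richness

def triage(submissions: list[str]) -> tuple[list[str], list[str]]:
    """Split submissions into (auto-accepted, needs manual review)."""
    accepted, review = [], []
    for s in submissions:
        (review if needs_manual_review(s) else accepted).append(s)
    return accepted, review

auto, manual = triage([
    "On 3 May I observed two great tits and a blue tit feeding near the "
    "shoreline, noted the weather conditions, and photographed the nesting site.",
    "ok",
])
print(len(auto), "auto-accepted,", len(manual), "flagged for manual review")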

There is no one-size-fits-all approach to designing and conducting a CS project (Holmer, et al., 2015; Palermo, et al., 2017; Prestopnik and Tang, 2015).

Participation pillar: With this pillar, researchers can plan how to motivate and engage participants. The success of a CS project largely depends on its participants. To encourage motivation and engagement, project management should be transparent, clearly communicating the project’s goals and ethical viewpoints, such as how data is used, who owns the data, who will benefit from the results, and how (Beier, et al., 2019; Sauermann, et al., 2020).

According to Tinati, et al. (2017), motivations can be grouped into four general classes: the desire to contribute, to learn, to be part of a community, and to be challenged, entertained, and play. These usually include both intrinsic and extrinsic motivations. Intrinsic motivations relate to the act of contributing, the user experience, and supporting a worthy cause; extrinsic motivations relate to behaviour influenced by external factors, such as using expertise to gain rewards or to reach an elevated status in a community through a superior position on a leaderboard or by winning competitions.

In addition to motivational factors, the user interface (UI) of an online platform is a central tool for motivating new volunteers to contribute to CS projects (Jackson, et al., 2015; Tinati, et al., 2017). UI design makes it possible to adjust tasks to volunteers’ level of expertise, enabling them to satisfy intrinsic motivational factors such as engaging in purposeful activities together with other contributors (Sprinks, et al., 2017; Tinati, et al., 2017). It allows experienced participants more autonomy in performing tasks (Sprinks, et al., 2017).

In order to obtain data and ensure participation, researchers should focus on engagement in a CS project. Engagement has several dimensions: behavioural activities (sharing information, exploring data, and recruiting others), emotional factors (concern, commitment, and interest), cognitive activities (experiential learning, use of resources and skills), and social experiences (relationships, sharing resources and knowledge). These dimensions of engagement are largely influenced by motivational factors (Phillips, et al., 2019; Reed, et al., 2013). Thus, volunteer engagement is a particularly crucial element of a successful CS project and can be improved by task design (Prestopnik, et al., 2017; Sprinks, et al., 2017). The findings of our literature review show that the division of complex tasks into smaller, precisely defined tasks has a positive effect on volunteer engagement. Task design might also make volunteer engagement more sustained (Jackson, et al., 2015; Sprinks, et al., 2017; Tinati, et al., 2017).

To strengthen volunteer engagement, certain online platform features can be used (Zhou, et al., 2017). Gamification, such as using points and scoreboards, is one approach that can be used to motivate and engage participants (Prestopnik, et al., 2017; Tinati, et al., 2017). If a game has an interesting story, the level of volunteer engagement may increase, with volunteers contributing more to a CS project than they otherwise would (Harteveld, et al., 2016; Prestopnik and Tang, 2015; Zhou, et al., 2017). However, it should be kept in mind that a game with a story ends at some point, which may lead to volunteers no longer participating (Prestopnik and Tang, 2015). Moreover, rewarding volunteers monetarily or in some related fashion is an effective means of increasing and sustaining participation (Steger, et al., 2017), but it may lead to a situation in which the sole motivation for volunteer engagement is the reward (Cappa, et al., 2018). Therefore, a CS project should offer different kinds of roles with distinctive modes, as well as communication mechanisms, to motivate volunteers towards sustained participation (Jackson, et al., 2015; Jackson, et al., 2016; Reed, et al., 2013).
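A minimal sketch of the points-and-leaderboard mechanics mentioned above is given below; the task types and point values are hypothetical and would need to be tuned to a real project's tasks and volunteer base.

# Minimal sketch of points-and-leaderboard gamification for a CS platform.
# The task types and point values are hypothetical; real projects would tune
# them to their tasks and to the motivational profile of their volunteers.
from collections import defaultdict

POINTS = {"classification": 1, "transcription": 3, "validated_contribution": 5}

class Leaderboard:
    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, volunteer: str, task_type: str) -> None:
        """Credit a completed task to a volunteer's running score."""
        self.scores[volunteer] += POINTS.get(task_type, 0)

    def top(self, n: int = 10) -> list[tuple[str, int]]:
        """Return the n highest-scoring volunteers."""
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

board = Leaderboard()
board.record("alice", "classification")
board.record("alice", "validated_contribution")
board.record("bob", "transcription")
print(board.top(3))   # [('alice', 6), ('bob', 3)]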

Consolidation pillar: The aim of this pillar is to provide supporting mechanisms that secure the success of a CS project. Based on the literature, these mechanisms should include at least the following: performance monitoring (task progress, data quality, and volunteer collaboration), volunteer training, and facilitation of collaboration between project personnel, including scientists and volunteers.

Even at the highest level of a project, original plans may change. Therefore, constant monitoring, based on traditional methods (surveys, interviews) or trace data, is necessary. Monitoring is essential for a CS project because volunteers can come and go as they please (Crowston, et al., 2020).

Data quality encompasses a multitude of dimensions, such as completeness, accuracy, consistency, validity, timeliness, currency, integrity, accessibility, precision, lineage, and representation (Jayawardene, et al., 2015; Laranjeiro, et al., 2015). Because data quality is one of the key areas in which challenges to achieving project goals arise, constant monitoring is needed in most projects (Riesch and Potter, 2014). Other examples of challenges include how to deal with datasets containing sensitive information and how to find a balance between transparency and privacy (Anhalt-Depies, et al., 2019) or between open and closed standards (Pearce-Higgins, et al., 2018).
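As an illustration of what such monitoring might look like in practice, the sketch below computes two of the dimensions listed above, completeness and timeliness, over a hypothetical observation record; the field names and the seven-day lag threshold are assumptions made for the example.

# Illustrative data-quality monitoring: completeness and timeliness checks
# over a submitted observation. Field names and thresholds are hypothetical.
from datetime import datetime, timedelta

REQUIRED_FIELDS = ("species", "location", "timestamp")

def completeness(record: dict) -> float:
    """Share of required fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return present / len(REQUIRED_FIELDS)

def is_timely(record: dict, max_lag: timedelta = timedelta(days=7)) -> bool:
    """True if the observation was submitted within max_lag of being made."""
    return record["submitted"] - record["timestamp"] <= max_lag

obs = {"species": "Parus major", "location": "63.1N, 21.6E",
       "timestamp": datetime(2021, 5, 1), "submitted": datetime(2021, 5, 3)}
print(completeness(obs), is_timely(obs))   # 1.0 True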

Most CS projects require flexibility, which can be achieved through dynamic guidance and training of participants (Rosser and Wiggins, 2019). Since training affects data diversity (more trained participants produce more accurate data, while less trained participants produce more diverse data), it should be aligned with the goals of a project (Ogunseye, et al., 2020; Steger, et al., 2017).

By choosing a suitable volunteer training regime, it is possible to improve data quality and volunteer retention (Beier, et al., 2019; Jackson, et al., 2020). Training needs to address all aspects of the project, including vocabulary, which may introduce complications for newcomers (Jackson, et al., 2019). In addition to training, vocabulary-related issues could be addressed by using a common conceptual framework (Castellanos, et al., 2020).

Moreover, participants or volunteers may lack the necessary skills to write formal reports or analyse data, in which case they should be given appropriate instructions on how to carry out the steps of the project (Crowston, et al., 2019). Finally, researchers should carefully consider choosing and clarifying user profiles, from novice to expert, depending on the project’s tasks and missions (Aristeidou, et al., 2017).

Regardless of the approach adopted for a CS project, researchers should consider facilitation mechanisms to monitor progress (Aristeidou, et al., 2017; Huang, et al., 2018; Eames and Egmose, 2011). Good project design helps researchers meet their goals, consider project risks in advance, and make adjustments, as necessary. The selection of proper measurements helps to improve the success of a CS project. Ensuring data quality and evaluating intended outcomes are the most challenging tasks associated with these projects (Crowston and Prestopnik, 2013; Dickinson and Bonney, 2012; Jordan, et al., 2012).

4.3.2. Episodes of CS implementation

When the design of pillars block ends, the process moves to the episodes of CS implementation (arrow a in Figure 4). In this block, one or more episodes may run sequentially or, in some cases, in parallel, after which control may revert to the design of pillars block (arrow b in Figure 4) to revise the initial design, if necessary.

Any CS project contains different phases, some of which are conceptually simple, such as data collection, while others, such as synthesizing findings, are more demanding (Eames and Egmose, 2011; Reed, et al., 2013). In some projects, volunteers can be offered different roles and options to acquire skills for more advanced tasks related to project management. These observations imply that a CS project consists of several phases at the individual and group levels in addition to the project level (Crowston, et al., 2019; Huang, et al., 2018).

CS project management, however, differs noticeably from traditional project management because volunteers can join and leave as they wish; a CS project must therefore provide its participants with meaningful participation. For this reason, we define a new concept for CS projects: the episode. An episode provides a meaningful context for participants regardless of when their participation takes place. Episodes represent the perspectives of individual participants and can be concurrent, differing, and parallel. We suggest that the implementation process should be divided into many different episodes, following one after another or running in parallel as participants join or leave. The stakeholders involved in project planning may be limited to a small group of individuals with various backgrounds, but the episodes of CS implementation should involve all levels of participation and a wide range of tasks. The planned activities in each episode should include participant engagement (Jackson, et al., 2015; Sprinks, et al., 2017; Tinati, et al., 2017) and motivation (Harteveld, et al., 2016; Jackson, et al., 2015; Prestopnik and Tang, 2015; Zhou, et al., 2017).

4.3.3. Adjustments

During each episode, feedback may be generated (arrow c in Figure 4). This feedback may be used to adjust or refine episodes (arrow d in Figure 4). Examples of adjustments include assessing participant engagement, developing strategies to attract new audiences and participants, and strengthening the motivation, learning, and performance of volunteers through suitable feedback (Bonney, et al., 2016; Zhou, et al., 2020; Østerlund, et al., 2020). Feedback for volunteers who have been on the project for a long time can concern task performance or take the form of self-feedback about the progression of volunteer competence (Zhou, et al., 2020). Feedback itself can be collected through interviews, questionnaires, and personal or group observations, for example. If a CS project is conducted on a digital platform, gathering trace data is preferable (Østerlund, et al., 2020).

Feedback provides mechanisms that secure the success of a CS project. These mechanisms should include at least performance monitoring (task progress, data quality, and volunteer collaboration), volunteer training, and facilitation of collaboration between project personnel, including scientists and volunteers. Even at the highest level of a project, original plans may change, so constant monitoring based on traditional methods (surveys, interviews) or trace data is necessary; this is essential for a CS project because volunteers are transitory (Østerlund, et al., 2020).

4.3.4. Post-implementation

After several episodes, the process proceeds to the post-implementation block (arrow e in Figure 4). In this block, researchers, together with participants (depending on the project model), evaluate and measure outcomes and analyse the impacts of the CS project (Crowston and Prestopnik, 2013; Dickinson and Bonney, 2012), report on the results, propose new directions for research, and share knowledge among volunteers (Jackson, et al., 2015). For example, at this stage, researchers may measure the extent to which the project generated new knowledge or created learning opportunities for participants (Turrini, et al., 2018). Finally, the findings of the post-implementation phase can be used as inputs or references for further implementation episodes (arrow f in Figure 4). A CS project often involves an iterative rather than a linear process, and the outputs of episodes are often used as inputs for other episodes.
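To make the topology of the framework concrete, the following sketch encodes the four blocks and the conceptual moves a to f of Figure 4 as a labelled transition structure; it is an illustrative rendering of the figure described above, not an executable project plan.

# Minimal sketch of the Episode framework as a labelled transition structure.
# Blocks and arrows follow the description of Figure 4; everything else is
# illustrative. Episodes inside the implementation block may also run in parallel.

BLOCKS = ["design_of_pillars", "episodes_of_implementation",
          "adjustments", "post_implementation"]

TRANSITIONS = {
    "a": ("design_of_pillars", "episodes_of_implementation"),    # start implementation
    "b": ("episodes_of_implementation", "design_of_pillars"),    # revise the initial design
    "c": ("episodes_of_implementation", "adjustments"),          # feedback generated
    "d": ("adjustments", "episodes_of_implementation"),          # refine episodes
    "e": ("episodes_of_implementation", "post_implementation"),  # episodes completed
    "f": ("post_implementation", "episodes_of_implementation"),  # outputs feed new episodes
}

def next_blocks(current: str) -> list[str]:
    """Blocks reachable from the current block in one conceptual move."""
    return [dst for (src, dst) in TRANSITIONS.values() if src == current]

print(next_blocks("episodes_of_implementation"))
# ['design_of_pillars', 'adjustments', 'post_implementation']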

 

++++++++++

5. Discussion

Citizen science is considered a relatively new approach in IS research. As a result, there is no universal agreement in the IS community on CS-related concepts or on terms closely related to citizen science, such as crowdsourcing, participation, and user-generated content (Jackson, et al., 2015; Prestopnik and Tang, 2015; Zhou, et al., 2017). Most scholars in the IS community have used CS as a tool: it has been used to improve data quality during the data collection phase of research projects and to improve the accuracy and completeness of data. For example, CS can be used to involve participants in the process of data collection in both the short and the long term. Lukyanenko, et al. (2014b) demonstrated that non-professionals with various backgrounds require fewer formal instructions and, thus, should be given more freedom in reporting observations.

Based on our synthesis of the literature, we devised a framework for conducting a CS project. The framework consists of four blocks: design of pillars; episodes of CS implementation; adjustment of activities; and post-implementation. The main feature of our proposed framework is the episode, which captures the view of participants: they can join and leave a project and still expect a meaningful experience. We argue that the process of conducting a major CS project is complex and composed of a network of episodes occurring sequentially or in parallel.

To conduct a CS project, researchers have devised several frameworks. Some frameworks are linear, with participants gathered only at the start. Eames and Egmose (2011) proposed five phases for conducting a CS project: (i) engaging and recruiting participants; (ii) exploring narratives and perceptions of topics; (iii) sharing local knowledge and experience; (iv) envisioning sustainable communities; and (v) developing a community. Other frameworks utilizing digital platforms, like Zooniverse (Reed, et al., 2013), are based on repeated transitions of participants, with little support for specialisation (Jackson, et al., 2016). Our framework can thus be seen as a purposeful synthesis in which a project is designed so that participants are at the centre. Because the episodes in our framework can occur sequentially or concurrently, different groups can participate in different episodes at the same time.

Each block of the framework highlights key factors that need to be addressed when designing a CS project. For example, in the design of pillars block, key factors regarding the approach include organizing the CS project and planning its implementation. The approach can be process-based, where plans are strictly followed, or learning-by-doing, where plans are more flexible (Eames and Egmose, 2011; Reed, et al., 2013). Regardless of which approach is taken, citizen scientists should consider issues such as project design, performance monitoring, collaboration, and participant involvement (Aristeidou, et al., 2017; Huang, et al., 2018; Eames and Egmose, 2011).

CS projects usually deal with phenomena that interest both citizens and scientists. Participant involvement in scientific processes increases scientific literacy and eventually leads to more informed citizens (Kullenberg and Kasperowski, 2016; Levy and Germonprez, 2017; Riesch and Potter, 2014). The involvement of participants is a crucial factor to be considered in the design of pillars block, as it affects the success of a CS research project (Jackson, et al., 2015; Sprinks, et al., 2017; Tinati, et al., 2017). To attract new participants and to improve and sustain participant engagement, researchers should consider several factors: participant motivation through task design as well as the use of proper mechanisms and tools, such as online platforms with gamified elements (Aristeidou, et al., 2017; Cappa, et al., 2018; Crowston, et al., 2019; Prestopnik, et al., 2017; Prestopnik and Tang, 2015; Schlagwein and Daneshgar, 2014; Sprinks, et al., 2017; Tang and Prestopnik, 2017; Tinati, et al., 2017; Zhou, et al., 2017).

The proposed model acknowledges that some adjustment of activities might be needed; this aspect is addressed in the adjustment block. Adjustment activities can include designing feedback procedures that provide support and eventually secure the success of a CS project. Furthermore, different types of feedback (with motivational, reinforcement, and informational meanings) (Jaehnig and Miller, 2007) can be designed in accordance with participants’ aims to inform them about their performance in tasks and activities (Curtis, 2014; Sullivan, et al., 2014).

The measurement of participant performance, such as the number of days of active participation (Boakes, et al., 2016), the number of completed tasks or activities (Diner, et al., 2018), the usefulness of contributed data (Sprinks, et al., 2017), or the total time spent participating each day (Boakes, et al., 2016), should also be considered in the design of pillars. This includes researchers identifying challenges in how to evaluate outcomes (Jagacinski, et al., 2001), how to deal with a lack of skills among participants, and how to improve data quality (in relation to sensitive information, transparency, and privacy) (Anhalt-Depies, et al., 2019). The planned measurements can then be utilized in the adjustment block as well as in the post-implementation block.
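The performance measures listed above can be derived from a simple participation log, as the following hypothetical sketch shows; the log format and the example values are assumptions made for illustration only.

# Illustrative computation of participant performance measures from a
# hypothetical participation log: active days, completed tasks, and
# average time spent per active day.
from collections import defaultdict
from datetime import date

log = [  # (volunteer, day, tasks_completed, minutes_spent) -- hypothetical values
    ("alice", date(2021, 5, 1), 4, 30),
    ("alice", date(2021, 5, 3), 2, 15),
    ("bob",   date(2021, 5, 1), 1, 10),
]

def performance(entries):
    """Aggregate per-volunteer activity measures from the raw log."""
    stats = defaultdict(lambda: {"days": set(), "tasks": 0, "minutes": 0})
    for volunteer, day, tasks, minutes in entries:
        s = stats[volunteer]
        s["days"].add(day)
        s["tasks"] += tasks
        s["minutes"] += minutes
    return {v: {"active_days": len(s["days"]),
                "tasks_completed": s["tasks"],
                "minutes_per_active_day": s["minutes"] / len(s["days"])}
            for v, s in stats.items()}

print(performance(log)["alice"])
# {'active_days': 2, 'tasks_completed': 6, 'minutes_per_active_day': 22.5}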

5.1. Limitations

This research has certain limitations. First, searches were limited to eight journals, 47 journals recommended by 15 SIGs, and proceedings of five major IS conferences (ICIS, ECIS, HICSS, PACIS, and AMCIS). Papers outside these sources were not included in this research. This limitation may have generated bias and only a partial understanding of CS. However, the majority of IS papers appear in our selection. Thus, we believe that our selection of journals and proceedings was appropriate.

Second, we searched for papers based on keywords. This may have eliminated some papers on CS that did not contain our keywords. We managed this issue by having four researchers conduct searches independently within the selected databases. Third, the patterns presented in this research may contain biases introduced during data analysis. We mitigated this issue by analysing data from the selected papers and carefully following established research methods (Paré, et al., 2015; Templier and Paré, 2018; Webster and Watson, 2002). Moreover, each paper was analysed by at least two researchers based on the review framework. We believe this significantly improves reliability and minimizes bias.

Fourth, we have yet to test the framework for conducting a CS project in practice. Future studies could focus on testing the framework and, as a result, improving this proposal. Moreover, although the content of each block was retrieved from the selected papers, researchers may need to elaborate on it by, for example, adding topics or revising and supplementing the content of the blocks.

Finally, our data collection period ended at the end of May 2021, and articles accepted and published in early 2021 may not have been indexed by that point and were thus excluded. Moreover, only articles in English were analysed, so articles and topics in other languages were ignored, which may have biased the results.

5.2. Future research

Our data demonstrated that CS is still understudied in the IS literature; only a few articles with a direct focus on CS itself were found among the sources consulted. A set of practical principles that can be used as a guide to conducting a CS project in IS may help scholars and practitioners to deploy such a project and achieve its aims in practice. There is also a lack of discussion on relationships between scientists and members of the public (Shirk, et al., 2012).

Therefore, we propose the following. First, future research should continue investigating practical guidance of CS projects in IS. Second, researchers should focus on issues related to CS participants, such as the ideal set-up for a specific CS project, and data collection methods that would best suit participants. Third, researchers should also focus on interactions between scientists and members of the public participating in CS projects; communicating scientific processes solely by using scientific terms and concepts might not be easy for some.

Even though the sciences, including the human sciences, have become more data-intensive, data collection and analysis cannot be fully automated. Citizen science has given researchers opportunities to use volunteers to gather, submit, and analyse large quantities of data. Thus, the scale of data collection activities is larger than it would be for a project involving scientists only (Bonney, et al., 2016; Law, et al., 2017). For most of the CS projects discussed in this work, data was collected from participants in the public sphere (Huang, et al., 2018), which may lead to issues related to verification and validation (Lukyanenko, Parsons, et al., 2019; Lukyanenko, Wiggins, et al., 2019). We suggest that future research should focus on addressing those challenges involving issues of verification and validation faced when gathering evidence and evaluating arguments. We emphasize that such research should focus particularly on non-professionals who take part in CS projects, where non-professionals are involved in the processes of observing, taking notes, and collecting and collating data.

 

++++++++++

6. Conclusions

The aim of this research was to understand how citizen science has been studied and used by scholars, and we discussed future directions for CS research in the field of IS. We conducted a literature review based on eight journals, 47 journals recommended by AIS SIGs, and the proceedings of five major IS conferences.

The research makes the following contributions. First, we clarified the concepts of CS, CS project, and CS participant, as well as other closely related terms, such as crowdsourcing and user-generated content, after which we proposed a concise characterization of CS. Second, we presented an episode framework for conducting a CS research project.

The framework consists of four blocks: the design of pillars block, the episodes of CS implementation block, the adjustment block, and the post-implementation block (Figure 4). We argued that the process of conducting a CS project is not necessarily linear. We also discussed the important activities in each block, such as motivating and engaging volunteers taking part in a CS project, and addressing challenges when conducting a CS project in the field of IS. In our proposed framework, the blocks and episodes may occur in parallel. The output of one episode may be used as the input of another episode. By using a parallel approach in our framework, we contribute to the literature as the steps of a CS project are often treated as linear in existing studies (Bonney, et al., 2016).

This research also has implications for practitioners. CS projects differ depending on their aims and on the environment and field in which they are conducted (Shirk, et al., 2012). This may create confusion for researchers choosing an approach for conducting a CS project. We help researchers in this matter by proposing an episode framework for conducting a CS project. Researchers can use the framework as a reference when establishing a CS project. Each block of the framework has different topics, and those topics can be used as a starting point for researchers to clarify and contribute to CS projects. For example, researchers have to think about approaches in the design block, as well as participation, in order to achieve the aims of a given CS project in the episodes block.

 

About the authors

Duong Dang, Ph.D., is an assistant professor in the Department of Computer Science, part of the School of Technology and Innovations at University of Vaasa, Finland. His research interests include digital transformation, cyber security, energy informatics, enterprise architecture, and citizen science.
E-mail: duong [dot] dang [at] uwasa [dot] fi

Teemu Mäenpää, Ph.D., is university lecturer in information systems in the School of Technology and Innovations at the University of Vaasa. His research interests include citizen science, open science, digital and scientific literacy, and management of infrastructure networks.
Direct comments to: teemu [dot] maenpaa [at] uwasa [dot] fi

Juho-Pekka Mäkipää, M.Sc., is a doctoral researcher and university teacher of information systems in the School of Technology and Innovations at the University of Vaasa. His research interests include Web accessibility, usability, user experience, and citizen science.
E-mail: juho-pekka [dot] makipaa [at] uwasa [dot] fi

Tomi Pasanen, Ph.D., is university lecturer of computer sciences in the School of Technology and Innovations at the University of Vaasa. His research interests include problem solving, digital games with applications, learning in digital context, and impacts and opportunities of digitalization.
E-mail: tomi [dot] pasanen [at] uwasa [dot] fi

 

Note

1. It goes without saying that without citizen scientists we cannot run a CS project.

 

References

C. Anhalt-Depies, J.L. Stenglein, B. Zuckerberg, P.A. Townsend, and A.R. Rissman, 2019. “Tradeoffs and tools for data quality, privacy, transparency, and trust in citizen science,” Biological Conservation, volume 238, 108195.
doi: https://doi.org/10.1016/j.biocon.2019.108195, accessed 30 May 2022.

M. Aristeidou, E. Scanlon, and M. Sharples, 2017. “Profiles of engagement in online communities of citizen science participation,” Computers in Human Behavior, volume 74, pp. 246–256.
doi: https://doi.org/10.1016/j.chb.2017.04.044, accessed 30 May 2022.

G.K Awal and K.K. Bharadwaj, 2019. “Leveraging collective intelligence for behavioral prediction in signed social networks through evolutionary approach,” Information Systems Frontiers, volume 21, pp. 417–439.
doi: https://doi.org/10.1007/s10796-017-9760-4, accessed 30 May 2022.

K. Beier, M. Schweda, and S. Schicktanz, 2019. “Taking patient involvement seriously: A critical ethical analysis of participatory approaches in data-intensive medical research,” BMC Medical Informatics and Decision Making, volume 19, article number 90.
doi: https://doi.org/10.1186/s12911-019-0799-7, accessed 30 May 2022.

E.H. Boakes, G. Gliozzo, V. Seymour, M. Harvey, C. Smith, D.B. Roy, and M. Haklay, 2016. “Patterns of contribution to citizen science biodiversity projects increase understanding of volunteers’ recording behavior,” Scientific Reports, volume 6, article number 33051.
doi: https://doi.org/10.1038/srep33051, accessed 30 May 2022.

R. Bonney, T.B. Phillips, H.L. Ballard, and J.W. Enck, 2016. “Can citizen science enhance public understanding of science?” Public Understanding of Science, volume 25, number 1, pp. 2–16.
doi: https://doi.org/10.1177/0963662515607406, accessed 30 May 2022.

F. Cappa, J. Laut, M. Porfiri, and L. Giustiniano, 2018. “Bring them aboard: Rewarding participation in technology-mediated citizen science projects,” Computers in Human Behavior, volume 89, pp. 246–257.
doi: https://doi.org/10.1016/j.chb.2018.08.017, accessed 30 May 2022.

A. Castellanos, M. Tremblay, R. Lukyanenko, and B. Samuel, 2020. “Basic classes in conceptual modeling: Theory and practical guidelines,” Journal of the Association for Information Systems, volume 21, number 4, pp. 1,001–1,044.
doi: https://doi.org/10.17705/1jais.00627, accessed 30 May 2022.

C.C. Conrad and K.G. Hilchey, 2011. “A review of citizen science and community-based environmental monitoring: issues and opportunities,” Environmental Monitoring and Assessment, volume 176, pp. 273–291.
doi: https://doi.org/10.1007/s10661-010-1582-5, accessed 30 May 2022.

A.W. Crall, R. Jordan, K. Holfelder, G.J. Newman, J. Graham, and D.M. Waller, 2013. “The impacts of an invasive species citizen science training program on participant attitudes, behavior, and science literacy,” Public Understanding of Science, volume 22, number 6, pp. 745–764.
doi: https://doi.org/10.1177/0963662511434894, accessed 30 May 2022.

K. Crowston and N.R. Prestopnik, 2013. “Motivation and data quality in a citizen science game: A design science evaluation,” 2013 46th Hawaii International Conference on System Sciences, pp. 450–459.
doi: https://doi.org/10.1109/HICSS.2013.413, accessed 30 May 2022.

K. Crowston, E. Mitchell, and C. Østerlund, 2019. “Coordinating advanced crowd work: Extending citizen science,” Citizen Science: Theory and Practice, volume 4, number 1, p. 16.
doi: https://doi.org/10.5334/cstp.166, accessed 30 May 2022.

K. Crowston, C. Østerlund, T.K. Lee, C. Jackson, M. Harandi, S. Allen, S. Bahaadini, S. Coughlin, A.K. Katsaggelos, S.L. Larson, N. Rohani, J.R. Smith, L. Trouille, and M. Zevin, 2020. “Knowledge tracing to model learning in online citizen science projects,” IEEE Transactions on Learning Technologies, volume 13, number 1, pp. 123–134.
doi: https://doi.org/10.1109/TLT.2019.2936480, accessed 30 May 2022.

V. Curtis, 2014. “Online citizen science games: Opportunities for the biological sciences,” Applied & Translational Genomics, volume 3, number 4, pp. 90–94.
doi: https://doi.org/10.1016/j.atg.2014.07.001, accessed 30 May 2022.

J.L. Dickinson and R. Bonney, 2012. “Introduction: Why citizen science?” In: J.L. Dickinson and R. Bonney (editors). Citizen science: Public participation in environmental research. Ithaca, N.Y.: Cornell University Press, pp. 1–14.
doi: https://doi.org/10.7591/cornell/9780801449116.003.0001, accessed 30 May 2022.

D. Diner, S. Nakayama, O. Nov, and M. Porfiri, 2018. “Social signals as design interventions for enhancing citizen science contributions,” Information, Communication & Society, volume 21, number 4, pp. 594–611.
doi: https://doi.org/10.1080/1369118X.2017.1299779, accessed 30 May 2022.

M. Eames and J. Egmose, 2011. “Community foresight for urban sustainability: Insights from the Citizens Science for Sustainability (SuScit) project,” Technological Forecasting and Social Change, volume 78, number 5, pp. 769–784.
doi: https://doi.org/10.1016/j.techfore.2010.09.002, accessed 30 May 2022.

L. Ebitu, H. Avery, K.A. Mourad, and J. Enyetu, 2021. “Citizen science for sustainable agriculture — A systematic literature review,” Land Use Policy, volume 103, 105326.
doi: https://doi.org/10.1016/j.landusepol.2021.105326, accessed 30 May 2022.

A. Halavais, 2013. “Home made big data!? Challenges and opportunities for participatory social research,” First Monday, volume 18, number 10, at https://firstmonday.org/article/view/4876/3754, accessed 30 May 2022.
doi: https://doi.org/10.5210/fm.v18i10.4876, accessed 30 May 2022.

C. Harteveld, A. Stahl, G. Smith, C. Talgar, and S.C. Sutherland, 2016. “Standing on the shoulders of citizens: Exploring gameful collaboration for creating social experiments,” 2016 49th Hawaii International Conference on System Sciences, pp. 74–83.
doi: https://doi.org/10.1109/HICSS.2016.18, accessed 30 May 2022.

H.B. Holmer, C. DiSalvo, P. Sengers, and T. Lodato, 2015. “Constructing and constraining participation in participatory arts and HCI,” International Journal of Human-Computer Studies, volume 74, pp. 107–123.
doi: https://doi.org/10.1016/j.ijhcs.2014.10.003, accessed 30 May 2022.

J. Huang, C.E. Hmelo-Silver, R. Jordan, S. Gray, T. Frensley, G. Newman, and M.J. Stern, 2018. “Scientific discourse of citizen scientists: Models as a boundary object for collaborative problem solving,” Computers in Human Behavior, volume 87, pp. 480–492.
doi: https://doi.org/10.1016/j.chb.2018.04.004, accessed 30 May 2022.

C. Jackson, C. Østerlund, K. Crowston, M. Harandi, S. Allen, S. Bahaadini, S. Coughlin, V. Kalogera, A. Katsaggelos, S. Larson, N. Rohani, J. Smith, L. Trouille, and M. Zevin, 2020. “Teaching citizen scientists to categorize glitches using machine learning guided training,” Computers in Human Behavior, volume 105, 106198.
doi: https://doi.org/10.1016/j.chb.2019.106198, accessed 30 May 2022.

C. Jackson, C. Østerlund, M. Harandi, D. Kharwar, and K. Crowston, 2019. “Linguistic changes in online citizen science: A structurational perspective,” ICIS 2019 Proceedings, at https://aisel.aisnet.org/icis2019/crowds_social/crowds_social/28, accessed 30 May 2022.

C. Jackson, C. Østerlund, V. Maidel, K. Crowston, and G. Mugar, 2016. “Which way did they go?: Newcomer movement through the Zooniverse,” CSCW ’16: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 624–635.
doi: https://doi.org/10.1145/2818048.2835197, accessed 30 May 2022.

C.B. Jackson, C. Østerlund, G. Mugar, K.D. Hassman, and K. Crowston, 2015. “Motivations for sustained participation in crowdsourcing: Case studies of citizen science on the role of talk,” 2015 48th Hawaii International Conference on System Sciences, pp. 1,624–1,634.
doi: https://doi.org/10.1109/HICSS.2015.196, accessed 30 May 2022.

W. Jaehnig and M.L. Miller, 2007. “Feedback types in programmed instruction: A systematic review,” Psychological Record, volume 57, number 2, pp. 219–232.
doi: https://doi.org/10.1007/BF03395573, accessed 30 May 2022.

C.M. Jagacinski, J.L. Madden, and M.H. Reider, 2001. “The impact of situational and dispositional achievement goals on performance,” Human Performance, volume 14, number 4, pp. 321–337.
doi: https://doi.org/10.1207/S15327043HUP1404_3, accessed 30 May 2022.

V. Jayawardene, S.W. Sadiq, and M. Indulska, 2015. “An analysis of data quality dimensions,” ITEE Technical Report, 2013-01 and 2015-02. School of Information Technology and Electrical Engineering, University of Queensland, at https://espace.library.uq.edu.au/view/UQ:312314, accessed 30 May 2022.

R.C. Jordan, H.L. Ballard, and T.B. Phillips, 2012. “Key issues and new approaches for evaluating citizen-science learning outcomes,” Frontiers in Ecology and the Environment, volume 10, number 6, pp. 307–309.
doi: https://doi.org/10.1890/110280, accessed 30 May 2022.

C. Kullenberg and D. Kasperowski, 2016. “What is citizen science? A scientometric meta-analysis,” PLoS One, volume 11, number 1, e0147152 (14 January).
doi: https://doi.org/10.1371/journal.pone.0147152, accessed 30 May 2022.

N. Laranjeiro, S.N. Soydemir, and J. Bernardino, 2015. “A survey on data quality: Classifying poor data,” 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing (PRDC), pp. 179–188.
doi: https://doi.org/10.1109/PRDC.2015.41, accessed 30 May 2022.

E. Law, A.C. Williams, A. Wiggins, J. Brier, J. Preece, J. Shirk, and G. Newman, 2017. “The science of citizen science: Theories, methodologies and platforms,” CSCW ’17 Companion: Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pp. 395–400.
doi: https://doi.org/10.1145/3022198.3022652, accessed 30 May 2022.

M. Levy and M. Germonprez, 2017. “The potential for citizen science in information systems research,” Communications of the Association for Information Systems, volume 40.
doi: https://doi.org/10.17705/1CAIS.04002, accessed 30 May 2022.

R. Lukyanenko, A. Wiggins, and H.K. Rosser, 2019. “Citizen science: An information quality research frontier,” Information Systems Frontiers, volume 22, pp. 961–983.
doi: https://doi.org/10.1007/s10796-019-09915-z, accessed 30 May 2022.

R. Lukyanenko, J. Parsons, and Y. Wiersma, 2014a. “The IQ of the crowd: Understanding and improving information quality in structured user-generated content,” Information Systems Research, volume 25, number 4, pp. 669–689.
doi: https://doi.org/10.1287/isre.2014.0537, accessed 30 May 2022.

R. Lukyanenko, J. Parsons, and Y. Wiersma, 2014b. “The impact of conceptual modeling on dataset completeness: A field experiment,” International Conference on Information Systems (ICIS 2014).
doi: https://doi.org/10.13140/2.1.4852.6408, accessed 30 May 2022.

R. Lukyanenko, J. Parsons, Y. Wiersma, and M. Maddah, 2019. “Expecting the unexpected: Effects of data collection design choices on the quality of crowdsourced user-generated content,” MIS Quarterly, volume 43, number 2, pp. 623–647.
doi: https://doi.org/10.25300/MISQ/2019/14439, accessed 30 May 2022.

R. Lukyanenko, J. Parsons, Y. Wiersma, G. Wachinger, B. Huber, and R. Meldt, 2017. “Representing crowd knowledge: Guidelines for conceptual modeling of user-generated content,” Journal of the Association for Information Systems, volume 18, number 4.
doi: https://doi.org/10.17705/1jais.00456, accessed 30 May 2022.

J.N. Mosier and S.L. Smith, 1986. “Application of guidelines for designing user interface software,” Behaviour & Information Technology, volume 5, number 1, pp. 39–46.
doi: https://doi.org/10.1080/01449298608914497, accessed 30 May 2022.

Y. Nagar, P. de Boer, and A.C.B. Garcia, 2016. “Accelerating the review of complex intellectual artifacts in crowdsourced innovation challenges,” Thirty Seventh International Conference on Information Systems, at https://doi.org/10.5167/uzh-126367, accessed 30 May 2022.

S. Ogunseye, J. Parsons, and R. Lukyanenko, 2020. “To train or not to train? How training affects the diversity of crowdsourced data,” ICIS 2020 Proceedings, at https://aisel.aisnet.org/icis2020/sharing_economy/sharing_economy/9, accessed 30 May 2022.

C. Østerlund, K. Crowston, and C. Jackson, 2020. “Building an apparatus: Refractive, reflective, and diffractive readings of trace data,” Journal of the Association for Information Systems, volume 21, number 1, at https://aisel.aisnet.org/jais/vol21/iss1/10, accessed 30 May 2022.

E. Palermo, J. Laut, O. Nov, P. Cappa, and M. Porfiri, 2017. “Spatial memory training in a citizen science context,” Computers in Human Behavior, volume 73, pp. 38–46.
doi: https://doi.org/10.1016/j.chb.2017.03.017, accessed 30 May 2022.

G. Paré, M. Tate, D. Johnstone, and S. Kitsiou, 2016. “Contextualizing the twin concepts of systematicity and transparency in information systems literature reviews,” European Journal of Information Systems, volume 25, number 6, pp. 493–508.
doi: https://doi.org/10.1057/s41303-016-0020-3, accessed 30 May 2022.

G. Paré, M-C. Trudel, M. Jaana, and S. Kitsiou, 2015. “Synthesizing information systems knowledge: A typology of literature reviews,” Information & Management, volume 52, number 2, pp. 183–199.
doi: https://doi.org/10.1016/j.im.2014.08.008, accessed 30 May 2022.

J.W. Pearce-Higgins, S.R. Baillie, K. Boughey, N.A.D. Bourn, R.P.B. Foppen, S. Gillings, R.D. Gregory, T. Hunt, F. Jiguet, A. Lehikoinen, A.J. Musgrove, R.A. Robinson, D.B. Roy, G.M. Siriwardena, K.J. Walker, and J.D. Wilson, 2018. “Overcoming the challenges of public data archiving for citizen science biodiversity recording and monitoring schemes,” Journal of Applied Ecology, volume 55, number 6, pp. 2,544–2,551.
doi: https://doi.org/10.1111/1365-2664.13180, accessed 30 May 2022.

T.B. Phillips, H.L. Ballard, B.V. Lewenstein, and R. Bonney, 2019. “Engagement in science through citizen science: Moving beyond data collection,” Science Education, volume 103, number 3, pp. 665–690.
doi: https://doi.org/10.1002/sce.21501, accessed 30 May 2022.

N.R. Prestopnik and J. Tang, 2015. “Points, stories, worlds, and diegesis: Comparing player experiences in two citizen science games,” Computers in Human Behavior, volume 52, pp. 492–506.
doi: https://doi.org/10.1016/j.chb.2015.05.051, accessed 30 May 2022.

N. Prestopnik, K. Crowston, and J. Wang, 2017. “Gamers, citizen scientists, and data: Exploring participant contributions in two games with a purpose,” Computers in Human Behavior, volume 68, pp. 254–268.
doi: https://doi.org/10.1016/j.chb.2016.11.035, accessed 30 May 2022.

C.A. Price and H.-S. Lee, 2013. “Changes in participants’ scientific attitudes and epistemological beliefs during an astronomical citizen science project,” Journal of Research in Science Teaching, volume 50, number 7, pp. 773–801.
doi: https://doi.org/10.1002/tea.21090, accessed 30 May 2022.

J. Reed, M.J. Raddick, A. Lardner, and K. Carney, 2013. “An exploratory factor analysis of motivations for participating in Zooniverse, a collection of virtual citizen science projects,” 2013 46th Hawaii International Conference on System Sciences, pp. 610–619.
doi: https://doi.org/10.1109/HICSS.2013.85, accessed 30 May 2022.

H. Riesch and C. Potter, 2014. “Citizen science as seen by scientists: Methodological, epistemological and ethical dimensions,” Public Understanding of Science, volume 23, number 1, pp. 107–120.
doi: https://doi.org/10.1177/0963662513497324, accessed 30 May 2022.

H. Rosser and A. Wiggins, 2019. “Crowds and camera traps: Genres in online citizen science projects,” 52nd Hawaii International Conference on System Sciences.
doi: https://doi.org/10.24251/HICSS.2019.637, accessed 30 May 2022.

H. Sauermann, K. Vohland, V. Antoniou, B. Balázs, C. Göbel, K. Karatzas, P. Mooney, J. Perelló, M. Ponti, R. Samson, and S. Winter, 2020. “Citizen science and sustainability transitions,” Research Policy, volume 49, number 5, 103978.
doi: https://doi.org/10.1016/j.respol.2020.103978, accessed 30 May 2022.

A. Sidorova, N. Evangelopoulos, J.S. Valacich, and T. Ramakrishnan, 2008. “Uncovering the intellectual core of the information systems discipline,” MIS Quarterly, volume 32, number 3, pp. 467–482.
doi: https://doi.org/10.2307/25148852, accessed 30 May 2022.

D. Schlagwein and F. Daneshgar, 2014. “User requirements of a crowdsourcing platform for researchers: Findings from a series of focus groups,” PACIS 2014 Proceedings, at https://aisel.aisnet.org/pacis2014/195, accessed 30 May 2022.

J. Shirk, H. Ballard, C. Wilderman, T. Phillips, A. Wiggins, R. Jordan, E. McCallie, M. Minarchek, B. Lewenstein, M. Krasny, and R. Bonney, 2012. “Public participation in scientific research: A framework for deliberate design,” Ecology & Society, volume 17, number 2, article number 29.
doi: https://doi.org/10.5751/ES-04705-170229, accessed 30 May 2022.

P.D. da Silva and L. Heaton, 2017. “Fostering digital and scientific literacy: Learning through practice,” First Monday, volume 22, number 6, at https://firstmonday.org/article/view/7284/6302, accessed 30 May 2022.
doi: https://doi.org/10.5210/fm.v22i6.7284, accessed 30 May 2022.

E. Simperl, N. Reeves, C. Phethean, T. Lynes, and R. Tinati, 2018. “Is virtual citizen science a game?” ACM Transactions on Social Computing, volume 1, number 2, article number 6, pp. 1–39.
doi: https://doi.org/10.1145/3209960, accessed 30 May 2022.

J. Sprinks, J. Wardlaw, R. Houghton, S. Bamford, and J. Morley, 2017. “Task workflow design and its impact on performance and volunteers’ subjective preference in virtual citizen science,” International Journal of Human-Computer Studies, volume 104, pp. 50–63.
doi: https://doi.org/10.1016/j.ijhcs.2017.03.003, accessed 30 May 2022.

C. Steger, B. Butt, and M.B. Hooten, 2017. “Safari science: Assessing the reliability of citizen science data for wildlife surveys,” Journal of Applied Ecology, volume 54, number 6, pp. 2,053–2,062.
doi: https://doi.org/10.1111/1365-2664.12921, accessed 30 May 2022.

B.L. Sullivan, J.L. Aycrigg, J.H. Barry, R.E. Bonney, N. Bruns, C.B. Cooper, T. Damoulas, A.A. Dhondt, T. Dietterich, A. Farnsworth, D. Fink, J.W. Fitzpatrick, T. Fredericks, J. Gerbracht, C. Gomes, W.M. Hochachka, M.J. Iliff, C. Lagoze, F.A. La Sorte, and S. Kelling, 2014. “The eBird enterprise: An integrated approach to development and application of citizen science,” Biological Conservation, volume 169, pp. 31–40.
doi: https://doi.org/10.1016/j.biocon.2013.11.003, accessed 30 May 2022.

J. Tang and N.R. Prestopnik, 2017. “Effects of framing on user contribution: Story, gameplay and science,” AMCIS 2017 Proceedings, at https://aisel.aisnet.org/amcis2017/HumanCI/Presentations/4, accessed 30 May 2022.

M. Templier and G. Paré, 2018. “Transparency in literature reviews: an assessment of reporting practices across review types and genres in top IS journals,” European Journal of Information Systems, volume 27, number 5, pp. 503–550.
doi: https://doi.org/10.1080/0960085X.2017.1398880, accessed 30 May 2022.

R. Tinati, M. Luczak-Roesch, E. Simperl, and W. Hall, 2017. “An investigation of player motivations in Eyewire, a gamified citizen science project,” Computers in Human Behavior, volume 73, pp. 527–540.
doi: https://doi.org/10.1016/j.chb.2016.12.074, accessed 30 May 2022.

T. Turrini, D. Dörler, A. Richter, F. Heigl, and A. Bonn, 2018. “The threefold potential of environmental citizen science — Generating knowledge, creating learning opportunities and enabling civic participation,” Biological Conservation, volume 225, pp. 176–186.
doi: https://doi.org/10.1016/j.biocon.2018.03.024, accessed 30 May 2022.

J.P. Ulahannan, N. Narayanan, N. Thalhath, P. Prabhakaran, S. Chaliyeduth, S.P. Suresh, M. Mohammed, E. Rajeevan, S. Joseph, A. Balakrishnan, J. Uthaman, M. Karingamadathil, S.T. Thomas, U. Sureshkumar, S. Balan, N.N. Vellichirammal, and the Collective for Open Data Distribution-Keralam (CODD-K) Consortium, 2020. “A citizen science initiative for open data and visualization of COVID-19 outbreak in Kerala, India,” Journal of the American Medical Informatics Association, volume 27, number 12, pp. 1,913–1,920.
doi: https://doi.org/10.1093/jamia/ocaa203, accessed 30 May 2022.

J. Webster and R.T. Watson, 2002. “Analyzing the past to prepare for the future: Writing a literature review,” MIS Quarterly, volume 26, number 2, pp. xiii–xxiii, and at http://www.jstor.org/stable/4132319, accessed 30 May 2022.

C. Weinhardt, S. Kloker, O. Hinz, and W.M.P. van der Aalst, 2020. “Citizen science in information systems research,” Business & Information Systems Engineering, volume 62, number 4, pp. 273–277.
doi: https://doi.org/10.1007/s12599-020-00663-y, accessed 30 May 2022.

A. Wiggins and K. Crowston, 2014. “Surveying the citizen science landscape,” First Monday, volume 20, number 1, at https://firstmonday.org/article/view/5520/4194, accessed 30 May 2022.
doi: https://doi.org/10.5210/fm.v20i1.5520, accessed 30 May 2022.

A. Wiggins and K. Crowston, 2011. “From conservation to crowdsourcing: A typology of citizen science,” 2011 44th Hawaii International Conference on System Sciences.
doi: https://doi.org/10.1109/HICSS.2011.207, accessed 30 May 2022.

D.E. Wright, L. Fortson, C. Lintott, M. Laraia, and M. Walmsley, 2019. “Help me to help you: Machine augmented citizen science,” ACM Transactions on Social Computing, volume 2, number 3, article number 11, pp. 1–20.
doi: https://doi.org/10.1145/3362741, accessed 30 May 2022.

X. Zhou, J. Tang, Y. Zhao, and T. Wang, 2020. “Effects of feedback design and dispositional goal orientations on volunteer performance in citizen science projects,” Computers in Human Behavior, volume 107, 106266.
doi: https://doi.org/10.1016/j.chb.2020.106266, accessed 30 May 2022.

X. Zhou, J. Tang, T. Wang, and Y. Ma, 2017. “Investigating the impacts of task characteristics in gamified citizen science,” PACIS 2017 Proceedings, at https://aisel.aisnet.org/pacis2017/138, accessed 30 May 2022.

 

Appendix A: Selected SIG recommended journals

 

1. Academy of Management Journal
2. ACM Transactions on Computer-Human Interaction (ACM TOCHI)
3. AIS Transactions on Human-Computer Interaction (AIS THCI)
4. BMC Medical Informatics and Decision Making
5. Communications of Association for Information Systems
6. Communications of the ACM
7. Computers & Security
8. Computers in Human Behavior (CHB)
9. Decision Sciences
10. Decision Support Systems
11. Digital Investigation
12. European Journal of Operational Research
13. Expert Systems
14. Expert Systems with Applications
15. First Monday
16. Government Information Quarterly
17. Health Systems
18. Human-Computer Interaction (HCI)
19. IEEE Intelligent Systems
20. IEEE Transactions on (Engineering) Management
21. IEEE Transactions on Software Engineering
22. Information and Management
23. Information and Organisation
24. Information Systems Frontiers
25. Information Technology and People
26. Information Technology for Development
27. Intelligent Systems in Accounting, Finance and Management
28. International Journal of Human-Computer Studies (IJHCS)
29. International Journal of Information Management
30. International Journal of Information Security
31. International Journal of Medical Informatics (IJMI)
32. Journal of American Medical Informatics Association (JAMIA)
33. Journal of Database Management
34. Journal of Information Security
35. Journal of Information System Security (JISSec)
36. Journal of Medical Internet Research (JMIR)
37. Journal of Public Administration Research and Theory
38. MISQ Executive
39. Organization Science
40. Organizational Behavior and Human Decision Processes
41. Organizational Research Methods
42. Public Administration Review
43. Requirements Engineering
44. Research Policy
45. Socio-Economic Planning Sciences
46. Technological Forecasting and Social Change
47. Telecommunications Policy

 

 

Appendix B: Selected papers

 

1. J. Sprinks, J. Wardlaw, R. Houghton, S. Bamford, and J. Morley, 2017. “Task workflow design and its impact on performance and volunteers’ subjective preference in virtual citizen science,” International Journal of Human-Computer Studies, volume 104, pp. 50–63.
doi: https://doi.org/10.1016/j.ijhcs.2017.03.003
2. H.B. Holmer, C. DiSalvo, P. Sengers, and T. Lodato, 2015. “Constructing and constraining participation in participatory arts and HCI,” International Journal of Human-Computer Studies, volume 74, pp. 107–123.
doi: https://doi.org/10.1016/j.ijhcs.2014.10.003
3. F. Cappa, J. Laut, M. Porfiri, and L. Giustiniano, 2018. “Bring them aboard: Rewarding participation in technology-mediated citizen science projects,” Computers in Human Behavior, volume 89, pp. 246–257.
doi: https://doi.org/10.1016/j.chb.2018.08.017
4. R. Tinati, M. Luczak-Roesch, E. Simperl, and W. Hall, 2017. “An investigation of player motivations in Eyewire, a gamified citizen science project,” Computers in Human Behavior, volume 73, pp. 527–540.
doi: https://doi.org/10.1016/j.chb.2016.12.074
5. E. Palermo, J. Laut, O. Nov, P. Cappa, and M. Porfiri, 2017. “Spatial memory training in a citizen science context,” Computers in Human Behavior, volume 73, pp. 38–46.
doi: https://doi.org/10.1016/j.chb.2017.03.017
6. N.R. Prestopnik and J. Tang, 2015. “Points, stories, worlds, and diegesis: Comparing player experiences in two citizen science games,” Computers in Human Behavior, volume 52, pp. 492–506.
doi: https://doi.org/10.1016/j.chb.2015.05.051
7. M. Aristeidou, E. Scanlon, and M. Sharples, 2017. “Profiles of engagement in online communities of citizen science participation,” Computers in Human Behavior, volume 74, pp. 246–256.
doi: https://doi.org/10.1016/j.chb.2017.04.044
8. J. Huang, C.E. Hmelo-Silver, R. Jordan, S. Gray, T. Frensley, G. Newman, and M.J. Stern, 2018. “Scientific discourse of citizen scientists: Models as a boundary object for collaborative problem solving,” Computers in Human Behavior, volume 87, pp. 480–492.
doi: https://doi.org/10.1016/j.chb.2018.04.004
9. N. Prestopnik, K. Crowston, and J. Wang, 2017. “Gamers, citizen scientists, and data: Exploring participant contributions in two games with a purpose,” Computers in Human Behavior, volume 68, pp. 254–268.
doi: https://doi.org/10.1016/j.chb.2016.11.035
10. M. Eames and J. Egmose, 2011. “Community foresight for urban sustainability: Insights from the Citizens Science for Sustainability (SuScit) project,” Technological Forecasting and Social Change, volume 78, number 5, pp. 769–784.
doi: https://doi.org/10.1016/j.techfore.2010.09.002
11. P.D. da Silva and L. Heaton, 2017. “Fostering digital and scientific literacy: Learning through practice,” First Monday, volume 22, number 6, at https://firstmonday.org/article/view/7284/6302, accessed 30 May 2022.
doi: https://doi.org/10.5210/fm.v22i6.7284
12. A. Halavais, 2013. “Home made big data!? Challenges and opportunities for participatory social research,” First Monday, volume 18, number 10, at https://firstmonday.org/article/view/4876/3754, accessed 30 May 2022.
doi: https://doi.org/10.5210/fm.v18i10.4876
13. R. Lukyanenko, J. Parsons, and Y. Wiersma, 2014. “The impact of conceptual modeling on dataset completeness: A field experiment,” International Conference on Information Systems (ICIS 2014).
doi: https://doi.org/10.13140/2.1.4852.6408
14. Y. Nagar, P. de Boer, and A.C.B. Garcia, 2016. “Accelerating the review of complex intellectual artifacts in crowdsourced innovation challenges,” Thirty Seventh International Conference on Information Systems, at https://doi.org/10.5167/uzh-126367
15. X. Zhou, J. Tang, T. Wang, and Y. Ma, 2017. “Investigating the impacts of task characteristics in gamified citizen science,” PACIS 2017 Proceedings, at https://aisel.aisnet.org/pacis2017/138
16. D. Schlagwein and F. Daneshgar, 2014. “User requirements of a crowdsourcing platform for researchers: Findings from a series of focus groups,” PACIS 2014 Proceedings, at https://aisel.aisnet.org/pacis2014/195
17. K. Crowston and N.R. Prestopnik, 2013. “Motivation and data quality in a citizen science game: A design science evaluation,” 2013 46th Hawaii International Conference on System Sciences, pp. 450–459.
doi: https://doi.org/10.1109/HICSS.2013.413
18. C.B. Jackson, C. Østerlund, G. Mugar, K.D. Hassman, and K. Crowston, 2015. “Motivations for sustained participation in crowdsourcing: Case studies of citizen science on the role of talk,” 2015 48th Hawaii International Conference on System Sciences, pp. 1,624–1,634.
doi: https://doi.org/10.1109/HICSS.2015.196
19. J. Reed, M.J. Raddick, A. Lardner, and K. Carney, 2013. “An exploratory factor analysis of motivations for participating in Zooniverse, a collection of virtual citizen science projects,” 2013 46th Hawaii International Conference on System Sciences, pp. 610–619.
doi: https://doi.org/10.1109/HICSS.2013.85
20. K. Crowston, E. Mitchell, and C. Østerlund, 2019. “Coordinating advanced crowd work: Extending citizen science,” Citizen Science: Theory and Practice, volume 4, number 1, p. 16.
doi: https://doi.org/10.5334/cstp.166
21. C. Harteveld, A. Stahl, G. Smith, C. Talgar, and S.C. Sutherland, 2016. “Standing on the shoulders of citizens: Exploring gameful collaboration for creating social experiments,” 2016 49th Hawaii International Conference on System Sciences, pp. 74–83.
doi: https://doi.org/10.1109/HICSS.2016.18
22. C. Jackson, C. Østerlund, K. Crowston, M. Harandi, S. Allen, S. Bahaadini, S. Coughlin, V. Kalogera, A. Katsaggelos, S. Larson, N. Rohani, J. Smith, L. Trouille, and M. Zevin, 2020. “Teaching citizen scientists to categorize glitches using machine learning guided training,” Computers in Human Behavior, volume 105, 106198.
doi: https://doi.org/10.1016/j.chb.2019.106198
23. X. Zhou, J. Tang, Y. Zhao, and T. Wang, 2020. “Effects of feedback design and dispositional goal orientations on volunteer performance in citizen science projects,” Computers in Human Behavior, volume 107, 106266.
doi: https://doi.org/10.1016/j.chb.2020.106266
24. R. Lukyanenko, J. Parsons, Y. Wiersma, and M. Maddah, 2019. “Expecting the unexpected: Effects of data collection design choices on the quality of crowdsourced user-generated content,” MIS Quarterly, volume 43, number 2, pp. 623–647.
doi: https://doi.org/10.25300/MISQ/2019/14439
25. C. Østerlund, K. Crowston, and C. Jackson, 2020. “Building an apparatus: Refractive, reflective, and diffractive readings of trace data,” Journal of the Association for Information Systems, volume 21, number 1, at https://aisel.aisnet.org/jais/vol21/iss1/10
26. K. Beier, M. Schweda, and S. Schicktanz, 2019. “Taking patient involvement seriously: A critical ethical analysis of participatory approaches in data-intensive medical research,” BMC Medical Informatics and Decision Making, volume 19, article number 90.
doi: https://doi.org/10.1186/s12911-019-0799-7
27. J.P. Ulahannan, N. Narayanan, N. Thalhath, P. Prabhakaran, S. Chaliyeduth, S.P. Suresh, M. Mohammed, E. Rajeevan, S. Joseph, A. Balakrishnan, J. Uthaman, M. Karingamadathil, S.T. Thomas, U. Sureshkumar, S. Balan, N.N. Vellichirammal, and the Collective for Open Data Distribution-Keralam (CODD-K) Consortium, 2020. “A citizen science initiative for open data and visualization of COVID-19 outbreak in Kerala, India,” Journal of the American Medical Informatics Association, volume 27, number 12, pp. 1,913–1,920.
doi: https://doi.org/10.1093/jamia/ocaa203
28. H. Sauermann, K. Vohland, V. Antoniou, B. Balázs, C. Göbel, K. Karatzas, P. Mooney, J. Perelló, M. Ponti, R. Samson, and S. Winter, 2020. “Citizen science and sustainability transitions,” Research Policy, volume 49, number 5, 103978.
doi: https://doi.org/10.1016/j.respol.2020.103978
29. C. Jackson, C. Østerlund, M. Harandi, D. Kharwar, and K. Crowston, 2019. “Linguistic changes in online citizen science: A structurational perspective,” ICIS 2019 Proceedings, at https://aisel.aisnet.org/icis2019/crowds_social/crowds_social/28
30. S. Ogunseye, J. Parsons, and R. Lukyanenko, 2020. “To train or not to train? How training affects the diversity of crowdsourced data,” ICIS 2020 Proceedings, at https://aisel.aisnet.org/icis2020/sharing_economy/sharing_economy/9
31. H. Rosser and A. Wiggins, 2019. “Crowds and camera traps: Genres in online citizen science projects,” 52nd Hawaii International Conference on System Sciences.
doi: https://doi.org/10.24251/HICSS.2019.637

 

 

Appendix C: Episode framework and main content

 

Design of pillars
Approach: Eames and Egmose, 2011; Harteveld, et al., 2016; Holmer, et al., 2015; Lukyanenko, et al., 2014b; Lukyanenko, et al., 2019; Bonney, et al., 2016; Ulahannan, et al., 2020; Price and Lee, 2013; Shirk, et al., 2012; Palermo, et al., 2017; Prestopnik and Tang, 2015; Reed, et al., 2013; Silva and Heaton, 2017; Turrini, et al., 2018
Participation: Beier, et al., 2019; Cappa, et al., 2018; Harteveld, et al., 2016; Jackson, et al., 2015; Phillips, et al., 2019; Prestopnik and Tang, 2015; Prestopnik, et al., 2017; Reed, et al., 2013; Sauermann, et al., 2020; Sprinks, et al., 2017; Tinati, et al., 2017; Zhou, et al., 2017
Consolidation: Anhalt-Depies, et al., 2019; Aristeidou, et al., 2017; Beier, et al., 2019; Bonney, et al., 2016; Crowston and Prestopnik, 2013; Crowston, et al., 2019; Dickinson and Bonney, 2012; Eames and Egmose, 2011; Huang, et al., 2018; Jackson, et al., 2019; Jackson, et al., 2020; Jayawardene, et al., 2015; Jordan, et al., 2012; Laranjeiro, et al., 2015; Nagar, et al., 2016; Ogunseye, et al., 2020; Pearce-Higgins, et al., 2018; Riesch and Potter, 2014; Rosser and Wiggins, 2019; Steger, et al., 2017; Østerlund, et al., 2020

Episodes of CS implementation: Harteveld, et al., 2016; Jackson, et al., 2015; Prestopnik and Tang, 2015; Sprinks, et al., 2017; Tinati, et al., 2017; Zhou, et al., 2017

Adjustment activities: Aristeidou, et al., 2017; Bonney, et al., 2016; Crowston, et al., 2019; Zhou, et al., 2020; Østerlund, et al., 2020

Post-implementation: Crowston and Prestopnik, 2013; Dickinson and Bonney, 2012; Jackson, et al., 2015; Turrini, et al., 2018

 

 


Editorial history

Received 30 June 2022; revised 29 September 2022; accepted 2 October 2022.


This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The anatomy of citizen science projects in information systems
by Duong Dang, Teemu Mäenpää, Juho-Pekka Mäkipää, and Tomi Pasanen.
First Monday, Volume 27, Number 10 - 3 October 2022
https://firstmonday.org/ojs/index.php/fm/article/download/12698/10719
doi: https://dx.doi.org/10.5210/fm.v27i10.12698