First Monday

Increasing user participation: An exploratory study of querying on the Facebook and Twitter platforms by Caren Crowley, Wilfried Daniels, Rafael Bachiller, Wouter Joosen, and Danny Hughes



Abstract
Participatory applications frequently rely upon a crowd–sourced community of users who contribute data and content to deliver a service. The success or failure of participatory applications is dependent on developing and maintaining a community of responsive users. This paper reports the results of an exploratory 30–day study examining user responsiveness to query messages. In total 3,055 check–in requests were sent via the online social networks Facebook or Twitter to 70 participants who were randomly recruited using a chain referral process, wherein existing users recruited others to participate.

Contents

Introduction
Querying in participatory applications: A network perspective
Methodology
Results
Discussion: Maximizing user responsiveness
Conclusions and future work

 


 

Introduction

Participatory applications are defined as online applications that use data and content, contributed by a group of participants, to deliver a service. Data and content may be provided voluntarily or in response to specific requests from the application. The quality of service offered by a participatory application is thus dependent upon the size and responsiveness of the participant group. Participants in this class of applications may donate their resources freely, participate in return for free access to the resulting application, or may receive direct financial compensation. There is strong evidence that payments do not need to be high in order to stimulate participation, as can be seen in the case of Amazon’s Mechanical Turk (AMT) platform with users completing Human Intelligence Tasks (HITs) for relatively low payments, averaging US$1.38 per hour (Mason and Watts, 2009). The participatory sensing model is scalable, low cost and enables the development of a new class of user–centric applications.

Participatory applications may be divided into two classes: automatic and manual. Automatic participatory applications are defined as participatory applications wherein software, running on a participant’s device, responds automatically to queries from the participatory application. For example, Nericell (Mohan, et al., 2008) realizes a live traffic service using sensor data provided by an application running on a participant’s smart phone. Manual participatory applications, on the other hand, require participants to actively respond to queries; the application may, for instance, request data or content in relation to a specific location or current news topic. Demirbas, et al. (2010), for example, contribute a real–time weather monitoring service built from weather reports composed by users over the Twitter Online Social Network (OSN). Compared to automatic participatory applications, manual participatory applications allow for more flexible modes of participation; however, their performance is limited by the responsiveness of the participants. For example, a weather monitoring system might be rendered useless if participants take too long to upload weather observations.

The participatory application model is being applied in a growing range of application domains. Examples of automatic participatory applications include the sensing of real–time traffic conditions (Mohan, et al., 2008), human activity recognition (Eisenman, et al., 2007) and sleep cycle analysis (Borazio, et al., 2010). These automatic participatory applications use sensors embedded in smart phones, together with a client application, to automatically gather the data required to realize the application. The role of the user is thus limited to installing the necessary application. Examples of manual participatory applications include a participatory weather monitoring application (Demirbas, et al., 2010) and the on–demand generation of multimedia microblogs (Gaonkar, et al., 2008). Manual participatory applications are capable of producing a new generation of user–centric applications where data and content are tailored to suit the needs of specific users. This can be clearly seen in the piloting of the Micro–Blog application, in which users request and receive travel recommendations for specific locations (Gaonkar, et al., 2008). Such applications are low cost, with users providing data and content for little or no payment, and scalable across a large and growing user base. However, the benefits of manual participatory applications must be balanced against the inherent risks of expanded human involvement.

Humans are unreliable, and the risks of relying on human involvement are twofold: Firstly, there is a risk that query messages will be lost if the user does not notice them or chooses to ignore them. Secondly, users may take an arbitrarily long time to respond to a message, which is a significant problem for time–sensitive applications. This leads us to an important question: What can be done to increase user responsiveness in manual participatory applications? While research has been performed on making applications more attractive to participants (Chen and Sakomoto, 2013), less research has been performed to identify the application–independent factors that impact user responsiveness. We seek to address this gap and answer the following two research questions: Firstly, how do the contextual factors of a query message affect user responsiveness? Secondly, how do user attributes affect responsiveness to query messages? Driven by their increasing use in participatory applications for both recruitment and communication, we use Online Social Networks (OSNs) as our communication medium.

Communication in participatory applications

Research on automatic participatory applications has focused primarily on building appropriate software architectures (Roo, et al., 2012; Rachuri, et al., 2011) and the deployment of sensing capabilities in real–world settings (Miluzzo, et al., 2008; Eisenman, et al., 2007). To highlight the importance of responsiveness to query messages and understand why it has received limited attention in the literature, we examine the difference between querying (or tasking) in the automatic and manual participatory application models respectively. Christin, et al. (2011) provide a detailed overview of a typical automatic participatory sensing application, as shown in Figure 1. In such a model, Querying or Tasking by the application is simplified because communication occurs between the application and software installed on participants’ devices. The Tasking component thus automatically distributes sensing tasks to the devices. The task specifies the data to be collected (e.g., traffic or weather conditions), the modalities used to collect it (e.g., accelerometer or microphone), and the location and timeframe of interest. Data is collected automatically by the application running on participants’ devices whenever the participant happens to be in the desired location during the desired timeframe, and is transmitted to the servers of the participatory application.

 

 
Figure 1: Architecture of an automatic participatory application, taken from Christin, et al. (2011).

 

Communication in manual participatory applications

In manual participatory applications the user is an active participant with at least some element of direct communication between the user and the application. In such a model Querying is between the device and user. The user must decide whether or not they want to complete a query. Users may have to alter their behaviour and/or location to undertake tasks on demand and then report the required data to the application. Critically, such queries need to be noticed and responded to within the lifetime of the query. In the case of very time–sensitive information, such as weather monitoring, this may be limited to 60 minutes or less. Lane, et al. [1] argue that in the case of applications where users must consciously choose to answer queries, socio–technical techniques must be developed to encourage user involvement. Such techniques are necessary in order to ensure the participation of a large community of users and to help mitigate the negative impact that arises when a participatory application interrupts normal user behaviour.

Responsiveness in manual participatory applications

Prior research has confirmed that users are willing to create and capture data or other forms of content for use by participatory applications (Christin, et al., 2011; Hoseini–Tabatabaei, et al., 2013). Brabham (2008) surveyed 651 users who supplied photos to the iStockphoto application. Brabham (2008) found that in addition to monetary payment, a desire to develop individual skills and to have fun were important motivators to participate in the application. However, even in cases where users are motivated to participate other more contextual factors can significantly affect participation. In particular, high rates of message loss and slow response times remain a significant impediment to the development and widespread adoption of the participatory application model (Demirbas, et al., 2010). Lane, et al. [2] argue that in the case of applications where users must actively answer query messages, socio–technical approaches must be developed to encourage user involvement. Such techniques are necessary to minimise the negative effects of unreliable human behaviour (Xiao, et al., 2013). The responsiveness of users is likely to vary due to differences in how they interact with the application and their current context and relationships with other users. By better understanding general usage patterns and the interactions between user attributes and contextual conditions, we can reduce the burden of participation on users and increase user responsiveness.

 

++++++++++

Querying in participatory applications: A network perspective

This paper applies a social network perspective to the study of user responsiveness in manual participatory applications. Prior work on participatory applications focuses primarily on technical attributes of the application architecture (Christin, et al., 2011; Hoseini–Tabatabaei, et al., 2013). A social network perspective focuses attention on the interactions or ties between the application and the participants. In this paper, queries are envisioned as ties directed from the application to the user, while response messages are ties directed from the user to the application. The application and its users are therefore embedded in a network that is coordinated through the queries sent by the application (Tsai, 2001). We use a network perspective to focus attention on the combined influence of both user and query attributes on response messages. In so doing, we are able to understand how variations in query messages and user attributes affect the likelihood of message loss and the time delay of user response messages (i.e., latency).

Attributes of queries

Querying is a significant component of existing manual participatory applications; however, to date, research has not explicitly focused on enhancing responsiveness to query messages. Gaonkar, et al. (2008) developed an experimental application, ‘Micro–Blog’, which was tested with a small group of 12 users. Using the embedded capabilities of smart phones, participants were asked to upload blog entries involving a mix of audio, image and text content. While users were free to upload content according to their personal preferences, the trial found that users were much more likely to upload content in response to query requests [3]. While Gaonkar, et al. (2008) do not address response times specifically, they do give users an option to set queries as active for a specified time only, for instance one hour in the case of highly time–sensitive requests. In addition, Gaonkar, et al. (2008) report that most microblogs are uploaded after 3 PM, with the highest density of blog uploads between 5 PM and 9 PM.

The timing of sent messages is thus clearly an important attribute that is likely to influence responsiveness. Prior research on survey responsiveness found that surveys are more likely to be completed if they are received on a Tuesday or Wednesday afternoon relative to other times of the week (Dillman, et al., 2009). The importance of time of day has also been recognised by social applications such as Groupon, which sends promotional e–mail messages to its members at 10 AM, as this time was found to be optimal for its target audience of stay–at–home parents (Park and Chung, 2012).

The rate of querying is also important (i.e., the number of messages that are sent to a user within a given time period). While more frequent querying may improve the freshness or spatial resolution of application data, it is intuitive that users will only be capable of handling a limited number of query messages. Lane, et al. (2008) argue that the probability of human cooperation is likely to fall as the daily barrage of queries causes fatigue and eventually annoyance.

Online social networks (OSNs) are increasingly being used as the primary communication channel for participatory applications. While early participatory applications tended to use custom–designed communication architectures (Christin, et al., 2011), this approach has two major disadvantages: firstly, it is difficult to build an adequate user community for a single participatory application, and secondly, it requires that users adopt yet another communication medium. The popularity of OSNs is driven by two primary factors. Firstly, OSNs such as Facebook and Twitter have a massive existing user population from which the participatory application developer can recruit. At the time of writing, Facebook has over 1.2 billion monthly active users (Facebook, 2014), while Twitter has over 500 million (Twitter, 2014). Secondly, these platforms provide a natural and well understood mechanism through which applications can interact with users across many different devices (from desktops to mobile phones). Examples of participatory applications that use OSNs as their communication medium include CenceMe (Miluzzo, et al., 2008), SociableSense (Rachuri, et al., 2011) and Weather Radar (Demirbas, et al., 2010). While these proof–of–concept applications show the feasibility of using OSNs to support participatory sensing, more detailed research is necessary to understand the factors that drive user participation on these networks.

Attributes of users

In terms of user attributes, there is an increasing emphasis on the social and emotional issues that influence participation in applications (Chen and Sakomoto, 2013). In this paper, we examine the influence of remotely observable social factors on participation. Specifically, we focus on the tie strength between individual participants and the person who recruited them to join the application. Granovetter (1973) developed the concept of tie strength, distinguishing between strong and weak ties. Gilbert and Karahalios (2009) apply this concept of tie strength to online networks, demonstrating through a lab–based experiment that it is possible to use OSN profile data to remotely measure tie strength between OSN users with an accuracy of 85 percent. DiMaggio and Powell (1983) find that, under conditions of uncertainty, potential adopters turn to prior adopters for guidance. Based upon these prior results, we anticipate that stronger ties between participants and their recruiters will lead to greater levels of participation. If this holds true, it makes the case for a decentralized recruitment process, wherein current participants play a role in recruiting new members.

In addition, we examine the path length between the application user and the individuals managing the application. We use this measure as a proxy for trust in the application and to test scalability. The more distant a participant is from those using their data, the more likely they are to have trust and privacy concerns. Efstratiou, et al. (2012) investigate user perceptions of privacy and acceptability for participatory applications using surveys and interviews, and find that users had significant concerns over how data was released to other users of the application. As participatory applications evolve from the small–scale research prototypes available today to large–scale systems, it is infeasible for all participants to know the application managers. As such, if path length were found to have a significant effect on participation, this would limit the scalability of such applications.

The devices through which users respond to queries are also likely to be important. Advances in smart phone technology mean that non–professionals are likely to possess all the necessary support to quickly and conveniently create, capture and report data and other content to the application (Hoseini–Tabatabaei, et al., 2013). However, while smart phones are becoming increasingly ubiquitous, the limited user interface of these devices and prohibitions against their use in the workplace may have a negative effect on the responsiveness of their users.

Attributes of responses

When considering the question of user responsiveness, two attributes of response messages are critical: how many of the queries are lost and the latency of the responses. A lost query is one that is not responded to within the lifespan of the message. The application manager is responsible for deciding this lifespan; for instance, time–sensitive queries on live traffic data may only be ‘live’ for 60 minutes, and any query not responded to within this timeframe will be deemed lost. Latency refers to how long the participant took to respond. User responsiveness is especially critical for time–sensitive applications. This can be clearly seen in the case of the Twitter Weather Radar application, which implements a participatory weather monitoring system (Demirbas, et al., 2010). The application used a dedicated Twitter account to send query ‘tweets’ (i.e., Twitter messages) to its ‘followers’ (i.e., subscribers to the account). However, in terms of user participation, Demirbas, et al. (2010) reported only a 15 percent response rate to queries (i.e., 85 percent of queries were lost) and slow response times, with 50 percent of responses having a latency of over 30 minutes. While the high accuracy of the application demonstrates the potential of participatory sensing applications, slow user response times and high loss rates are a clear pitfall of manual sensing approaches.
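The study’s tooling is not reproduced here; purely as a minimal illustration of these two response attributes, the Python sketch below classifies a single query as lost or answered and computes its latency for an assumed lifetime (the function and field names are hypothetical, not taken from the study’s software).

    from datetime import datetime, timedelta
    from typing import Optional, Tuple

    def classify_response(sent_at: datetime,
                          responded_at: Optional[datetime],
                          lifetime: timedelta = timedelta(minutes=60)) -> Tuple[bool, Optional[float]]:
        """Return (lost, latency_in_seconds) for a single query message.

        A query is lost if no response arrives within its lifetime; otherwise
        its latency is the delay between sending and the response.
        """
        if responded_at is None or responded_at - sent_at > lifetime:
            return True, None
        return False, (responded_at - sent_at).total_seconds()

    # Example: a query sent at 14:00 and answered at 14:45 is not lost and has a
    # latency of 2,700 seconds; the same response arriving at 15:30 would be lost.
    lost, latency = classify_response(datetime(2012, 12, 20, 14, 0),
                                      datetime(2012, 12, 20, 14, 45))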

Nazir, et al. (2008) developed and launched three experimental social gaming applications on Facebook and used the data collected from these applications to examine the response times of messages sent between users (i.e., the time that a user must wait for a response from another user). Message requests were sent in an unpredictable fashion and users were often located in different countries, which increased the unpredictability of when messages were sent. The authors found that response times for users located in foreign countries were the same as for users located in the same country as the application, demonstrating that geographic dispersion is not a barrier. The authors report an average response time of 16.52 hours, with the longest response taking as much as 567 hours. If such high response times were inherent in participatory application architectures, this would preclude the development of time–sensitive applications. This clearly motivates further research into user responsiveness.

Conceptual framework

A network perspective provides an understanding of message responsiveness as the outcome of the combined influence of actor attributes and tie attributes. In this case, the actor types are the participants and the application (i.e., a 2–mode network). The ties, which link these actors, are directed. Queries are sent from the application to the participants while responses are sent from participants to the application. While a large body of previous work has investigated how to build participatory sensing architectures (Hoseini–Tabatabaei, et al., 2013), the relationship between query attributes, user attributes and responsiveness has yet to be systematically explored. Motivated by this issue we conducted an empirical study that involved the recruitment and automatic querying of experimental participants. Figure 2 visualises this theoretical framework and illustrates how this network operates along with the relevant Dependent Variables (DV) and Independent Variables (IV). The independent variables are those variables that are controlled and varied during the experiment, in order to understand their impact on the dependent variables (Kerlinger, 1986). The precise definition of the variables and how they were measured can be found later in this paper.

 

 
Figure 2: Theoretical framework — A network perspective on user responsiveness.

 

 

++++++++++

Methodology

In order to examine how contextual factors affect message loss and user responsiveness, we designed an original experiment that ran for 30 days, from 15 December 2012 to 14 January 2013. Four of the authors of this paper sent messages to 10 random contacts on both Facebook and Twitter, requesting that they (i.) register to participate in the study and (ii.) forward the participation request to their OSN contacts. No explicit incentive was offered for participation. Volunteers were thus recruited using a chain referral process, wherein each user who received the recruitment message and registered online was invited to share a personalized recruitment message with their friends. In this way, the recruitment message spread across the social networks according to how users chose to redistribute it. User recruitment peaked within the first five days, but continued throughout the 30 days of the experiment. In total, 70 users were recruited in this way.

During the registration process, users were clearly informed that the purpose of the experiment was scientific research and that participation would involve a simple online action. Once registered, users began to receive query messages from the application; in total, 3,055 messages were sent. The messages simply asked users to ‘check–in’ with the experiment by clicking a Web link in the message.

We then varied a number of factors to examine their effect on message loss and message response time. In particular, we varied the rate of messaging and the time of messaging. In addition, we were able to study natural variance in the user community in terms of: the tie strength between participants and their recruiter, the path length between themselves and the application manager, the type of device used and the OSN used (Facebook or Twitter). Following the experiment, an online survey of participants was conducted to gather demographic information and to support the assessment of tie strength between participants and their recruiter.

The methodology employed in this study is broken down into three stages: recruitment of participants, the empirical study and a follow–up survey. Each stage is described in the following sections.

Recruitment of participants

The purpose of the recruitment phase was to gather sufficient experimental participants to support the empirical study. We selected viral recruitment using OSNs as this approach is increasingly being applied to build and support participatory sensing applications. Furthermore, this kind of viral recruitment is essential for any participatory application that does not have access to a large existing user base. As with the work of Demirbas, et al. (2010) and Miluzzo, et al. (2008), the success of our experiment shows that OSNs are a feasible mechanism to recruit and maintain contact with application participants.

In order to recruit users, we created a short message asking individuals to participate in our experiment. This message included an HTTP link to a Web–based registration system, through which users were invited to register their name, social network username(s) and e–mail address. The recruitment message was limited to 140 characters, the maximum size of a single Twitter message, and read as follows: “KU Leuven is conducting an experiment on user performance in participatory applications: please help. More information and sign up at [link]”, where [link] is an HTTP link to the sign–up and information Web page. Volunteers were recruited using a chain referral process: four of the authors of this paper shared the link with 10 randomly selected contacts in their chosen OSNs (Facebook or Twitter). In addition, each incoming participant was requested to share the recruitment link with their own OSN friends during the sign–up phase. The message thus propagated across the OSNs according to the decisions of users to participate (or not) and to share the recruitment message (or not). In total, 70 participants were recruited for the study in this manner. Figure 3 shows a private Facebook message (a) and a directed tweet (b) as sent by the application, as users would have received them.

 

 
Figure 3: Check–in messages on (a) Facebook and (b) Twitter.

 

The graphs shown in Figure 4 are sociograms wherein experimental participants are represented as nodes and social ties are represented as edges. The ties shown in light grey are prior relationships (i.e., friendships in Facebook and followers in Twitter). The links shown in dark blue are relationships that were successfully used to recruit an experimental participant. Of these participants, one user (one percent) used only Twitter, 16 users (23 percent) used only Facebook and 53 users (76 percent) used both social networks. The maximum path length that a successful recruitment message travelled in the OSN was three hops. For context, the average path length separating two users of Facebook and Twitter is approximately four hops (Backstrom, 2011).

 

 
Figure 4: Recruitment sociograms for Facebook (left) and Twitter (right).

 

In terms of geography, participants were widely distributed, being located in 11 countries. Participants volunteered from Belgium (57 percent), the U.K. (18 percent), Australia (nine percent), the U.S. (five percent), Spain (three percent), Poland (two percent), India (two percent), Ireland (one percent), Sri Lanka (one percent), Portugal (one percent) and Andorra (one percent).

Empirical study

Following self–registration, participants began to receive messages from the application at random times during the day, but at a controlled rate of messaging. The distribution of messages to users via the online social networks was scheduled using standard Linux CRON jobs. Custom OSN client applications were created to manage the automatic messaging of participants. In the case of Twitter, the client sent a directed tweet (Twitter message) to participants; in the case of Facebook, the client sent a private chat message to participants. During the 30–day experiment a total of 3,055 query messages were sent to the 70 participants in the fashion described above.
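The scheduling scripts themselves are not included in the paper; the sketch below, written in Python rather than the unstated implementation language, only illustrates the general approach of a cron–driven job that picks random send times for each participant and hands each query to the appropriate OSN client (the data layout and helper names are assumptions).

    import random
    from datetime import datetime, timedelta

    CHECKIN_TEXT = "OSN connectivity - click this link to help by checking-in: {link}"

    def plan_daily_queries(participants, rate_for, now=None):
        """Build a one-day send schedule as (send_time, participant) pairs.

        `participants` is any iterable of participant records and `rate_for`
        maps a participant to their current messages-per-day rate; both are
        placeholders for whatever storage the real system used. A daily cron
        job could call this and pass the schedule to a dispatcher that invokes
        the Facebook or Twitter client for each entry.
        """
        now = now or datetime.now()
        schedule = []
        for p in participants:
            for _ in range(rate_for(p)):
                # Pick a random second within the next 24 hours.
                offset = timedelta(seconds=random.randint(0, 24 * 3600 - 1))
                schedule.append((now + offset, p))
        schedule.sort(key=lambda entry: entry[0])
        return schedule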

A simple query message and task was deliberately chosen over an interactive participatory application because the purpose of this exploratory study was to examine the application–independent effect of social and contextual factors on user responsiveness, regardless of the inherent appeal of a given application. Messages sent to participants contained the following text: “OSN connectivity — click this link to help by checking–in: [link]”, where [link] is an HTTP link to the online check–in Web page. The check–in Web page is distinct from the registration Web page and presents users with background information on the experiment and on how many ‘check–ins’ they have completed. The Web page also allows participants to either drop out of the experiment or share the recruitment message with their OSN contacts. This simple check–in activity is similar to the effort required to participate in applications such as FourSquare (www.foursquare.com) and can be viewed as a minimalist task.

We also explored the effect of rate of querying on user responsiveness, as follows. Initially all users received one message per day (24 hours), with messages automatically scheduled and sent at random times of the day and night. However, on day 15, the group was randomly split into two groups (based on odd and even user IDs). Half of the users were placed into the “rate–experiment” group and half into the “control” group. The rate of querying was then increased every few days for the “rate–experiment” group such that, by the end of the experiment, half of the users were receiving five messages per day.
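The exact ramp–up schedule between days 15 and 30 is not reported; purely as an illustration of this design, the sketch below assigns users to the two groups by user ID parity and steps the rate–experiment group up to five messages per day (the step days, and which parity forms which group, are assumptions).

    def daily_rate(user_id: int, day: int) -> int:
        """Messages per day for a given user on a given day of the 30-day study.

        Days 1-14: everyone receives one query per day. From day 15 the
        rate-experiment group (odd user IDs in this sketch) is stepped up
        every few days until it reaches five queries per day; the exact
        step days are assumptions, not reported values.
        """
        if day < 15 or user_id % 2 == 0:        # control group keeps one query per day
            return 1
        steps = {15: 2, 19: 3, 23: 4, 27: 5}    # assumed ramp-up points
        rate = 1
        for start_day, r in sorted(steps.items()):
            if day >= start_day:
                rate = r
        return rate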

As users ‘checked–in’ with the application, we were able to gather information on the attributes of the response message. Each request sent to an OSN user embeds a unique Web link that associates each response with its corresponding query, OSN and username. When a user follows the check–in link, the Web server resolves the IP address of the user to a location, which is used to calculate their local time. The browser agent string, a standard feature of the HTTP protocol (Fielding, et al., 1999), is then used to identify the device type through which the user is responding. It should be noted that our approach to gathering user attributes records no more data than a standard Web site tracker. Naturally, gathering additional metadata on users would allow for richer models; however, this comes at the cost of more invasive monitoring.
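A minimal sketch of such a check–in endpoint is given below, using Flask as an assumed Web framework; the token–to–query lookup, the storage layer and the IP–to–location resolution are placeholders rather than the study’s actual implementation.

    from datetime import datetime
    from flask import Flask, request

    app = Flask(__name__)
    QUERIES = {}  # token -> {"user": ..., "osn": ..., "sent_at": datetime}, filled by the sender

    MOBILE_HINTS = ("Android", "iPhone", "iPad", "Mobile")

    @app.route("/checkin/<token>")
    def checkin(token):
        query = QUERIES.get(token)
        if query is None:
            return "Unknown check-in link", 404
        agent = request.headers.get("User-Agent", "")
        record = {
            "user": query["user"],
            "osn": query["osn"],
            "latency_s": (datetime.utcnow() - query["sent_at"]).total_seconds(),
            "device": "cellular" if any(h in agent for h in MOBILE_HINTS) else "non-cellular",
            "ip": request.remote_addr,  # resolved to a coarse location and local time offline
        }
        # store(record)  # persistence omitted in this sketch
        return "Thanks for checking in!"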

Follow–up survey

Following completion of the experiment, all enrolled participants were sent an e–mail message asking them to complete a short online survey. In addition to basic demographic information about participants, we collected information allowing us to calculate the tie strength between each participant and the person who recruited them to join the experiment. Two reminder notices were sent, three days apart, resulting in a completion rate of 78 percent (54 surveys). Responses to the demographic questions indicated that the majority of respondents were male (80 percent) and aged between 18 and 35 years. Clearly our sample is not representative of the general public; however, the age and gender profile is representative of social media users (Duggan and Brenner, 2013).

The concept of tie strength was introduced by Granovetter [4] and is defined as a ‘combination of the amount of time, emotional intensity, intimacy (mutual confiding), and the reciprocal services which characterise the tie’. Gilbert and Karahalios [5] take this definition of tie strength and develop five indicator questions to measure tie strength amongst Facebook users. In the follow–up survey we adapt three of these questions to develop a measure of tie strength [6] between individual participants and the person who recruited them to join the study (see Table 1).

 

Table 1: Tie strength between users and recruiter.
Q1. How strong is your relationship with the person who recruited you for the experiment?
(barely know them ⇒ we are very close)
Q2. How upset would you be if this person unfriended you (Facebook) or unfollowed you (Twitter)?
(not upset at all ⇒ very upset)
Q3. If you moved to a new online social network, how important would it be for this person to also migrate to this new online social network?
(would not matter ⇒ must bring them)
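A worked example of how such a score might be computed is given below; the 1 to 5 coding of each answer is inferred from the 3–15 range that Table 2 reports for the Tie Strength variable, not something stated in the survey instrument.

    def tie_strength(q1: int, q2: int, q3: int) -> int:
        """Sum of the three indicator questions in Table 1.

        Each answer is assumed to be coded on a five-point scale (1 = the
        left-hand anchor, 5 = the right-hand anchor), which matches the
        3-15 range reported for the Tie Strength variable in Table 2.
        """
        for answer in (q1, q2, q3):
            if not 1 <= answer <= 5:
                raise ValueError("answers must be on a 1-5 scale")
        return q1 + q2 + q3

    # Example: a respondent answering 4, 3 and 2 gets a tie strength score of 9.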

 

Measurement of variables

The independent variables associated with query messages were controlled through a custom experimental software system, as described above, which also measured the dependent variables associated with response messages. The path length that separates a participant from the application managers (i.e., the authors of this paper) was calculated when a user signed up to the experiment. The device type was recorded whenever a user performed a check–in, and tie strength was estimated based upon responses to the follow–up survey.

As can be seen from Figure 2, a QUERY has three important attributes, which can be controlled by the APPLICATION: the time–of–day when the user is queried, the rate of querying and the OSN used to query the user. Each USER has three important attributes: the social distance or path length from the application managers, the tie strength between the user and their recruiter and the device used to access the OSN. While an application cannot control the attributes of a user, it can use these attributes to select users from the population to receive a particular query as described by Hughes, et al. (2014).

 

Table 2: Explanation of variables.

Query variables (independent, controlled by the application)
Time of Day ({1–4}, Ordinal): Indicates the time at which the message was sent. Four categories were used: morning (06:00–11:59), afternoon (12:00–17:59), evening (18:00–23:59) and night (24:00–05:59).
Rate of Querying ({1–5}, Ordinal): The number of messages sent to a user within one day (24 hours). Participants received from one to five messages per day during the experiment, as described earlier in this paper.
OSN ({0,1}, Nominal): Denotes the communication channel used to query the user. Queries were delivered via two online social networks (OSNs), Facebook and Twitter, as a private Facebook message and a directed tweet respectively.

Response variables (dependent, measured by the application)
Response Time ({seconds}, Scale): Denotes the time difference (measured in seconds) between when a query message is sent and when the response to that message is received.
Loss ({0,1}, Nominal): Indicates whether the message was responded to, or lost. As is common in computer networks, we bounded the “lifetime” of a query message to a known value of 24 hours. Any query not responded to within this time was deemed lost.

User variables (independent, observed)
Path Length ({1–4}, Scale): Indicates the number of links that separate a participant from the authors of this paper, who served as application managers. A path length of one indicates a direct connection to the core authors.
Tie Strength ({3–15}, Scale): Indicates the strength of the relationship between a participant and the person who recruited them to join the study. The scale is based on responses to the three questions used to diagnose tie (relationship) strength, as described earlier in this paper.
Device ({0,1}, Nominal): Indicates whether the device was cellular (i.e., a phone) or non–cellular (i.e., a desktop or laptop). This data was collected when a user performed the check–in activity.
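As a small concrete example of the Time of Day coding above, the helper below maps a local send hour to the four categories; the particular 1 to 4 numeric ordering is an assumption, since Table 2 only names the categories.

    def time_of_day_category(hour: int) -> int:
        """Map a local hour (0-23) to the four ordinal Time of Day categories of Table 2.

        1 = morning (06:00-11:59), 2 = afternoon (12:00-17:59),
        3 = evening (18:00-23:59), 4 = night (00:00-05:59).
        """
        if 6 <= hour < 12:
            return 1
        if 12 <= hour < 18:
            return 2
        if 18 <= hour < 24:
            return 3
        return 4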

 

 

++++++++++

Results

In this section, we examine the effect of various contextual variables on user responsiveness. We first identify significant variables that impact message loss, and then analyse variables that influence message response times. The implications of these results are then discussed in the Discussion section.

Message loss

We used logistic regression analysis to examine the effect of the relevant predictor variables (time of day, OSN, rate of messaging and path length) on message loss. We examined both the main effects and interaction effects. Table 3 shows the results of the logistic regression analysis.
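The analysis scripts are not published with the paper; the sketch below is only an illustration of how a model with these main effects and interactions could be fitted, using statsmodels and assuming a flat table with one row per query message (file and column names are placeholders).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per query message: loss (0/1), time_of_day (1-4), osn (0/1),
    # rate (1-5) and path_length (1-4); column names are placeholders.
    messages = pd.read_csv("query_messages.csv")

    model = smf.logit(
        "loss ~ time_of_day + osn + rate + path_length"
        " + osn:time_of_day + osn:rate + rate:time_of_day",
        data=messages,
    ).fit()

    print(model.summary())          # B coefficients, standard errors and p-values
    odds = np.exp(model.params)     # odds ratios, as reported in Table 3
    ci = np.exp(model.conf_int())   # 95% confidence intervals for the odds ratios
    print(pd.concat([ci[0], odds, ci[1]], axis=1, keys=["lower", "odds_ratio", "upper"]))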

 

Table 3: Factors influencing message loss.
Note: R2=.06 (Cox and Snell), .08 (Nagelkerke). Model χ2(7)=183.58, p<.01; **p<.05.

                        B (SE)            95% CI lower   Odds ratio   95% CI upper
Constant                -1.70 (0.28)
Time of day             0.09 (0.08)       0.94           1.09         1.27
OSN                     1.24** (0.24)     2.14           3.44         5.54
Rate                    0.34** (0.07)     1.23           1.40         1.60
Path length             0.11 (0.07)       0.97           1.11         1.27
OSN*Time of day         -0.10 (0.07)      0.78           0.90         1.04
OSN*Rate                -0.32** (0.05)    0.65           0.73         0.81
Rate*Time of day        0.06** (0.02)     1.01           1.06         1.11

 

As shown in Table 3, the main effects of both OSN and Rate on message loss are statistically significant.

As only 54 of the 70 participants responded to the follow–up survey, we present the results of the regression of message loss on tie strength separately, in Table 4.

 

Table 4: The effect of tie strength on message loss.
Note: R2=.002 (Cox and Snell), .003 (Nagelkerke). Model χ2(7)=183.58, p<.01; **p<.05.

                        B (SE)            95% CI lower   Odds ratio   95% CI upper
Constant                0.028 (0.12)
Tie strength            -0.03** (0.01)    0.95           0.97         1.00

 

As can be seen from Table 4, tie strength has a significant effect on message loss: the stronger the tie between a participant and the person who recruited them, the more likely their messages were to be lost. As such, while tie strength may encourage people to join the application, it does not necessarily translate into greater participation.

Message response times

In order to analyse the effect of the relevant predictor variables (Time of Day, OSN, Device, Rate of Messaging and Path Length) on query response time, we performed a factorial ANOVA. We examined both the main effects and interaction effects. Due to the difficulty of interpreting interactions involving more than two variables, we only interpret two–way interactions. The results of our analysis are shown in Table 5.
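Again, the original analysis scripts are not available; the sketch below shows one way such a factorial ANOVA of response time could be run with statsmodels, treating each predictor as a categorical factor (file and column names are placeholders).

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # One row per answered query: response_time in seconds plus the factors
    # analysed in Table 5; column names are placeholders.
    responses = pd.read_csv("responses.csv")

    model = smf.ols(
        "response_time ~ C(time_of_day) + C(osn) + C(device) + C(rate) + C(path_length)"
        " + C(time_of_day):C(osn) + C(time_of_day):C(device) + C(time_of_day):C(rate)"
        " + C(osn):C(device) + C(osn):C(rate) + C(osn):C(path_length)"
        " + C(device):C(rate) + C(device):C(path_length) + C(rate):C(path_length)",
        data=responses,
    ).fit()

    print(anova_lm(model, typ=2))   # F statistics, degrees of freedom and p-values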

 

Table 5: Factors influencing message response time.
Note: R2=.35, Adjusted R2=.25; **p<.05.

                                    F         df    sig
Intercept                           98.06     1     .00
Time of Day                         9.96**    3     .00
OSN                                 0.30      1     .59
Device                              0.38      1     .55
Rate of Messaging                   3.00**    4     .01
Path Length                         2.43**    3     .06
Time of Day*OSN                     0.05      3     .99
Time of Day*Device                  4.69**    3     .00
Time of Day*Rate of Messaging       0.70      12    .75
OSN*Device                          0.96      1     .32
OSN*Rate of Messaging               4.19**    4     .00
OSN*Path Length                     1.12      3     .34
Device*Rate of Messaging            0.30      4     .88
Device*Path Length                  1.46      3     .22
Rate of Messaging*Path Length       0.92      12    .53

 

As can be seen from Table 5, the main effect of OSN was not found to be significant.

 

++++++++++

Discussion: Maximizing user responsiveness

In this section, we analyse the implications of our results for the designers of manual participatory applications.

The effect of online social networks: In our analysis we found evidence highlighting the importance of carefully selecting the most appropriate online social network for a given application type. We observed a significant interaction effect between the OSN used, the rate of messaging and message loss. Specifically, while Facebook exhibits significantly lower message loss at rates of one or two queries per day, when the querying rate rises past three queries per day the advantage of Facebook is lost. In terms of response times, Twitter had much slower response times at querying rates of one or two queries per day, equalized at three messages per day and became faster at four or five messages per day.

Finding 1: Facebook performs better than Twitter for low–rate querying of users for non–time sensitive information, however Twitter offers superior performance when higher rates of querying are necessary or the requested information is time–sensitive.

The effect of device type: The device variable identifies the device that was used to send response messages. It is therefore not possible to examine the effect of device type on message loss, but we can examine the effect of device on response time. Interestingly, we find a significant interaction effect between time of day and device used on message response time. On average, messages responded to via cellular devices have lower response times compared to non–cellular devices, however this relationship varies significantly based upon the time–of–day. During the morning (06:00–11:59) and evening (18:00–23:59) cellular devices have quicker response times. However, non–cellular devices provide quicker responses in the afternoon (12:00–17:59). Intuitively, both classes of device exhibit slow response times at night (24:00–05:59) when most users are asleep. We argue that this interaction effect is likely due to the work routine of participants. During the morning and evening when users are likely to be out of the office or travelling, a cellular device is the most convenient mechanism for accessing their OSNs. However, during the afternoon when most users are at work, they respond primarily using their desktop or laptop. While the increasing ubiquity of smart phones is an exciting trend for participatory applications, this result illustrates that non–cellular devices should also be exploited to ensure maximum user responsiveness throughout the day.

Finding 2: Users tend to respond quicker using cellular devices in the morning and evening, while non–cellular devices perform better in the afternoon.

The effect of social distance: It has been argued that online social networks hold great promise for recruiting users to participatory applications (Demirbas, et al., 2010). To investigate the impact of social distance on participation, we created two variables: tie strength and path length. Tie strength examines the effect of a strong or weak relationship between a participant and their recruiter on message loss and response times. Interestingly, we find a significant relationship between tie strength and message loss in the opposite direction to what we would expect: as tie strength increases, the likelihood of message loss also increases. This suggests that participants recruited via weak ties are more likely to have a greater intrinsic interest in the application and to be more active participants than those who participated to satisfy a friend’s request. In addition, we created the variable path length as a proxy for trust, which indicates the number of indirect links between a user and the application managers. We find that path length has no significant effect on message loss; however, we do find a significant positive effect of path length on message response times. This finding again supports the importance of weak ties and the scalability of manual participatory applications.

Finding 3: Participants who have a weak tie with their recruiter performed better in terms of message loss than those who have a strong tie. As such, while strong ties may positively influence users’ decisions to join the application, non–social elements are likely to be more important in determining levels of participation. In addition, path length was not found to have a significant effect on message loss, although users at a greater path length demonstrated higher response times. Thus, large–scale viral recruitment for participatory applications may be feasible using online social networks, even in cases where the ties between recruiters and potential users are relatively weak.

It is our hope that the findings detailed above will allow practitioners to maximize user responsiveness in participatory applications, while providing researchers with inspiration for their further study. In the following section we discuss the limitations of our study and directions for future work.

 

++++++++++

Conclusions and future work

Participatory applications are attracting considerable attention from industry and academia as they offer the possibility of a scalable and low cost development approach for large–scale user–centric applications. Participatory applications are, however, critically dependent upon user responsiveness. In this paper, we used an original 30–day study, involving 3,055 messages sent to 70 participants, to investigate the effect of query and user attributes on message responsiveness. Through this analysis we demonstrate that: (i.) a network perspective that considers the attributes of both users and query messages is an appropriate framework to understand user responsiveness; (ii.) responsiveness is significantly affected by observable query and user attributes such as device type, rate of messaging, time of day, OSN used, tie strength and path length; and (iii.) there are significant interaction effects between a number of these variables. Considering the implications of these findings for the research community, our exploratory study provides a first insight into the effect of remotely observable query and user attributes on responsiveness. We hope that these promising initial findings may motivate a new stream of research that builds a deeper understanding of the determinants of user responsiveness in participatory applications. For practitioners, the findings of our study may be used to more effectively target queries to users and thereby improve user responsiveness.

Building on this exploratory study, we intend to undertake two complementary streams of further research. Firstly, research is required to build a better understanding of how online social networks can be used to build communities for participatory applications. Secondly, we aim to explore how specific communication channels affect user responsiveness in social applications.

Building communities for participatory applications: While this paper focuses on understanding user responsiveness in participatory applications, the successful use of online social networks to virally recruit a population of 70 users for a 30–day task confirms the potential of online social networks for building participatory applications, thus validating prior work by Demirbas, et al. (2010). Further work is needed (i.) to identify the factors that promote successful viral recruitment on OSNs and (ii.) to build an understanding of how tie strength between participants can be exploited to maximise the reach of viral user recruitment.

Effect of communication channels on responsiveness: Using public messages broadcast to users of the Twitter network, Demirbas, et al. (2010) observed a loss rate of 85 percent and slow response times, with 50 percent of responses taking longer than 30 minutes. In contrast, our methodology used private query messages sent to individual users of both Facebook and Twitter, achieving a much lower rate of message loss at 50 percent, but with higher response times: only 34 percent of responses were received within 30 minutes, and the median response time was one hour and 13 minutes. Further investigation is required to discover the impact of communication channel on user responsiveness. Specifically, this analysis should consider the targeting of queries (individual or broadcast) and the privacy level of queries (private or public).

To support our further work, we plan to design and implement a larger scale experiment in the second quarter of 2014. In this experiment we aim to recruit a larger user community, which enables a statistically significant study of factors affecting both user recruitment and participation. End of article

 

About the authors

Caren Crowley is a post–doctoral researcher with the iMinds–DistriNet research group at the Department of Computer Science of the KU Leuven, Belgium. Her research focuses on applying network theory and analysis to understand communication in online and offline social networks. Caren holds a Ph.D. in management from the Whitaker Institute, National University of Ireland, Galway.
E–mail: caren [dot] crowley [at] cs [dot] kuleuven [dot] be

Wilfried Daniels is a Ph.D. researcher with iMinds–DistriNet. His research focuses on middleware for resource constrained systems such as sensor nodes and mobile devices. Wilfried holds an M.Sc. in computer science from KU Leuven.
E–mail: wilfried [dot] daniels [at] cs [dot] kuleuven [dot] be

Rafael Bachiller is a Ph.D. researcher with iMinds–DistriNet. His research focuses on building mobile crowdsourcing applications. Rafa holds an M.Sc. in telematics from the University of Seville, Spain.
E–mail: rafael [dot] bachiller [at] cs [dot] kuleuven [dot] be

Wouter Joosen is full professor at the Department of Computer Science of KU Leuven in Belgium. His research interests focus on software architecture and middleware, and in security aspects of software, including security in component frameworks and security architectures. Wouter holds a Ph.D. in computer science from KU Leuven.
E–mail: wouter [dot] joosen [at] cs [dot] kuleuven [dot] be

Danny Hughes is an assistant professor at the Department of Computer Science of KU Leuven in Belgium. His research interests focus on distributed systems, wireless sensor networks and middleware. Danny holds a Ph.D. in computer science from Lancaster University, U.K.
E–mail: danny [dot] hughes [at] cs [dot] kuleuven [dot] be

 

Notes

1. Lane, et al., 2008, p. 12.

2. Lane, et al., 2008, p. 12.

3. Gaonkar, et al., 2008, p. 174.

4. Granovetter, 1973.

5. Gilbert and Karahalios, 2009, p. 212.

6. Two of the questions were dropped as they dealt with willingness to lend money and helpfulness in getting a job which were deemed too culturally specific.

 

References

L. Backstrom, 2011. “Anatomy of Facebook” (21 November), at https://www.facebook.com/notes/facebook-data-team/anatomy-of-facebook/10150388519243859, accessed 14 January 2014.

M. Borazio, U. Blanke and K. Van Laerhoven, 2010. “Characterizing sleeping trends from postures,” Proceedings of the 2010 International Symposium on Wearable Computers (ISWC), pp. 1–2.
doi: http://dx.doi.org/10.1109/ISWC.2010.5665853, accessed 27 July 2014.

D. Brabham, 2008. “Moving the crowd at iStockphoto: The composition of the crowd and motivations for participation in a crowdsourcing application,” First Monday, volume 13, number 6, at http://firstmonday.org/article/view/2159/1969, accessed 27 July 2014.

R. Chen and Y. Sakomoto, 2013. “Perspective matters: Sharing crisis information in social media,” Proceedings of the 46th Hawaii International Conference on Systems Sciences, pp. 2,033–2,041, at http://www.computer.org/csdl/proceedings/hicss/2013/4892/00/4892c033.pdf, accessed 27 July 2014.

D. Christin, A. Reinhardt, S. Kanhere and M. Hollick, 2011. “A survey on privacy in mobile participatory sensing applications,” Journal of Systems and Software, volume 84, number 11, pp. 1,928–1,946.
doi: http://dx.doi.org/10.1016/j.jss.2011.06.073, accessed 27 July 2014.

M. Demirbas, M. Bayir, C. Akcora, Y. Yilmaz and H. Ferhatosmanoglu, 2010. “Crowd–sourced sensing and collaboration using Twitter,” Proceedings of 2010 IEEE International Symposium on a World of Wireless Mobile and Multimedia Networks (WoWMoM), pp. 1–9.
doi: http://dx.doi.org/10.1109/WOWMOM.2010.5534910, accessed 27 July 2014.

D. Dillman, J. Smyth and L. Christian, 2009. Internet, mail, and mixed–mode surveys: The tailored design method. Third edition. Hoboken, N.J.: Wiley.

C. Efstratiou, I. Leontiadis, M. Picone, K. Rachuri, C. Mascolo and J. Crowcroft, 2012. “Sense and sensibility in a pervasive world,” In: J. Kay, P. Lukowicz, H. Tokuda, P. Olivier and A. Krüger (editors). Pervasive computing. Lecture Notes in Computer Science, number 7319, pp. 406–424.
doi: http://dx.doi.org/10.1007/978-3-642-31205-2_25, accessed 27 July 2014.

S. Eisenman, E. Miluzzo, N. Lane, R. Peterson, G.–S. Ahn and A. Campbell, 2007. “The BikeNet mobile sensing system for cyclist experience mapping,” SenSys ’07: Proceedings of the Fifth International Conference on Embedded Networked Sensor Systems, pp. 87–101.
doi: http://dx.doi.org/10.1145/1322263.1322273, accessed 27 July 2014.

R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach and T. Berners–Lee, 1999. “Hypertext transfer protocol –– HTTP/1.1,” at http://www.ietf.org/rfc/rfc2616.txt, accessed 14 January 2014.

S. Gaonkar, J. Li, R. Choudhury, L. Cox, and A. Schmidt, 2008. “Micro–Blog: Sharing and querying content through mobile phones and social participation,” MobiSys ’08: Proceedings of the Sixth International Conference on Mobile Systems, Applications, and Services, pp. 174–186.
doi: http://dx.doi.org/10.1145/1378600.1378620, accessed 27 July 2014.

E. Gilbert and K. Karahalios, 2009. “Predicting tie strength with social media,” CHI ’09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 211–220.
doi: http://dx.doi.org/10.1145/1518701.1518736, accessed 27 July 2014.

M. Granovetter, 1973. “The strength of weak ties,” American Journal of Sociology, volume 78, number 6, pp. 1,360–1,380.

S. Hoseini–Tabatabaei, A. Gluhak and R. Tafazolli, 2013. “A survey on smartphone–based systems for opportunistic user context recognition,” ACM Computing Surveys, volume 45, number 3, article 27.
doi: http://dx.doi.org/10.1145/2480741.2480744, accessed 27 July 2014.

D. Hughes, C. Crowley, W. Daniels, R. Bachiller and W. Joosen, 2014. “User–Rank: Generic query optimization for participatory social applications,” HICSS ’14: Proceedings of the 2014 47th Hawaii International Conference on System Sciences, pp. 1,874–1,883.
doi: http://dx.doi.org/10.1109/HICSS.2014.236, accessed 27 July 2014.

F. Kerlinger, 1986. Foundations of behavioral research. Third edition. New York: Holt, Rinehart, and Winston.

N. Lane, S. Eisenman, M. Musolesi, E. Miluzzo and A. Campbell, 2008. “Urban sensing systems: Opportunistic or participatory?” HotMobile ’08: Proceedings of the Ninth Workshop on Mobile Computing Systems and Applications, pp. 11–16.
doi: http://dx.doi.org/10.1145/1411759.1411763, accessed 27 July 2014.

E. Miluzzo, N. Lane, K. Fodor, R. Peterson, H. Lu, M. Musolesi, S. Eisenman, X. Zheng and A. Campbell, 2008. “Sensing meets mobile social networks: The design implementation and evaluation of the CenceMe application,” SenSys ’08: Proceedings of the Sixth ACM Conference on Embedded Network Sensor Systems, pp. 337–350.
doi: http://dx.doi.org/10.1145/1460412.1460445, accessed 27 July 2014.

P. Mohan, V. Padmanabhan and R. Ramjee, 2008. “Nericell: Rich monitoring of road and traffic conditions using mobile smartphones,” SenSys ’08: Proceedings of the Sixth ACM Conference on Embedded Network Sensor Systems, pp. 323–336.
doi: http://dx.doi.org/10.1145/1460412.1460444, accessed 27 July 2014.

A. Nazir, S. Raza and C.–N. Chuah, 2008. “Unveiling Facebook: A measurement study of social network based applications,” IMC ’08: Proceedings of the Eighth ACM SIGCOMM Conference on Internet Measurement, pp. 43–56.
doi: http://dx.doi.org/10.1145/1452520.1452527, accessed 27 July 2014.

J. Park and C.–W. Chung, 2012. “When daily deal services meet Twitter: Understanding Twitter as a daily deal marketing platform,” WebSci ’12: Proceedings of the Fourth Annual ACM Web Science Conference, pp. 233–242.
doi: http://dx.doi.org/10.1145/2380718.2380748, accessed 27 July 2014.

K. Rachuri, C. Mascolo, M. Musolesi and P. Rentfrow, 2011. “SociableSense: Exploring the trade–offs of adaptive sampling and computation offloading for social sensing,” MobiCom ’11: Proceedings of the 17th Annual International Conference on Mobile Computing and Networking, pp. 73–84.
doi: http://dx.doi.org/10.1145/2030613.2030623, accessed 27 July 2014.

Y. Xiao, P. Simoens, P. Pillai, K. Ha and M. Satyanarayanan, 2013. “Lowering the barriers to large–scale mobile crowdsensing,” HotMobile ’13: Proceedings of the 14th Workshop on Mobile Computing Systems and Applications, article number 9.
doi: http://dx.doi.org/10.1145/2444776.2444789, accessed 27 July 2014.

 


Editorial history

Received 21 April 2014; revised 16 July 2014; accepted 17 July 2014.


This paper is in the Public Domain.

Increasing user participation: An exploratory study of querying on the Facebook and Twitter platforms
by Caren Crowley, Wilfried Daniels, Rafael Bachiller, Wouter Joosen, and Danny Hughes.
First Monday, Volume 19, Number 8 - 4 August 2014
https://firstmonday.org/ojs/index.php/fm/article/download/5325/4102
doi: http://dx.doi.org/10.5210/fm.v19i8.5325.