What are we missing? An empirical exploration in the structural biases of hashtag-based sampling on Twitter
by Evelien D’heer, Baptist Vandersmissen, Wesley De Neve, Pieter Verdegem, and Rik Van de Walle



Abstract
The hashtag is a recognized and often used method to collect Twitter messages. However, it has its limits with respect to the inclusion of follow-up messages, or @replies, that do not contain a hashtag. This paper explored to what extent the inclusion of non-hashtagged responses affected the study of interactions between Twitter users. We drew from the Twitter debate on the 2014 Belgian elections, collected under the #vk2014 hashtag. Our dataset included non-hashtagged responses to assess (1) how they differ from hashtagged responses; and, (2) how this affects the conversation network. The findings showed that (1) hashtagged responses were more likely to include other interactive elements (e.g., hyperlinks); and, (2) the inclusion of non-hashtagged responses generated larger and more reciprocal networks. However, central users further strengthened their position in the network.

Contents

1. Introduction
2. Hashtag-based sampling
3. Methodology
4. The construction of the hashtag debate
5. Discussion and conclusion

 


 

1. Introduction

Social media platforms are both objects of analysis and methodological tools. In this paper, we explored social media as a method. Compared to conventional methods in social science research (such as surveys and in-depth interviews), social media as a method is far from being fully understood. In particular, we assessed a popular sampling procedure for Twitter studies, i.e., the hashtag approach (e.g., Ausserhofer and Maireder, 2013; Bruns and Burgess, 2011; Iannelli and Giglietto, 2015). Hashtags are valuable to study the emergence and evolution of Twitter debates around particular topics. However, scholars do acknowledge that not all relevant tweets are captured (Ausserhofer and Maireder, 2013; Bruns and Moe, 2014; Larsson and Moe, 2012). In particular, “we may significantly underestimate the full volume of @replies which was prompted by hashtagged tweets” [1]. What are we missing? And does it matter?

The objective of this paper is the empirical examination of the impact of hashtag sampling on conversation networks (i.e., user-user networks based on @replies). First, we assessed how hashtagged and non-hashtagged responses differed on a number of structural characteristics (e.g., hyperlinks). Second, we used social network analysis (SNA) to compare (1) a “hashtag only” conversation network; and, (2) a conversation network which includes non-hashtagged responses. To situate our empirical work, we first discuss how Twitter data is constructed through Twitter’s application programming interfaces (APIs). Next, we outline three research questions that guided our empirical work. These questions concern (1) the characteristics of (non-)hashtagged responses; (2) the changes in the network structure; and, (3) the relative positions of the users. In the methodology section, we elaborate on the data collection, which took place in the run-up to the 2014 elections in Belgium. We then present the results, followed by an extensive discussion.

 

++++++++++

2. Hashtag-based sampling

Social media data is produced within closed, commercial organizations. APIs form the gateways through which developers and, by extension, researchers can access data. Compared to Facebook, for example, Twitter is a public and fairly accessible platform for research. It offers two different public APIs: the REST API (which can go back in time) and the streaming API (which captures data in real time and therefore requires a continuous connection with Twitter’s servers). Only a handful of scholars have critically assessed the usage of the API for scientific research (e.g., Burgess and Bruns, 2012; Driscoll and Walker, 2014; Lomborg and Bechmann, 2014; Vis, 2013). Even fewer studies have empirically assessed the reliability of Twitter data samples by comparing different APIs, such as the Streaming API, the Search API and Twitter data from Gnip (Driscoll and Walker, 2014; González-Bailón, et al., 2014). The present study aims to add to the methodological inquiry of the API, focusing on the queries Twitter offers and how they affect our findings. When filtering Twitter data, we can distinguish between hashtag/keyword-based sampling, actor-based sampling (i.e., tweets sent by specific actors) (Stieglitz and Dang-Xuan, 2012) and marker-based sampling (e.g., language, location or Twitter client) (Gerlitz and Rieder, 2013).

This study combined both hashtag- and actor-based sampling procedures in order to overcome “the single-layer problem” of hashtag-based research (Bruns and Stieglitz, 2014). The exact implementation of our two-step approach is outlined in the methodology section. Both the actor- and hashtag-based approaches have been applied to the analysis of political debates on Twitter, albeit often separately. We find research that is based on a selection of politicians (e.g., Graham, et al., 2013; Vergeer, et al., 2011), a selection of users tweeting about politics (e.g., Ausserhofer and Maireder, 2013) or hashtags dedicated to the elections (e.g., Bruns and Burgess, 2011; D’heer and Verdegem, 2014; Larsson and Moe, 2012).

In short, we value the usage of the hashtag as a collection method for the study of events, such as elections. In this respect, our study contributes to its continued usage for Twitter research. The inclusion of non-hashtagged responses offers a benchmark to increase our understanding of the consequences of the hashtag as a sampling method. Below, we outline three research questions that structured our empirical work.

Hashtag-based sampling is a specific type of keyword sampling. Hashtags are explicit markers for particular topics or issues (Bruns and Stieglitz, 2014). Users can track hashtagged tweets, independent of their follower-followee networks (Bruns and Moe, 2014). Whether users actually follow the feed of hashtagged messages is uncertain. Their interactive nature does make them easily discoverable; hence, hashtagged tweets may receive more retweets (Bruns and Stieglitz, 2014). By extension, we explored whether other structural characteristics of the tweet, such as additional hashtags, @usernames and URLs, differed for hashtagged versus non-hashtagged tweets. In a qualitative study on Twitter usage during a TV show, users argued for excluding the hashtag in responses that target only one or a few selected users (D’heer and Verdegem, 2015). In other words, non-hashtagged responses may be less informative for the overall “hashtag public”. In turn, this would be reflected in the absence of hyperlinks or other interactive elements in non-hashtagged responses. In short, the first research question for this study is the following:

RQ1: How do hashtagged responses and non-hashtagged responses differ in their structural characteristics?
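To make these characteristics concrete, the sketch below shows how they could be extracted from raw tweet text with simple regular expressions (in Python). This is a minimal illustration, not the coding procedure used in the paper: the regexes, the word-count length measure and the hard-coded dedicated hashtags are assumptions, and in practice the hashtag, mention and URL entities returned by the Twitter API can be used directly.

import re

HASHTAG = re.compile(r"#\w+")
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

# Dedicated election hashtags (assumed here; see the methodology section).
DEDICATED = frozenset({"#vk14", "#vk2014"})

def structural_features(text):
    """Return the structural tweet characteristics compared in RQ1 (sketch)."""
    hashtags = {h.lower() for h in HASHTAG.findall(text)}
    return {
        "has_dedicated_hashtag": bool(hashtags & DEDICATED),
        "n_additional_hashtags": len(hashtags - DEDICATED),
        "n_usernames": len(MENTION.findall(text)),
        "has_hyperlink": bool(URL.search(text)),
        "tweet_length": len(text.split()),  # length measured in words
    }

For example, structural_features("@user1 Eens! Zie http://example.org #vk2014 #debat") would flag the dedicated hashtag, one additional hashtag, one @username and a hyperlink.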

Scholars acknowledge the limitation of the hashtag approach to construct conversation networks amongst users (Bruns and Burgess, 2011; Larsson and Moe, 2012). Therefore, we assume the exclusion of follow-up messages can affect the structural characteristics of user-user networks. In this paper, we relied on a number of network-level SNA metrics used in hashtag-based research on Twitter and the 2012 elections (D’heer and Verdegem, 2014). More specifically, we compared network centralization, density, reciprocity and transitivity for (1) the “hashtag only” conversation network; and, (2) the conversation network including the non-hashtagged responses. In short, these metrics describe how dispersed, centralized and stable the connections between users in the respective networks are. We explain these measures in more detail in the methodology and results sections. Hashtag publics are “ad hoc publics”, as users interact with one another beyond their fairly stable follower-followee networks (Bruns and Stieglitz, 2014). The second research question goes as follows:

RQ2: How do structural network features alter when we include non-hashtagged responses?

Following the analysis of the entire network, we examined user metrics and how they changed after including non-hashtagged responses. More specifically, we assessed in-degree (i.e., replies received), out-degree (i.e., replies sent) and reciprocity (i.e., replies reciprocated). Since we studied election debates in particular, it was fruitful to distinguish elites (i.e., parties/politicians or mass media/journalists) from non-elites (i.e., citizens). Research has shown that central positions in Twitter networks are occupied by elite rather than non-elite users (Ausserhofer and Maireder, 2013; D’heer and Verdegem, 2014; Larsson and Moe, 2012). In other words, their in-degree centrality is higher. Moreover, elite users have not yet fully embraced the “interactive logic” characterizing online media (Esser, 2013). In sum, we explored to what extent the inclusion of non-hashtagged responses altered users’ positions in the network and the reciprocal nature of their interactions with other users. Since we distinguished between elite and non-elite users, our final research question goes as follows:

RQ3: How do elites versus non-elites’ in-degree and out-degree positions and reciprocity alter when we include non-hashtagged responses?

 

++++++++++

3. Methodology

3.1. The 2014 Belgian elections on Twitter

This study was part of a large-scale quantitative study on the national elections in Belgium, held 25 May 2014. More specifically, we focused on Flanders, the northern part of Belgium and home to the Dutch-speaking community. Since the late 1960s, Belgian political parties and traditional media have been organized along regional lines, i.e., Flanders (ca. six million inhabitants) and Wallonia (ca. four million inhabitants). Consequently, we find two separate electoral campaigns. The political landscape is characterized by a highly fragmented multiparty system. Parties compete against one another but must work together to form a coalition. Based on the models of media and politics Hallin and Mancini (2004) distinguish, Belgium represents a democratic corporatist model.

Twitter usage in Flanders lags behind other social media platforms such as Facebook (68 percent), as about 21 percent of the population has an active account. Twitter usage by politicians is higher: based on survey data, we know that 52 percent of the Flemish candidates running for the 2014 elections had a Twitter account. In line with cross-national European research, we acknowledge that traditional media still play a significant role in election campaigns in Flanders. Social media complement rather than replace traditional media.

3.2. Data collection: Combining hashtag and user streams

Via the self-hosted open source tool yourTwapperKeeper (Bruns and Liang, 2012), we collected tweets containing the dedicated hashtags of the elections (i.e., #vk14/#vk2014; “vk” refers to “verkiezingen”, or elections in English) from the end of April 2014 until the beginning of June 2014. yourTwapperKeeper uses the Twitter streaming and search APIs to collect Twitter messages (Gaffney and Puschmann, 2014). Since elections are planned events, we were able to capture tweets as they occurred (hence, via the stream).

The role of Twitter in the 2014 national election debate was discussed in a different paper (which follows research we conducted on the 2012 local elections in Belgium; see D’heer and Verdegem, 2014). Both papers discussed the election debate on Twitter. Unlike the present study, they drew from hashtagged messages only to construct user-user networks based on @replies. Further, these studies aimed to explain whether two-way conversation occurred between politicians and journalists (defined as elites) and citizens (defined as non-elites). The present study serves a different purpose, as the hashtag itself is under analysis. It is based on the construction of a so-called “pull-out”, our sub-sample for in-depth analysis (Tufekci, 2014).

For a selected period during the pre-election campaign (early May 2014), we combined hashtag and user streams to capture follow-up responses that do not contain the hashtag. First, all tweets containing the dedicated hashtag (i.e., #vk14 or #vk2014) were captured via the public stream. Second, for each harvested tweet the original sender was tracked, capturing all tweets from and to this specific Twitter user. The combination of the hashtag and user streams thus allowed us to reconstruct full conversations containing both hashtagged tweets and non-hashtagged responses. In total, our sub-sample consisted of 1,719 tweets from 868 unique users, which reflects about 10 percent of the original #vk14 pre-election debate, spread over two consecutive days. The streaming API cannot provide meaningful retweet counts, since data were gathered in real time. Therefore, counts were retrieved per tweet, 24 hours after the tweet was sent. Below, we outline the analyses we conducted to answer the research questions outlined above.
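The two-step logic can be summarized in the following Python sketch. It is an illustration of the procedure only: the actual collection relied on yourTwapperKeeper and Twitter’s public streaming endpoints, the functions stream_hashtag() and stream_user() are hypothetical placeholders for those streams, and in practice both steps ran concurrently during the collection window rather than strictly one after the other.

def collect_subsample(stream_hashtag, stream_user, hashtags=("#vk14", "#vk2014")):
    """Sketch of the two-step sampling: hashtag stream first, then user streams."""
    hashtagged_tweets = []
    tracked_users = set()

    # Step 1: capture all tweets carrying one of the dedicated hashtags.
    for tweet in stream_hashtag(track=list(hashtags)):
        hashtagged_tweets.append(tweet)
        tracked_users.add(tweet["user"]["id"])

    # Step 2: for every sender found in step 1, capture all tweets sent
    # from and to that user, so that non-hashtagged responses are included.
    follow_up_tweets = []
    for user_id in tracked_users:
        follow_up_tweets.extend(stream_user(follow=user_id))

    return hashtagged_tweets + follow_up_tweets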

3.3. Data processing and analysis

A Twitter database, as retrieved via the streaming API, provides a chronological representation of the messages. It does not group tweets by conversation (although this makes more sense from a research perspective). Online, users can easily access conversations by clicking the hyperlink “view conversation” or “details”. These functions interrupt the chronological stream of messages and provide an overview of the conversation between users. For this study, we re-constructed our Twitter database, grouping messages and their responses. Per hashtagged root (i.e., a message unrelated to any former conversation), all linked response tweets (hashtagged and non-hashtagged) were added.
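A rough sketch of this regrouping step is given below. It assumes each tweet is available as a dictionary exposing the id and in_reply_to_status_id fields provided by the Twitter API; responses are attached to their hashtagged root by walking up the reply chain until no known parent remains.

from collections import defaultdict

def group_conversations(tweets):
    """Group tweets under their root message (sketch)."""
    by_id = {t["id"]: t for t in tweets}
    conversations = defaultdict(list)

    def find_root(tweet):
        # Walk up the reply chain as far as the collected data allows.
        parent_id = tweet.get("in_reply_to_status_id")
        while parent_id in by_id:
            tweet = by_id[parent_id]
            parent_id = tweet.get("in_reply_to_status_id")
        return tweet["id"]

    for tweet in tweets:
        conversations[find_root(tweet)].append(tweet)
    return conversations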

For the first research question, we coded hashtagged and non-hashtagged responses. We compared tweet length, the presence of additional hashtags, hyperlinks, @usernames and retweet counts via regression analyses in SPSS Statistics 21. For the second and third research questions, we relied on the SNA software UCINET 6.0 (Borgatti, et al., 2002) to analyze the networks including and excluding non-hashtagged responses. In network terminology, the type of relation studied here is “interaction” (i.e., talking with) (Borgatti and Halgin, 2011). For these interactional ties, tie strength was measured by frequency, hence, quantitatively determined. User-user interactions are based on @replies (if applicable, in the form of a multi-turn @reply chain which contains multiple @usernames). We prepared .csv data files containing @usernames in two columns (i.e., from and to) for each of the networks.
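The two edge lists could be produced along the following lines. This is a sketch under assumptions: the screen_name and text keys and the simple mention regex are illustrative, and in the actual data the @reply targets are also available as API metadata rather than parsed from the text.

import csv
import re

MENTION = re.compile(r"@(\w+)")
DEDICATED = ("#vk14", "#vk2014")

def write_edge_list(tweets, path, hashtagged_only=False):
    """Write a from/to edge list of @reply ties, one row per mentioned user."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["from", "to"])
        for tweet in tweets:
            if hashtagged_only and not any(h in tweet["text"].lower() for h in DEDICATED):
                continue
            for target in MENTION.findall(tweet["text"]):
                writer.writerow([tweet["screen_name"], target])

Calling write_edge_list() once with hashtagged_only=True and once without yields the two networks compared in the results section.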

Using SNA, we assessed both network-level and user-based statistics. The former allow us to describe the structure of the network: the number of ties/actors, density, reciprocity, centralization and transitivity. The value and meaning of these measures become clear throughout the discussion of the results. Concerning user-based metrics, we focused on degree centrality and reciprocity. Since we compared elites and non-elites, users were coded accordingly. As elites, we coded political parties, politicians, spokespersons, traditional media outlets and journalists. As non-elites, we coded citizens, i.e., users that are not professionally active in politics or traditional media. In sum, our categorization reflects Ausserhofer and Maireder’s (2013) distinction between professional and non-professional actors in Twitter election networks.
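The network statistics themselves were computed in UCINET; a roughly equivalent computation in Python with networkx might look like the sketch below. Definitions of, for example, reciprocity can differ slightly between tools, and degree centralization is therefore left to UCINET here.

import csv
import networkx as nx

def load_network(path):
    """Build a directed, weighted @reply network from a from/to .csv edge list."""
    G = nx.DiGraph()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            u, v = row["from"], row["to"]
            if G.has_edge(u, v):
                G[u][v]["weight"] += 1  # tie strength = reply frequency
            else:
                G.add_edge(u, v, weight=1)
    return G

def network_metrics(G):
    """Network-level measures reported in Table 4 (centralization not included)."""
    return {
        "nodes": G.number_of_nodes(),
        "ties": G.number_of_edges(),
        "density": nx.density(G),
        "reciprocity": nx.reciprocity(G),
        "transitivity": nx.transitivity(G.to_undirected()),
    }

def user_metrics(G):
    """Normalized in- and out-degree per user."""
    n = G.number_of_nodes()
    return {
        node: {
            "in_degree": G.in_degree(node) / (n - 1),
            "out_degree": G.out_degree(node) / (n - 1),
        }
        for node in G
    }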

 

++++++++++

4. The construction of the hashtag debate

The results section contains three sub-sections, related to the three research questions we outlined for this study.

4.1. Comparing responses with(out) the dedicated hashtag

As previously mentioned, we built on a slice of the data we collected in the light of the 2014 elections in Belgium. Our sample of 1,719 tweets consisted of 985 original tweets (no retweets/responses) and 734 follow-up messages. The original tweets all contained the dedicated hashtag (i.e., #vk14/#vk2014), whereas only 154 of the 734 responses contained the dedicated hashtag. In this respect, and as expected by Bruns and Stieglitz (2014), we underestimated the number of @replies prompted by hashtagged tweets. Most of the users responded without including the hashtag; in other words, this reflects their “default” behavior. We did not ask users to explain their behavior, but we did compare hashtagged and non-hashtagged responses on a number of structural characteristics. In Table 1 below, we present (1) the length of the messages; (2) additional hashtags; (3) the number of @usernames; (4) the presence of a hyperlink; and, (5) the number of retweets. Next, Table 2 presents a regression analysis with hashtagged versus non-hashtagged responses as the dependent variable and the five above-mentioned characteristics as independent variables. Below, we discuss the findings from the respective tables.

 

Table 1: A comparison between hashtagged and non-hashtagged responses.

                                            Hashtagged responses    Non-hashtagged responses
                                            (N=154)                 (N=580)
Tweet length: mean (SD)/median              15.55 (SD=4.93)/16      13.97 (SD=5.69)/15
Tweet length: distribution                  Q1=12, Q2=16,           Q1=9, Q2=15,
                                            Q3=19, Q4=26            Q3=19, Q4=27
Additional hashtag(s)                       52.3%                   13.6%
Number of @usernames: mean (SD)/median      1.74 (SD=1.06)/1        1.80 (SD=0.97)/2
Hyperlink(s)                                26.1%                   5.9%
Retweets: average (SD)                      0.47 (SD=1.26)          0.15 (SD=0.72)
Retweets: % with 0 retweets                 75.9%                   93.5%

 

 

Table 2: Logistic regression with the inclusion of the hashtag as the dependent variable.

The inclusion of the dedicated hashtag     Exp(b) (SE)
Intercept                                  .055 (.39)***
Tweet length                               1.078 (.02)***
Additional hashtag(s)                      3.570 (.15)***
Number of @usernames                       .809 (.11)
Hyperlinks                                 9.164 (.31)***
N                                          734
Nagelkerke pseudo R2                       .309

Note: ***p<.001.

 

Table 1 contains the average tweet length of hashtagged and non-hashtagged responses. The distribution, and Q1 in particular, shows that non-hashtagged responses have a slightly lower word count. For Q2 to Q4, we notice similarities between both types of responses. For longer responses, the hashtag might be excluded to comply with the 140-character limit. The regression analysis presented in Table 2 shows a significant positive relation between tweet length and the usage of the dedicated hashtag. This indicates that shorter responses “are not deemed important enough by their authors to receive a hashtag themselves” [2].

The number of hashtagged responses containing additional hashtags was much higher compared to non-hashtagged responses. The regression analysis presented in Table 2 shows it is more common to either include multiple hashtags or to include no hashtag at all. A closer look at the data showed that a number of election-related hashtags were added to the messages, referring to, for example, TV shows about the elections, political parties or policy issues discussed during the campaign.

Tweets with many responses can result in multi-turn reply chains when users choose to respond to the original sender of the message and all other respondents. We assessed to what extent additional usernames are related to the in/exclusion of the hashtag. Multiple usernames indicate discussion amongst a select group of users. However, Table 1 and Table 2 show no difference in the average number of included @usernames for hashtagged compared to non-hashtagged messages. Rather, the number of included @usernames reflected the average number of follow-up messages original tweets received, which was 2.96 (SD=3.35, Median=2).

As with additional hashtags, we noticed that a larger percentage of the hashtagged responses included a hyperlink compared to non-hashtagged responses. Indeed, the regression analysis presented in Table 2 shows a strong relation between the presence of a hyperlink and the usage of the dedicated hashtag. Tweets with a hyperlink might be considered more informative, and in this respect a hashtag is added to increase the potential reach of the message.
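For reference, a logistic regression like the one reported in Table 2 could be estimated in Python with statsmodels as in the sketch below. The data file and column names are hypothetical placeholders for the coded characteristics described above (the paper itself used SPSS Statistics 21), and the dependent variable is assumed to be coded 0/1.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One coded row per follow-up message (hypothetical file and column names).
responses = pd.read_csv("coded_responses.csv")

logit_model = smf.logit(
    "has_dedicated_hashtag ~ tweet_length + n_additional_hashtags"
    " + n_usernames + has_hyperlink",
    data=responses,
).fit()

print(logit_model.summary())
print(np.exp(logit_model.params))  # Exp(b), as reported in Table 2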

Last, we calculated the number of retweets. As the presence of a hashtag increases message visibility, we assumed hashtagged responses receive more retweets (Bruns and Stieglitz, 2014). Table 1 shows that the average number of retweets is very low for both types of responses (i.e., <1). Further, we found large standard deviations for the averages, as many follow-up messages do not receive a single retweet. The overall low retweet counts can be related to the lower visibility of response tweets compared to original tweets. Therefore, we added the proportion of messages that received no retweets (75.9 percent versus 93.5 percent). Although both percentages are very high, we did find differences between hashtagged and non-hashtagged responses. Further, Table 3 presents a negative binomial regression analysis to model the difference in retweet counts between hashtagged and non-hashtagged responses, accounting for the skewed distribution of the dependent variable.

 

Table 3: Negative binomial regression with retweet count as the dependent variable.

Retweet count                                           Exp(b) (SE)
Tweets excluding the dedicated hashtag (= baseline)
N+                                                      506
Deviance (Value/df)                                     .754

Note: ***p<.001; + Due to missing data, N dropped from 734 to 506.

 

The exponentiated coefficients (Exp(b)) reflect the factor by which the inclusion of the dedicated hashtag multiplied the expected retweet count. Since the ratio is over one and significant, we can say that responses including the dedicated hashtag were more likely to receive a higher retweet count. As shown in Table 3, the baseline is set at responses excluding the dedicated hashtag, as this category was more common.
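A negative binomial model of this kind could be fit in a similar, sketched way; the file and column names are again hypothetical, and the dispersion parameter is left at the statsmodels default rather than estimated as in SPSS.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Coded responses as above; rows with missing retweet counts are dropped
# (in the paper, N fell from 734 to 506 due to missing data).
responses = pd.read_csv("coded_responses.csv").dropna(subset=["retweet_count"])

nb_model = smf.glm(
    "retweet_count ~ has_dedicated_hashtag",
    data=responses,
    family=sm.families.NegativeBinomial(),
).fit()

print(np.exp(nb_model.params))  # Exp(b): multiplicative effect on the expected count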

In sum, hashtagged responses were characterized by the inclusion of additional interactive elements. These elements were informational rather than conversational in nature: the former refers to hyperlinks and hashtags, the latter to @usernames. In turn, hashtagged responses seemed to generate more retweets, which is consistent with the inclusion of these additional interactive elements.

4.2. Conversation networks with(out) hashtagged responses

In this second subsection, we discuss the impact of the inclusion of non-hashtagged responses on the user-user conversation network (based on @replying). As previously mentioned, most of the follow-up messages did not include the hashtag, hence, we underestimated follow-up conversation. As shown in Table 4 below, the network grew in size when we included non-hashtagged responses. Both the number of users (i.e., nodes) and interactions (i.e., ties) increased.

 

Table 4: Structural characteristics of the user-user networks excluding and including non-hashtagged responses.

                    Excluding non-hashtagged responses       Including non-hashtagged responses
Network size        Nodes: 161; Ties: 156                    Nodes: 518; Ties: 1,082
Centralization      Out-degree: 2.73%; In-degree: 1.47%      Out-degree: .42%; In-degree: .55%
Density             0.6%                                     0.4%
Reciprocity         .79%                                     14.69%
Transitivity        .88%                                     3%

 

The table outlines the structural characteristics of each of the respective networks. First, centralization indicates the network’s tendency towards centrality, or the concentration of interactions around a few users. Both for out-degree (i.e., outgoing messages) and in-degree (i.e., incoming messages), centralization was low. This means that activity of (i.e., out-degree) and attention for (i.e., in-degree) the individual actors varied substantially. Network centralization further decreased, hence more variation occurred amongst the users in the network, when we included non-hashtagged responses.

Second and related, network density (or the number of connections between users divided by the total number of possible connections) shows how tightly knit the network is. Overall, network density was very low. This means that the probability of any given tie between two random actors in the networks was very low. In turn, this is related to the relatively large sizes of the networks (which inevitably decreases density) (Prell, 2012) and the type of relation, i.e., communication (rather than, for example, friendship). In addition, the presence of the hashtag might have instigated conversation beyond one’s follower-followee network. As such, this generated more dispersed communication patterns. Whereas the inclusion of non-hashtagged responses hardly affected network density, it did have a substantial effect on network reciprocity. Network reciprocity accounts for dyadic relationships, i.e., to what extent ties between two nodes are reciprocated. As shown in Table 4, reciprocity increased to 15 percent. This means that of all pairs that had a connection, 15 percent were reciprocated connections. Last, we assessed transitivity or triadic closure. Transitivity defines to what extent people are connected if they share a connection (i.e., if A-B and B-C, then A-C). Although percentages were quite low, we did find that triadic closure increased when we included non-hashtagged responses. To put it another way, the proportion of cases where a single link could complete the triad increased: this was 12 percent for the “hashtag only” network and 21 percent for the network including non-hashtagged responses.

In sum, the inclusion of non-hashtagged responses generated larger, less centralized but equally dispersed networks. Yet, the increase in transitivity and especially reciprocity showed more cohesion and stronger ties for “the micro layer of communication” (Bruns and Moe, 2014). Conversation did not occur across a wide variety of users discussing the elections, but between a small number of users in the network. These interactions were stronger when we accounted for non-hashtagged responses. Below, we provide a visual representation of both networks, created in NetDraw (UCINET). The networks presented in Figures 1 and 2 are illustrative of the findings in Table 4.

 

Figure 1: Spring-embedding representation of the network excluding non-hashtagged responses.

 

 

Figure 2: Spring-embedding representation of the network including non-hashtagged responses.

 

Figure 1 presents a dispersed network of multiple disconnected conversations between users. The “main component” or largest component of connected nodes consisted of 37 nodes. Figure 2 shows a denser network of users. For this network, the largest component consisted of 359 nodes, hence, more users were added and more connections between users occurred. However, these connections occurred amongst small sets of users, rather than across the entire network. The thicker red lines reflect reciprocal ties, which are more present in Figure 2. The collection of nodes on the left (in Figure 2) reflects a collection of separate conversations, although they appear somewhat connected.
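The main-component sizes mentioned above can be recovered directly from the edge lists, for instance with networkx (a sketch; UCINET/NetDraw was used for the actual analysis and visualization):

import networkx as nx

def largest_component_size(G):
    """Number of nodes in the largest weakly connected component of the @reply network."""
    return max(len(component) for component in nx.weakly_connected_components(G))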

4.3. Alterations in users’ positions in the networks

Since the network grew in size and new users were included, existing users’ positions altered. This final subsection discusses the users that were present in both networks and how their positions altered when we included non-hashtagged responses. In Table 5 below, we outline how users’ positions changed across both networks. The original “hashtag only” network counted 161 users. We present the proportion of users that strengthened, maintained or weakened their position in Table 5, distinguishing between incoming messages (i.e., “in-degree”) and outgoing messages (i.e., “out-degree”).

 

Table 5: Percentage of users that take a different position in the network after including non-hashtagged responses (N=161). The percentages are based on the normalized in-degree and out-degree scores per user.

                                        In-degree    Out-degree
Users that strengthen their position    51.9%        54.1%
Users that maintain their position      27.1%        25.6%
Users that weaken their position        21%          20.3%

 

For about half of the users (i.e., 51.9 percent and 54.1 percent), we found that the relative amount of messages they sent or received increased when we included non-hashtagged responses. For a smaller percentage of users, their relative position decreased when we included non-hashtagged responses (i.e., 21 percent and 20.3 percent). Only about a quarter of the users maintained their relative position in the network. In sum, the addition of non-hashtagged responses changed the relative positions of the majority of the actors in the debate (i.e., about 73–74 percent).
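A sketch of how the shares in Table 5 could be derived from the normalized degree scores of users present in both networks is given below. The exact handling of unchanged scores (treated here as “maintain”) is an assumption.

def position_changes(users, metrics_excluding, metrics_including, key="in_degree"):
    """Classify users present in both networks by the change in their normalized degree."""
    counts = {"strengthen": 0, "maintain": 0, "weaken": 0}
    for user in users:
        before = metrics_excluding[user][key]
        after = metrics_including[user][key]
        if after > before:
            counts["strengthen"] += 1
        elif after < before:
            counts["weaken"] += 1
        else:
            counts["maintain"] += 1
    return {label: 100 * n / len(users) for label, n in counts.items()}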

Research on Twitter and elections has shown that elite rather than non-elite actors occupy central positions in Twitter networks (Ausserhofer and Maireder, 2013; D’heer and Verdegem, 2014; Larsson and Moe, 2012). In other words, politicians, journalists and established experts receive most attention in the election debate. Their in-degree centrality is higher compared to non-elite actors (i.e., citizens) in the network. When we included non-hashtagged responses, we noticed that elites strengthened their position. More specifically, Table 6 shows that 75 percent of the elites strengthened their in-degree position, whereas for non-elites this was only 43 percent. However, for out-degree we noticed the opposite. The majority of non-elites (i.e., 65 percent) strengthened their out-degree position. In sum, we underestimate non-elites’ outgoing activity and elites’ incoming activity when we account for the hashtagged conversation only. The inclusion of non-hashtagged responses further reinforced the existing tendencies, rather than subverting them.

 

Table 6: Percentage of (non-)elites that strengthen their position in the network after including non-hashtagged responses (N=161). The percentages are based on the normalized in-degree and out-degree scores per user.

                                                   Elites (N=65)    Non-elites (N=96)
Users that strengthen their in-degree position     75%              43.04%
Users that strengthen their out-degree position    29.17%           64.56%

 

Elite actors maintained their dominant positions as communication receivers and non-elites maintained their positions as senders when we included non-hashtagged responses. Like citizens, the majority of politicians and journalists did not include the hashtag in their responses.

Finally, we discuss how user reciprocity changed for elites versus non-elites. As previously mentioned, the overall network showed an increase in reciprocity. Table 7 below shows the increase in reciprocity was stronger for non-elites compared to elites.

 

Table 7: Percentage of (non-)elites that are more reciprocal (N=161). The percentages are based on the normalized reciprocity scores per user.

                                  Elites (N=65)    Non-elites (N=96)
Users that are more reciprocal    12.5%            24.05%

 

These insights are consistent with the findings presented in Table 6, which showed a larger increase in outgoing activity for non-elites. In addition, the findings resonate with existing research. Social media’s interactive potential is not yet fully acknowledged by politicians (D’heer and Verdegem, 2014; Graham, et al., 2013; Klinger, 2013) or journalists (Herrera-Damas and Hermida, 2014; Larsson, 2013).

 

++++++++++

5. Discussion and conclusion

Social media attract intense scholarly interest and will continue to do so in the future. Twitter presents a special case because of the public nature of its API and, relatedly, the opportunities this creates for research. Despite recent advances towards an established framework (e.g., Bruns and Moe, 2014) and common metrics (e.g., Bruns and Stieglitz, 2014), empirical methodological inquiries are still emerging. We made a concrete contribution to the field via the methodological inquiry of the impact of hashtag samples on the study of interactions between users through @replies.

Hashtag-based data collection is often used within research on Twitter and politics, especially in relation to preplanned events such as elections (see Jungherr, 2016, for an extensive overview). Beyond the context of this study, hashtag-based data collection is also applied in research on Twitter and television (Wohn and Na, 2011; Highfield, et al., 2013) or Twitter and protest (Papacharissi and de Fatima Oliveira, 2012). As such, the insights we retrieved from this study can be considered valuable in relation to a wider variety of events and issues.

First, our study showed that we underestimate the interaction taking place between users when we include hashtagged messages only. The majority of the users did not include the hashtag in their responses. Second, hashtagged responses differ from non-hashtagged responses. In particular, the findings showed that the inclusion of the dedicated hashtag (i.e., #vk14/#vk2014) co-occurred with the inclusion of additional interactive, informational elements (i.e., hashtags and hyperlinks). In turn, responses including the dedicated hashtag generated more retweets compared to non-hashtagged responses. When drawing from hashtag samples to analyze the content of Twitter messages (as done in studies on Twitter and television, for example; Wohn and Na, 2011; Highfield, et al., 2013), we need to acknowledge that the presence of the hashtag comes with a number of structural message characteristics.

Third, the networks we built from @replies between users showed structural differences when we included non-hashtagged responses. Within research on Twitter and politics, network approaches are widely used (e.g., Ausserhofer and Maireder, 2013; Bruns and Burgess, 2011; see Jungherr, 2016, for an overview). Often, network visualization rather than formal SNA procedures is used to study interactions between users. However, we argue that the network measures presented in this paper do allow testing for biases related to the Twitter sampling method. Indeed, the inclusion of non-hashtagged responses showed a clear increase in reciprocity between users. Therefore, conclusions with respect to users’ level of interaction with other users ought to be understood in relation to the sampling method. With respect to politicians’ interactions with citizens, for example, hashtag-based research found very little evidence for two-way interaction (D’heer and Verdegem, 2014), whereas research that built on politicians’ activity did find substantial evidence (Graham, et al., 2013; Larsson and Ihlen, 2015). Hashtag studies differ from user-based approaches drawing from politicians’ activity, as hashtag samples include both incoming and outgoing user activity, whereas studies that start from politicians’ Twitter activity only take into account outgoing activity. In this study, which combines hashtag- and user-based Twitter data retrieval, we found that politicians’ interaction with citizens was indeed higher compared to hashtag-only samples. However, citizens, too, were more interactive.

This brings us to the fourth and final finding of this study, which showed that elites (here: politicians and journalists) reinforced their central positions in the network when we included non-hashtagged responses. In this respect, the findings resemble hashtag-based research, which also finds that politicians and journalists take central positions in election debates (e.g., Ausserhofer and Maireder, 2013; D’heer and Verdegem, 2014; Larsson and Moe, 2012). In short, relying on hashtag samples, we underestimate interaction, we overestimate the usage of interactive elements (i.e., hyperlinks and additional hashtags), but we correctly estimate the key actors in the debate.

We acknowledge that our research focused on online behavior only. In this respect, our findings call for a number of follow-up questions which can be applied to the political debate on Twitter as well as to other issues and events. First, qualitative off-line work could investigate to what extent users actually follow the hashtag conversation, hence go beyond their timeline. In other words, to what extent are the technological affordances of the hashtag (such as searchability and clickability) put into practice? Second, we can broaden step one of our two-step procedure. More specifically, we can combine the hashtag with a number of keywords to select a sample of users (or vice versa; Ausserhofer and Maireder, 2013). Third, we can equally argue to account for hashtagged messages only, as these are users’ explicit attempts to contribute to the wider debate. In turn, this raises questions for qualitative research, e.g., can we assume that the use of a hashtag is always that strategic? Or do we over-interpret its usage and, equally, its non-usage? In short, we need to continue the ongoing effort to assess and develop analytical procedures to understand the data we gather.

 

About the authors

Evelien D’heer is a postdoctoral researcher studying political communication on social media. She is based in the Department of Communication Sciences at Ghent University, research group imec — MICT, Belgium.
Direct comments to: Evelien [dot] Dheer [at] ugent [dot] be

Baptist Vandersmissen is a research fellow in the IDLab of the Department of Electronics and Information Systems at Ghent University — imec, Belgium.
E-mail: Baptist [dot] vandersmissen [at] ugent [dot] be

Wesley De Neve is Professor in the IDLab of the Department of Electronics and Information Systems at Ghent University — imec, Belgium and in the Image and Video Systems Lab of the Department of Electrical Engineering at KAIST, South Korea.
E-mail: Wesley [dot] deneve [at] ugent [dot] be

Pieter Verdegem is Senior Lecturer in the Communication and Media Research Institute (CAMRI) at the University of Westminster, U.K.
E-mail: Pieter [dot] verdegem [at] ugent [dot] be

Rik Van de Walle is Full Professor and head of the IDLab of the Department of Electronics and Information Systems at Ghent University — imec, Belgium.
E-mail: Rik [dot] vandewalle [at] ugent [dot] be

 

Notes

1. Bruns and Stieglitz, 2014, p. 75.

2. Ibid.

 

References

Julian Ausserhofer and Axel Maireder, 2013. “National politics on Twitter: Structures and topics of a networked public sphere,” Information, Communication & Society, volume 16, number 3, pp. 291–314.
doi: http://dx.doi.org/10.1080/1369118X.2012.756050, accessed 10 December 2016.

Stephen P. Borgatti and Daniel S. Halgin, 2011. “On network theory,” Organization Science, volume 22, number 5, pp. 1,168–1,181.
doi: http://dx.doi.org/10.1287/orsc.1100.0641, accessed 10 December 2016.

Stephen P. Borgatti, Martin Everett and Lin Freeman, 2002. Ucinet for Windows: Software for social network analysis. Lexington, Ky.: Analytic Technologies, and at https://sites.google.com/site/ucinetsoftware/, accessed 10 December 2016.

Axel Bruns and Halvard Moe, 2014. “Structural layers of communication on Twitter,” In: Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt and Cornelius Puschmann (editors). Twitter and society. New York: Peter Lang, pp. 15–28.

Axel Bruns and Stefan Stieglitz, 2014. “Metrics for understanding communication on Twitter,” In: Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt, and Cornelius Puschmann (editors). Twitter and society. New York: Peter Lang, pp. 69–82.

Axel Bruns and Yuxian Eugene Liang, 2012. “Tools and methods for capturing Twitter data during natural disasters,” First Monday, volume 17, number 4, at http://firstmonday.org/article/view/3937/3193, accessed 2 January 2015.
doi: http://dx.doi.org/10.5210/fm.v17i4.3937, accessed 10 December 2016.

Axel Bruns and Jean Burgess, 2011. “#Ausvotes: How twitter covered the 2010 Australian federal election,” Communication, Politics & Culture, volume 44, number 2, pp. 37–56.

Jean Burgess and Axel Bruns, 2012. “Twitter archives and the challenges of ‘big social data’ for media and communication research,” M/C Journal, volume 15, number 5, at http://journal.media-culture.org.au/index.php/mcjournal/article/view/561, accessed 3 January 2015.

Evelien D’heer and Pieter Verdegem, 2015. “What social media data mean for audience research: A multidimensional investigation of Twitter use during a current affairs TV programme,” Information, Communication & Society, volume 18, number 2, pp. 221–234.
doi: http://dx.doi.org/10.1080/1369118X.2014.952318, accessed 10 December 2016.

Evelien D’heer and Pieter Verdegem, 2014. “Conversations about the elections on Twitter: Towards a structural understanding of Twitter’s relation with the political and the media field,” European Journal of Communication, volume 29, number 6, pp. 720–737.
doi: http://dx.doi.org/10.1177/0267323114544866, accessed 10 December 2016.

Kevin Driscoll and Shawn Walker, 2014. “Working within a black box: Transparency in the collection and production of big Twitter data,” International Journal of Communication, volume 8, pp. 1,745–1,764, and at http://ijoc.org/index.php/ijoc/article/view/2171, accessed 10 December 2016.

Frank Esser, 2013. “Mediatization as a challenge: Media logic versus political logic,” In: Hanspeter Kriesi, Sandra Lavenex, Frank Esser, Jörg Matthes, Marc Bühlmann and Daniel Bochsler (editors). Democracy in the age of globalization and mediatization. New York: Palgrave Macmillan, pp. 155–176.
doi: http://dx.doi.org/10.1057/9781137299871_7, accessed 10 December 2016.

Devin Gaffney and Cornelius Puschmann, 2014. “Data collection on Twitter,” In: Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt and Cornelius Puschmann (editors). Twitter and society. New York: Peter Lang, pp. 55–68.

Carolin Gerlitz and Bernhard Rieder, 2013. “Mining one percent of Twitter: Collections, baselines, sampling,” M/C Journal, volume 16, number 2, at http://journal.media-culture.org.au/index.php/mcjournal/article/view/620, accessed 3 December 2015.

Sandra González-Bailón, Ning Wang, Alejandro Rivero, Javier Borge-Holthoefer and Yamir Moreno, 2014. “Assessing the bias in samples of large online networks,” Social Networks, volume 38, pp. 16–27.
doi: http://dx.doi.org/10.1016/j.socnet.2014.01.004, accessed 10 December 2016.

Todd Graham, Marcel Broersma and Karin Hazelhoff, 2013. “Closing the gap? Twitter as an instrument for connected representation,” In: Richard Scullion, Roman Gerodimos, Daniel Jackson and Darren G. Lilleke (editors). The media, political participation and empowerment. London: Routledge, pp. 71–88.

Daniel C. Hallin and Paolo Mancini, 2004. Comparing media systems: Three models of media and politics. Cambridge: Cambridge University Press.

Susana Herrera-Damas and Alfred Hermida, 2014. “Tweeting but not talking: The missing element in talk radio’s institutional use of Twitter,” Journal of Broadcasting & Electronic Media, volume 58, number 4, pp. 481–500.
doi: http://dx.doi.org/10.1080/08838151.2014.966361, accessed 10 December 2016.

Tim Highfield, Stephen Harrington and Axel Bruns, 2013. “Twitter as a technology for audiencing and fandom: The #Eurovision phenomenon,” Information, Communication & Society, volume 16, number 3, pp. 315–339.
doi: http://dx.doi.org/10.1080/1369118X.2012.756053, accessed 10 December 2016.

Laura Iannelli and Fabio Giglietto, 2015. “Hybrid spaces of politics: The 2013 general elections in Italy, between talk shows and Twitter,” Information, Communication & Society, volume 18, number 9, pp. 1,006–1,021.
doi: http://dx.doi.org/10.1080/1369118X.2015.1006658, accessed 10 December 2016.

Andreas Jungherr, 2016. “Twitter use in election campaigns: A systematic literature review,” Journal of Information Technology & Politics, volume 13, number 1, pp. 72–91.
doi: http://dx.doi.org/10.1080/19331681.2015.1132401, accessed 10 December 2016.

Ulrike Klinger, 2013. “Mastering the art of social media: Swiss parties, the 2011 elections and digital challenges,” Information, Communication & Society, volume 16, number 5, pp. 717–736.
doi: http://dx.doi.org/10.1080/1369118X.2013.782329, accessed 10 December 2016.

Anders Olof Larsson, 2013. “Tweeting the viewer — Use of Twitter in a talk show context,” Journal of Broadcasting & Electronic Media, volume 57, number 2, pp. 135–152.
doi: http://dx.doi.org/10.1080/08838151.2013.787081, accessed 10 December 2016.

Anders Olof Larsson and Øyvind Ihlen, 2015. “Birds of a feather flock together? Party leaders on Twitter during the 2013 Norwegian elections,” European Journal of Communication, volume 30, number 6, pp. 666–681.
doi: http://dx.doi.org/10.1177/0267323115595525, accessed 10 December 2016.

Anders Olof Larsson and Hallvard Moe, 2012. “Studying political microblogging: Twitter users in the 2010 Swedish election campaign,” New Media & Society, volume 14, number 5, pp. 729–747.
doi: http://dx.doi.org/10.1177/1461444811422894, accessed 10 December 2016.

Stine Lomborg and Anja Bechmann, 2014. “Using APIs for data collection on social media,” Information Society, volume 30, number 4, pp. 256–265.
doi: http://dx.doi.org/10.1080/01972243.2014.915276, accessed 10 December 2016.

Zizi Papacharissi and Maria de Fatima Oliveira, 2012. “Affective news and networked publics: The rhythms of news storytelling on #Egypt,” Journal of Communication, volume 62, number 2, pp. 266–282.
doi: http://dx.doi.org/10.1111/j.1460-2466.2012.01630.x, accessed 10 December 2016.

Christina Prell, 2012. Social network analysis: History, theory & methodology. London: Sage.

Stefan Stieglitz and Linh Dang-Xuan, 2012. “Social media and political communication: A social media analytics framework,” Social Network Analysis and Mining, volume 3, number 4, pp. 1,277–1,291.
doi: http://dx.doi.org/10.1007/s13278-012-0079-3, accessed 10 December 2016.

Zeynep Tufekci, 2014. “Big questions for social media big data: Representativeness, validity and other methodological pitfalls,” Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, at http://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/download/8062/8151, accessed 18 January 2015.

Maurice Vergeer, Liesbeth Hermans and Steven Sams, 2011. “Online social networks and micro-blogging in political campaigning: The exploration of a new campaign tool and a new campaign style,” Party Politics, volume 19, number 3, pp. 477–501.
doi: http://dx.doi.org/10.1177/1354068811407580, accessed 10 December 2016.

Farida Vis, 2013. “A critical reflection on Big Data: Considering APIs, researchers and tools as data makers,” First Monday, volume 18, number 10, at http://firstmonday.org/article/view/4878/3755, accessed 22 December 2014.
doi: http://dx.doi.org/10.5210/fm.v18i10.4878, accessed 10 December 2016.

D. Yvette Wohn and Eun-Kyung Na, 2011. “Tweeting about TV: Sharing television viewing experiences via social media message streams,” First Monday, volume 16, number 3, at http://firstmonday.org/article/view/3368/2779, accessed 19 September 2013.
doi: http://dx.doi.org/10.5210/fm.v16i3.3368, accessed 10 December 2016.

 


Editorial history

Received 16 December 2015; revised 12 May 2016; accepted 10 December 2016.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

What are we missing? An empirical exploration in the structural biases of hashtag-based sampling on Twitter
by Evelien D’heer, Baptist Vandersmissen, Wesley De Neve, Pieter Verdegem, and Rik Van de Walle.
First Monday, Volume 22, Number 2 - 6 February 2017
http://firstmonday.org/ojs/index.php/fm/article/view/6353/5758
doi: http://dx.doi.org/10.5210/fm.v22i2.6353




