First Monday

The awkward semantics of Facebook reactions by John C. Paolillo

Facebook originally had a single reaction feature, like, which was supplemented in 2016 with five other options, and again in 2020 with a sixth. The intended meanings of these features remain unclear: they are both constrained in expressive range and potentially confused with one another. This study examines the distribution of these reactions across seven samples of Facebook posts to evaluate their potential meanings and assess their comparability across contexts, in terms of three current approaches to interpreting social media meanings: engagement, sentiment, and face-work. The reactions’ distribution is complex and unstable across samples, and the available aggregate data do not reveal face-work patterns that are otherwise readily observable. In conclusion, cautions are offered regarding interpreting quantitative analyses of reaction features.


1. Introduction
2. Page lists and searches
3. Sample period and data
4. Quantitative analysis: Poisson log-normal PCA
5. Models
6. Results
7. Examples
8. Discussion
9. Conclusions



1. Introduction

In 2015, Facebook introduced a set of five reaction options (henceforth “reactions”) alongside the longstanding like, comment, and share features: love, haha, wow, sad, and angry; in 2020, care was added to this set, ordered in the interface between love and haha (Facebook Careers, 2020). Each reaction is indicated by an icon resembling common graphic substitutions for ASCII emoticons. They stand alongside several other features whose purpose is also to convey emotive meaning, including feelings, punctuation/Unicode emoji, stickers, and moods specific to stories. Such features have complex development histories (Stark and Crawford, 2018), and they influence each other. Facebook, like other social media companies, already supported graphic substitutions for punctuation emoji in text, although it lagged other sites in supporting a dislike reaction feature alongside the like it helped make ubiquitous.

While Facebook did not fully explain the original feature’s design or its later revision, the interface order implicitly suggests a scale of positive to negative feeling. This addressed the demand for a dislike button while making Facebook’s reactions different from other platforms’ features. In resisting a dislike feature, Facebook cited a desire to promote “positive” interactions, an agenda threatened by facilitating expression of negative affect (Oremus, 2014). This stance failed to acknowledge that positive connection information had already proved sufficient for social media users to cultivate disconnection, e.g., “echo chambers” (Conover, et al., 2011; Pariser, 2011). Moreover, while confined to like, Facebook users had already developed many ways of using it, making it ambiguous and polysemous. As with like, the new reactions are also constrained in what they can represent. Do they also acquire semantics not intended by their designers? How do they compare to other features such as commenting and sharing? And what contextual conditions may affect their use?

Social media companies and marketers reliant on social media tend to regard “engagement” as a metric of a post’s value to the platform and/or users: it is generally defined as some kind of total or average measure of all of a post’s interactions from users — a highly engaging post is one that provokes a large amount of response in users [1]. The simplest engagement measure simply totals the counts of all likes, comments, shares, etc., while more elaborate versions may weight these in some way. Its meaning to social media companies is largely commercial: engaging posts generate more advertising opportunities and more salable user information. For prominent public platforms like Facebook, engagement also intersects with concerns about spam, false information, and scams. For users, engagement arises for yet other reasons: recent and newsworthy events, high emotional value, political controversy, etc. In general, engagement is the extent to which a post commands attention, although the nature of this meaning depends on whether the perspective adopted is that of the user or the platform.
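The engagement measures just described can be sketched concretely. This is an illustrative fragment, not code from the study; the reaction names and the weights are hypothetical.

```python
# Illustrative sketch of the two engagement measures described above.
# The interaction names and weights are hypothetical, not from the paper
# or from any platform's actual scoring formula.

def simple_engagement(post):
    """Unweighted engagement: the total of all interaction counts."""
    return sum(post.values())

def weighted_engagement(post, weights):
    """Weighted engagement: interactions scaled by an assumed importance."""
    return sum(weights.get(kind, 1.0) * count for kind, count in post.items())

post = {"like": 120, "love": 30, "haha": 5, "wow": 2,
        "sad": 1, "angry": 0, "comments": 14, "shares": 8}
weights = {"comments": 2.0, "shares": 3.0}  # hypothetical up-weighting

print(simple_engagement(post))             # plain total of interactions
print(weighted_engagement(post, weights))  # comments/shares up-weighted
```

The unweighted total corresponds to simply summing all interaction counts per post; the weighted variant illustrates how platforms or marketers might privilege costlier interactions.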

A second type of meaning is what is known as “sentiment” (Cambria, et al., 2017; Pang and Lee, 2008), defined as the emotional meaning of a text. While usually theorized as a multi-dimensional construct similar to intelligence, most practical applications of sentiment reduce it to good versus bad feeling, measured on either a single scale or two separate scales, one for positive emotion and one for negative. Sentiment is regarded industry-wide as an assay of people’s attitudes, and is taken to be implicated in a range of social and psychological processes and potentially predictive of economic behavior (Bollen, et al., 2011), brand reception (Rambocas and Gama, 2013), etc. [2]. With respect to Facebook, Kramer, et al. (2014) claimed to show “social contagion” of sentiment, i.e., that the expression of good or bad feeling in people’s posts follows that of their friends’ posts they are exposed to. This was positioned in contrast to social comparison theories of depression among social media users (e.g., Vogel, et al., 2014), and demonstrated through the manipulation of users’ feeds across a large number of Facebook users for a two-week period. The “Facebook experiment,” as it became known, caused considerable consternation among both users and social scientists, leading to calls for reform and a revision of the Common Rule for human subjects research in the U.S. (U.S. Department of Health and Human Services, et al., 2018; Code of Federal Regulations, 2016). This revision is probably more notable for the discourse that precipitated it than for the practical changes it imposes on human subjects research. Since that time, however, social media companies including Facebook have increasingly restricted access to their data for research purposes.

A different theoretical framework for meaning in social media focuses on user goals of maintaining relationships via face-work. In this view, when a Facebook user creates a post they bid for attention among the people they are linked with. Other user responses via reactions, comments, etc., permit mutual relationships to be constructed and maintained by validating other users’ social stakes, i.e., by acknowledging users’ face wants (Goffman, 1955). The linguistic theory relating social face wants to the forms of expressions known as linguistic politeness (Brown and Levinson, 1987) has been applied extensively to the linguistic analysis of social media text (Graham, 2007; Jeon and Mauney, 2014; Locher, et al., 2015; Theodoropoulou, 2015; West and Trester, 2013), but it may also be applied to the communicative uses of reactions (cp., Blommaert and Varis, 2015; Maíz-Arévalo, 2017).

Briefly, there are two kinds of face wants: negative face, a desire to be unimpeded, and positive face, a desire for one’s desires to be shared (not to be confused with negative and positive emotion). Various communicative acts may threaten face. For example, making a request imposes on the other and therefore threatens negative face. Similarly, expressing dislike of someone else’s attributes threatens positive face. Such communications are face threatening acts (FTAs); in speech they are generally accompanied by remediation (“paying face”) through verbally acknowledging the interlocutor’s other face wants. Remediation paying positive face is positive politeness, a strategy that tends to be used more among intimates and for lesser threats to face; remediation paying negative face is negative politeness, which tends to be used for greater social distance and greater face threats. Other strategies for performing FTAs are bald, on record (i.e., with no face remediation), usually used for especially small threats or for circumstances that require efficiency, and don’t do the FTA, for particularly large threats. A variety of off-record FTA strategies such as sarcasm and irony occupy the region between not doing the FTA and negative politeness (Brown and Levinson, 1987).

Posting and interacting with posts on social media entail face-threats. A user may choose to post, apply a reaction, add a comment, and/or share a post elsewhere; each has different consequences for face management. Posting and commenting require work (typing, perhaps reading others’ comments first to avoid saying something redundant, off-topic or unwelcome), and may have low or even negative payoff for the user if someone is offended by the comment, but they also permit the full range of linguistic FTA strategies. Sharing shifts the locus of discourse to a place of the user’s choosing, perhaps to their own audience of friends and others, where primarily the face of the sharing user is risked, not that of the original poster. One potential function of reactions is to expedite payment of positive face, but they may also cause face threats of their own. For example, a post may contain a photograph and a statement about it. In posting, the user expresses a relation to the image, thereby risking face. Reactions such as like register approval and thereby pay positive face, although their target is ambiguous between the post itself (or image), the poster, and the poster’s audience, and they may even simultaneously act differently toward each. This can lead to confusion and face loss, e.g., if liking the post is somehow construed as disapproval of the poster or members of the audience. Comments and reactions are non-exclusive within Facebook (though the interface does not clearly link the two communications), and face payment is occasionally clarified if a simple reaction is too ambiguous (e.g., commenting, “I liked your comment, not the article”).

Facebook reactions may therefore convey any of three kinds of meaning: engagement (the ability of posts to command attention), sentiment (emotional meaning, especially good/bad feeling), and face-work (the management of users’ social stakes). To what extent then are these three types of meaning manifest in the distribution of Facebook reactions? Can they be identified in typical, modest-sized samples of Facebook posts? And to what extent do these different kinds of meaning cohere with one another? To answer these questions, this study undertakes an empirical analysis of the distribution of Facebook reactions across seven samples of posts. Each sample covers several thousand posts harvested using various search criteria from CrowdTangle, representing the sort of sampling common in scholarly studies of Facebook. These are compared to suggest how far one might be able to generalize about the reactions and what characterizes their meaning. We compare the contexts represented by the samples and discuss selected examples to suggest interpretations for the observed differences alongside expectations for future studies of reaction features in social media. The remainder of the paper is organized as follows. The next two sections describe the selection of data samples and analysis procedures. Following this, results of the quantitative analyses are presented, followed by consideration of a selection of posts examined specifically in terms of face-work. These observations are discussed in terms of engagement, sentiment and face-work. The conclusion section suggests general implications of these findings and cautions for future research.



2. Page lists and searches

Quantitative analysis of Facebook reactions requires access to data at an appropriate scale; ideally, the data would also be unbiased. Unfortunately, unbiased samples of Facebook data are unobtainable for several reasons. First, Facebook’s scope is vast. According to Google, in June 2021 Facebook had 2.85 billion users, or more than a third of all people in the entire world. While this estimate may not be fully accurate, all of Facebook (or “typical Facebook use”) is simply not a tractable scope for research, even before considering its contextual heterogeneity, changes in user behavior in response to national or local events, etc. A second issue is that Facebook’s data is protected as a proprietary resource. Some limitations exist for privacy reasons, though the platform tends to treat these as user affordances rather than as contractual constraints on its commercial conduct or as compliance boundaries for ethical research. Prior to the Cambridge Analytica scandal (Vaidhyanathan, 2018), Facebook’s API access was broad enough to permit targeted voter intimidation (Burzynski, 2020); now API access for research even to public data is sharply restricted.

However they arise, the impact of the restrictions on quantitative research is clear: all samples of Facebook data are biased in arbitrary and unknown ways. Hence, they are neither random nor truly representative. In the present research, we attempt to use samples whose context is sufficiently well known to permit cautious generalization. For this study, Facebook is accessed through the CrowdTangle platform (CrowdTangle Team, 2021), a social media research application purchased by Facebook in 2016. CrowdTangle provides tools for accessing an indexed set of publicly-available Facebook posts, although this is necessarily not a comprehensive index: there are gaps in certain historical periods, regions, etc., reflecting operating constraints such as Facebook company policies, legal restrictions, staff time and resources, and special user requests for certain pages.

We employ three of CrowdTangle’s features: curated page lists, custom page lists, and post search. Page lists are lists of Facebook pages curated by staff to provide starting and/or reference points for research. A large number of lists covers a broad range of topics, making this a convenient direction of approach for researchers. Four such lists are used here: US Celebrities (US Cel, in the tables and figures), Sri Lankan Celebrities (SL Cel), Science Media (Science), and Indiana Media (Indiana). US Cel contains pages for musicians and actors; these are fairly typical foci in social media research where branding, popularity or marketing are involved. SL Cel presents a regionally focused list intended as a comparison for the custom list SL Pol introduced below. SL Cel appears to over-represent Sinhala-speaking Christian females (Sri Lanka has a Sinhala Buddhist majority), but it is unclear if that bias reflects local entertainment industry characteristics, Facebook use, or CrowdTangle’s curation. It also includes “Island Cricket” which, on inspection, is perhaps better regarded as a regional or national sports media page. Individual cricket players, who are celebrities in their own right (e.g., their actions off-field generate news headlines), also have their own pages, though these are not included in the celebrity list curated by CrowdTangle. The Science and Indiana lists are fairly typical lists of news organization pages; for general U.S. news media, two problems arose: identifying a suitable list, and a potential post volume too large to sample completely. Indiana addresses the latter issue, but only for a local/regional U.S. context.

Post searches are another important entry point, as they allow one to pull posts indexed for specific keywords in a given time frame. The two searches used here, “Flat Earth” (FlatE) and “QAnon” (QAnon), represent topics of potential current research interest; searching these keywords corresponds to asking “what is going on with this topic on Facebook at this time?” Custom page lists are similar to curated lists, except that the user/researcher is responsible for maintaining the list. They represent an alternative to searches for tracking activity over a longer period of time, spread out over a number of pages, e.g., coordinated information campaigns. Pages are added via a “manage” feature that allows one to search the complete page database, manually adding them one at a time. The custom list used here, Sri Lanka Politics (SL Pol), was created for a project on political uses of Facebook during the Sri Lankan presidential and parliamentary elections, 2019–2020 [3]. It includes pages of current and former members of parliament, national and regional political party pages, pages of celebrities involved in campaigning, nationalist political pages, and political humor pages. Since this list was created, one page (Mùa Hạ Năm Đó) has been found to be off-topic: it currently contains only video posts of Vietnamese celebrities; reviewing this page’s history revealed that it changed its name from “Sri lanka podhujana peramun” (a Sri Lankan political party known as the Sri Lanka People’s Front [4]) on 3 May 2021. The page is kept in the present list, noting that its activity might represent opportunistic or inauthentic behavior, and that it may behave as an outlier.



3. Sample period and data

For each of the lists and searches above, we used the CrowdTangle search interface to collect reaction counts for representative sets of posts as CSV files, to be imported for statistical analysis in R (R Core Team, 2018). All data collection was done on 12 June 2021, holding as many sample parameters as possible constant across samples. The resulting sample characteristics are listed in Table 1. For most samples, the 1 March–31 May date range was preferred, with the expectation of collecting the full set of posts for that time period. However, it was not possible to know in advance how large the post sample would be. CrowdTangle’s default limit for historical data is 10,000 posts, but may be increased, up to a hard limit of 300,000; we aimed for complete samples close to the 10K limit for each list. CrowdTangle’s interface also posed challenges, e.g., by filling in unwanted defaults for dates, and by defaulting to listing posts in order of total interactions, rather than date.
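As an illustration of this data-handling step (the analysis itself was done in R; this Python sketch is a stand-in, and the column names are assumptions rather than CrowdTangle’s actual export schema):

```python
import io
import pandas as pd

# Toy stand-in for a CrowdTangle CSV export; real exports contain many more
# columns, and these column names are assumed for illustration only.
csv_text = """Page Name,Post Created,Likes,Loves,Cares,Haha,Wow,Sad,Angry,Comments,Shares
Example Page,2021-03-01,120,30,4,5,2,1,0,14,8
Example Page,2021-03-02,80,10,2,40,1,0,3,22,5
"""

df = pd.read_csv(io.StringIO(csv_text))

interaction_cols = ["Likes", "Loves", "Cares", "Haha", "Wow", "Sad",
                    "Angry", "Comments", "Shares"]

# CrowdTangle reports default to ordering posts by total interactions
# rather than date, so we reproduce that ordering explicitly here.
df["Total Interactions"] = df[interaction_cols].sum(axis=1)
df = df.sort_values("Total Interactions", ascending=False)

print(df[["Post Created", "Total Interactions"]])
```

Making the default ordering explicit in code guards against the interface quirk noted above, where the report order silently differs from chronological order.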


Table 1: Characteristics of the seven samples.
Sample   | Type    | Pages | Start   | End     | Total posts | Posts engaged
SL Cel   | CT List |    66 | 1 May   | 12 June |       4,224 |         4,222
SL Pol   | Custom  |   737 | 1 May   | 31 May  |      20,325 |        16,832
Science  | CT List |    66 | 1 March | 31 May  |      10,000 |        10,000
US Cel   | CT List |   248 | 1 March | 31 May  |      10,000 |        10,000
Indiana  | CT List |   101 | 1 March | 31 May  |      10,000 |        10,000
FlatE    | Search  | 4,551 | 1 March | 31 May  |       7,750 |         6,385
QAnon    | Search  | 8,198 | 1 March | 31 May  |      20,867 |        16,055




4. Quantitative analysis: Poisson log-normal PCA

Reactions are reported as counts by CrowdTangle; the usual statistical model of count data is a Poisson model. Since multiple reactions are counted for each post, a multivariate model is required to account for the shared variation among the reaction counts. The Poisson Log Normal (PLN) model, written as (1), is a relatively tractable extension of the univariate Poisson model that permits the residual shared variation to be expressed in a manner similar to probabilistic principal components analysis (PCA), in which the aim is to represent a set of observed input variables in terms of a smaller number of unobserved (latent) variables, called principal components (PCs). Models of this structure may be computed using the R package PLNmodels (Aitchison and Ho, 1989; Chiquet, et al., 2018). The working hypothesis is that some small number of PCs should suffice to represent the shared variation in reactions, and that these should be recognizable in terms of the kinds of semantics proposed in the introduction.

    W_i ~ N(0, I_q),    Z_i = μ + B W_i,    Y_i | Z_i ~ Poisson(exp(Z_i))        (1)
In this model, the W_i are normally distributed, orthogonal, unobserved latent variables similar to PC scores; the matrix B corresponds to the PC loadings of the original variables, adjusted by their means μ, which may also be replaced with a set of regression terms O_i + ΘX_i, i.e., covariates X_i weighted by regression parameters Θ plus an optional offset term O_i. The offset is used to adjust for the effect of “sampling effort” on each observed row of counts, a concern in the package authors’ application to sampling environmental genetic material, but not relevant here. The regression covariates permit the counts to be modeled using other factors; for the present purpose, we restrict model covariates to a single intercept term representing the relative propensities of the different reaction types, yielding the simplest model that can be estimated with this package. Finally, the predictions of the model Z_i are exponentially transformed to serve as the rates of the Poisson distribution. Model fit may be assessed by three criteria: log-likelihood, Bayesian information criterion (BIC), and Integrated Completed Likelihood (ICL). Here, the maximum ICL is used as the model selection criterion, as it tends to be more conservative than the other two in the number of principal components it selects. The PLN model offers improved statistical rigor over common alternatives such as PCA of log+1 transformed frequency counts (Aitchison and Ho, 1989; Chiquet, et al., 2018): it uses fitting and evaluation criteria shared by other statistical methods, and does not employ mathematically distorting transforms like log+1 for its final solution. The same criteria permit principled evaluation of how many dimensions are needed in the PCA, an improvement over alternatives such as scree plots. The model therefore permits investigating the shared distribution of Facebook reactions in a statistically rigorous fashion.
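The generative structure of the model can be made concrete with a short simulation. This is an illustrative sketch in Python (the actual fits used the R package), with arbitrary dimensions and parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 1000, 8, 2   # posts, interaction types, latent dimensions

mu = rng.normal(0.0, 1.0, size=p)      # intercepts: log-scale propensities
B = rng.normal(0.0, 0.5, size=(p, q))  # loadings of the p variables on q PCs

W = rng.normal(size=(n, q))            # latent scores, W_i ~ N(0, I_q)
Z = mu + W @ B.T                       # log-intensities, Z_i = mu + B W_i
Y = rng.poisson(np.exp(Z))             # observed counts, Y_i ~ Poisson(exp(Z_i))

print(Y.shape)   # one row of eight interaction counts per simulated post
```

Fitting reverses this process: the package estimates μ and B from the observed count matrix Y, and the rank q is chosen by the ICL criterion described above.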

The primary output for interpretation of the PLN PCA model is a set of dimensions, their relative size, and the loadings of the original variables on these dimensions. Customarily, these are presented in scatterplots; this conveys the shared variation among the original variables, and the variables and distribution of observations could then be interpreted together. Here, we forego this presentation as the focus is on the shared variation, and consistency across the samples. Hence, after justifying the models, we present line charts of the relevant model parameters (weights or loadings), using the x-axis for samples, keeping their order the same in each plot. In this arrangement, consistency of variation across samples is easily read as parallel lines (whatever shape they have being an artifact of inter-sample variation), whereas inconsistency is revealed by the crossing of lines at some point (line color and symbol shape are also used to attempt to bring out these patterns). With respect to the kinds of meaning of interest, engagement should be reflected by a high degree of shared variation among all of the reactions (along with comments and shares), while sentiment should be reflected by a positive-to-negative scale of feeling, possibly with opposing clusters of reactions, reflecting our expectations about positive and negative reaction types from the interface design. Face-work requires closer examination of the dimensions and/or posts to identify.



5. Models

The PLN model framework was applied to the seven samples described in Table 1. Along with the six reaction types, share and comment counts were included in the analysis, for a total of eight input variables. This is also the maximum number of PCs possible; any more would no longer be orthogonal. Characteristics of the optimal models for the seven samples are presented in Table 2.


Table 2: Optimal models for the seven samples selected using ICL.
Sample | PCs | ICL    | %var PC1 | %var PC2 | %var PC3
SL Cel |  7  | -2.1E5 |   68.93% |   15.00% |    8.39%
SL Pol |  5  | -4.8E5 |   78.72% |   11.48% |    7.72%
US Cel |  8  | -6.6E5 |   42.85% |   29.54% |   12.65%


The first observation from Table 2 is a strong preference for full-rank solutions, i.e., ones with eight dimensions, as many as the original system of variables (the six reactions, share and comment counts). Since the system of variables does not simplify with a full-rank solution, at least some part of these models is probably idiosyncratic to the sample and unsafe to interpret generally. For all of them, the explained variation from the first one or two components is very high, typically in excess of 90 percent (adding the variances of PC1 and PC2), whereas those of the additional components are low. Beyond the third component, model explained variance (R²) is typically reported as 1, meaning that the model is nearly fully saturated and the contribution of additional dimensions to the PCA is negligible. The smallest PLN model for these samples has five dimensions (SL Pol), but this may still be too many to interpret. We should therefore exercise care in interpreting especially the higher dimensions of the PLN models, as these have extremely small amounts of variation associated with them; here we confine ourselves to the first three PCs for each of the models in question.

The second observation is that the amount of variation explained by the first three components is not stable across samples, and this instability is not clearly related to sample size. The first component, always the largest dimension of shared variation in a PCA, ranges from as much as 81.92 percent (QAnon) to as little as 34.07 percent (Indiana). The three samples SL Pol, FlatE, and QAnon have the strongest first components, followed by SL Cel and then the remaining three samples. Controversial posts may be more typical of SL Pol, FlatE, and QAnon; such posts characteristically have polarized audience reception (large numbers selecting both positive and negative reactions). However, since these samples have fewer engaged posts than total posts (Table 1), they may have been more thoroughly sampled than the others, in which truncation of the sample leaves no posts without interactions. In other words, this characteristic may merely reflect thoroughness of sampling.

For the models themselves, there are four parameters we will consider: the intercept coefficients and the loadings on the first three principal components. These four model parameters are presented as line graphs in Figures 1–4. Because the solutions of PCA loadings are not mathematically identified for sign (+/–), the sign is chosen arbitrarily by software. Consequently, models with the same loading structure may have + and – values assigned in reverse order. To clarify the ordering relationships in the plots, the signs of PC loadings have been flipped in these specific instances: for SL Cel, Indiana, and FlatE on PC1 and PC2, and for FlatE and QAnon on PC3. This minimizes the crossing of lines, bringing out the order of the reaction types within the samples on the different PCs and making the graphs easier to read.
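The sign flipping described here was applied to specific samples by inspection. As a sketch, the same alignment can be automated by flipping any sample whose loading vector correlates negatively with a reference sample (the choice of reference being arbitrary); this is illustrative code, not the procedure used in the study:

```python
import numpy as np

def align_signs(loadings, ref=0):
    """Flip each sample's loading vector so it correlates positively with a
    reference sample's vector; PCA loading signs are arbitrary, so any
    flipped copy represents the same solution."""
    reference = loadings[ref]
    aligned = []
    for v in loadings:
        sign = 1.0 if np.dot(v, reference) >= 0 else -1.0
        aligned.append(sign * v)
    return np.array(aligned)

# Toy loadings for one PC across three samples; the second sample's signs
# happen to come out of the fitting software reversed.
pc1 = np.array([
    [0.5, 0.4, 0.3, -0.1],
    [-0.5, -0.4, -0.3, 0.1],
    [0.4, 0.5, 0.2, -0.2],
])

aligned = align_signs(pc1)
print(aligned[1])   # now oriented like the other samples' loadings
```

After alignment, consistency across samples reads as parallel lines in the plots, while genuine inversions of a reaction’s role still show up as crossings.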



6. Results

We first consider the model intercept terms, which measure the relative propensity of the different reactions within each sample on the log scale; these are given in Figure 1. The order of the reaction types remains relatively constant across samples: like is the most common reaction and angry the least in most samples, similar to what has been reported internally within Facebook (Merrill and Oremus, 2021). Samples QAnon and Indiana behave differently, with more frequent use of angry (and care the least frequent reaction), suggesting that the news stories that predominate in those samples might be more polarizing or toxic than in the other samples. The comment and share activities are close together in all of the samples, sometimes exchanging position, while care, sad, haha, and wow are less frequent than comment and share, and also exchange order from sample to sample. In Indiana, there is less variance in the mean frequency of reaction types overall. FlatE appears to have a lower overall incidence of reactions on posts than the other samples, suggesting lower overall engagement with that topic. This first result suggests that while the relative frequencies of the reaction types are broadly similar across samples, there are variations across contexts for the individual reaction and interaction types, possibly reflecting context-specific meanings.


Figure 1: Intercepts of Poisson Log-Normal models for the seven samples.


The second parameter set to consider is the loadings on PC1, the largest dimension of shared variation. For all but one sample (Indiana), this dimension has loadings for all of the reaction types in the same direction from the origin, i.e., they are positively inter-correlated. This corresponds to Facebook’s notion of engagement; CrowdTangle likewise reports “total interactions” as a simple sum of all eight counts. Should engagement be a meaningful property of posts, we should expect such a dimension in the PCA. A feed algorithm such as Facebook’s amplifies the correlation by summing the (weighted) reaction types and boosting the visibility of posts with more reactions (Merrill and Oremus, 2021), thereby creating a feedback loop for reactions, comments, and shares within the platform. The reaction types all strongly inter-correlate, contributing to a PC1 that explains a large proportion of the variance.

However, as the Indiana sample demonstrates, this does not always happen. In that sample, angry, haha, comments, and wow are positively correlated with each other, and negatively correlated with care, love, and shares; like and sad remain relatively uncorrelated. Another way to read Indiana’s PC1 is to regard the order of interactions as a scale suggesting positive to negative emotion: care > love > share > like > sad > wow > comment > haha > angry. Notably, haha and angry are similarly negative, which contrasts with the positive value usually assumed for haha. The reactions on the portion of the scale from love to sad are close together, so strong interpretation should not be made from these differences.


Figure 2: First principal component loadings.


This reading of Indiana’s PC1 compares closely with what we find for PC2 among the other six samples, as in Figure 3: for samples other than Indiana, PC2 also resembles a scale of positive to negative emotion. In this arrangement, sad, angry, or haha occupies the top of the PC loading scale, while love occupies the bottom end. Care and like are close to the bottom end, comments appear in the middle, and shares wander around the middle, appearing a bit more often toward the top, negative-affect end.

This suggests confirmation of the positive–negative scale most commonly used in sentiment analysis. However, two other observations emerge. The first is that the reactions’ positions on this scale are not fixed: while the extreme ends of the scale are somewhat consistent, individual reactions float about the scale across samples, possibly reflecting other elements of meaning that are differentially relevant across contexts and hence modulate the extent to which each reaction conveys positive or negative emotion. The second is that haha patterns most strongly with the negative-emotion end of this scale, a characteristic that deviates from what is typically assumed.


Figure 3: Second principal component loadings.


At PC3 (Figure 4), the models become less stable and less interpretable. Except in Indiana, haha dominates the lower end of this scale, with sad and/or angry at the top end and the remainder in the middle. Yet order changes are too large and too frequent among the PC3 loadings to support further generalizations. In prior investigations of the SL Pol list, a dimension of “surprise,” in which haha and wow separated from the other reactions, appeared to emerge as PC3. This would have been an intriguing result to confirm, but it is not supported here given the variable position of wow in Figure 4, where it is usually separate from and opposite haha. Hence we have to take PC3 in these samples as primarily separating haha from the other reactions, a meager and not very satisfying observation.


Figure 4: Third principal component loadings.


Among the seven samples studied here, it is clear that Indiana must be taken as a special case: unlike the others, Indiana’s PC1 represents something closer to the positive-negative scale that appears on PC2 in the other samples. Similarly, Indiana’s PC2 appears to have more in common with PC3 of the other samples. What then of Indiana’s PC3? Is it more like PC4 among the others? Or like the missing PC1 pattern? Recall that PCs beyond PC3 have small amounts of variance associated with them, and that the model R2 changes very little with further increased complexity. Hence, PC3 in Indiana and higher PCs of the other models do not seem safe to interpret. US Cel and QAnon represent distinct approaches to using CrowdTangle that are both typical of Facebook research; Indiana provides a comparison point whose distribution of reactions sharply differs from the other two.



7. Examples

At this point, we have identified patterns suggestive of engagement and sentiment meanings from the quantitative patterns of users’ interaction options for posts. We do not yet have observations suggestive of face-work components of meaning. To see whether we can find any confirmation of face-work, we turn to specific examples from three of the samples: US Cel, Indiana, and QAnon. These are chosen to streamline the presentation, as being broadly representative of how CrowdTangle reflects Facebook [5]. For each of these samples, a random selection of five posts was drawn from the first 4,000 posts. The CrowdTangle reports used for these samples are in descending order by total interactions, so drawing posts from the earlier parts of the report excludes those with low numbers of interactions (for which less can be said about their reception). The randomly selected posts were drawn simultaneously and without inspection. This approximates what one might encounter in a sample of Facebook posts selected for manual coding. The 15 posts and their associated reaction counts are given in Tables 3, 4, and 5.
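The selection procedure just described can be sketched as follows, assuming a CrowdTangle CSV export already sorted in descending order by total interactions; the file layout and column names here are hypothetical, not CrowdTangle’s actual schema.

```python
import csv
import random

def sample_posts(report_path, n_top=4000, k=5, seed=42):
    """Draw k posts at random from the first n_top rows of a
    CrowdTangle-style report (assumed sorted by total interactions,
    descending), so that low-interaction posts are excluded."""
    with open(report_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    top = rows[:n_top]           # high-interaction posts only
    rng = random.Random(seed)    # fixed seed for reproducibility
    return rng.sample(top, k)    # simultaneous draw, no inspection
```

Drawing the posts in one call to `sample` (rather than inspecting and choosing) mirrors the paper’s aim of approximating an uninspected manual-coding sample.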

The five celebrity posts in Table 3 have very large counts of like and love, and larger counts of care, than those found in Tables 4 and 5, while the counts of wow, haha, sad, and angry are proportionately lower. In each of these posts, the posting celebrity risks face by bidding for approval, and the audience overwhelmingly responds with positive face payment via positive reactions. Katy Perry’s and Gwen Stefani’s posts seek approval for an outfit or accessory, Lady Gaga’s for an artistic project, and Timbaland’s and Jada Pinkett Smith’s for aesthetic experiences recorded in the linked videos. Whereas the small numbers of haha, sad, and angry on Perry’s, Timbaland’s, Stefani’s, and Gaga’s posts might be accounted for by errors in using the interface (e.g., the user’s finger slipped to the wrong spot on a touch screen), those on Smith’s post are unlikely to be, as they are proportionately much larger than for the others. This means that some part of the audience reacts negatively enough to be willing to threaten face. Notably, the Smith post alludes approvingly to lesbian romance, a topic which potentially polarizes users. Since it is easier, and carries less face risk, to pass over a post without reacting, it is notable that some users choose to threaten face by expressing disapproval. Semantic analysis of celebrities’ posts and their audience reception therefore needs to countenance such meanings.


Randomly selected Facebook posts from CrowdTangle US Celebrities list (US Cel)
Table 3: Randomly selected Facebook posts from CrowdTangle US Celebrities list (US Cel).


Another pattern may be read from the volume of comments as compared to shares: in the Timbaland and Smith posts, shares outnumber comments, whereas the remaining posts show either the reverse pattern or something closer to parity. Both posts are videos, a kind of content that users value and readily share. The ratio of shares to comments is greater for the Timbaland post, a 30-second video, than for the 27-minute full episode linked by Smith. For posts with more shares than comments, the locus of discourse and face risk or threat shifts to users’ own audiences, rather than the celebrity’s. Sharing potentially suggests prior approval by the celebrity of a sharing user’s bid for approval (by exposing the original celebrity poster’s name and avatar), and this may also bias the subsequent discourse in the sharing user’s feed. Navigation by clicking on the shared post may return the user to the original celebrity’s post, but the two discursive domains remain simultaneously relevant yet socially distinct locales for interaction, where different people’s face is at stake. A variety of complex interactions are possible, such as a user encountering a celebrity’s post via a friend’s share, navigating to the original post, and registering a face threat (angry or haha) while leaving no (or contradictory) interaction on the friend’s post. Meanings expressed by such interactions are not registered with any fidelity in the summary counts of reactions.


Randomly selected Facebook posts from CrowdTangle Indiana media list (Indiana)
Table 4: Randomly selected Facebook posts from CrowdTangle Indiana media list (Indiana).


The five posts from Indiana in Table 4 are all from Indiana television stations and link to video/article pages on the stations’ own Web sites. Here, the organization’s ultimate aim is to drive traffic to its own site, whether through users clicking through the media or sharing the post on their own feeds. Approval reactions such as like and love may help amplify a post’s reach in people’s feeds, but do not necessarily contribute directly to this end goal. These posts differ from one another in their content and rhetorical approach, revealing different patterns in users’ reaction choices.

The WRTV post links to a locally oriented story about a community service. While the post indirectly raises the polarized LGBTQ topic, the reactions are overwhelmingly positive and there is no boost in angry reactions as there was for celebrity Smith’s post. A non-trivial number of hahas strongly suggests mocking or trolling (Fichman and Sanfilippo, 2016), i.e., disapproval and direct face threat: youths who are socially different often face mocking from peers and strangers.

The WSBT-TV post links to a story about vaccine-related perimyocarditis, a prominent 2021 vaccine panic; the share-to-comment ratio is extremely high, suggesting that users seeing this story attempted to warn other users via sharing. COVID-19 vaccine discourse is also polarizing [6], entailing face risk for sharing such warnings, but approval could nonetheless be signaled by surprised and negative reactions, and it was here: the post has a strikingly low number of likes, with more wow and angry reactions. The haha reactions here also suggest face threat against vaccine proponents by anti-vaccine constituents.

The 14 NEWS post links to a local story that rapidly became national news. As a tragic story, sad is the predominant reaction, suggesting alignment and positive face payment to others in the audience over the shared grief, i.e., commiseration.

The WANE 15 post linking to a gun control story elicits a strong negative reaction; its polarized topic is framed provocatively. A substantial Indiana constituency favors gun freedoms, yielding many angry reactions, suggesting alignment and positive face payment to others who share this disapproval. Discourse remains predominantly on the WANE 15 post itself, with only a handful of shares, so the provocation’s payoff might be less than the station desires.

The final Indiana post links to a story about an honorary injustice committed by law enforcement against a military service member’s remembrance. While the juxtaposition of law enforcement against the military is potentially polarizing, politics makes expressing this opposition awkward, and angry and wow reactions are less common than care, a pattern shared with posts expressing grief or loss and suggestive of commiseration. The post also features a two-part leading narrative characteristic of clickbait, though it appears not to be well executed [7]. As with the celebrity posts, the Indiana sample contains uses of reactions reflecting meanings not well characterized by their quantitative distribution. While there is overlap with those of the celebrity list (e.g., mocking haha), other uses arise that appear to require further context to recognize (e.g., commiserating sad and shared-outrage angry).


Randomly selected Facebook posts from CrowdTangle: QAnon search (QAnon)
Table 5: Randomly selected Facebook posts from CrowdTangle: QAnon search (QAnon).


Because of the timing of the QAnon search sample, the final set of posts is characterized by links to articles on NGO and media Web sites regarding QAnon, its adherents and tenets, rather than to posts directly communicating among QAnon believers, as might have been found a year or more earlier. The sample period coincides with a failed QAnon prediction of a March 2021 handover of power to the former president, whose 2020 electoral loss was a rebuke to the movement and a factor precipitating the 6 January 2021 insurrection at the U.S. Capitol. Regardless, QAnon sympathizers and those with aligned political causes are well represented on Facebook, permitting one to examine audience reception of QAnon messages through Facebook reactions to relevant stories.

The Democratic Coalition appears to stir up its readership with its post on the prevalence of QAnon belief, delivering large numbers of sad and angry reactions. Since the purpose of the page is to politically mobilize its readership against causes such as QAnon, these reactions reflect alignment among its audience via negative emotion, and hence positive face payment to the organization and audience; like, wow, and haha are also present, but only a single love, which is exceptional on a high-volume post. Love, of course, would potentially register a face-threat, suggesting disalignment with the organization’s aims.

The posts by Raw Story, Newsweek, and the New Civil Rights Movement have predominantly haha reactions, which again must be taken to be mocking, thereby paying positive face to the page and audience. Whereas Newsweek’s right-of-center stance makes mockery less safe than on the other two pages, both of which are left-of-center, at the time of the sample QAnon is perhaps sufficiently stigmatized by the failure of the Capitol insurrection to make mockery socially safe.

The post by Vote Common Good attempts to dissociate Christianity from QAnon, linking to an external article on evangelical Christianity within the QAnon movement; following likes, the predominant reaction is sad, suggesting positive face payment by commiserating over the stigmatization of Christian political activity by its association with QAnon.

In each of the above 15 posts, one can make sense of the distribution of reactions by considering the posters, the pages and their context, their audiences, and the specific social stakes raised by the post. Consideration of posters’ and users’ face wants plays a prominent, though not exclusive, role in the foregoing interpretations. These in turn highlight some of the correlations observed in the previous section, e.g., the dimension of positive versus negative emotion. At the same time, it is possible to recognize distinctive meanings for some of the reactions: mockery for haha, condolence for care, and commiseration for sad. These meanings are not exclusive of other meanings, and other posts and samples may yield evidence of further distinctive meanings not revealed here. However, these observations can only be made by importing considerable background information into the interpretations, information which is not available to the quantitative analysis of reactions. Hence, while we may confirm some broad outlines of the observations of the previous section, interpretations such as those offered in this section are distinct from those of the quantitative patterns of reactions. The quantitative patterns are therefore a meager indicator of the meanings of posts in the contexts where they occur, and considerable interpretive labor is necessary to identify users’ potential meanings for Facebook’s reactions. We turn now to discussion of the findings in this section and the previous one.



8. Discussion

The quantitative results suggest cautious support for a dimension of total engagement as well as a dimension of positive to negative emotion. Furthermore, total engagement appears to validate a bias of social media platforms to regard different forms of engagement as largely interchangeable, at least for the operations of their platforms. Likewise, the positive-to-negative scale appears to validate the industry of sentiment analysis in social media (Cambria, et al., 2017; Pang and Lee, 2008; Kramer, et al., 2014). Both, however, should be regarded with considerable caution, not least because the combination of the two does not satisfactorily explain the distribution of the reaction types. As regards engagement, while its use may be intended as a kind of popularity scoring similar to Page Rank (Page, et al., 1998), its application within Facebook confounds it with the feedback mechanisms of content promotion. As a consequence, it conveys little about what a user does or experiences when viewing and reacting to a post. Worse, the engagement dimension is not uniformly present in the samples, its absence in Indiana being anomalous and difficult to explain [8]. Consequently, interpretations built around “engagement” are unsafe without further, extrinsic validation.

Similarly, while sentiment stands as a ready explanation for PC1 in Indiana and for PC2 elsewhere, this also needs to be taken cautiously. Nowhere does the sentiment approach to social media meaning explain why or how sentiment markers should be re-ordered in different contexts, particularly when that order is otherwise an obtrusive feature of the user interface. As with engagement, sentiment is also potentially bound up in the platform’s content promotion process (cf., Kramer, et al., 2014). In fact, we now know that from 2017 to 2021, Facebook somewhat inconsistently tried to limit the spread of polarizing messages by changing the weighting of reactions in their feed algorithms, especially by demoting angry multiple times (Merrill and Oremus, 2021). Consequently, any quantitative patterns of reactions observed may well be artifacts of their weighting in Facebook’s feed algorithms. The failure to find quantitative patterns reflecting other semantic dimensions is also concerning: while “surprise” is not obtrusively designed into the interface in the way that sentiment is, it is potentially conveyed by the iconic emoji-like representations of wow and haha, assuming graphics are a potential source for meaning. The absence of any such signal from the quantitative analysis is a disappointment.

Facebook’s reactions also admit considerable ambiguity with respect to face threat and remediation. Whereas angry represents a strong negative emotion, expression of which is a clear threat to positive face (Brown and Levinson, 1987), it also pays face by indicating alignment with the poster or the intended audience with respect to a post’s evaluation, as found among the Indiana vaccine and firearms posts. Haha and love are similar: they usually pay positive face to at least some participants, but potentially threaten face by mocking or indicating dis-alignment with the poster or a segment of the audience, as among the QAnon posts or the Indiana posts cuing condolence or commiseration. For the majority of users, avoiding such a face-threatening act (FTA) is probably preferable, but enough users nonetheless choose these reactions that such minority communicative intents must be acknowledged. One can even imagine guilt-tripping with sad, or self-deprecatingly deflating another participant with wow, uses which would require extensive context to identify. Such strategies are off-record (Brown and Levinson, 1987), and may require additional verbal or interactional work to execute successfully. Resolving any of these uses of reactions requires recognizing audiences with potentially complex internal social structure, and locating the precise stakes of the poster, the subject of the post, and members of the audience with respect to each other, so as to recognize whose face is implicated and how. While these factors may be read into a post by a sufficiently committed observer, they are not available in the CrowdTangle reports for quantitative analysis.

A persistent worry among Facebook’s own researchers is the potential for engagement feedback to promote controversial posts. For a social media researcher, accidental promotion of controversial posts raises their visibility so that one might see what users are motivated to argue about, but with the collateral cost of polarizing the Facebook social environment, and submerging other potentially interesting interactions. Engagement alone also does not offer a sense of the relevant positions. Instead, the meanings involved in a controversy would need to be investigated through clustering competing groups of users. Face considerations do help explain what is at stake for users in the use of reactions in individual posts, but do not explain the distributional issues without deeper contextual information than available when reactions are aggregated over an entire post.

Observing audience internal structure is not possible from per-post aggregated reactions, but it could be done by identifying users across posts, tabulating reactions simultaneously by user and post, and conducting a PCA much like the one used here. While many public pages have the volume of response that would permit this kind of analysis, and the information is available within Facebook itself (clicking the reaction count of a post brings up its user-reaction list), collecting such data directly is likely to raise privacy concerns. Some may equate it with what Cambridge Analytica did at a very large scale, and it is probably outside what Facebook regards as permissible under its terms of service [9]. However, such an analysis could be done on samples of posts similar in size to those used for this paper. Users’ literal identities would not be necessary: something (e.g., a hash) that allows one to match users across posts without linking those users to their profile or other Facebook activity could be used, restricted in scope to the samples.
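As a minimal sketch of this design, the following illustrates salted hashing of user identifiers and construction of a user-by-post reaction table; the record format, field names, and ordinal reaction encoding are all hypothetical, and an ordinary SVD-based PCA stands in for the Poisson log-normal variant used in this study.

```python
import hashlib
import numpy as np

REACTIONS = ["like", "love", "care", "haha", "wow", "sad", "angry"]

def pseudonym(user_id, salt):
    """Replace a user ID with a salted hash scoped to this sample,
    so users can be matched across posts but not linked back to
    their profiles or other Facebook activity."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def user_post_matrix(reaction_log, salt="sample-salt"):
    """Build a (user x post) matrix of reaction codes from records
    of the form (user_id, post_id, reaction); 0 means no reaction."""
    users, posts, entries = {}, {}, []
    for uid, pid, r in reaction_log:
        u = users.setdefault(pseudonym(uid, salt), len(users))
        p = posts.setdefault(pid, len(posts))
        entries.append((u, p, REACTIONS.index(r) + 1))
    M = np.zeros((len(users), len(posts)))
    for u, p, code in entries:
        M[u, p] = code
    return M

def pca_scores(M, k=2):
    """User scores on the first k principal components of the
    column-centered matrix, computed via SVD."""
    X = M - M.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]
```

The salt confines the pseudonyms to one sample: the same user hashed with a different salt (say, for another study) yields an unmatchable identifier, which is the scope restriction proposed above.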



9. Conclusions

This investigation began with the assumption that quantitative data provided by CrowdTangle might help identify and interpret the meanings of the reaction features on Facebook. We arrive at a partial answer that affirms some general patterns, namely the dimensions of engagement and positive-negative affect (sentiment), although with major caveats. Both dimensions are highly sensitive to the data samples, and both dimensions are strongly bound up in the platform’s mechanics. These points are worrying. On the one hand, we would like to be able to accomplish analyses of social media semantics that are independent of the platform’s operation in some meaningful way. When engagement, fed back through the system’s mechanisms, is such a central part of any potential meaning, that does not seem possible. A further worry is that the second meaning component, sentiment, while found in all of the samples, must be acknowledged to have different associated scales for the same reactions. While we can suggest a research design that would help resolve that, doing so empirically, even for data sets of modest size like those used here, seems out of reach for technical, legal and ethical reasons, let alone the problems of sampling and potential bias we have faced from the outset with data from Facebook or CrowdTangle.

Consequently, Facebook reactions provide only a coarse guide to the semantics of posts, and empirical dimensions like those discovered here cannot be interpreted in a general way but require firm anchoring to context. Though different, all reaction types encode ambiguous, overlapping meanings. It matters little whether they were intended as user affordances or as data selectors from which Facebook could create training sets for machine learning models. If researchers are inclined to employ reactions to quantitatively infer users’ meaning, this effort must be supplemented by close reading of the messages, and careful attention to the social stakes involved in the context. While one could assemble the necessary data for a modest-sized inquiry, were one to brave the accompanying ethical and legal challenges, the required contextual information will probably exceed that carried by the reactions themselves, thereby deflating the goal of inquiry, namely to be able to use the reactions to understand what people are doing with them.

An irony of this circumstance is that Facebook took several years to finally accede to a user-driven feature request whose ultimate design and contribution to the platform does not seem to justify the delay, development cost, maintenance effort, or user confusion. Users are now stuck with an ambiguous, overlapping set of categories that is unlikely to change under further pressure. Instead, it will change at the company’s whim, as it did in 2020 when it added the care reaction, and as it does occasionally for Mother’s Day. It will also change opaquely, as when Facebook manipulates the reactions’ promotional weighting in its attempts to mitigate the spread of misinformation and political division. On its platform, Facebook unilaterally decides what the interface presents, much as it decides what user feeds look like. When it changes these features, it does so irrespective of their consequences for the user experience in conveying meaning. In the end, the true meaning of the reactions will continue to elude the platform, those who research it, and perhaps even users, until enough of the context is assembled to allow recognizing the social actions being taken by users. In the meantime, quantitative metrics incorporating reactions, such as “engagement” or “sentiment”, must be acknowledged to be highly dependent on sample and context, and should be regarded with extreme caution.


About the author

John C. Paolillo is an associate professor of Informatics at Indiana University, Bloomington. His research focuses on social media from the perspective of its uses and users.
E-mail: paolillo [at] indiana [dot] edu



This work was supported in part by a Knight Foundation grant to the Indiana University Observatory on Social Media (OsoMe). The author would like to thank Brian Harper and various anonymous reviewers for comments on previous versions. Any errors of fact or interpretation remain the sole responsibility of the author.



1. A tension exists over the definition of engagement between the purposes of marketing, which has more nuanced aims (e.g., Haven, 2007), and those of platforms, which aim to provide easily-computed metrics (Meese and Hurcombe, 2021).

2. But see Paolillo (2019) for critique.

3. A reviewer asks for an account of the Sri Lankan and Indiana locales. Suffice it to say that Sri Lanka has been the site of increased inter-ethnic tension since at least 2014, and both the government and outside observers see Facebook as a key instrument in spreading violence, also prompting Facebook to focus on local hate-speech policy enforcement efforts. Indiana is local to the author.

4. The Sri Lanka Podujana Peramuna is a populist political party with an extensive organizing effort on Facebook having many pages and covering many candidates and campaigns; this specific page, with its peculiar spelling and lower-case typography appears to have been an inauthentic page attempting to exploit the party’s popularity, which later shifted its efforts toward similarly inauthentic behavior for a different audience.

5. The lists SL Cel and SL Pol in particular involve multilingual posts in a complex multi-ethnic, political, and international context that would require considerable explanation.

6. Indiana had considerable vaccine resistance that was less widely publicized than that in Florida and Texas.

7. Research on clickbait tends to refer to an “information gap” and seldom remarks on or employs the two-part narrative strategy in detection efforts, yet it is used in paper titles (e.g., Molina, et al., 2021), suggesting tacit acknowledgment of its existence.

8. A reviewer suggests a relationship to Facebook’s downweighting of news generally; see, e.g., Bailo, et al. (2021).

9. There is substantial ambiguity about what is allowed, although it is likely that such actions would be claimed to violate Meta’s developer terms of service as per section 3, Data use, in



J. Aitchison and C.H. Ho, 1989. “The multivariate Poisson-log normal distribution,” Biometrika, volume 76, number 4, pp. 643–653.
doi:, accessed 3 August 2023.

F. Bailo, J. Meese, and E. Hurcombe, 2021. “The institutional impacts of algorithmic distribution: Facebook and the Australian news media,” Social Media + Society (23 June).
doi:, accessed 3 August 2023.

J. Blommaert and P. Varis, 2015. “The importance of unimportant language,” Multilingual Margins, volume 2, number 1, pp. 4–9, and at, accessed 3 August 2023.

J. Bollen, H. Mao, and X. Zeng, 2011. “Twitter mood predicts the stock market,” Journal of Computational Science, volume 2, number 1, pp. 1–8.
doi:, accessed 3 August 2023.

P. Brown and S.C. Levinson, 1987. Politeness: Some universals in language usage. New York: Cambridge University Press.

J. Burzynski, 2020. “Voter intimidations and misinformation on Facebook: Using Section 11 (B) of the Voting Rights Act to protect the vote of the people of color,” Rutgers Race & the Law Review, volume 22, number 1, pp. 91–121.

E. Cambria, D. Das, S. Bandyopadhyay, and A. Feraco (editors), 2017. A practical guide to sentiment analysis. Cham, Switzerland: Springer International.
doi:, accessed 3 August 2023.

J. Chiquet, M. Mariadassou, and S. Robin, 2018. “Variational inference for probabilistic Poisson PCA,” Annals of Applied Statistics, volume 12, number 4, pp. 2,674–2,698.
doi:, accessed 3 August 2023.

Code of Federal Regulations, 2016. “45 CFR 46 Protection of human subjects,” pp. 128–148, at, accessed 4 November 2021.

M. Conover, J. Ratkiewicz, M. Francisco, B. Gonçalves, F. Menczer, and A. Flammini, 2011. “Political polarization on Twitter,” Proceedings of the International AAAI Conference on Web and Social Media, volume 5, number 1.
doi:, accessed 3 August 2023.

CrowdTangle Team, 2021. “CrowdTangle, List IDs: 1561430, 1561431, 1561432, 1561434, 1561435, 1561441,” and, accessed 3 August 2023.

Facebook Careers, 2020. “Can I get a hug? The story of Facebook’s care reaction” (15 July), at, accessed 4 November 2021.

P. Fichman and M.R. Sanfilippo, 2016. Online trolling and its perpetrators: Under the cyberbridge. Lanham, Md.: Rowman & Littlefield.

E. Goffman, 1955. “On face-work: An analysis of ritual elements in social interaction,” Psychiatry, volume 18, number 3, pp. 213–231.
doi:, accessed 3 August 2023.

S.L. Graham, 2007. “Disagreeing to agree: Conflict, (im)politeness and identity in a computer-mediated community,” Journal of Pragmatics, volume 39, number 4, pp. 742–759.
doi:, accessed 3 August 2023.

B. Haven, 2007. “Marketing’s new key metric: Engagement” (8 August), at, accessed 3 August 2023.

L. Jeon and S. Mauney, 2014. “‘As much as I love you, I’ll never get you to understand’: Political discourse and ‘Face’ Work on Facebook,” Proceedings of the 22nd Annual Symposium about Language and Society-Austin, pp. 67–75, and at, accessed 3 August 2023.

A.D. Kramer, J.E. Guillory, and J.T. Hancock, 2014. “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences, volume 111, number 24 (2 June), pp. 8,788–8,790.
doi:, accessed 3 August 2023.

M.A. Locher, B. Bolander, and N. Höhn, 2015. “Introducing relational work in Facebook and discussion boards,” Pragmatics, volume 25, number 1, pp. 1–21.
doi:, accessed 3 August 2023.

C. Maíz-Arévalo, 2017. “‘Small talk is not cheap’: phatic computer-mediated communication in intercultural classes,” Computer Assisted Language Learning, volume 30, number 5, pp. 432–446.
doi:, accessed 3 August 2023.

J. Meese and E. Hurcombe, 2021. “Facebook, news media and platform dependency: The institutional impacts of news distribution on social platforms,” New Media & Society, volume 23, number 8, pp. 2,367–2,384.
doi:, accessed 3 August 2023.

J.B. Merrill and W. Oremus, 2021. “Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation,” Washington Post (26 October), at, accessed 3 August 2023.

M.D. Molina, S.S. Sundar, M.M.U. Rony, N. Hassan, T. Le, and D. Lee, 2021. “Does clickbait actually attract more clicks? Three clickbait studies you must read,” CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, article number 234, pp. 1–19.
doi:, accessed 3 August 2023.

L. Page, S. Brin, R. Motwani, and T. Winograd, 1998. The PageRank citation ranking: Bringing order to the Web (29 January), at, accessed 3 August 2023.

B. Pang and L. Lee, 2008. “Opinion mining and sentiment analysis,” Foundations and Trends in Information Retrieval, volume 2, numbers 1–2, pp. 1–135.
doi:, accessed 3 August 2023.

J.C. Paolillo, 2019. “Against ‘sentiment’,” SMSociety ’19: Proceedings of the 10th International Conference on Social Media and Society, pp. 41–48.
doi:, accessed 3 August 2023.

E. Pariser, 2011. The filter bubble: What the Internet is hiding from you. New York: Penguin Press.

R Core Team, 2018. “The R Project for Statistical Computing,” at, accessed 3 August 2023.

M. Rambocas and J. Gama, 2013. “Marketing research: The role of sentiment analysis,” FEP Working Papers, number 489, at, accessed 3 August 2023.

L. Stark and K. Crawford, 2015. “The conservatism of emoji: Work, affect, and communication,” Social Media + Society (8 October).
doi:, accessed 3 August 2023.

I. Theodoropoulou, 2015. “Politeness on Facebook: The case of Greek birthday wishes,” Pragmatics, volume 25, number 1, pp. 23–45.
doi:, accessed 3 August 2023.

U.S. Department of Health and Human Services, U.S. Department of Homeland Security, U.S. Department of Agriculture, U.S. Department of Energy, U.S. National Aeronautics and Space Administration, U.S. Department of Commerce, U.S. Consumer Product Safety Commission, U.S. Social Security Administration, U.S. Agency for International Development, U.S. Department of Housing and Urban Development, U.S. Department of Labor, U.S. Department of Defense, U.S. Department of Education, U.S. Department of Veterans Affairs, U.S. Environmental Protection Agency, U.S. National Science Foundation, and U.S. Department of Transportation, 2018. “49 CFR Part 11. Federal Policy for the Protection of Human Subjects: Six Month Delay of the General Compliance Date of Revisions While Allowing the Use of Three Burden-Reducing Provisions During the Delay Period,” Federal Register, volume 83, number 118 (19 June), pp. 28,497–28,520, and at, accessed 3 August 2023.

S. Vaidhyanathan, 2018. Antisocial media: How Facebook disconnects us and undermines democracy. New York: Oxford University Press.

E.A. Vogel, J.P. Rose, L.R. Roberts, and K. Eckles, 2014. “Social comparison, social media, and self-esteem,” Psychology of Popular Media Culture, volume 3, number 4, pp. 206–222.
doi:, accessed 3 August 2023.

L. West and A.M. Trester, 2013. “Facework on Facebook: Conversations on social media,” In: D. Tannen and A.M. Trester (editors). Discourse 2.0: Language and new media. Washington, D.C.: Georgetown University Press, pp. 133–154.


Editorial history

Received 15 March 2023; revised 1 August 2023; accepted 3 August 2023.

Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The awkward semantics of Facebook reactions
by John C. Paolillo.
First Monday, Volume 28, Number 8 - 7 August 2023