First Monday

Mixed findings in directly replicated experimental studies on fake news by C. Sean Burns, Renee Kaufmann, and Anthony Limperos



Abstract
Fake news mimics the look of legitimate news articles even if it does not mimic the standards of journalistic reporting. The rise of fake news has been accompanied by heightened concern about the veracity of news information, a concern that has itself been highly politicized. These problems raise the questions of whether the standards of journalistic reporting can overcome the mimicry of real news, and whether the public can correctly identify real news. Here we ask two research questions. Does source information about a news article or its presentation influence the perception that the article is fake news? What factors influence the perception of fake news? We conducted directly replicated experimental studies that presented four news articles to four subject pools. We show that source information and presentation have limited influence on participants’ judgments of a real news article as fake. Among those who evaluated the articles as fake news, our results show that the less participants thought an article presented a fair, balanced, evidence-based view, the more likely they were to judge it as fake news. These findings warrant discussion about the purpose of news organizations and news reporting, as well as about how evidence and fairness work in news information.

Contents

Introduction
Literature review
Research questions and hypotheses
Methods
Stimulus material
Participants
Procedures
Measures
Data analysis
Results
Discussion
Limitations
Future studies
Conclusion

 


 

Introduction

There has been heightened concern in recent years about the veracity of information as it relates to the politicization of news as fake news (LaPierre and Kitzie, 2019; Lim, 2020), but the concept of fake news has multiple meanings (Rubin, et al., 2015). The term fake news may refer to content that is intentionally fake, i.e., disinformation (Cooke, 2017), or it may refer to content that is journalistic but sloppy and careless, i.e., misinformation (Quandt, et al., 2019). Rubin and colleagues (2015) created a typology that identifies three types of fake news: serious fabrications, large-scale hoaxes, and humorous fakes. Recently, the term fake news has been weaponized by politicians and used to attack and discredit legitimate news articles, stories, or publishers (Vosoughi, et al., 2018; Wardle, 2017). As a result, legitimate news publishers are sometimes labeled fake news by politicians, and political followers may come to accept those publishers, generally, as fake news sources, or specific articles or stories as fake news (Jack, 2017).

As a non-politicized concept, the term fake news has been defined as “fabricated information that mimics news media content in form but not in organizational process or intent” [1]. This definition implies two primary characteristics of a news article. The first is its form or presentation. This encompasses the overall look of a news article, the features that can be mimicked or fabricated by fake news purveyors, including document layout, typography, the use and position of images, and branding. The second, the organizational process and intent behind a news item, includes characteristics of a news article that are more social, historical, and institutional. These characteristics function as signals of a news publisher’s reputation and its dedication to journalistic principles (News Leader Association, 2020). These signals are captured on the article’s page, in the byline, article title, publication title, and publication date, as the information generally used for an article’s descriptive metadata. For example, a news article published by an established news site (e.g., New York Times, Globe and Mail) and by an established journalist will, by virtue of its provenance, suggest that it follows the norms, conventions, and principles of established journalism. Thus, fake news may mimic the presentation of valid news sources through the look and feel of legitimate news sites, but it typically lacks these reputational characteristics and does not follow journalistic principles.

Taken together, the definition provides two methods for identifying a fake news article: first, by the presentation of the article, and second, by the source information that captures the article’s organizational credentials. The presentation of established news sites is easily mimicked given the “sociotechnical arrangements” afforded by current Web technologies and “infrastructures” [2]. For example, a content management system can be configured to look like a professional news site (Joe, 2015). Also, the writing may mimic the structure of real news articles, such as the inverted pyramid, and therefore may have the feel of legitimate news reporting (Purdue Writing Lab, n.d.; Tandoc, Jr., et al., 2021). It might be more difficult to fake journalistic practices (organizational intent). Information presented in fake news articles is often oversimplified and propagandist (Ali and Zain-ul-abdin, 2021). These practices, signaled in a document’s source information (e.g., an article from the Wall Street Journal or Los Angeles Times), provide information about the reputation of the source and the institutional and professional processes that produce the reporting. Together, the presentation of a news article and the source information embedded on the page function as heuristic devices that help readers determine the legitimacy of the content (Bobkowski and Younger, 2020).

Fake news content creators leverage these heuristic devices to play against our cognitive biases. Among the factors that influence whether we accept information as true are those that appeal to preexisting beliefs (i.e., selective exposure), that confirm those preexisting beliefs (i.e., confirmation bias), or that provide desirable information (i.e., desirability bias) (Lazer, et al., 2018). These biases influence whether information is accepted as true even if it is false, or accepted as false even if it is true. Although we know that source information influences whether we accept messages as true (Jennings, 2019), we are still learning how biases such as these are influenced by the presentation of news documents.

Thus, the purpose of this study is to examine how the presentation of news articles and their accompanying source information influence perceptions of fake news. Accordingly, we propose the following two research questions: Does source information about the news article, or the way the news article is presented (styled), influence the perception that the news article is fake news? What biases influence the perception of a news article as fake news?

 

++++++++++

Literature review

Attempts to identify fake news broadly follow two methods, which might be called user-based approaches and system-based approaches. On the system side, research on how machine learning, network analysis, or information retrieval techniques can automate the identification of fake news content and sources is ongoing (Jang, et al., 2018; Singh, et al., 2021). User-based approaches can be broadly categorized into two areas: studies that seek to understand psychological or cognitive aspects of users who believe fake news stories, and studies that focus on information literacy instruction and how information literacy theories, espoused and in use, are practiced (Kerr, 2013).

Cognitive biases provide the theoretical grounding for many user-based approaches to the study of fake news. For example, selective exposure is a form of avoiding information that disagrees (or is discordant) with prior beliefs and views while seeking information that agrees (or is concordant) with them (Case, 2012). Selective exposure is theoretically abetted by the fractionation of media, which allows people to consume or avoid media depending on the media’s perceived or real political biases (Lewandowsky, et al., 2012) and, by consequence, can result in homogeneity bias (Nikolov, et al., 2019). However, selective exposure to major news sources, which have not always aimed for objectivity and are pressured by commercial interests (Scheufele and Krause, 2019), may be limited in practice because top political news sites attract audiences with diverse ideological beliefs (Nelson and Webster, 2017).

Even if people consume a variety of sources when engaging with the news, there is evidence to suggest that political identification can influence how they ultimately judge the reliability and credibility of news sources. Self-reported liberals are more likely to rate major news sources as real, while self-reported conservatives are more likely to rate the same sources as fake (Michael and Breaux, 2021). Yet this is complicated by findings showing that conservatives may evaluate major news sources as providing more fake news and propaganda while also still providing real news, and that familiarity with news sources does not strongly determine beliefs about the veracity of those sources (Michael and Breaux, 2021). Furthermore, liberals and conservatives determine validity differently. For example, liberals rate the accuracy of liberal-leaning headlines more favorably than conservatives do, and conservatives rate the accuracy of conservative-leaning headlines more favorably than liberals do (Calvillo, et al., 2021).

Exposure to false or debunked news has cognitive and behavioral effects (Bastick, 2021). People update their beliefs when false or debunked information is corrected, but they tend to forget the corrections and retain the false or debunked information over the long term, suggesting that corrections may not be as memorable as the false or debunked information itself (Brashier, et al., 2021). Belief in fake news also stems from difficulties with cognitive decoupling, the ability to assess hypotheticals by decoupling real-world representations from imaginary ones (DeDonno, 2016). Those who took longer to achieve cognitive decoupling were more likely to believe fake news stories (Bronstein, et al., 2021).

There has been a strong push toward information literacy in the library and information science field since the 1970s, but despite having several standards or frameworks that define information literacy, the topic is under-theorized (Budd and Lloyd, 2014). Recent efforts have pushed for a stronger approach to evaluating information critically by understanding how authority is constructed and contextual (Association of College and Research Libraries [ACRL], 2016). However, it is still true that indicators of authority, such as the names of authors, their credentials and affiliations, and publication sources, are used in simplified checklists to mark whether a work is authoritative or credible before critically evaluating the information it contains (Saunders and Budd, 2020). Thus, source information remains important. Students who score well on information literacy tests use source information since it provides context that reduces uncertainty about the information content (Walton and Hepworth, 2011).

Readers with diverse ideological beliefs read from a spectrum of news sources (Nelson and Webster, 2017) and interpret stories from those sources differently depending on their respective ideologies (Michael and Breaux, 2021). Therefore, it should be the case that readers have differing views about the authority and credibility of those news sources. This idea is captured in the first frame of the aforementioned “Framework for information literacy for higher education” (Association of College and Research Libraries [ACRL], 2016), and it is important given the splintering and polarization of the United States into two camps, broadly liberal and conservative. Saunders and Budd (2020) describe problematic responses in the literature to this particular frame, which “can be interpreted to suggest that there is no set of objective facts against which authority might be, in part, measured or based” [3]. However, a meta-analysis of fake news research shows that people fail to distinguish between true and false stories not because their “reasoning abilities are hijacked by political motivations,” but because they fail “to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge)” [4]. The problem does not appear to be that people bend facts to fit their respective political opinions, but that they simply fail to take the time to weigh and evaluate the information they receive from news sources. This idea is supported by research showing that people with higher levels of emotional intelligence are less susceptible to belief in fake news stories (Preston, et al., 2021), which in turn supports the importance of the affective state in various information behavior theories (Walton and Hepworth, 2011).

 

++++++++++

Research questions and hypotheses

This study proposes two research questions. The first and main question asks whether the presentation and source information of a news article influence the perception of the article as fake news. Second, we ask what factors influence the perception of a news article as fake news. To answer these research questions, we tested four hypotheses. First, since source information conveys information about the organizational intent of a news source, we present the following hypothesis:

H1: There is a difference in how participants judge the news article as fake news depending on the presence of article source information.

Since presentation conveys information about the genre of a news source, e.g., how it is styled, we present the second hypothesis:

H2: There is a difference in how participants judge the news article as fake news depending on the presence of stylistic formatting (presentation).

Since both source information and presentation are used as indicators of the legitimacy of news sources, we present the third hypothesis:

H3: There is a difference in how participants judge the news article as fake news depending on both the presence of article source information and on stylistic formatting.

To test for bias, we present a fourth hypothesis:

H4: Significant relationships exist between perceptions of fake news and factors characterized by political bias, media bias, voting behavior, information credibility, and news information behavior.

 

++++++++++

Methods

In order to answer the proposed research questions and hypotheses, we conducted our first experiment in spring 2019 and then directly replicated it (Nosek, et al., 2012) three more times: fall 2019, spring 2020, and summer 2020. We therefore collected data at four distinct times, yielding four unique but replicated studies. Each study acquired a new sample of participants and employed a different news story while using the same procedures and measures. Aside from using the same data collection instrument, the studies were not linked to each other.

The research team selected the news stories based on the perceived political leaning of the information presented and of the sources providing it. All news stories were text-based and gathered from legitimate news sources. That is, we did not pick any “fake news” stories, or stories that were intentionally fabricated, although each source had biases or slants (Groseclose and Milyo, 2005). Two news sources were for-profit (i.e., New York Times, Fox News) and two were not-for-profit (i.e., National Public Radio, NPR; Associated Press, AP).

 

++++++++++

Stimulus material

For all four studies, participants were asked to read the news story and be prepared to answer follow-up questions about the article and their perceptions of it. Participants were not allowed to progress in the survey until at least one minute had been spent with the article. Participants were randomly assigned to a news story condition; the story was presented in one of four ways.

Condition one served as the control for the experiment. Participants were presented with a news story as it appeared when published online. Advertisements were removed because they were not salient to the purpose of this series of studies and to avoid their personalized nature. In the second condition, the source information (e.g., byline, article title, author name, publication name) was removed from the news story; the style or presentation of the news story remained the same as the original publication. The third condition presented the news story with the presentation or styling minimized or removed (e.g., plain fonts, a single font size, no photos); the story was presented in a plain, sans-serif font, as if it were written on a computer terminal, while the source information remained on the page. The fourth condition presented the news story with both the source information removed and the presentation or styling minimized or removed.
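
Viewed schematically, the four conditions form a 2 x 2 crossing of two factors: source information (present or removed) and presentation (original or minimized). The following sketch, written in R (the language used for this study's analyses), illustrates the crossing and a random assignment of participants to conditions; the object names and sample size are illustrative only, not taken from the study's instrument or code.

    # The four conditions as a 2 x 2 crossing of two factors.
    # Variable and level names are illustrative placeholders.
    conditions <- expand.grid(
      source_info  = c("present", "removed"),
      presentation = c("original", "minimized")
    )
    # Row 1 is the control (condition one); rows 2-4 are the three treatments.
    set.seed(1)  # illustrative random assignment (between-subjects design)
    assignment <- conditions[sample(1:4, size = 416, replace = TRUE), ]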

 

 
Figure 1: Set of conditions used for the NPR study. The images are snippets only; participants viewed the entire article. Top left corner: Participants presented with this version of the article served as the control in the study; this version was presented as it was published. Top right corner: Participants presented with this version of the article were in Treatment 1; this version excluded all source information. Bottom left corner: Participants presented with this version of the article were in Treatment 2; this version minimized styling. Bottom right corner: Participants presented with this version of the article were in Treatment 3; this version excluded source information and minimized styling.

 

The news articles were chosen based on the perceived political leanings of the publications and on the article topics. According to a Pew Research Center report (Jurkowitz, et al., 2020), Democrats and liberal-leaning people tended to trust the New York Times and NPR and distrust Fox News as news sources, while Republicans and conservative-leaning people tended to trust Fox News and distrust the New York Times and NPR. Groseclose and Milyo (2005) showed that Fox News (specifically, Fox News Special Report) has a conservative bias, the New York Times has a liberal bias, and NPR (specifically, NPR Morning Edition) lies closer to the center or mainstream. We decided to include the Associated Press (AP) since the AP syndicates stories to the other three news organizations. The Pew report (Jurkowitz, et al., 2020) provided no information on the AP and its political polarization, and we therefore assumed that the AP was not as politicized as the other three sources at the time of these studies. Specifically, of Americans who reported leaning Democrat or who identified as Democrat, 53 percent stated trust in the New York Times for political and election news and 46 percent stated trust in NPR. Of Americans who reported leaning Republican or identified as Republican, 65 percent stated trust in Fox News. Among Democrats and those who lean Democrat, 61 percent reported distrust of Fox News, and among their Republican counterparts, 42 percent reported distrust of the New York Times (Jurkowitz, et al., 2020).

Thus, for Study One, the New York Times was selected as the news source since it is perceived as a left-leaning news outlet. Additionally, the news story covered a topic we believed might be perceived as having a left-leaning bias by those who lean right: a report on the special counsel investigation, led by Robert Mueller, into Russian interference in the 2016 United States election (Haberman, et al., 2018). For Study Two, Fox News was selected as the news source since it is perceived as a right-leaning news outlet. Additionally, the news story covered a topic we believed might be perceived as having a right-leaning bias by those who lean left: a report on U.S. representative Ilhan Omar’s call for the United Nations to help with the migration crisis on the U.S.-Mexico border (Shaw, 2019). For Study Three, AP was selected as the news source since it does not seem to have a reputation for being either left- or right-leaning, given that the Pew report (Jurkowitz, et al., 2020) contained no data about it. Additionally, the AP news story presented a non-political topic: a report on a tennis match at the Australian Open (Pye, 2020). For Study Four, NPR was selected as the news source since it is perceived as centrist to slightly left-leaning, and because it is a not-for-profit outlet. Additionally, the news story presented a topic that was fairly neutral at the time of data collection: a report projecting COVID-19 cases (Aizenman and McMinn, 2020). The sections below describe the stimulus materials and present details about the sample participants and procedures for each study phase. Lastly, information about the instruments used in each study is presented.

 

++++++++++

Participants

Study one. New York Times study. A total of 416 college students (148 males, 263 females, 1 male-to-female, 1 person who preferred not to answer, and 3 who did not answer), whose ages ranged from 18 to 44 years (M = 19.62, SD = 2.52), were recruited from a research subjects pool at a large southeastern university. The research subjects pool generally solicits participants from lower-level and one upper-level communication and information science courses; these courses generally include content dedicated to digital literacy. A majority of participants reported their ethnic origin as Caucasian (n = 336, 81 percent) and identified their academic standing as first-year students (n = 220, 53 percent). Lastly, 62 percent of participants reported that they did not vote in the last election when data were collected in the spring of 2019.

Study two. Fox News study. A total of 650 college students (222 males, 417 females, 3 who preferred not to answer, and 8 who did not answer), whose ages ranged from 18 to 53 years (M = 19.72, SD = 3.41), were recruited from a research subjects pool at a large southeastern university. The research subjects pool generally solicits participants from lower-level and one upper-level communication and information science courses; these courses generally include content dedicated to digital literacy. A majority of participants reported their ethnic origin as Caucasian (n = 507, 78 percent) and identified their academic standing as first-year students (n = 314, 48 percent). Lastly, 69 percent of participants reported that they did not vote in the last election when data were collected in the fall of 2019.

Study three. Associated Press News study. A total of 661 college students (243 males, 411 females, 2 who preferred not to answer, and 5 who did not answer), whose ages ranged from 18 to 62 years (M = 19.96, SD = 2.87), were recruited from a research subjects pool at a large southeastern university. The research subjects pool generally solicits participants from lower-level and one upper-level communication and information science courses; these courses generally include content dedicated to digital literacy. A majority of participants reported their ethnic origin as Caucasian (n = 497, 75 percent) and identified their academic standing as first-year students (n = 279, 42 percent). Lastly, 67 percent of participants reported that they did not vote in the last election when data were collected in the spring of 2020.

Study four. National Public Radio study. To understand the generalizability of our findings to the general population, the research team used Qualtrics to recruit a national sample representative of the United States population. A total of 303 participants (151 males, 152 females), whose ages ranged from 18 to 81 years (M = 44.24, SD = 18.00), were recruited. Additionally, 70 percent of participants reported that they voted in the last election when data were collected in the summer of 2020. See Table 1 for additional Study Four demographic information.

 

Table 1: Study Four demographics.
Reported ethnic origin                 n      Percentage of N
Caucasian                              200    66%
Black/African American                 34     11%
Hispanic/Latino                        25     8%
Asian Islander                         17     6%
Biracial                               17     6%
Multiracial                            3      1%
American Indian/Alaskan Native         2      <1%
Did not disclose their ethnic origin   5      2%

Reported highest degree earned         n      Percentage of N
Less than a high school degree         9      3%
High school graduate or equivalent     54     18%
Some college but no degree             88     29%
Associate degree                       31     10%
Bachelor’s degree                      60     20%
Master’s degree                        46     15%
Doctoral degree                        5      2%
Professional degree                    10     3%

Reported financial earning (US$)       n      Percentage of N
Less than $10,000                      28     9%
$10,000 to $19,999                     27     9%
$20,000 to $29,999                     37     12%
$30,000 to $39,999                     31     10%
$40,000 to $49,999                     29     10%
$50,000 to $59,999                     32     11%
$60,000 to $69,999                     18     6%
$70,000 to $79,999                     15     5%
$80,000 to $89,999                     9      3%
$90,000 to $99,999                     9      3%
$100,000 to $149,999                   33     11%
$150,000 or more                       35     12%

 

 

++++++++++

Procedures

Studies One through Three. During the spring 2019, fall 2019, and spring 2020 semesters, participants were invited to participate in an online survey. The study was designed to take approximately 15 minutes. Participants read the IRB-approved cover page and provided consent to take part in the study. They were then randomly assigned to one of the four conditions (between-subjects design) in each of the separate studies. Participants were instructed to read the news story; the option to progress to the series of questions was not made available until 60 seconds had passed. After reading the news story, participants were asked to respond to demographic questions, questions about their perceptions of the news story, and open-ended questions. Upon completion of the survey, participants were thanked for their time and received research credit for their course.

Study Four. During the summer 2020 semester, participants were recruited via Qualtrics to participate in the final study in the series. This sample is unique compared to the first three study populations in that the participants were not students. Aside from this, the study was designed exactly like the first three studies. Upon completion of the survey, participants were thanked for their time and received payment from the panel provider.

 

++++++++++

Measures

Similar to recent studies investigating fake news (e.g., Michael and Breaux, 2021; van der Linden, et al., 2020), this series of studies relied on questions to assess political and voting behaviors and perceptions. For example, we asked participants whether they voted in the last midterm election (Q11; see Appendix 1, https://doi.org/10.6084/m9.figshare.14778852, for a list of questions). Most participants in the student-based samples did not, likely in part because many had only recently reached the legal voting age in the U.S., but most participants in the general population study did (Study 1, M = 1.67, SD = .47; Study 2, M = 1.72, SD = .45; Study 3, M = 1.70, SD = .46; Study 4, M = 1.30, SD = .46). We also asked participants where they would place themselves politically (Q21_1). In all four studies, subjects tended to lean slightly conservative (Study 1, M = 4.23, SD = 1.64; Study 2, M = 4.13, SD = 1.65; Study 3, M = 4.19, SD = 1.59; Study 4, M = 4.39, SD = 1.89) on a 7-point Likert scale (Extremely Liberal, Left = 1 to Extremely Conservative, Right = 7).

Moreover, we asked a series of questions to assess our participants’ engagement and interactions with news. For example, we asked participants to select how often they read news articles (Q26) (Study 1, M = 2.51, SD = .88; Study 2, M = 2.65, SD = .85; Study 3, M = 2.64, SD = .90; Study 4, M = 3.25, SD = 1.07) on a 5-point Likert scale (1, Never to 5, Always). Participants were also asked how often they shared news articles (Q32) (Study 1, M = 1.90, SD = .85; Study 2, M = 1.98, SD = .81; Study 3, M = 2.06, SD = .79; Study 4, M = 2.74, SD = 1.10) and how often they actively sought out news articles (Q34) (Study 1, M = 2.28, SD = 1.00; Study 2, M = 2.34, SD = .95; Study 3, M = 2.45, SD = .96; Study 4, M = 3.22, SD = 1.11) on the same 5-point Likert scale. Finally, participants were asked whether they perceived most news media as biased against their views (Q28) (Study 1, M = 3.69, SD = 1.15; Study 2, M = 3.56, SD = 1.15; Study 3, M = 3.68, SD = 1.07; Study 4, M = 3.41, SD = 1.50) on a 7-point Likert scale (Strongly Agree = 1 to Strongly Disagree = 7).

Lastly, we asked participants to reflect on their reading of the presented news article. Participants were asked whether the news article provided a fair, balanced, evidence-based view of the article’s topic (Q40) (Study 1, M = 3.62, SD = 1.28; Study 2, M = 3.56, SD = 1.39; Study 3, M = 2.71, SD = 1.08; Study 4, M = 2.47, SD = 1.30) on a 7-point Likert scale (Strongly Agree = 1 to Strongly Disagree = 7). We also asked participants where they would place the political perspective of the news article (Q43_1). Participants rated the New York Times article and the Fox News article as liberal leaning (Study 1, M = 3.52, SD = 1.45; Study 2, M = 2.95, SD = 1.40), and they rated the AP News article and the NPR article as centrist (Study 3, M = 3.90, SD = .98; Study 4, M = 4.13, SD = 1.56) on a 7-point Likert scale (Extremely Liberal, Left = 1 to Extremely Conservative, Right = 7). We then asked participants whether they perceived the news article as fake news (Q44) on a 5-point Likert scale (Definitely Yes = 1 to Definitely No = 5). Participants in all four studies were more likely to judge the articles as not fake news than as fake news (Table 2).

 

Table 2: One-sample t-test per study: Is the story fake news?
Note: Scale of 1 to 5, where 5 is definitely not fake news.
Study             N      M      SD     T       95% CI        P
New York Times    368    3.35   0.94   17.37   3.25, 3.45    <0.001
Fox News          607    3.62   0.88   31.31   3.55, 3.69    <0.001
AP                625    3.55   0.83   31.71   3.49, 3.62    <0.001
NPR               302    3.53   1.06   16.92   3.41, 3.65    <0.001
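
For readers who wish to reproduce this style of test, a minimal R sketch follows, using simulated stand-in data. The comparison value is not stated in the text; the t statistics and confidence intervals reported in Table 2 are, however, consistent with a test value of 2.5 on the 1 to 5 scale, which the sketch assumes.

    # One-sample t-test per study: is the story fake news? (cf. Table 2)
    # Simulated stand-in for the Q44 responses (1 = definitely yes ... 5 = definitely no).
    set.seed(1)
    fake_news <- sample(1:5, size = 368, replace = TRUE)
    # mu = 2.5 is an assumption inferred from the reported statistics; e.g., for the
    # New York Times study: (3.35 - 2.5) / (0.94 / sqrt(368)) is approximately 17.37.
    t.test(fake_news, mu = 2.5, conf.level = 0.95)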

 

 

++++++++++

Data analysis

After collecting data for the four studies, the research team prepared the data for analysis. Participant responses were removed if the survey was unfinished, if the participant was under the age of 17, or if the participant did not indicate an age. Data were analyzed using the R programming language, version 4.1.0 (R Core Team, 2021). We used several packages to aid in data preparation and analysis. The car, psych, and sjstats packages provided ANOVA and additional descriptive statistics functionality (Fox and Weisberg, 2019; Lüdecke, 2021; Revelle, 2021). The dplyr and plyr packages provided functions to clean and prepare the data (Wickham, 2011; Wickham, et al., 2021). The dabestr, forestmodel, ggpubr, and gridExtra packages provided plotting functionality (Auguie, 2017; Ho, et al., 2019; Kassambara, 2020; Kennedy, 2020). The haven and stargazer packages assisted with importing and exporting data (Hlavac, 2018; Wickham and Miller, 2020). All R code is stored in the first author’s GitHub repository (https://github.com/cseanburns/news-study).
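
As a rough sketch of the screening step described above, the following R code applies the removal criteria with the dplyr package; the column names and example data are hypothetical placeholders, and the authors' actual code is available in the GitHub repository.

    # Apply the screening criteria to the raw survey responses.
    # Column names (Finished, age) and values are hypothetical placeholders.
    library(dplyr)
    responses <- data.frame(
      Finished = c(TRUE, TRUE, FALSE, TRUE),
      age      = c(19, NA, 21, 16)
    )
    cleaned <- responses %>%
      filter(Finished) %>%      # remove unfinished surveys
      filter(!is.na(age)) %>%   # remove responses with no reported age
      filter(age >= 17)         # remove participants under the age of 17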

Student data is not available for public use, but data for the NPR analysis is publicly available in the GitHub repository. Appendix 1 (https://doi.org/10.6084/m9.figshare.14778852) describes the questions and variables used in the regression analyses and includes the question numbers referenced in this paper. Appendix 2 (https://doi.org/10.6084/m9.figshare.14778870) provides the regression statistics in table format.

 

++++++++++

Results

Article properties

Our proposed hypotheses asked whether there would be a difference in how participants judged the news article as fake news depending on (H1: Treatment 1) the presence of article source information, (H2: Treatment 2) the presence of stylistic formatting (presentation), and (H3: Treatment 3) the presence of both source information and stylistic formatting (presentation) (Figure 2). One-way ANOVAs (Table 3) were used to test the first three hypotheses by comparing the treatment groups to the control group. We found no significant difference between the control and treatment groups for the New York Times, Fox News, and NPR experiments. That is, in these three studies, participants were as likely to evaluate the articles as fake, not fake, or as somewhere in-between regardless of the treatment used.

 

 
Figure 2: Distributions of results per group and per experimental study. Results were significant (p < 0.05) only for the AP News study, and only for Treatment 2 and Treatment 3.

 

For the AP News article study (Study 3), there was a significant main effect of condition on judgement of fake news (Table 3), even though the effect size was relatively small (η2 = SSB/SST = 0.05). Tukey post hoc comparisons of the AP News conditions indicated that the mean score for the control (M = 3.81, SD = 0.75) was significantly different from the treatment 2 condition (minimized presentation) (M = 3.36, SD = 0.88, p < 0.0001) and from the treatment 3 condition (no source information, minimized presentation) (M = 3.41, SD = 0.79, p < 0.0001), but not statistically different from the treatment 1 condition (no source information) (M = 3.64, SD = 0.82, p = 0.223). Overall, relative to the control group, those who saw the minimized presentation alone, or no source information combined with the minimized presentation, were more likely to judge the story as fake news. This lends minimal support to the general argument that source information and presentation are consequential; interestingly, this finding was not consistent across the studies, which suggests the possibility of other factors driving these judgments.
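
A minimal sketch of this analysis in R follows, using simulated stand-in data; the closing comment reproduces the reported effect size from the Table 3 sums of squares.

    # Simulated stand-in data: four conditions, Q44 judgment on a 1-5 scale.
    set.seed(1)
    ap_data <- data.frame(
      condition = factor(rep(c("control", "t1", "t2", "t3"), each = 150)),
      fake_news = sample(1:5, size = 600, replace = TRUE)
    )
    # One-way ANOVA of fake news judgment by condition (cf. Table 3),
    # followed by Tukey post hoc comparisons between conditions.
    fit <- aov(fake_news ~ condition, data = ap_data)
    summary(fit)
    TukeyHSD(fit)
    # Effect size: eta squared = SS_between / SS_total. From Table 3 (AP study):
    # 20.78 / (20.78 + 409.67) is approximately 0.048, i.e., the 0.05 reported above.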

 

Table 3: One-way analysis of variance of fake news by treatment groups for each news article.
Study                 SS       df     F_obt    Pr(>F)
New York Times
  Between groups      2.26     3      0.85     0.47
  Within groups       321.52   364
Fox News
  Between groups      1.11     3      0.48     0.70
  Within groups       466.45   603
AP
  Between groups      20.78    3      10.50    <0.0001
  Within groups       409.67   621
NPR
  Between groups      0.37     3      0.11     0.96
  Within groups       338.80   298

 

Factors associated with fake news

As reported in the prior section, respondents on average were more likely to judge the articles we presented as not fake news, and we found no effects between our treatment and control groups in three of the four replicated studies. We thus found little evidence that source information and presentation, on average, affect perceptions of fake news. Despite this, some participants in each of the studies did claim the articles were fake news, which suggests individual differences may alter perceptions of what is and is not fake news. Therefore, in line with our fourth hypothesis, a series of regression analyses were employed to better understand which specific factors explain the likelihood of fake news judgments. We limited our regressions to the three studies with no main effects of the treatment conditions, although for those interested, Appendix 3 provides the regression coefficients for the AP News study (https://doi.org/10.6084/m9.figshare.14778876).

A number of individual variables were measured in this study. Specifically, we investigated the influence of voting behavior (Q11), political affiliation (Q21_1), the perceived political perspective of the news article (Q43_1), various aspects of news engagement, including how often news is read (Q26), shared (Q32), or sought out (Q34), and perceptions of media bias (Q28). We also asked subjects to judge whether the news article provided a fair, balanced, evidence-based view of the article’s topic (Q40). We selected these variables based on previous research. For each study, the fake news judgment (Q44, the dependent variable) was regressed on this set of independent or predictor variables.
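
A sketch of one such regression in R appears below, using simulated stand-in data; the authors' exact model specification is in the GitHub repository, and this sketch simply treats each Likert item as a factor so that coefficients are estimated for each level relative to a reference level (e.g., those who strongly agreed on Q40), matching how the results are reported in the sections that follow.

    # Simulated stand-in data frame; question numbers follow Appendix 1.
    set.seed(1)
    n <- 300
    study_data <- data.frame(
      Q44   = sample(1:5, n, replace = TRUE),  # fake news judgment
      Q11   = sample(1:2, n, replace = TRUE),  # voted in last midterm
      Q21_1 = sample(1:7, n, replace = TRUE),  # political affiliation
      Q43_1 = sample(1:7, n, replace = TRUE),  # article's political perspective
      Q26   = sample(1:5, n, replace = TRUE),  # how often news is read
      Q32   = sample(1:5, n, replace = TRUE),  # how often news is shared
      Q34   = sample(1:5, n, replace = TRUE),  # how often news is sought out
      Q28   = sample(1:7, n, replace = TRUE),  # media biased against my views
      Q40   = sample(1:7, n, replace = TRUE)   # fair, balanced, evidence-based
    )
    # Treating each Likert item as a factor yields level-wise coefficients
    # relative to each item's reference level.
    model <- lm(Q44 ~ factor(Q11) + factor(Q21_1) + factor(Q43_1) +
                  factor(Q26) + factor(Q32) + factor(Q34) +
                  factor(Q28) + factor(Q40),
                data = study_data)
    summary(model)
    # Forest plots like Figures 3-5 can be drawn with the forestmodel
    # package listed in the Data analysis section.
    library(forestmodel)
    forest_model(model)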

New York Times study

For the New York Times study (Figure 3), we did not find evidence that voting behavior (Q11), political affiliation (Q21_1), the perceived political perspective of the news article (Q43_1), or how often subjects read (Q26) or shared (Q32) the news had any effect on whether subjects judged the article to be fake news. Subjects who stated they sometimes seek out news (Q34) were less likely to claim the article was fake news relative to those who claimed they never do. Subjects who disagreed or strongly disagreed that the New York Times article offered a fair, balanced, evidence-based view of the topic (Q40) were more likely to judge the article as fake news compared to those who strongly agreed that it did. Further, the data revealed that subjects who agreed, somewhat agreed, neither agreed nor disagreed, somewhat disagreed, or disagreed with the statement that most media was biased against their views (Q28) were more likely to judge the New York Times article as not fake news relative to those who strongly agreed with the statement. Essentially, this means that people who hold very strong views that the news media is biased against them also tended to believe the New York Times article was fake.

 

 
Figure 3: Forest plot of coefficients, confidence intervals, and p-values for the New York Times regression analysis.

 

Fox News study

For the Fox News study (Figure 4), different factors led to judgements about fake news. There was no evidence that voting behavior (Q11) had any effect on whether subjects claimed the Fox News story was fake news. However, unlike with the New York Times article, subjects’ political affiliation (Q21_1) had an effect on whether they claimed the article was fake news. Relative to subjects who reported themselves as extremely liberal, subjects who reported themselves as liberal, center, conservative, fairly conservative, or extremely conservative were more likely to claim that the Fox News article was fake news. This finding seems counterintuitive, but it may reflect the content of the article, which covered Representative Ilhan Omar’s comments on the role the United Nations might play on the border between the United States and Mexico, and the fact that subjects, interestingly, perceived this source as left leaning. Further, there was no evidence that how often subjects read the news (Q26), shared the news (Q32), or sought out news (Q34) had any effect on whether they judged the Fox News article to be fake news. Unlike in the New York Times study, we found no evidence of an effect for the statement about whether most media is biased against their views (Q28). Also unlike the New York Times study, there was evidence that the perceived political perspective of the news article (Q43_1) mattered in judging it as fake news: those who judged the article as fairly liberal, liberal, or fairly conservative were more likely to state the article was not fake news than those who judged the article as extremely liberal. Additionally, when asked if the article provided a fair, balanced, evidence-based view of the topic (Q40), those who neither agreed nor disagreed, somewhat disagreed, disagreed, or strongly disagreed were more likely to state the article was fake news relative to those who strongly agreed. Collectively, the responses involving judgements of fake news based on perspectives of the article (i.e., liberal v. conservative, balanced v. unbalanced) suggest that the way each person interprets an article can influence whether they believe it to be fake. In this case, it seems those who viewed this article as politically skewed were more likely to believe it was fake.

 

 
Figure 4: Forest plot of coefficients, confidence intervals, and p-values for the Fox News regression analysis.

 

NPR study

For the NPR experiment (Figure 5), there was no evidence that voting behavior (Q11) had any effect on how subjects judged the article. There was, however, some evidence that subjects’ political affiliation (Q21_1) had an effect on whether they claimed the article was fake news: subjects who reported as conservative were more likely to claim the article was fake news than subjects who reported as extremely liberal. This is not surprising given that conservative lawmakers have made efforts to defund NPR (Sonmez, 2011) and given the growing backlash against various aspects of the pandemic, which started around this time (Bosman, et al., 2020). Moreover, subjects who disagreed or strongly disagreed with the statement that most media was biased against their views (Q28) were less likely to claim the NPR article was fake news relative to those who strongly agreed with the statement. Similar to results from the New York Times study, it appears that individuals who hold a strong belief that the media are biased against their views are more likely than others to judge the article as fake. Additionally, there was some evidence that subjects who reported often sharing the news (Q32) were more likely to claim the NPR article was fake news compared to those who reported never sharing news, and that subjects who reported sometimes or often seeking out the news (Q34) were less likely to claim the article was fake news than those who reported never doing so; we found no evidence that how often subjects read the news (Q26) had an effect. Further, we found no evidence that where subjects placed the political perspective of the news article (Q43_1) had an effect on whether they judged it as fake news. However, for the statement asking whether the article provided a fair, balanced, evidence-based view of the topic (Q40), we found a fairly linear effect across all levels: relative to subjects who strongly agreed, the likelihood of judging the NPR article as fake tended to increase the more subjects disagreed.

 

 
Figure 5: Forest plot of coefficients, confidence intervals, and p-values for the NPR regression analysis.

 

To summarize, the findings from the three studies with no main effects of the treatment conditions tell an interesting story about what leads people to judge news articles as fake. Although the effects across these studies are not uniform, they suggest that personal political affiliations, preconceived notions about the political leaning of an article, perceptions of whether an article is fair and balanced, and perceptions about media bias predict how people respond when asked to evaluate whether a news article is fake.

 

++++++++++

Discussion

Taken together, this series of directly replicated experimental studies sought to test the perception of fake news articles as a function of the source information presented on the articles and as a function of their formatting and look. The first study was conducted in spring 2019 (New York Times article) and then directly replicated three additional times: in fall 2019 (Fox News article), spring 2020 (AP News article), and summer 2020 (NPR article). Overall, we found in three of the studies that neither the presence of source information on a news article, nor the presence of the formatting and style used by the respective news sources, nor the combination of the two had any influence on whether participants judged the news articles as fake news.

Notably, for Studies One and Two, participants were undergraduate students who were likely enrolled, or had previously been enrolled, in a course that includes information literacy components, and who therefore may know better than chance how to identify whether a story is fake (Bobkowski and Younger, 2020; Jones-Jang, et al., 2019); however, Study Four included participants representative of the general population. Given the variety of educational backgrounds within this group and our inability to identify an effect in this study, we conclude that the results from this group support the findings from the first two studies: neither the source information nor the style or presentation of a news article influences perceptions of fake news.

However, we did find a statistically significant difference in Study Three (AP News article). Participants in this study’s control group, who read the article as-is, were more likely to judge the article as not fake news compared to participants who read the article with the presentation/style elements minimized, or who read the article with both the presentation/style elements minimized and the source information removed. We see two possible explanations for this effect. First, given that no significant causal relationship appeared in our other three studies, the result in Study Three could be a statistical fluke. This possibility seems plausible given the small effect size and is part of the rationale and motivation for the direct replication of these studies.

Second, if Study Three did reveal a true effect, then it could be because of the nature of the information presented. We believe it is possible that the participants, who knew they were involved in a study on a news article, were not expecting to read a story about a tennis match (Pye, 2020) but were instead expecting a story about a political topic or person. This notion seems reasonable given the politicization and weaponization of news as a topic (Vosoughi, et al., 2018; Wardle, 2017), and it is supported by the fact that participants who clearly saw this was an AP News article were less likely to judge the article as fake news. Therefore, we conclude it is possible that the participants in two of this study’s treatment groups second-guessed their evaluation of the content and assumed that we had intentionally presented them with a fabricated story since it was not about a political topic or person. Regardless of what we conclude from the results of Study Three, conflicting findings show why it is important to conduct direct replications of experimental studies.

Additionally, we note that in our studies we asked respondents to read through the news article. To encourage this and to control for skimming or passing over the news story, we prevented respondents from progressing in the survey until at least one minute had passed. Many studies use fake news headlines, and not the content, as the unit of analysis (Allcott and Gentzkow, 2017; Brashier, et al., 2021; Bronstein, et al., 2020; Calvillo, et al., 2021; Jones-Jang, et al., 2019). This is because social media has been a major platform for fake news dissemination (Nelson and Webster, 2017; Torres, et al., 2018), and the attention-getting information presented in a headline (Brashier, et al., 2021), as shared on social media, may function as the main takeaway for users who do not read the linked articles. Judging a news article as fake might be less likely when people read through the content, because reading affords a better opportunity to evaluate the content against prior knowledge of the topic (Pennycook and Rand, 2021). Indeed, possessing information literacy skills has been shown to increase the ability to detect fake news (Bobkowski and Younger, 2020; Jones-Jang, et al., 2019).

In our analyses of the factors that influence the perception of fake news, we found little consistency among the three studies we explored. In all three studies, we found no evidence that voting behavior or news reading frequency had any effect on whether subjects rated the articles as fake news. We found that the political perspective of participants was related to the judgment of the news article as fake news in the Fox News (Study 2) and NPR (Study 4) studies, and here we found some evidence that those who lean conservative were more likely to judge the articles as fake news. We also found that perceptions of media bias had an effect in two studies. In the New York Times study (Study 1), we found that regardless of whether subjects agreed or disagreed with the statement that most news media are biased against their views, they were less likely to judge the article as fake news than those who strongly agreed. For the NPR study (Study 4), however, the effect was more focused: only those who disagreed or strongly disagreed were less likely to judge the article as fake news compared to those who strongly agreed. Given that no effect was found for this question in the Fox News study (Study 2) and that a more focused effect was found in the NPR study (Study 4), we think that sampling beyond students and generalizing to the broader public explains this effect, and that the relation between perceptions of media bias and fake news judgments is likely a real effect among the general population.

Although we found no effect in any of the studies between news reading frequency and fake news judgements, we did find that other news engagement behaviors were related. Specifically, we found some evidence that those who stated they often share news were more likely to judge the NPR article (Study 4) as fake news. However, in the New York Times study (Study 1) and in the NPR study (Study 4), we found some evidence that those who actively, but not excessively, seek out news articles were more likely to judge the articles as not fake news. Given these findings, those who are moderately engaged with the news but do not feel compelled to share it might be better able than habitual sharers to judge whether articles are fake news, and this should be explored in future studies. More specifically, people who are very prone to share news stories might be less discriminating in their evaluation of news articles.

Interestingly, we found evidence in only one study of a relationship between the perceived political perspective of the news article and a judgment of the article as fake news. More specifically, in the Fox News study (Study 2), those who rated the article as fairly liberal, liberal, or fairly conservative were more likely to judge it as not fake news. This finding echoes previous research showing that readers with broad ideological backgrounds are likely to engage with multiple sources (Nelson and Webster, 2017) and may not hold blanket biases against any one source (like the New York Times or Fox News), even if they do hold biases against specific articles or stories. Moreover, the content of this specific article may have influenced this outcome. We selected the Fox News article because of the nature of the statement made by U.S. representative Ilhan Omar and because she had often been disparaged by then President Donald Trump. We hypothesized that, because of this disparagement, those who leaned conservative would likely judge this article as fake news by default. However, since our data showed some consensus among those who identified on either side of the political spectrum, we conclude this finding reflects simple agreement on the reporting of the case.

Finally, we found consistent evidence across all three studies in response to the question about whether the articles provided a fair, balanced, evidence-based view of the article’s topic. Broadly, the data from these three studies show that the more respondents disagreed with this statement, the more likely they were to judge the articles as fake news. We think this reflects the general fractionation of media (Lewandowsky, et al., 2012) and the broad belief among the more conservative portions of the population that news publishers, generally speaking, do not publish fair, balanced, or evidence-based reporting (Michael and Breaux, 2021).

 

++++++++++

Limitations

Although our studies are quasi-experimental (Studies 1-3) or experimental (Study 4), participants took the surveys outside of a lab, and thus we had no control over the conditions under which they engaged with the study. Specifically, participants may have been motivated to research the articles we presented to them as they took the survey. It has been shown that lateral reading (evaluating the credibility of a source by checking other sources), rather than vertical reading (limiting evaluation to the source itself), is an efficient and accurate way to identify whether a source is fake (Wineburg and McGrew, 2019). If participants conducted separate searches as they undertook the study, this kind of lateral engagement with the articles might have influenced judgements about whether the articles were fake news.

Although we think having directly replicated studies makes our findings, or lack of consistent findings, more robust, it is also true that the first three studies were based on student populations; a more robust series of replicated studies about how fake news is perceived among the public would sample from the broader population. Studies based only on students have been shown to be limited given differences between student and general populations (Hanel and Vione, 2016). Therefore, directly replicated studies in which each study samples from the general population would provide more robust, and perhaps more consistent, findings with respect to the factors associated with judging fake news.

 

++++++++++

Future studies

When asked whether they thought the article was fake news, many subjects in our studies, regardless of their treatment group, responded that it might or might not be. For the New York Times study (Study 1), 41 percent of participants could not tell whether the article was fake, and a large proportion of respondents chose this answer in the other studies, too: Fox News (Study 2; 31 percent), AP News (Study 3; 35 percent), and NPR (Study 4; 32 percent). That about a third or more of respondents could not determine whether the articles they were presented with were fake news warrants additional studies on information uncertainty. As mentioned above, reading laterally has been shown to improve fake news judgments, and this should be explored in additional studies that examine methods to aid in reducing or managing that uncertainty.

Many studies on fake news do not involve participants reading stories but instead focus on the information displayed in social media posts, such as headlines and graphics. We think one reason we found no main effect in three of our four studies may be that we prompted participants to read the articles, which may have reduced the frequency of fake news judgments regardless of how the articles were displayed. Future experimental studies could compare how participants judge news articles when presented with headlines alone versus headlines with complete stories, in order to understand how reading contributes to judgments of fake news.

Also, we chose to replicate our studies based on methodology only and not to replicate the stimulus material. Our motive was to test whether the presence of source information and/or formatting had an effect on fake news judgements regardless of the source information or the content of the news stories. As a result, our studies involved four unique stories from four different publishers with a range of political reputations. We think this helped broadly test whether these factors had the hypothesized influence: given that three of the four studies returned negative results, we are now slightly more inclined to think they have limited influence. However, given the different results in the regression analyses, we think the picture is more complicated than it seems, and that source information and source content could matter in these judgements. To help unravel how source information and content influence judgements of fake news, future studies could focus on directly replicating a study based on a single news story, or a single topic covered by a variety of news sources.

 

++++++++++

Conclusion

In this series of directly replicated experimental studies, we tested whether the source information and presentation of a news article influenced the perception of the article as fake news, and, in order to conduct a thorough examination, we directly replicated that test three additional times. Rather than testing fake news stories, we deliberately chose legitimate news articles sourced from publications with a range of perceived political biases, and we chose news topics that we thought pushed those biases. We also chose four unique news articles in order to test whether the presence of source information or the presentation of the article influenced judgments of fake news regardless of the source or the content. In three of the four replications, however, we found no main effects of the presence of source information and/or the presentation of a story on how news articles are judged as fake news.

Although participants in three of our studies did not evaluate the articles as fake news differently based on the treatments, a proportion of subjects in each study did evaluate the articles as fake news. We therefore examined the three studies with no main effects to identify what factors contributed to those judgments, and we found few consistent results among them. Ultimately, the collective evidence across the studies lay in the response to whether the articles provided a fair, balanced, evidence-based view of their topics. We think this warrants more discussion about the purpose of news organizations and news reporting, as well as about how evidence and fairness work in news information.

 

About the authors

C. Sean Burns is Associate Professor in the School of Information Science at the University of Kentucky.
E-mail: sean [dot] burns [at] uky [dot] edu

Renee Kaufmann is Associate Professor in the School of Information Science at the University of Kentucky.
E-mail: renee [dot] kaufmann [at] uky [dot] edu

Anthony Limperos is Associate Professor in the Department of Communication at the University of Kentucky.
E-mail: anthony [dot] limperos [at] uky [dot] edu

 

Notes

1. Lazer, et al., 2018, p. 1,094.

2. Gray, et al., 2020, p. 320.

3. Saunders and Budd, 2020, p. 3.

4. Pennycook and Rand, 2021, p. 6.

 

References

N. Aizenman and S. McMinn, 2020. “How to make sense of all the COVID-19 projections? A new model combines them,” NPR.org (13 May), at https://www.npr.org/sections/health-shots/2020/05/13/855038708/combining-different-models-new-coronavirus-projection-shows-110-000-deaths-by-ju, accessed 25 November 2021.

K. Ali and K. Zain-ul-abdin, 2021. “Post-truth propaganda: Heuristic processing of political fake news on Facebook during the 2016 U.S. presidential election,” Journal of Applied Communication Research, volume 49, number 1, pp. 109–128.
doi: https://doi.org/10.1080/00909882.2020.1847311, accessed 25 November 2021.

H. Allcott and M. Gentzkow, 2017. “Social media and fake news in the 2016 election,” Journal of Economic Perspectives, volume 31, number 2, pp. 211–236.
doi: https://doi.org/10.1257/jep.31.2.211, accessed 25 November 2021.

Association of College and Research Libraries (ACRL), 2016. “Framework for information literacy for higher education” (11 January), at https://www.ala.org/acrl/standards/ilframework, accessed 25 November 2021.

B. Auguie, 2017. “gridExtra: Miscellaneous functions for ‘Grid’ graphics,” at https://CRAN.R-project.org/package=gridExtra, accessed 25 November 2021.

Z. Bastick, 2021. “Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation,” Computers in Human Behavior, volume 116, 106633.
doi: https://doi.org/10.1016/j.chb.2020.106633, accessed 25 November 2021.

P.S. Bobkowski and K. Younger, 2020. “News credibility: Adapting and testing a source evaluation assessment in journalism,” College & Research Libraries, volume 81, number 5, pp. 822–843.
doi: https://doi.org/10.5860/crl.81.5.822, accessed 25 November 2021.

J. Bosman, S. Mervosh, and M. Santora, 2020. “As the coronavirus surges, a new culprit emerges: Pandemic fatigue,” New York Times (17 October), at https://www.nytimes.com/2020/10/17/us/coronavirus-pandemic-fatigue.html, accessed 25 November 2021.

N.M. Brashier, G. Pennycook, A.J. Berinsky, and D.G. Rand, 2021. “Timing matters when correcting fake news,” Proceedings of the National Academy of Sciences, volume 118, number 5 (2 February), e2020043118.
doi: https://doi.org/10.1073/pnas.2020043118, accessed 25 November 2021.

M.V. Bronstein, G. Pennycook, L. Buonomano, and T.D. Cannon, 2021. “Belief in fake news, responsiveness to cognitive conflict, and analytic reasoning engagement,” Thinking & Reasoning, volume 27, number 4, pp. 510–535.
doi: https://doi.org/10.1080/13546783.2020.1847190, accessed 25 November 2021.

J.M. Budd and A. Lloyd, 2014. “Theoretical foundations for information literacy: A plan for action,” Proceedings of the American Society for Information Science and Technology, volume 51, number 1, pp. 1–5.
doi: https://doi.org/10.1002/meet.2014.14505101001, accessed 25 November 2021.

D.P. Calvillo, R.J.B. Garcia, K. Bertrand, and T.A. Mayers, 2021. “Personality factors and self-reported political news consumption predict susceptibility to political fake news,” Personality and Individual Differences, volume 174, 110666.
doi: https://doi.org/10.1016/j.paid.2021.110666, accessed 25 November 2021.

D.O. Case (editor), 2012. Looking for information: A survey of research on information seeking, needs and behavior. Third edition. Bingley: Emerald.

N.A. Cooke, 2017. “Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age,” Library Quarterly, volume 87, number 3, pp. 211–221.
doi: https://doi.org/10.1086/692298, accessed 25 November 2021.

M.A. DeDonno, 2016. “Mental activity and the act of learning in the digital age,” In: V.X. Wang (editor). Handbook of research on advancing health education through technology. Hershey, Pa.: IGI Global, pp. 347–373.
doi: https://doi.org/10.4018/978-1-4666-9494-1.ch015, accessed 25 November 2021.

J. Fox and S. Weisberg, 2019. An R companion to applied regression. Third edition. Thousand Oaks, Calif.: Sage; see also https://socialsciences.mcmaster.ca/jfox/Books/Companion/, accessed 25 November 2021.

J. Gray, L. Bounegru, and T. Venturini, 2020. “‘Fake news’ as infrastructural uncanny,” New Media & Society, volume 22, number 2, pp. 317–341.
doi: https://doi.org/10.1177/1461444819856912, accessed 25 November 2021.

T. Groseclose and J. Milyo, 2005. “A measure of media bias,” Quarterly Journal of Economics, volume 120, number 4, pp. 1,191–1,237.
doi: https://doi.org/10.1162/003355305775097542, accessed 25 November 2021.

M. Haberman, M.S. Schmidt, and E. Sullivan, 2018. “Mueller team has ‘gone absolutely nuts,’ Trump says, resuming attacks on Russia inquiry,” New York Times (15 November), at https://www.nytimes.com/2018/11/15/us/politics/trump-mueller-russia-inquiry.html, accessed 25 November 2021.

P.H.P. Hanel and K.C. Vione, 2016. “Do student samples provide an accurate estimate of the general public?” PLoS ONE, volume 11, number 12 (21 December), e0168354.
doi: https://doi.org/10.1371/journal.pone.0168354, accessed 25 November 2021.

M. Hlavac, 2018. “stargazer: Well-formatted regression and summary statistics tables,” at https://CRAN.R-project.org/package=stargazer, accessed 25 November 2021.

J. Ho, T. Tumkaya, S. Aryal, H. Choi, and A. Claridge-Chang, 2019. “Moving beyond P values: Everyday data analysis with estimation plots,” Nature Methods, volume 16, pp. 565–566.
doi: https://doi.org/10.1038/s41592-019-0470-3, accessed 25 November 2021.

C. Jack, 2017. “Lexicon of lies: Terms for problematic information,” Data & Society (9 August), at https://datasociety.net/library/lexicon-of-lies/, accessed 25 November 2021.

S.M. Jang, T. Geng, J.-Y. Queenie Li, R. Xia, C.-T. Huang, H. Kim, and J. Tang, 2018. “A computational approach for examining the roots and spreading patterns of fake news: Evolution tree analysis,” Computers in Human Behavior, volume 84, pp. 103–113.
doi: https://doi.org/10.1016/j.chb.2018.02.032, accessed 25 November 2021.

F.J. Jennings, 2019. “Where to turn? The influence of information source on belief and behavior,” Journal of Risk Research, volume 22, number 7, pp. 909–918.
doi: https://doi.org/10.1080/13669877.2017.1422788, accessed 25 November 2021.

Joe, 2015. “The plugins you need to create a news site with WordPress,” DesignWall (29 June), at https://perma.cc/UST4-KKF2, accessed 25 November 2021.

S.M. Jones-Jang, T. Mortensen, and J. Liu, 2019. “Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t,” American Behavioral Scientist (28 August).
doi: https://doi.org/10.1177/0002764219869406, accessed 25 November 2021.

M. Jurkowitz, A. Mitchell, E. Shearer, and M. Walker, 2020. “U.S. media polarization and the 2020 election: A nation divided,” Pew Research Center (24 January), at https://www.journalism.org/2020/01/24/u-s-media-polarization-and-the-2020-election-a-nation-divided/, accessed 25 November 2021.

A. Kassambara, 2020. “ggpubr: ‘ggplot2’ based publication ready plots,” at https://CRAN.R-project.org/package=ggpubr, accessed 25 November 2021.

N. Kennedy, 2020. “forestmodel: Forest plots from regression models,” at https://CRAN.R-project.org/package=forestmodel, accessed 25 November 2021.

P.A. Kerr, 2013. “Theory of action and information literacy: Critical assessment towards effective practice,” In: S. Kurbanoğlu, E. Grassian, D. Mizrachi, R. Catts, and S. Špiranec (editors). Worldwide commonalities and challenges in information literacy research and practice. Cham, Switzerland: Springer, pp. 429–435.
doi: https://doi.org/10.1007/978-3-319-03919-0_57, accessed 25 November 2021.

S.S. LaPierre and V. Kitzie, 2019. “Lots of questions about ‘fake news’: How public libraries have addressed media literacy, 2016–2018,” Public Library Quarterly, volume 38, number 4, pp. 428–452.
doi: https://doi.org/10.1080/01616846.2019.1600391, accessed 25 November 2021.

D.M.J. Lazer, M.A. Baum, Y. Benkler, A.J. Berinsky, K.M. Greenhill, F. Menczer, M.J. Metzger, B. Nyhan, G. Pennycook, D. Rothschild, M. Schudson, S.A. Sloman, C.R. Sunstein, E.A. Thorson, D.J. Watts, and J.L. Zittrain, 2018. “The science of fake news,” Science, volume 359, number 6380 (9 March), pp. 1,094–1,096.
doi: https://doi.org/10.1126/science.aao2998, accessed 25 November 2021.

S. Lewandowsky, U.K.H. Ecker, C.M. Seifert, N. Schwarz, and J. Cook, 2012. “Misinformation and its correction: Continued influence and successful debiasing,” Psychological Science in the Public Interest, volume 13, number 3, pp. 106–131.
doi: https://doi.org/10.1177/1529100612451018, accessed 25 November 2021.

S. Lim, 2020. “Academic library guides for tackling fake news: A content analysis,” Journal of Academic Librarianship, volume 46, number 5, 102195.
doi: https://doi.org/10.1016/j.acalib.2020.102195, accessed 25 November 2021.

D. Lüdecke, 2021. “sjstats: Statistical functions for regression models (version 0.18.1),” at https://doi.org/10.5281/zenodo.1284472, accessed 25 November 2021.

R.B. Michael and B.O. Breaux, 2021. “The relationship between political affiliation and beliefs about sources of ‘fake news’,” Cognitive Research: Principles and Implications, volume 6, article number 6.
doi: https://doi.org/10.1186/s41235-021-00278-1, accessed 25 November 2021.

J.L. Nelson and J.G. Webster, 2017. “The myth of partisan selective exposure: A portrait of the online political news audience,” Social Media + Society (7 September).
doi: https://doi.org/10.1177/2056305117729314, accessed 25 November 2021.

News Leader Association, 2020. “ASNE statement of principles,” at https://members.newsleaders.org/asne-principles, accessed 25 November 2021.

D. Nikolov, M. Lalmas, A. Flammini, and F. Menczer, 2019. “Quantifying biases in online information exposure,” Journal of the Association for Information Science and Technology, volume 70, number 3, pp. 218–229.
doi: https://doi.org/10.1002/asi.24121, accessed 25 November 2021.

B.A. Nosek, J.R. Spies, and M. Motyl, 2012. “Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability,” Perspectives on Psychological Science, volume 7, number 6, pp. 615–631.
doi: https://doi.org/10.1177/1745691612459058, accessed 25 November 2021.

G. Pennycook and D.G. Rand, 2021. “The psychology of fake news,” Trends in Cognitive Sciences, volume 25, number 5, pp. 388–402.
doi: https://doi.org/10.1016/j.tics.2021.02.007, accessed 25 November 2021.

S. Preston, A. Anderson, D.J. Robertson, M.P. Shephard, and N. Huhe, 2021. “Detecting fake news on Facebook: The role of emotional intelligence,” PLoS ONE, volume 16, number 3 (11 March), e0246757.
doi: https://doi.org/10.1371/journal.pone.0246757, accessed 25 November 2021.

Purdue Writing Lab, n.d. “The inverted pyramid structure,” at https://owl.purdue.edu/owl/subject_specific_writing/journalism_and_journalistic_writing/the_inverted_pyramid.html, accessed 12 March 2021.

J. Pye, 2020. “Barty goes on at Australian Open, sets up semifinal v Kenin,” AP News (28 January), at https://apnews.com/article/db1e51adbc4451add833dfdf7d48f793, accessed 25 November 2021.

T. Quandt, L. Frischlich, S. Boberg, and T. Schatto-Eckrodt, 2019. “Fake news,” In: T.P. Vos and F. Hanusch (editors). International Encyclopedia of Journalism Studies. Malden, Mass.: Wiley-Blackwell.
doi: https://doi.org/10.1002/9781118841570.iejs0128, accessed 25 November 2021.

R Core Team, 2021. “The R Project for Statistical Computing,” at https://www.R-project.org/, accessed 25 November 2021.

W. Revelle, 2021. “psych: Procedures for psychological, psychometric, and personality research,” at https://CRAN.R-project.org/package=psych, accessed 25 November 2021.

V.L. Rubin, Y. Chen, and N.K. Conroy, 2015. “Deception detection for news: Three types of fakes,” Proceedings of the Association for Information Science and Technology, volume 52, number 1, pp. 1–4.
doi: https://doi.org/10.1002/pra2.2015.145052010083, accessed 25 November 2021.

L. Saunders and J. Budd, 2020. “Examining authority and reclaiming expertise,” Journal of Academic Librarianship, volume 46, number 1, 102077.
doi: https://doi.org/10.1016/j.acalib.2019.102077, accessed 25 November 2021.

D.A. Scheufele and N.M. Krause, 2019. “Science audiences, misinformation, and fake news,” Proceedings of the National Academy of Sciences, volume 116, number 16 (14 January), pp. 7,662–7,669.
doi: https://doi.org/10.1073/pnas.1805871115, accessed 25 November 2021.

A. Shaw, 2019. “Omar calls for UN to handle migration crisis at the southern border,” Fox News (29 August), at https://www.foxnews.com/politics/omar-calls-for-un-to-handle-migration-crisis-at-the-southern-border, accessed 25 November 2021.

V.K. Singh, I. Ghosh, and D. Sonagara, 2021. “Detecting fake news stories via multimodal analysis,” Journal of the Association for Information Science and Technology, volume 72, number 1, pp. 3–17.
doi: https://doi.org/10.1002/asi.24359, accessed 25 November 2021.

F. Sonmez, 2011. “House votes to defund NPR,” Washington Post (17 March), at https://www.washingtonpost.com/blogs/2chambers/post/house-votes-to-move-forward-on-defunding-npr/2011/03/17/AB50Uqk_blog.html, accessed 25 November 2021.

E.C. Tandoc, Jr., R.J. Thomas, and L. Bishop, 2021. “What is (fake) news? Analyzing news values (and more) in fake stories,” Media and Communication, volume 9, number 1, pp. 110–119.
doi: https://doi.org/10.17645/mac.v9i1.3331, accessed 25 November 2021.

R. Torres, N. Gerhart, and A. Negahban, 2018. “Epistemology in the era of fake news: An exploration of information verification behaviors among social networking site users,” SIGMIS Database, volume 49, number 3, pp. 78–97.
doi: https://doi.org/10.1145/3242734.3242740, accessed 25 November 2021.

S. van der Linden, C. Panagopoulos, and J. Roozenbeek, 2020. “You are fake news: Political bias in perceptions of fake news,” Media, Culture & Society, volume 42, number 3, pp. 460–470.
doi: https://doi.org/10.1177/0163443720906992, accessed 25 November 2021.

S. Vosoughi, D. Roy, and S. Aral, 2018. “The spread of true and false news online,” Science, volume 359, number 6380 (9 March), pp. 1,146–1,151.
doi: https://doi.org/10.1126/science.aap9559, accessed 25 November 2021.

G. Walton and M. Hepworth, 2011. “A longitudinal study of changes in learners’ cognitive states during and following an information literacy teaching intervention,” Journal of Documentation, volume 67, number 3, pp. 449–479.
doi: https://doi.org/10.1108/00220411111124541, accessed 25 November 2021.

C. Wardle, 2017. “Fake news. It’s complicated” (16 February), at https://firstdraftnews.org:443/latest/fake-news-complicated/, accessed 25 November 2021.

H. Wickham, 2011. “The split-apply-combine strategy for data analysis,” Journal of Statistical Software, volume 40, number 1, pp. 1–29.
doi: https://doi.org/10.18637/jss.v040.i01, accessed 25 November 2021.

H. Wickham and E. Miller, 2020. “haven: Import and export ‘SPSS’, ‘stata’ and ‘SAS’ files,” at https://CRAN.R-project.org/package=haven, accessed 25 November 2021.

H. Wickham, R. François, L. Henry, and K. Müller, 2021. “dplyr: A grammar of data manipulation,” at https://CRAN.R-project.org/package=dplyr, accessed 25 November 2021.

S. Wineburg and S. McGrew, 2019. “Lateral reading: Reading less and learning more when evaluating digital information,” Teachers College Record, volume 121, number 11, pp. 1–40.
doi: https://doi.org/10.1177/016146811912101102, accessed 25 November 2021.

 


Editorial history

Received 12 September 2021; revised 11 November 2021; accepted 30 November 2021.


This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Mixed findings in directly replicated experimental studies on fake news
by C. Sean Burns, Renee Kaufmann, and Anthony Limperos.
First Monday, Volume 26, Number 12 - 6 December 2021
https://firstmonday.org/ojs/index.php/fm/article/download/11774/10540
doi: https://dx.doi.org/10.5210/fm.v26i12.11774