First Monday

Perceptions of accuracy in online news during the COVID-19 pandemic
by Mashael Almoqbel, Jordan Vanzyl, Matthew Keaton, Manal Desai, Seejal Padhi, Seong Jae Min, and Donghee Yvette Wohn



Abstract
In this research, we assessed how young adults determine the accuracy of news articles and sources through a seven-day diary study. We performed a qualitative analysis of the participants’ responses and found that participants used nine main strategies to evaluate the accuracy of COVID-19 news. The majority of respondents relied on their inherent trust in, and the reputation of, a given news outlet instead of actively determining whether the information was accurate. Young adults also drew on their perception of the quality of the article, personal logical reasoning, cross-referencing of the information, and the availability of data, among other strategies. We discuss the implications of the results and propose practical suggestions.

Contents

COVID-19 news accuracy perceptions
Literature review
Methods
Results
Discussion
Limitations
Conclusion

 


 

COVID-19 news accuracy perceptions

Social media has become a major source of news, whether intentionally sought or incidentally viewed (Boczkowski, et al., 2017). Recently, the world has faced a health pandemic along with an “infodemic”, an abundance of misinformation related to COVID-19 (Ahinkorah, et al., 2020), with one-third of Americans endorsing COVID-19 related misinformation (Motta, et al., 2020). This poses significant threats to public health and jeopardizes the valued efforts of the public health sector (Tasnim, et al., 2020).

Prior research found that people who relied on social media for news were less knowledgeable about facts surrounding the COVID-19 pandemic than those who relied on news Web sites, apps, printed media, radio, and network and cable television for their information (Mitchell, et al., 2020). In the case of COVID-19, both fake news and online automated fact-checkers proliferated at high rates, signaling a need to understand the basis on which users judged news to be accurate (Simon, et al., 2020). In this research, we studied a young demographic because we expected them to be more technologically literate and, therefore, to consume more news online (Shearer, 2018).

In understanding how people assess the accuracy of news, some research has found that analytical thinking helped news readers distinguish between fake and real news, and that believing fake news was associated with a lack of reasoning rather than with motivated reasoning or bias (Pennycook and Rand, 2019). Luo and colleagues (2022) found that certain cues on social media affected news accuracy perceptions, such as Facebook Likes, where people were more inclined to believe news with more Likes.

This study is mainly motivated by the increased prevalence of fake news about COVID-19, and by the finding that repeated exposure to fabricated news increases the likelihood that viewers will perceive it as accurate (Pennycook, et al., 2018). We aimed to understand: how do young news consumers assess COVID-19 news accuracy? Based on the methods of assessing accuracy identified by this first question, we also wanted to understand whether they assess news accuracy actively or passively. We conducted a seven-day diary study with undergraduate college students attending a STEM university. Participants recorded information about COVID-19 news that they viewed during the day and reported their accuracy perceptions.

 

++++++++++

Literature review

Online news consumption

The prevalence of online media outlets increases exposure to news. The Pew Research Center found that U.S. adults consume more news from social media than from newspapers (Shearer, 2018), and 55 percent of U.S. adults get news from social media, a share that is increasing every year (Shearer and Grieco, 2019). Online media outlets not only provide easy access to news, they also affect how news is consumed. Some research has examined “filter bubbles” or “echo chambers”, in which users are thought to increase ideological segregation by being exposed repeatedly to similar content (Flaxman, et al., 2016). Political polarization on social media is increasing because platforms rarely suggest news from differing viewpoints (Levy, 2021). However, it has been found that the composition of users’ networks has a direct effect on content exposure (Min and Wohn, 2020). Social media was also found to affect information load, such as by increasing the amount of news users were exposed to, suggesting unwanted content, and increasing concerns about news reliability (Pentina and Tarafdar, 2014). Another layer of news information on social media is the availability of user comments, engagement with which is shaped in part by political and cultural orientations (Almoqbel, et al., 2019).

Social media users are exposed to varied content in their feeds; therefore, they encounter news even when they are not particularly seeking it. For example, “incidental news”, viewing news without explicitly seeking it, increases user exposure to news on social media (Boczkowski, et al., 2017). The more users click on and read news online, the stronger their habit of reading news online becomes (Wohn and Ahmadi, 2019). Interestingly, incidental news is negatively associated with intentionally seeking news on social media or mainstream media, thereby increasing news consumption among a segment of users not particularly interested in news (Park and Kaye, 2020). This kind of unintentional consumption adds weight to news consumed online and amplifies its effect, and it therefore requires more research to understand the influence of online news outlets on news consumption.

COVID news and misinformation

Since early 2020, the amount of misinformation, along with the number of online fact-checkers, has grown at an alarming pace (Brennen, et al., 2020). On social media, COVID-19 misinformation is mostly concerned with public figures and the actions of authorities in the fight against the pandemic (Simon, et al., 2020). A comprehensive study by researchers at Cornell, which examined 38 million English-language articles about COVID-19 from around the world, found that the largest portion of misinformation involved mentions of a former U.S. president (Evanega, et al., 2020). In fact, the term “fake news” only gained popularity in research after the 2016 election, and it has since become a widely discussed topic among scholars (Righetti, 2021).

Misinformation in the global health crisis has had a negative impact. It has contributed to individuals engaging in wrong and unhealthy behaviors during the pandemic, contributing in turn to an increase in the severity of the pandemic (Tasnim, et al., 2020). A study conducted in five countries found that higher susceptibility to misinformation was associated with lower compliance with public health guidelines. Alarmingly, a one-unit increase in susceptibility to misinformation was associated with a 23 percent decrease in the likelihood of getting vaccinated against COVID-19 and of recommending vaccination to friends and family (Roozenbeek and van der Linden, 2020). Such misinformation affects not only physical health but also mental health (Tasnim, et al., 2020). Underestimating the severity of the pandemic increased infections and the overall death toll (Bagherpour and Nouri, 2020). Hence, research during the pandemic has highlighted the consequences of misinformation during a global crisis.

Several solutions have been suggested to fight this infodemic. In a study in Puebla, Mexico, researchers found that information provided by Google did not meet the quality standards required for health information and could not be considered reliable (Cuan-Baltazar, et al., 2020). There is a need to spread a fact-checked narrative on social media and through other means via health care professionals and scientists (Ahinkorah, et al., 2020). The latter recommendation was being implemented by large social media platforms, such as Facebook and Twitter. For example, Facebook allowed the World Health Organization to run free ads on the platform to spread fact-based information (Enberg, 2020). Some strict measures were recommended, such as demonetizing content creators who use their platforms to spread misinformation (Pennycook, et al., 2020). An individual’s ability to discern misinformation from accurate information, that is, information literacy, is an important skill. Individuals are faced with an enormous amount of information, some of it of questionable credibility. It is critical to determine how well individuals are able to evaluate news.

Online news sources and accuracy perceptions

A large percentage of the U.S. population prefers to receive news from online sources, including news Web sites, news apps, and social media (Geiger, 2019). Prior research has shown that journalists’ reliance on the Internet in their day-to-day lives positively predicts their perceptions of online news credibility (Cassidy, 2007). During the COVID-19 outbreak, the Internet was essential for 53 percent of Americans (Vogels, et al., 2020). Therefore, individuals relying on online news may be more likely to view online news from media outlets as accurate.

The top sources of news on social media are Facebook, YouTube, and Twitter (Shearer and Mitchell, 2021). On social media sites, an individual’s social network has a significant effect on what they are exposed to. Users on social media follow people who are perceived as similar to them and are therefore exposed to a particular version of reality (Wohn and Bowe, 2014). With time, this understanding of information crystallizes into what the individual takes to be reality (Wohn and Bowe, 2014). Prior exposure has been found to increase the perceived accuracy of fake news (Pennycook, et al., 2018). Therefore, if individuals on social media are exposed to the same misinformation repeatedly, false information can come to be accepted.

Fact-checking platforms exist to minimize the level of deception in news. However, perceptions of such platforms, including FactCheck.org and Snopes, have been found to be largely negative due to a perceived lack of trustworthiness and an inability to remain neutral (Brandtzaeg and Følstad, 2017). This is concerning: if individuals are unable to discern accurate news from misinformation, and do not trust the platforms that aim to minimize misinformation, then there can be a severe disconnect between what is true and what people believe to be true. During a global pandemic, the consequences can affect individuals’ health and their ability to keep themselves and those around them from becoming ill.

In a study of the factors influencing students’ perceptions of the credibility of scholarly information on the Internet, it was found that people are more likely to perceive higher credibility if references are included (Liu, 2004). The same study also found that if information is consistent with one’s own beliefs, one is more likely to rate that information as credible (Liu, 2004). As previously stated, social media rarely suggests news from opposing viewpoints (Levy, 2021). Therefore, those seeking news on social media are exposed to information that is consistent with their beliefs and that they are therefore likely to find credible. While young news consumers are considered more technologically literate (Vogels and Anderson, 2019), this does not necessarily mean that they are information literate (Breivik, 2005). In fact, previous research found that students expend limited effort in assessing the quality and credibility of sources (Taylor, 2012). Therefore, we aim to answer the following question to determine the methods that young news consumers use to assess the accuracy of COVID-19 related news:

RQ1: How do young news consumers assess COVID-19 related news accuracy?

Elaboration likelihood model of persuasion

The Elaboration Likelihood Model (Petty and Cacioppo, 1986) defines two distinct routes to persuasion. The central route occurs as a result of an individual’s careful and thoughtful consideration of specific information. As this relates to news, an individual will actively engage with the content of an article, systematically processing information. Utilizing critical evaluation behaviors is correlated with more accurate judgments of content accuracy (Leeder, 2019). The peripheral route results from a simpler cue that induces change without careful attention to specific information. This passive form of evaluation can result from a lack of interest in the topic or some other distraction. People are generally motivated to hold correct attitudes about themselves, other people, objects, and issues. Yet even with that motivation, the willingness and ability to engage in careful and thoughtful consideration varies with individual and situational factors (Petty and Cacioppo, 1986).

Motivational factors affect the ability and willingness to use careful and thoughtful consideration in examining information. Whether information is presented in a persuasive manner has some effect. Another factor is a willingness to engage in effortful thinking (Petty and Cacioppo, 1986). Other factors include how understandable the information is, how distracted an individual might be, and how much a person already knows about a given topic prior to evaluation.

Even if an individual is motivated to scrutinize information actively, if they lack the ability to do so, they will use the peripheral route (Petty and Cacioppo, 1986). When individuals need to evaluate information that directly relates to measures for keeping themselves and those around them safe from COVID-19, lacking the ability to actively evaluate that information could put their lives at risk. College students, the demographic we examine in this study, have been found to consume news passively in their personal lives unless they were specifically interested in a given topic (Head, et al., 2019).

In a time defined by the World Health Organization (2020) as an infodemic, with individuals exposed to misinformation about the virus (Jurkowitz and Mitchell, 2020), it is in the public’s best interest to actively evaluate news via the central route, ensuring that individuals give careful and thoughtful consideration to news in order to successfully distinguish between true and false information. Therefore, we aim to understand the following question:

RQ2: Do young news consumers assess news accuracy actively or passively?

 

++++++++++

Methods

Participants

Participants in this research were undergraduate students at an American university who took part in a self-reported diary study as part of a class assignment. A total of 30 students engaged in the study, answering a number of questions daily over seven days and recording their news viewing behaviors and accuracy perceptions. All data were self-reported.

Participants in this study were aged between 19 and 33, with an average age of 22 years (SD = 2.7). In terms of gender, 66.67 percent were female and 33.33 percent were male. The racial demographics were as follows: Asian (50 percent), Caucasian (33.33 percent), African-American (6.67 percent), Caucasian and Hispanic (White Hispanic) (6.67 percent), and American Indian (3.33 percent). When asked about their political affiliations, participants identified as Democratic (60 percent), other (20 percent), independent (13.33 percent), Republican (3.33 percent), and Libertarian (3.33 percent).

 

Table 1: Profiles of participants.
Participant number   Gender   Race              Age
P1                   Female   Asian             19
P2                   Male     White             25
P3                   Male     White             22
P4                   Female   White/Hispanic    21
P5                   Male     White             21
P6                   Female   Asian             21
P7                   Female   Black             25
P8                   Female   Asian             23
P9                   Female   White             19
P10                  Male     Asian             23
P11                  Female   American Indian   24
P12                  Female   White             22
P13                  Male     Asian             21
P14                  Male     Asian             24
P15                  Female   Asian             20
P16                  Female   Asian             22
P17                  Female   Asian             20
P18                  Male     White             20
P19                  Female   Asian             24
P20                  Female   White             33
P21                  Female   White/Hispanic    21
P22                  Female   Asian             24
P23                  Female   White             26
P24                  Female   White             20
P25                  Female   Asian             21
P26                  Female   Asian             22
P27                  Male     Asian             20
P28                  Male     White             23
P29                  Male     Asian             20
P30                  Female   Black             22
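
The demographic percentages reported above follow arithmetically from the counts in Table 1 (the table uses “Black” and “White” where the text uses “African-American” and “Caucasian”). As a minimal sketch, the following Python snippet tallies a transcription of the table and reproduces the reported figures; the data structure and variable names are ours, not part of the study materials.

```python
from collections import Counter

# Profiles transcribed from Table 1: (gender, race, age) per participant.
profiles = [
    ("Female", "Asian", 19), ("Male", "White", 25), ("Male", "White", 22),
    ("Female", "White/Hispanic", 21), ("Male", "White", 21), ("Female", "Asian", 21),
    ("Female", "Black", 25), ("Female", "Asian", 23), ("Female", "White", 19),
    ("Male", "Asian", 23), ("Female", "American Indian", 24), ("Female", "White", 22),
    ("Male", "Asian", 21), ("Male", "Asian", 24), ("Female", "Asian", 20),
    ("Female", "Asian", 22), ("Female", "Asian", 20), ("Male", "White", 20),
    ("Female", "Asian", 24), ("Female", "White", 33), ("Female", "White/Hispanic", 21),
    ("Female", "Asian", 24), ("Female", "White", 26), ("Female", "White", 20),
    ("Female", "Asian", 21), ("Female", "Asian", 22), ("Male", "Asian", 20),
    ("Male", "White", 23), ("Male", "Asian", 20), ("Female", "Black", 22),
]

n = len(profiles)  # 30 participants
genders = Counter(g for g, _, _ in profiles)
races = Counter(r for _, r, _ in profiles)
ages = [a for _, _, a in profiles]

# Reproduces 66.67 percent female, 50 percent Asian, mean age of about 22, etc.
for label, count in {**genders, **races}.items():
    print(f"{label}: {100 * count / n:.2f} percent")
print(f"Mean age: {sum(ages) / n:.1f}, range {min(ages)}-{max(ages)}")
```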

 

Materials and procedure

Participants were asked to record their COVID-19 news consumption behaviors over seven days. They were instructed to download a template created in Microsoft Excel and to complete questions concerning demographic information. Participants were encouraged to record their answers as soon as possible to ensure accuracy. They recorded the date of entry, the most memorable COVID-19 related news they came across during the day, the time they read or viewed it, a link if one was available, and the headline. Next, participants were asked whether they read or viewed the full COVID-19 related news article, answering yes or no. They were then asked whether they thought the news was accurate, answering “I am not sure”, “not accurate at all”, “somewhat accurate”, or “very accurate”, and elaborated on how they came to their conclusion.

Participants recorded how they found COVID-related news and whether they found it by themselves. If the news was shared, they recorded who shared it and how. After completing the seven-day diary, participants were asked to write an essay, averaging two pages in length, reflecting on their experience in the study. Participants wrote about whether they thought the study changed any of their news consumption habits, and they evaluated their news consumption behaviors on their own.

We utilized a diary approach that did not require prompting entries. Diary studies offer a more naturalistic way to observe participant behaviors and experiences. They are a minimally intrusive method for gathering data on real-life experiences, since information is self-reported. Compared to traditional surveys, where participants answer pre-determined questions, participants in diary studies can record their experiences almost immediately (Ohly, et al., 2010). Because we were specifically interested in the methods this population used to interpret news accuracy, the diary approach suited our aims.

Participants reported three relevant frequency measures: social media use (M = 6.03, SD = 0.89), news access (M = 4.7, SD = 1.37), and COVID-19 news exposure (M = 4.5, SD = 1.36). These measured how often participants used social media, how often they accessed news, and how often they were exposed to some form of news regarding COVID-19. Participants were presented with a scale ranging from one to eight with the following labels: 1 = Never, 2 = 2–3 times a month, 3 = once a week, 4 = 2–3 days a week, 5 = 4–6 days a week, 6 = once a day, 7 = several times a day, 8 = 10 or more times a day.
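
To make the measurement concrete, the sketch below shows how responses on this eight-point scale can be represented and summarized. It is a minimal illustration: the response vector is hypothetical, not the study’s data, and only the scale labels come from the instrument described above.

```python
import statistics

# The eight-point frequency scale used for all three measures.
SCALE_LABELS = {
    1: "Never", 2: "2-3 times a month", 3: "once a week", 4: "2-3 days a week",
    5: "4-6 days a week", 6: "once a day", 7: "several times a day",
    8: "10 or more times a day",
}

# Hypothetical responses for one measure (not the study's actual data).
responses = [6, 7, 5, 6, 8, 6, 7, 5, 6, 6]

m = statistics.mean(responses)
sd = statistics.stdev(responses)  # sample standard deviation
print(f"M = {m:.2f}, SD = {sd:.2f}")
print("Modal label:", SCALE_LABELS[statistics.mode(responses)])
```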

Coding process

Before attempting to code the data, four researchers went through the diary data and generated codes. They discussed and wrote specific definitions for the codes. Then they individually coded around 16 percent of the data. The researchers met again to discuss how and why they had associated certain text with specific codes. After several meetings, the authors individually coded another 16 percent of previously uncoded data. The rest of the data was coded by two researchers, who met to discuss text that was vague and difficult to associate with a code. In total, about 33 percent of the data was coded by three researchers, and the remainder was split in half between the two researchers.

The researchers arrived at nine different codes, some with subthemes: statistics, cross-reference, political identification, perceived quality, fact-checkers, outlet, source, personal experience, and logical reasoning. All diary entries were coded on the basis of how participants evaluated news accuracy, that is, what methods they used to determine the accuracy of a news article. For the outlet code, participants evaluated an article’s accuracy based on their trust of the news outlet; we categorized such entries as “specific media outlet” because the judgment rested on the perceived accuracy of the specific outlet itself. If the participant had a predefined view of the outlet’s credibility, the participant assumed that the information was accurate. “Statistics” was defined as participants using any numerical information, data visualization, graphs or charts, or any other quantification of data to justify their judgment.

The category “cross-reference” was defined as checking outside the article to verify or validate the information read. We subcategorized cross-referencing to specify whether the participant cross-referenced with other articles, other media, other data, or friends or family, or whether the cross-reference was not specified. Political identification occurred when participants relied on their political beliefs to determine the factual basis of an article. Perceived quality referred to the participant evaluating a given article’s accuracy based on their own perception of the validity of the information, and was further divided into balanced sources and factual articles.

“Fact-checkers” was another code, defined as relying on others to assess the accuracy of COVID-related news. We identified four types of fact-checkers. First, “platform fact-checkers”, where certain platforms, such as Twitter, advertised their efforts to fight fake news and block certain content from the public. Second, “outlet fact-checkers”, where participants believed that the news outlet fact-checked information for its readers. Third, “friends fact-checkers”, where participants stated trust in their friends and, therefore, in the news those friends shared with them. Fourth, “comments fact-checkers”, where participants relied on people in the comment section of an online COVID-19 related news item and used their judgment as the basis for accuracy perceptions. The “source” code referred to articles that were written by or cited other sources, such as scientists and the government. “Personal experience” referred to the participant evaluating the accuracy of articles based on past experiences. Finally, “logical reasoning” involved a form of personal logical reasoning and thoughtful consideration.

The codes generated by the group were then designated as passive or active. Two researchers divided the participants between them and found all instances of active accuracy evaluation based on this pre-defined designation. In this phase, the researchers recorded whether participants with active codes had noted in their diaries that they read the entire article. To determine whether participants had potentially skimmed an article or read only a headline, the researchers examined the quotes participants used in their essays. Across most of the entries, the more a participant perceived an article as accurate, the more trust the participant reported in that article or its source. For example, if participants rated an article as “very accurate”, they were inclined to trust the information in the article and its source.
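
Throughout the Results, each code is described by two rates: the share of the 30 participants who used it at least once, and the share of the 210 diary entries in which it appears. The sketch below illustrates how such a dual tally can be computed; the coded entries shown are hypothetical stand-ins for the actual data.

```python
from collections import defaultdict

# Hypothetical coded data: participant id -> one set of codes per diary entry.
# In the actual study there were 30 participants and 210 entries (seven each).
coded_entries = {
    "P1": [{"outlet"}, {"outlet", "source"}, {"statistics"}],
    "P2": [{"logical reasoning"}, {"outlet"}, {"cross-reference"}],
    # ... remaining participants would follow ...
}

n_participants = len(coded_entries)
n_entries = sum(len(entries) for entries in coded_entries.values())

entry_counts = defaultdict(int)        # entries in which each code appears
participant_counts = defaultdict(int)  # participants using each code at least once

for entries in coded_entries.values():
    for code in set().union(*entries):
        participant_counts[code] += 1
    for codes in entries:
        for code in codes:
            entry_counts[code] += 1

for code in sorted(entry_counts):
    print(f"{code}: {100 * participant_counts[code] / n_participants:.2f}% of participants, "
          f"{100 * entry_counts[code] / n_entries:.2f}% of entries")
```

Under this scheme, a strategy can be common at the participant level yet comparatively rare at the entry level, which is the pattern reported for most codes below.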

 

++++++++++

Results

Outlet

The main factor that many of our participants cited as a metric for gauging COVID-19 news accuracy was mere trust in an outlet. This finding is notable because this strategy required the least effort from readers. We found that 73.3 percent of our participants used this category, and that it was used in 27.14 percent of entries over the seven-day study period.

P3 (Male, White, 22) made a passive accuracy decision when he said, “NYT is referenced which is credible.” Some participants reported trust in local outlets because they could attest to some of the news posted on these platforms and therefore had more trust in what those outlets published. For example, P15 (Female, Asian, 20) said, “the overall guidelines seem to be correct since it is from a local news source”. In another case, P1 (Female, Asian, 19) said:

“I researched the topic myself in hopes of finding a source to use in my diary log. After going through several sources, I referred to CNBC given that it provides factual information.”

Sometimes participants believed that certain outlets were biased. For example, P11 (Female, American Indian, 24) rated a news article as somewhat accurate and said, “I heard [the] New York Times is always getting sued for lies”.

Source

The presence of sources cited in articles was a common factor that participants used to evaluate news accuracy. We found that 70 percent of participants evaluated the accuracy of an article by identifying the use of a scientific source, and 15.7 percent of all entries were evaluated using this method. In addition, 30 percent of participants evaluated accuracy by noting the presence of a government source in an article, and 7.14 percent of all entries noted a government source cited in the article read that day.

Analysis demonstrated that participants regarded articles that cited scientists as accurate. P25 (Female, Asian, 21) trusted news that she read in the Atlanta Journal-Constitution and said, “It’s accurate because the main authority is Dr. Fauci, head of [the] U.S National Institute of Allergy and Infectious Diseases.”

In one diary entry, a participant doubted the credibility of a hospital due to a possible financial motive. P4 (Female, White/Hispanic, 21) said, “The information is coming from doctors within the state of Texas and being reported by the Texas Tribune. However, the Texas Hospital Association is a financial supporter of the Texas Tribune at the very bottom of the article so it can be a little suspicious even if it is a nonprofit/nonpartisan news organization.”

On the other hand, it was evident that some individuals put more value in cited government sources, while for others, it did not play a role in determining news accuracy. P1 (Female, Asian, 19) stated, “I believe this information is accurate because it comes straight from a federal agency, the FDA, who is responsible for our consumption of many things not just limited to medications.” Even with less credible information, citing government sources was shown to improve the perceived accuracy of material. This can be seen in P4’s (Female, White/Hispanic, 21) diary entry. She said, “I only think it is somewhat accurate because it is under Bloomberg’s opinion articles. However, the evidence seems convincing to me because the sources are from the CDC and Public Health England.”

The origin of a government source played a role in whether it was perceived as accurate. P22 (Female, Asian, 24) believed information from Australia’s Department of Health and Human Services to be accurate because it came from a government agency. However, P6 (Female, Asian, 21) doubted the accuracy of an article from USA Today, as it contained information provided by the Chinese government. She said, “Although the article is written by a reputable source, the information provided by the Chinese government may be skewed.”

Perceived quality

Personal perceptions of the quality of news articles were another strategy used to vet news credibility. This strategy was used by 70 percent of participants; out of the 210 entries made by the 30 participants, 18.1 percent were coded as perceived quality. Under perceived quality, we identified balanced sources as a subcategory, in which participants mentioned that they perceived no political bias. We found that one percent of entries were evaluated as accurate because the article contained balanced sources. Factual articles, the other subcategory of perceived quality, referred to participants perceiving an article as objective and containing minimal or no opinion, even if the article itself was not necessarily factual. Overall, 4.76 percent of entries were evaluated this way by 30 percent of the participants.

An example of relying on balanced sources was evident in an entry by P4 (Female, White/Hispanic, 21): “I don’t really see any party bias or have a reason to be skeptical, that’s why I think it is at least somewhat accurate.” In this instance, the participant perceived the article to be accurate because she did not directly observe any party bias. In entries under the factual article subcategory, participants would specifically mention that they perceived the article to be reporting facts. For example, P17 (Female, Asian, 20) stated: “This story is not one of opinion or certain viewpoints. It is simply stating a verifiable fact that the FDA has approved a drug.”

Entries categorized under factual articles also included those noting that an article was opinion-based. Five participants came across articles over the course of the study that they perceived to be opinion-based, which led them not to rate those articles as very accurate. To be categorized as perceived quality, participants would usually justify their accuracy claim by noting that they had no reason to believe the information was false or true based on their perception of the article itself. For example, P10 (Male, Asian, 23) said: “... The article is written by a journalist. This may indicate that their research on the topic would be detailed before publishing.” The participant assumed that journalists complete their research before publishing simply because a journalist wrote the article. Based on such trivial cues, some participants gave the highest accuracy rating of “very accurate”. Conversely, P24 (Female, White, 20) said: “The article itself states that the information is yet to be reviewed by experts. Leading to a chance this information could be false.” This participant gave the lowest rating, “not accurate”, because the article had not been peer reviewed.

Logical reasoning

Participants often used critical thinking in the form of logical reasoning to determine the credibility of articles they had read. Logical reasoning was used by 63.3 percent of participants and appeared in 17.6 percent of entries. P9 (Female, White, 19) demonstrated this by showing some skepticism: “The government is claiming that he/she might have had the placebo shot so his/her death might be unrelated to the trials and they don’t want to stop their trials ...”

Some participants were skeptical of the reliability of information when articles mentioned methods of data collection that were slow or risked inaccuracies. P19 (Female, Asian, 24) stated: “There is no official date to when the FDA will decide to use the first vaccines and they will have to be used on a group of 30,000 test patients. The FDA wants to come to a decision by year end.” This code was used to describe a flaw in either the way information was gathered or how it was presented.

Cross-reference

Participants in this study were inclined to cross-reference information that they read. This strategy was used by 60 percent of participants; out of the 210 entries made by the 30 participants, 16.19 percent were categorized as cross-reference. Among the subcategories, “cross-reference: other articles” was used by 40 percent of participants and appeared in 8.57 percent of entries. “Cross-reference: other data” was used by 13.33 percent of participants and 2.86 percent of entries. “Cross-reference: other media” was used by 16.67 percent of participants and 2.38 percent of entries. “Cross-reference: friends and family” was used by 6.67 percent of participants and 0.95 percent of entries. “Cross-reference: non-specified” was used by 6.67 percent of participants and 1.43 percent of entries.

In certain cases, participants reported using some form of cross-referencing but did not report exactly how that process was conducted. We coded such entries as “cross-reference: non-specified”.

Most of the time, participants stated that they used other sources to verify the accuracy of news that they were reading. For example, P29 (Male, Asian, 20) wrote: “Yes I think the information is accurate because other websites and news sites have the same content/information as well, and a statement was released by governor Murphy.” In this case, the participant cross referenced with other articles.

Another example of cross-referencing with other data can be seen in an entry from P24 (Female, White, 20), who specifically looked at statistics from other sources to verify data she read in an article. Other participants came across information that did not validate what they read in other news sources. For example, P13 (Male, Asian, 21) said: “There has been so many different news about the release of the vaccines. So, I am unsure about which is accurate.” In this case, the participant was unable to cross-reference information provided in an article because news about vaccines varied from article to article. Misinformation spread rapidly over the course of the pandemic, making some participants more vigilant about cross-referencing; P19 (Female, Asian, 24), for example, cross-checked articles she read throughout the study. Two participants rated news as accurate when they were able to validate it by coming across similar headlines elsewhere. Two participants cross-referenced with friends or family. For instance, P7 (Female, Black, 25) said, “... I have female friends who go there almost monthly, and they confirmed that it’s closed — they’re disappointed ...” In that case, the participant had read an article about a restaurant shut down for violating COVID-19 restrictions, titled “Cuban Pete’s in Montclair is shut down after repeatedly violating indoor dining orders.” She perceived the article to be very accurate because her friends were able to verify the information it provided.

Another example of cross-referencing occurred when participants cross-referenced information that they came across with other media. For example, P14 (Male, Asian, 24) reported: “There has been live recordings with Trump in his rallies, and it is so easy to see everyone there being closely knit together — some with masks but also some with no masks.” Because the participant was able to watch video that validated information that he had read in an article, he perceived the article to be very accurate.

Statistics

One of the more obvious and intuitive results was participants’ reliance on data and statistics. We defined the category “statistics/quantitative information” as any numerical information, data visualization, graphs or charts, and/or any quantification of data. Overall, 43.3 percent of participants reported that seeing data in an article improved their trust in it. However, over the course of the seven-day study, this strategy was used in only 8.1 percent of entries. For example, P23 (Female, White, 26) rated an article as very accurate and gave as her reason that “this article has actual statistics and real numbers.” Moreover, the use of data in articles was eye-catching for some participants and was considered to bolster a given article’s claims. P5 (Male, White, 21) said in his essay, “The stories that usually caught my eye were the ones that contained some numbers.” Sometimes it was how the data were conveyed in an article that triggered notions of accuracy. P24 (Female, White, 20) said: “The article is very long and goes very [in depth] about the information. It presents the information very well with a multitude of graphs.”

While some participants were convinced that the presence of data in an article was one indicator of trustworthiness, others believed that the source of data was important. For example, P27 (Male, Asian, 20) said:

“it also provides data and graphs to further illustrate its point. Additionally, the bottom of the article claims that the coronavirus case data comes directly from state, local health agencies, and hospital reports.”

We also found that some participants had a negative attitude toward data, believing that data could be skewed or manipulated. For example, P5 (Male, White, 21) said, “sometimes with graphs and data, people skew it so that it fits their agenda.”

Personal experience

Personal experience showed a positive association with perceived accuracy. In total, 26.7 percent of participants used their personal experiences to determine a given article’s accuracy, and 4.76 percent of all entries cited personal experiences. Some participants found that what they experienced first hand corresponded to articles they had read and to data presented in them. For example, P1 (Female, Asian, 19) decided that the U.S. reporting more than 83,000 coronavirus cases two days in a row was accurate:

“I have seen it myself first hand. Many people were outside enjoying the weather. Now when I pass by the park, there’s a fraction of the people outside in comparison to before.”

Additionally, some participants believed an article was accurate because it affected them personally. P6 (Female, Asian, 21) believed that an article, “Newark imposes Covid-19 restrictions on businesses”, was accurate because she was a resident of the area, giving her first-hand experience. Furthermore, many participants believed articles they read were accurate after verifying with the personal experiences of others. For example, P7 (Female, Black, 25) rated the headline “Cuban Pete’s in Montclair is shut down after repeatedly violating indoor dining orders” as accurate because her female friends confirmed that Cuban Pete’s was indeed closed.

We found that most entries coded as personal experience came from participants using the code to explain why they thought an article was very accurate or somewhat accurate. No participant used personal experience to support why they thought an article was entirely inaccurate.

Fact checkers

We found that all types of fact-checkers were used by various participants to support their conclusions about an article’s accuracy. P4 (Female, White/Hispanic, 21) illustrated the first type, platform fact-checkers:

“Any news and information about COVID-19 gets fact checked by Twitter. There’s not that many instances that I am aware of that an article with false information does not get blocked or flagged within the day if it is false.”

There was also trust in certain outlets to do their due diligence and fact-check their news before publication. P17 (Female, Asian, 20) described this clearly when she noted:

“While the trust of these sources has become a partisan issue, and while I acknowledge many of them contain biases, I trust all of them to fact check their information.”

Another kind of fact-checking was reliance on friends who shared news. P5 (Male, White, 21) simply said, “I trust the person who shared it”, to describe his way of assessing COVID-19 news. The last type we identified was “comments fact-checkers”. P8 (Female, Asian, 23) explained in her essay how the comments section affected her decisions about news accuracy:

“It makes me worry about whether we could really trust the news after I view the comments section of the news in some of the YouTube videos. People were claiming some of the news was used to control the public.”

Sometimes participants relied on a combination of strategies to assess accuracy. For example, P12 (Female, White, 22) said:

“The good thing that I have known about Reddit is not only is there a moderation team that is responsible for minimizing the amount of fluff and fake news stories that get posted but there is a large group of people that make comments as well.”

It is worth noting that only 20 percent of participants used this category as a means to assess accuracy, and each of them used it only once or twice over the seven days. Hence, this category was used in only 3.3 percent of entries.

Political identification

Only 10 percent of participants used political identification, and it was reportedly used in only 1.4 percent of entries over the course of the seven days. We noticed that political identification was always used to discount an article from the political opposition rather than to boost the perceived accuracy of news from the participant’s own side. To illustrate, P26 (Female, Asian, 22) shared how she did not trust a news article about COVID-19 because she had negative feelings about the former U.S. president:

“Donald Trump is known for his lies and outlandish stories. I think this is just another concoction of a story he made up himself.”

She used political identification as a subjective way to evaluate the accuracy of COVID-19 news.

Some participants were sensitive to any presence of political bias in COVID-19 news. Although P29 (Male, Asian, 20) reported his affiliation with the Democratic party, he questioned the credibility of news articles that were politically biased, stating:

“Yes I think this information is accurate however, only somewhat because after reading the article it felt as if the article might be slightly biased and ‘anti-Trump’.”

Political bias in news was seen as a red flag concerning the validity of the information in a given article.

Active versus passive

For an evaluation to be considered active, a participant had to assess the news article with careful and thoughtful consideration. The codes we considered active were cross-reference, logical reasoning, and personal experience; the rest were considered passive. Cross-referencing, personal experience, and logical reasoning all involved some form of critical thinking about the content of articles. Passive ways of judging news accuracy relied on other people (fact-checkers, outlets, sources) or preconceived beliefs (political identification, perceived quality, the presence of statistics). In total, 76.67 percent of participants actively evaluated news accuracy at least once. However, out of the 210 total entries, only 32.86 percent of articles were actively evaluated for accuracy. The diary data suggest that 65.2 percent of participants actively evaluated news accuracy at least once during the seven days.
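
The gap between the participant-level and entry-level figures reflects the same distinction described in the coding process: a participant counts as active after a single active entry, while most of that participant’s entries may still be passive. A minimal sketch of the classification, again over hypothetical coded entries rather than the study data:

```python
# Codes designated as active (critical engagement with the article itself);
# all other codes were treated as passive.
ACTIVE = {"cross-reference", "logical reasoning", "personal experience"}

# Hypothetical coded entries, grouped by participant (stand-ins for the data).
coded_entries = {
    "P1": [{"outlet"}, {"cross-reference", "outlet"}, {"statistics"}],
    "P2": [{"outlet"}, {"source"}, {"perceived quality"}],
}

all_entries = [codes for person in coded_entries.values() for codes in person]
active_entries = sum(1 for codes in all_entries if codes & ACTIVE)
active_people = sum(
    1 for person in coded_entries.values() if any(codes & ACTIVE for codes in person)
)

# The participant-level rate can far exceed the entry-level rate, as in the
# reported 76.67 percent of participants versus 32.86 percent of entries.
print(f"{100 * active_people / len(coded_entries):.2f}% of participants active at least once")
print(f"{100 * active_entries / len(all_entries):.2f}% of entries actively evaluated")
```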

 

++++++++++

Discussion

In this study, the majority of participants evaluated news accuracy based on an inherent, almost blind trust in the news outlet. We believe this was because these outlets are closely related to traditional media such as television and newspapers. A study in Europe found that 51 percent of Europeans trusted television and 47 percent trusted the written press to provide reliable information (Russman and Hess, 2020). However, this evaluation method did not require thoughtful consideration of the information as presented. People are generally motivated to hold correct attitudes (Petty and Cacioppo, 1986); therefore, there must be additional situational and motivational factors that limit news consumers’ ability to evaluate news thoughtfully. Even though 76.67 percent of participants in this study evaluated news information with thoughtful consideration at least once, this accounted for only 32.86 percent of total entries. Further research is necessary to determine what factors prevented news consumers from evaluating news with more active consideration of accuracy.

Additionally, more research is needed to determine why people trust specific media outlets. Our study found that news readers relied on peripheral cues to assess the accuracy of information: readers tended to trust outlets they were familiar with or perceived to have a good reputation. Even though political identification was the least prevalent theme, prior research has found that if information is consistent with one’s beliefs, one is more likely to view it as accurate (Liu, 2004). It could be inferred that participants trusted a left- or right-leaning outlet because it aligned with their own political beliefs. Information presented in familiar outlets and consistent with participants’ own ideas was associated with a higher perceived reputation, which accords with previous research (Liu, 2004). The majority of participants in this study were Democratic; had our participants included a more diverse range of political ideologies, a clearer relationship between political belief and accuracy perception might have been visible.

The results of this study suggest a disconnect between how readers assess the accuracy of the news they are exposed to and how journalists convey accuracy through professional protocols. In this study, readers assessed accuracy based on reputation and familiarity. These are peripheral cues, whereas journalists aim to focus on objectivity and transparency (Society of Professional Journalists, 2014). This places a larger responsibility on journalists to ensure that information is accurate, because readers will rely on peripheral cues to assess it.

We found that the second most used accuracy evaluation strategy was reliance on scientists as sources of information. This finding was expected given the demographics of our participants, largely Democrats attending a STEM university: confidence in scientists has been found to be much stronger among those with high science knowledge and those who identify as Democratic (Funk, et al., 2019). In the context of COVID-19, a great deal of misinformation has circulated on the Internet (Simon, et al., 2020), and failing to consider scientific knowledge when evaluating a constantly evolving and volatile situation such as a pandemic could have significant consequences. This finding is therefore promising, and we believe such practices should be recommended to news readers. One study found that sending social media users small nudges about accuracy helped to improve their decision-making (Pennycook, et al., 2020). Similarly, providing subtle cues in communicated news content conveying the importance of scientific findings and authoritative health agencies may encourage readers to give more weight to an article.

Interestingly, participants in this study assumed social media acted as a fact-checker; in reality, social media platforms have taken a mostly hands-off approach. Relying on social media to identify fake news can be dangerous, as seen during the COVID-19 pandemic. Increasingly, however, platforms have adopted new approaches to combat misinformation. Twitter, for example, began adding labels to tweets containing misleading information about the COVID-19 pandemic (Roth and Pickles, 2020). As social media platforms make such changes to combat misinformation, the general public may come to see social media as more trustworthy for news.

Participants in this study always used multiple strategies. The only prevalent combination was outlet and source: participants not only relied on outlets they trusted but sometimes also looked at the sources cited in articles published by those outlets. This finding is significant because it adds another layer to the authentication process that some young news consumers follow. When articles from perceived trusted outlets lacked sources, participants hesitated to believe their content. This is notable because the accuracy of news in general has been declining over time (Maier, 2005).

The joint use of outlet and source as strategies might be due to their prevalence, their ease of use, or an actual correlation between them. Further research is needed to establish whether such a correlation exists and what effect using the two jointly has on determinations of news accuracy.

 

++++++++++

Limitations

The sample of college students used in this study was more technologically literate than the general population. Being technologically literate meant participants could comfortably research news and verify content easily; a sample with more varied technical literacy would better represent the general population. The duration of the study was one week, and a longer study would reveal more about COVID-19 news consumption behaviors. The study was part of a class assignment, so participants might have focused on major news they were exposed to and failed to recall other news experiences. Self-reported data are also subject to their own known limitations. Individuals who show lower levels of trust in news tend to rely heavily on social media (Ardèvol-Abreu and Gil de Zúñiga, 2016), and conservatives have been found to have lower levels of trust in news (Head, et al., 2019), which might explain why our largely Democratic participant pool relied heavily on news outlets for their daily news consumption.

 

++++++++++

Conclusion

Identifying what young news consumers value in determining accuracy is important: this information can be used to communicate news in ways that encourage trust. It is also significant to understand whether young news consumers assess news information actively or passively, as this can serve as a basis for better education in assessing news and for creating information-literate citizens. In this study, we found that young news consumers used different strategies to assess news accuracy. The news outlet, the sources cited in an article, and the perceived quality of the article, among others, were reported as prominent strategies for gauging accuracy. These findings can serve as the basis for future analysis of how exactly those factors affect accuracy decisions and of the extent to which other contexts and personal histories influence their effect. In future work, we intend to expand our investigation and explore the data in a more quantitative manner. We plan to run statistical and regression models to test the significance of these factors on the news accuracy reported by participants.
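
As an illustration of what such an analysis might look like, the sketch below fits an ordered logit relating entry-level accuracy ratings to indicator variables for the strategies used. The data, variable names, and model choice are assumptions for illustration, not analyses performed in this study.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical entry-level data: accuracy rating (0 = not accurate,
# 1 = somewhat accurate, 2 = very accurate) and strategy indicators.
df = pd.DataFrame({
    "accuracy":        [2, 1, 2, 0, 1, 2, 1, 2, 0, 1],
    "outlet":          [1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "source":          [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],
    "cross_reference": [0, 0, 1, 0, 1, 1, 0, 0, 0, 1],
})

# Ordered logistic regression of perceived accuracy on strategy use.
# Note: this toy model ignores the repeated-measures structure
# (seven entries per participant), which a full analysis must address.
model = OrderedModel(df["accuracy"], df[["outlet", "source", "cross_reference"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

A full analysis would also need to account for the repeated-measures structure of the diary data, for example with participant-level random effects or clustered standard errors.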

 

About the authors

Mashael Almoqbel is an assistant professor at Jubail Industrial College. She recently received her Ph.D. from the New Jersey Institute of Technology. Mashael is interested in social media, public safety, and social good in general through the use of computing.
E-mail: moqbelm [at] rcjy [dot] edu [dot] sa

Jordan Vanzyl is a research coordinator at the MJHS Institute of Innovation in Palliative Care.
E-mail: jordan [dot] vanzyl [at] gmail [dot] com

Matthew Keaton is an undergraduate student at the New Jersey Institute of Technology, majoring in Web and information systems.
E-mail: mk872 [at] njit [dot] edu

Manal Desai is an undergraduate student studying computer science and applied mathematics at the New Jersey Institute of Technology.
E-mail: mvd29 [at] njit [dot] edu

Seejal Padhi is an undergraduate student majoring in biomedical engineering at the New Jersey Institute of Technology.
E-mail: sp48 [at] njit [dot] edu

Seong Jae Min is an associate professor in the Department of Communication Studies at Pace University, New York City campus. His research interests involve political communication and journalism, with a special focus on deliberative democracy.
E-mail: smin [at] pace [dot] edu

Donghee Yvette Wohn is an associate professor at the New Jersey Institute of Technology and director of the Social Interaction Lab (socialinteractionlab.com). Her research is in the area of human computer interaction (HCI) where she studies the characteristics and consequences of social interactions in online environments such as livestreaming, gaming, and social media.
E-mail: wohn [at] njit [dot] edu

 

References

B.O. Ahinkorah, E.K. Ameyaw, J.E. Hagan, A.-A. Seidu, and T. Schack, 2020. “Rising above misinformation or fake news in Africa: Another strategy to control COVID-19 spread,” Frontiers in Communication, volume 5 (17 June).
doi: https://doi.org/10.3389/fcomm.2020.00045, accessed 12 February 2023.

M.Y. Almoqbel, D.Y. Wohn, R.A. Hayes, and M. Cha, 2019. “Understanding Facebook news post comment reading and reacting behavior through political extremism and cultural orientation,” Computers in Human Behavior, volume 100, pp. 118–126.
doi: https://doi.org/10.1016/j.chb.2019.06.006, accessed 12 February 2023.

A. Ardèvol-Abreu and H. Gil de Zúñiga, 2016. “Effects of editorial media bias perception and media trust on the use of traditional, citizen, and social media news,” Journalism & Mass Communication Quarterly, volume 94, number 3, pp. 703–724.
doi: https://doi.org/10.1177/1077699016654684, accessed 12 February 2023.

A. Bagherpour and A. Nouri, 2020. “COVID misinformation is killing people,” Scientific American (11 October), at https://www.scientificamerican.com/article/covid-misinformation-is-killing-people1/, accessed 27 October 2020.

P. Boczkowski, E. Mitchelstein, and M. Matassi, 2017. “Incidental news: How young people consume news on social media,” Proceedings of the 50th Hawaii International Conference on System Sciences, pp. 1,785–1,792, and at http://hdl.handle.net/10125/41371, accessed 12 February 2023.

P.B. Brandtzaeg and A. Følstad, 2017. “Trust and distrust in online fact-checking services,” Communications of the ACM, volume 60, number 9, pp. 65–71.
doi: https://doi.org/10.1145/3122803, accessed 12 February 2023.

P.S. Breivik, 2005. “21st century learning and information literacy,” Change: The Magazine of Higher Learning, volume 37, number 2, pp. 21–27.
doi: https://doi.org/10.3200/chng.37.2.21-27, accessed 12 February 2023.

W.P. Cassidy, 2007. “Online news credibility: An examination of the perceptions of newspaper journalists,” Journal of Computer-Mediated Communication, volume 12, number 2, pp. 478–498.
doi: https://doi.org/10.1111/j.1083-6101.2007.00334.x, accessed 12 February 2023.

J.Y. Cuan-Baltazar, M.J. Muñoz-Perez, C. Robledo-Vega, M.F. Pérez-Zepeda, and E. Soto-Vega, 2020. “Misinformation of COVID-19 on the Internet: Infodemiology study,” JMIR Public Health and Surveillance, volume 6, number 2, e18444.
doi: https://doi.org/10.2196/18444, accessed 12 February 2023.

J. Enberg, 2020. “How COVID-19 is testing social media’s ability to fight misinformation” (18 March), at https://www.insiderintelligence.com/content/how-covid-19-is-testing-social-medias-ability-to-fight-misinformation, accessed 12 February 2023.

S. Evanega, M. Lynas, J. Adams, and K. Smolenyak, 2020. “Coronavirus misinformation: Quantifying sources and themes in the COVID-19 ‘infodemic’,” Alliance for Science, at https://allianceforscience.org/wp-content/uploads/2020/09/Evanega-et-al-Coronavirus-misinformationFINAL.pdf, accessed 12 February 2023.

S.R. Flaxman, S. Goel, and J.M. Rao, 2016. “Ideological segregation and the effects of social media on news consumption,” Public Opinion Quarterly, volume 80, number S1, pp. 298–320.
doi: https://doi.org/10.1093/poq/nfw006, accessed 12 February 2023.

C. Funk, M. Hefferon, B. Kennedy, and C. Johnson, 2019. “Trust and mistrust in Americans’ views of scientific experts,” Pew Research Center (2 August), at https://www.pewresearch.org/science/2019/08/02/trust-and-mistrust-in-americans-views-of-scientific-experts/, accessed 12 February 2023.

A.W. Geiger, 2019. “Key findings about the online news landscape in America,” Pew Research Center (11 September), at https://www.pewresearch.org/fact-tank/2019/09/11/key-findings-about-the-online-news-landscape-in-america/, accessed 3 November 2020.

A.J. Head, E. DeFrain, B. Fister, and M. MacMillan, 2019. “Across the great divide: How today’s college students engage with news,” First Monday, volume 24, number 8, at https://firstmonday.org/article/view/10166/8057, accessed 12 February 2023.
doi: https://doi.org/10.5210/fm.v24i8.10166, accessed 12 February 2023.

M. Jurkowitz and A. Mitchell, 2020. “Coronavirus stories cited as made-up news include claims about risks, details of virus,” Pew Research Center (15 April), at https://www.pewresearch.org/journalism/2020/04/15/early-in-outbreak-americans-cited-claims-about-risk-level-and-details-of-coronavirus-as-made-up-news/, accessed 12 February 2023.

C. Leeder, 2019. “How college students evaluate and share ‘fake news’ stories,” Library & Information Science Research, volume 41, number 3, 100967.
doi: https://doi.org/10.1016/j.lisr.2019.100967, accessed 12 February 2023.

R. Levy, 2021. “Social media, news consumption, and polarization: Evidence from a field experiment,” American Economic Review, volume 111, number 3, pp. 831–870.
doi: https://doi.org/10.1257/aer.20191777, accessed 12 February 2023.

Z. Liu, 2004. “Perceptions of credibility of scholarly information on the Web,” Information Processing & Management, volume 40, number 6, pp. 1,027–1,038.
doi: https://doi.org/10.1016/S0306-4573(03)00064-5, accessed 12 February 2023.

M. Luo, J.T. Hancock, and D.M. Markowitz, 2022. “Credibility perceptions and detection accuracy of fake news headlines on social media: Effects of truth-bias and endorsement cues,” Communication Research, volume 49, number 2, pp. 171–195.
doi: https://doi.org/10.1177/0093650220921321, accessed 12 February 2023.

S.R. Maier, 2005. “Accuracy matters: A cross-market assessment of newspaper error and credibility,” Journalism & Mass Communication Quarterly, volume 82, number 3, pp. 533–551.
doi: https://doi.org/10.1177/107769900508200304, accessed 12 February 2023.

A. Mitchell, J.B. Oliphant, and E. Shearer, 2020. “About seven-in-ten U.S. adults say they need to take breaks from COVID-19 news,” Pew Research Center (29 April), at https://www.pewresearch.org/journalism/2020/04/29/about-seven-in-ten-u-s-adults-say-they-need-to-take-breaks-from-covid-19-news/, accessed 12 February 2023.

M. Motta, D. Stecula, and C. Farhart, 2020. “How right-leaning media coverage of COVID-19 facilitated the spread of misinformation in the early stages of the pandemic in the U.S.,” Canadian Journal of Political Science, volume 53, number 2, pp. 335–342.
doi: https://doi.org/10.1017/S0008423920000396, accessed 12 February 2023.

S. Ohly, S. Sonnentag, C. Niessen, and D. Zapf, 2010. “Diary studies in organizational research: An introduction and some practical recommendations,” Journal of Personnel Psychology, volume 9, number 2, pp. 79–93.
doi: https://doi.org/10.1027/1866-5888/a000009, accessed 12 February 2023.

C.S. Park and B.K. Kaye, 2020. “What’s this? Incidental exposure to news on social media, news-finds-me perception, news efficacy, and news consumption,” Mass Communication and Society, volume 23, number 2, pp. 157–180.
doi: https://doi.org/10.1080/15205436.2019.1702216, accessed 12 February 2023.

G. Pennycook and D.G. Rand, 2019. “Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning,” Cognition, volume 188, pp. 39–50.
doi: https://doi.org/10.1016/j.cognition.2018.06.011, accessed 12 February 2023.

G. Pennycook, T.D. Cannon, and D.G. Rand, 2018. “Prior exposure increases perceived accuracy of fake news,” Journal of Experimental Psychology: General, volume 147, number 12, pp. 1,865–1,880.
doi: https://doi.org/10.1037/xge0000465, accessed 12 February 2023.

G. Pennycook, J. McPhetres, Y. Zhang, J.G. Lu, and D.G. Rand, 2020. “Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention,” Psychological Science, volume 31, number 7, pp. 770–780.
doi: https://doi.org/10.1177/0956797620939054, accessed 12 February 2023.

I. Pentina and M. Tarafdar, 2014. “From ‘information’ to ‘knowing’: Exploring the role of social media in contemporary news consumption,” Computers in Human Behavior, volume 35, pp. 211–223.
doi: https://doi.org/10.1016/j.chb.2014.02.045, accessed 12 February 2023.

R.E. Petty and J.T. Cacioppo, 1986. “The elaboration likelihood model of persuasion,” In: R.E. Petty and J.T. Cacioppo. Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer-Verlag, pp. 1–24.
doi: https://doi.org/10.1007/978-1-4612-4964-1_1, accessed 12 February 2023.

N. Righetti, 2021. “Four years of fake news: A quantitative analysis of the scientific literature,” First Monday, volume 26, number 7, at https://firstmonday.org/article/view/11645/10152, accessed 12 February 2023.
doi: https://doi.org/10.5210/fm.v26i7.11645, accessed 12 February 2023.

J. Roozenbeek and S. van der Linden, 2019. “Fake news game confers psychological resistance against online misinformation,” Palgrave Communications, volume 5, article number 65.
doi: https://doi.org/10.1057/s41599-019-0279-9, accessed 12 February 2023.

Y. Roth and N. Pickles, 2020. “Updating our approach to misleading information,” Twitter (11 May), at https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html, accessed 12 February 2023.

U. Russmann and A. Hess, 2020. “News consumption and trust in online and social media: An in-depth qualitative study of young adults in Austria,” International Journal of Communication, volume 14, at https://ijoc.org/index.php/ijoc/article/view/13774, accessed 12 February 2023.

E. Shearer, 2018. “Social media outpaces print newspapers in the U.S. as news source,” Pew Research Center (10 December), at https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/, accessed 21 October 2020.

E. Shearer and E. Grieco, 2019. “Americans are wary of the role social media sites play in delivering the news,” Pew Research Center (2 October), at https://www.journalism.org/2019/10/02/americans-are-wary-of-the-role-social-media-sites-play-in-delivering-the-news/, accessed 21 October 2020.

E. Shearer and A. Mitchell, 2021. “News use across social media platforms in 2020,” Pew Research Center (12 January), at https://www.pewresearch.org/journalism/2021/01/12/news-use-across-social-media-platforms-in-2020/, accessed 12 February 2023.

F. Simon, P.N. Howard, and R.K. Nielsen, 2020. “Types, sources, and claims of COVID-19 misinformation,” Reuters Institute for the Study of Journalism (7 April), at https://reutersinstitute.politics.ox.ac.uk/types-sources-and-claims-covid-19-misinformation, accessed 12 February 2023.

Society of Professional Journalists, 2014. “SPJ code of ethics,” at https://www.spj.org/ethicscode.asp, accessed 12 February 2023.

S. Tasnim, M.M. Hossain, and H. Mazumder, 2020. “Impact of rumors and misinformation on COVID-19 in social media,” Journal of Preventive Medicine & Public Health, volume 53, number 3, pp. 171–174.
doi: https://doi.org/10.3961/jpmph.20.094, accessed 12 February 2023.

A. Taylor, 2012. “The information search behavior of the millennial generation,” Ubiquitous Learning, volume 4, number 3, pp. 85–98.
doi: https://doi.org/10.18848/1835-9795/CGP/v04i03/40341, accessed 12 February 2023.

E.A. Vogels and M. Anderson, 2019. “Americans and digital knowledge,” Pew Research Center (9 October), at https://www.pewresearch.org/internet/2019/10/09/americans-and-digital-knowledge/, accessed 12 February 2023.

E.A. Vogels, A. Perrin, L. Rainie, and M. Anderson, 2020. “53% of Americans say Internet has been essential during COVID-19 outbreak,” Pew Research Center (30 April), at https://www.pewresearch.org/internet/2020/04/30/53-of-americans-say-the-internet-has-been-essential-during-the-covid-19-outbreak/, accessed 12 February 2023.

D.Y. Wohn and M. Ahmadi, 2019. “Motivations and habits of micro-news consumption on mobile social media,” Telematics and Informatics, volume 44, 101262.
doi: https://doi.org/10.1016/j.tele.2019.101262, accessed 12 February 2023.

D.Y. Wohn and B.J. Bowe, 2014. “Crystallization: How social media facilitates social construction of reality,” CSCW Companion ’14: Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 261–264.
doi: https://doi.org/10.1145/2556420.2556509, accessed 12 February 2023.

World Health Organization, 2020. “Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation” (23 September), at https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation, accessed 20 April 2021.

Editorial history

Received 27 March 2022; revised 28 July 2022; accepted 10 February 2023.


CC0
To the extent possible under law, this work is dedicated to the public domain.

Perceptions of accuracy in online news during the COVID-19 pandemic
by Mashael Almoqbel, Jordan Vanzyl, Matthew Keaton, Manal Desai, Seejal Padhi, Seong Jae Min, and Donghee Yvette Wohn.
First Monday, Volume 28, Number 3 - 6 March 2023
https://firstmonday.org/ojs/index.php/fm/article/download/12342/10816
doi: https://doi.org/10.5210/fm.v28i3.12342