Credibility judgment and verification behavior of college students concerning Wikipedia
by Sook Lim and Christine Simon
First Monday



Abstract
This study examines credibility judgments in relation to peripheral cues and genre of Wikipedia articles, and attempts to understand users’ information verification behavior based on the theory of bounded rationality. Data were collected employing both an experiment and a survey at a large public university in the midwestern United States in Spring 2010. The study reveals several notable patterns. The effect of peripheral cues on credibility judgments appears to differ according to genre. Those who did not verify information displayed a higher level of satisficing than those who did. Students used a variety of peripheral cues in judging Wikipedia articles. The exploratory data show that peer endorsement may be more important than formal authorities for user–generated information sources such as Wikipedia, which calls for further research.

Contents

Introduction
Literature review
Methodology
Findings
Discussion
Conclusions

 


 

Introduction

Internet users can currently access information more easily than at any other time. However, due to the complex characteristics of digital information, they face unique challenges in discerning credible information from unfiltered information, and in selecting appropriate information for their needs. That is, in digital environments, the standards of quality control are less rigorous than in traditional information sources, and the origin of information, the context of information, and the distinctions among sources and media messages are less clear than ever before (Eysenbach, 2008; Flanagin and Metzger, 2008; Harris, 2008). Digital information allows people to be more self–sufficient; however, it ironically renders people more responsible for making information decisions (Lankes, 2008). These unique characteristics of digital information make its credibility assessment difficult and require Internet users to develop new information literacy skills for making appropriate information decisions. In recent years educators and researchers have acknowledged this problem by paying a great deal of attention to credibility issues in digital environments.

In particular, researchers have examined how Internet users assess the credibility of Web information, and which factors affect their credibility judgments. The literature shows that Internet users rarely use the traditional checklist method, whereby users scrutinize the author, source or currency in evaluating Web information (Hughes, et al., 2010). Similarly, Warnick (2004) reports that an author’s identity is not even the most important criterion. Instead, other peripheral cues, such as information structure and professional design, influence users’ assessments of Web credibility. Some researchers have further attempted to examine the effect of peripheral cues on credibility judgments, and have found that certain peripheral cues, such as the attractiveness of images or structural features, influence credibility judgments of information (Rains and Karmikel, 2009; Reinhard and Sporer, 2010). Other studies show that credibility judgments depend on the Internet site genre (Flanagin and Metzger, 2007) or individuals’ purposes in searching for information (Metzger, 2007; Rieh and Hilligoss, 2008; Stavrositu and Sundar, 2008). Given these findings, it would be useful to know whether similar patterns can be observed in Wikipedia, a user–generated encyclopedia that has become a popular information source among college students (Head and Eisenberg, 2010; Lim, 2009). Currently, however, little is known about what peripheral cues college students use in judging the credibility of Wikipedia articles, and whether certain peripheral cues influence their credibility judgments of Wikipedia, especially when they do not have sufficient knowledge of the topic of an article.

In addition, previous research has shown that Internet users are concerned with the credibility of Web information. Interestingly, however, Internet users do not diligently evaluate Web information (Metzger, 2007). Furthermore, Internet users tend not to verify information (Flanagin and Metzger, 2007). In fact, there exists a discrepancy between what Internet users say and what they actually do regarding information verification (Flanagin and Metzger, 2007; Iding, et al., 2009). Nonetheless, few studies have speculated as to why this is the case. This phenomenon can be interpreted in the sense that Internet users may not look for the best information on the Web. Instead, second–best information may sufficiently satisfy (“satisfice”) their needs, leading to cessation of further efforts or actions, including verification. In other words, as long as Internet users obtain satisficing information that meets or exceeds their aspiration level, they may have no incentive to verify the information or to seek further information. Acknowledging these phenomena, this study attempted to examine whether the theory of bounded rationality (Gigerenzer and Selten, 2002; Simon, 1955; Simon, 1997a) can explain verification (especially non–verification) behavior concerning Wikipedia.

The purpose of this study is twofold: to examine credibility judgments in relation to the peripheral cues and genre of Wikipedia articles, and to understand users’ information verification behavior concerning Wikipedia by employing the theory of bounded rationality. The major research questions of this study are presented below.

RQ1. What peripheral cues do students use in their credibility judgments of Wikipedia?
RQ2. Do peripheral cues influence credibility judgments of Wikipedia?
RQ3. Does genre influence credibility judgments of Wikipedia?
RQ4. Do the effects of peripheral cues on credibility judgments differ according to genre?
RQ5. Do students verify information? If they do not, why not?

 

++++++++++

Literature review

Defining credibility

A conceptual definition of credibility varies from researcher to researcher. For instance, credibility is defined as believability (Flanagin and Metzger, 2008; Tseng and Fogg, 1999), an individual judgment or perception of “valid reflection of reality” (Newhagen and Nass, 1989), accuracy and truthfulness (Tormala and Petty, 2004) or accuracy and believability (Hu and Sundar, 2010). Nonetheless, it seems that an agreement exists, at least, among researchers that credibility is a multidimensional concept (Gaziano and McGrath, 1986; Jensen, 2008; Newhagen and Nass, 1989; Rieh and Danielson, 2007; Tseng and Fogg, 1999). In fact, due to the multidimensional nature of credibility, researchers tend to discuss its meaning by using dimensions instead of explicitly providing a conceptual definition. For instance, researchers tend to describe credibility as the two dimensions of expertise and trustworthiness (Flanagin and Metzger, 2008; Hilligoss and Rieh, 2008; Jensen, 2008; Newhagen and Nass, 1989; Rieh and Danielson, 2007; Tseng and Fogg, 1999; Wang, et al., 2008; Wathen and Burkell, 2002). Expertise refers to a communicator’s qualifications or competence to know the truth about a topic, while trustworthiness refers to a communicator’s motivations or inclinations to tell the truth about a topic (Jensen, 2008; Wang, et al., 2008). In other words, these two dimensions have both objective and subjective elements: Expertise can be “subjectively perceived but includes relatively objective characteristics of the source or message,” while trustworthiness is “a receiver judgment based on primarily subjective factors” [1]. Further, the two elements of trustworthiness and expertise are not always perceived together (Rieh, 2010). Due to both objective and subjective elements, Wang and her colleagues (2008) acknowledge that tension may exist between the two components with respect to certain information, such as online health information. 
For instance, the dimension of expertise is important to the credibility assessments of health Web sites, whereas that of trustworthiness greatly influences the credibility judgments of online peer support groups. As a result, Internet users may use different mechanisms in assessing credibility.

This literature suggests that both the objective and subjective elements of credibility influence users’ credibility judgments of Wikipedia articles. Consequently, a conceptual definition of credibility needs to convey both elements. With respect to these two elements, Rieh’s (2010) recent definition is more satisfying than others. That is, she defines credibility as “people’s assessment of whether information is trustworthy based on their own expertise and knowledge” [2]. In addition, the credibility literature implies that the relative importance of each element may vary, depending on users’ individual and situational factors or the types of information that users seek. For instance, when users have sufficient knowledge of the topic of a Wikipedia article, expertise may be more critical than trustworthiness in judging the credibility of the article. In other cases, when users are positioned in personal situations (e.g., looking for a supportive advisor), trustworthiness may be more influential than expertise in credibility judgments. In the case that users have both subject knowledge and certain situational factors, both expertise and trustworthiness may be equally important in credibility judgments. Based on Rieh’s (2010) definition and the credibility literature, this study defines credibility as individuals’ assessment of whether information is believable based on their knowledge, experience and situations.

Theoretical background: The theory of bounded rationality

This study employed the theory of bounded rationality as the main theoretical framework in order to understand users’ information verification behavior. According to Simon (1997a), “human behavior is intendedly rational, but only boundedly so” [3]. Humans cannot evaluate all alternatives before they make a choice, due to the limitations of their cognitive ability, resources and time. Instead, humans examine alternatives sequentially and continue their search process until a satisfactory alternative that meets or exceeds their aspiration level is found. These aspiration levels are not fixed, but are adjusted to the situation in the sequence of trials. The aspiration levels rise if satisfactory alternatives are easy to find, and fall if they are difficult to acquire (Simon, 1955). In other words, humans operate within the limits of bounded rationality and simplify the choices available in making decisions (Todd, 2002), displaying a stimulus–response pattern more than a choice among alternatives (Simon, 1997a). According to Simon (1979), this decision–making process leads humans to pursue a “satisficing” path instead of an optimal one. In fact, Simon coined the term, “satisficing” (Selten, 2002), which is a blend of satisfying and sufficing (Agosto, 2002). A satisficing strategy seeks a satisfactory choice that is “good enough” to suit an individual’s purpose. A satisficing strategy is a rational rule that reduces the informational and computational requirements of a rational choice (Byron, 1998), requiring less time and less cognitive exertion.

Other researchers further develop models of bounded rationality by using “the metaphor of the adaptive toolbox” [4], which is designed to achieve proximal goals (Gigerenzer, 2002; Gigerenzer and Selten, 2002; Todd, 2002). According to Todd (2002), humans use fast and frugal heuristics in making decisions because such simple heuristics perform well in real environments by enabling people to engage in adaptive behavior. By providing the empirical data of other researchers’ tests, Todd (2002) demonstrates that fast and frugal heuristics using fewer cues are as accurate as the traditional decision mechanisms that use all available information. Further, optimization, compared to heuristics, does not always result in better solutions (Gigerenzer, 2008). For this reason, Todd (2002) states, “simplicity is a virtue, rather than a curse” [5]. In fact, human heuristics are selected to achieve speed by using a few cues that provide enough information to guide adaptive behavior in many situations (Todd, 2002). In particular, Todd’s (2002) ignorance–based decision mechanism (one class of fast and frugal heuristics regarding human information behavior) provides a useful explanation for why people take shortcuts. That is, if people recognize one object, but not the other, they tend to choose the recognized object. People employ the ignorance–based decision mechanism because while they use as little information as possible, recognition heuristics can produce accurate decisions more often than can random choices. In fact, adding more knowledge to use can even decrease decision accuracy. An intermediate amount of knowledge about a set of objects can result in the highest proportion of correct decisions, an example of the “less–is–more effect” [6]. In other words, fast and frugal heuristics explain human behavior based on the human rationality of making a decision. In a similar vein, Gigerenzer (2002) provides an explanation of why fast and frugal heuristics work. 
Humans use heuristics that are matched to particular environments and make adaptive decisions, taking into account a combination of accuracy, speed and frugality. In other words, humans are ecologically rational. In addition, humans use social rationality, which is a special case of ecological rationality. Humans can speed up decision making by using heuristics that are socially endorsed (e.g., “Eat what your peers eat.” [7]). These explanations help us understand why people take shortcuts, use heuristics (e.g., peripheral cues), and tend not to verify information.

Relevant literature on credibility

Previous research has shown that peripheral cues affect the credibility judgments of information. Here, the term peripheral cue refers to the objects or attributes related to a message or a person, but not the central merits of a message or person. In the literature, the objects indicating peripheral cues vary from message cues (e.g., hedging, statistics, identification of an author, references, currency, etc.) to structural features (e.g., image, navigation menu, design, etc.). Some peripheral cues are more closely tied to the content of a message (e.g., hedging) than others (e.g., navigation menu).

Based on dual-process theories, Reinhard and Sporer (2010) conducted a series of experiments to test whether there were relationships between the use of source cues and the levels of task involvement in making credibility judgments. One of their experiments used the attractiveness of images as a source cue, which can be considered as a peripheral cue. They found that only peripheral cues influenced the credibility judgments of participants with low–task involvement, whereas both central and peripheral cues had an impact on the credibility judgments of participants with high–task involvement. Their findings are in accordance with the dual–process theories in that peripheral cues affect the credibility judgments of information when people do not have either high motivation or the necessary cognitive ability to evaluate information (Chen and Chaiken, 1999; Petty and Wegener, 1999).

Similarly, Rains and Karmikel (2009) examined whether message characteristics and structural features of health Web sites were related to perceptions of credibility. They measured seven structural features, such as the names of the organizations operating or sponsoring the Web sites, images, third–party endorsements, physical addresses or telephone numbers, privacy statements, links to external Web sites, and navigation menus. They found that structural features were positively related to perceptions of site credibility. In addition, other researchers found that hyperlinks and author information were positively related to the credibility judgments of a citizen journalism Web site (Johnson and Wiedenbeck, 2009), and the image quality of local television news was positively associated with audiences’ perceptions of source credibility (Bracken, 2006).

On the other hand, Jensen (2008) examined whether hedging influences news consumers’ perceptions of the credibility of scientists and journalists. His object, hedged information, is a more content–linked information cue than that of other studies discussed above. His experiment showed that his sample of college students perceived both scientists and journalists as more trustworthy when news coverage of cancer research reported its study limitations than when it did not. The above studies suggest that similar phenomena may be observed regarding users’ credibility judgments of Wikipedia. That is, peripheral cues may affect the credibility judgments of college students concerning Wikipedia.

In addition to peripheral cues, research has shown that genres or types of information are related to credibility judgments. For instance, Flanagin and Metzger (2007) found that users perceived news organization Web sites as more credible than other site genres, such as e–commerce, special interest organization, or personal Web sites, for both sponsor and message credibility. Their interpretation was that the participants perceived the sponsors of both e–commerce and special interest sites as having persuasive intentions, compared to news organization sites, which do not. Similarly, Iding, et al. (2009) found that college students perceived information– or education–focused Web sites as more credible than commercial sites associated with vested interests. These studies imply that people tend to perceive interest–neutral sites as more credible than self–interested ones. In addition, some researchers have observed that information seekers perceive sites as more credible than do those who use sites for entertainment (Stavrositu and Sundar, 2008). By the same token, researchers contend that users’ motivations in evaluating the credibility of information depend on the type of information (Metzger, 2007; Rieh and Hilligoss, 2008). Therefore, genres may affect credibility judgments, and different genres may produce different effects of heuristics or peripheral cues on credibility judgments.

On the other hand, Flanagin and Metzger (2007) examined Internet users’ verification behavior in the context of credibility judgments. Interestingly, they found that there was a zero correlation between self–reported verification and observed verification behavior. Further, there was no correlation between users’ experience and observed verification behavior. That is, experienced users knew that they ought to be skeptical of Web information and should verify such information, but they did not make any effort to verify Web information.

Nonetheless, Internet users’ verification behavior may differ according to the genres or types of information. That is, users are likely more concerned with the quality of health information than with that of entertainment (Metzger, 2007). Therefore, Internet users looking for serious information may make more of an effort to evaluate it than those looking for entertainment or less serious types of information. For instance, Kim’s (2010) study of a social question and answer site demonstrates that informational questioners tend to verify information more than conversational questioners do. Similarly, Wang, et al. (2008) report that serious health information seekers are more attentive to information or content than to Web site features when evaluating site credibility than are Internet users looking for less serious information. These studies suggest that Internet users are more likely to take the further step of verifying serious information than less serious information, such as entertainment, and that similar verification behavior may be observed concerning Wikipedia. The above findings lead to the following research hypotheses:

H1. Peripheral cues influence credibility judgments of Wikipedia.
H2. Genre influences credibility judgments of Wikipedia.
H3. The effect of peripheral cues on credibility judgments differs according to genre.

 

++++++++++

Methodology

Participants

The data were collected at a large public university in the midwestern United States from late April through early May 2010. The study participants consisted of undergraduate students from four social science courses whose instructors agreed to their participation. A total of 59 students participated in the current study. This study was part of a larger study in which the participating students had the opportunity to enter a random drawing to win a US$30 gift card; a total of 25 prizes were awarded.

Data collection methods: An experiment and a survey

This study employed both an experiment and a Web survey regarding students’ credibility judgments of Wikipedia. The experiment was embedded in the survey.

The design of the experiment. The experiment took the form of a 2×2 factorial design, with the peripheral cue of a Wikipedia article (a high or low number of citations) and genre (health or comics) as factors. A total of four different screens (two screens per genre) were created for the experiment. The text of each article was held constant, while the peripheral cues of the article varied.

The Wikipedia articles. Health was selected for the representation of an information–driven genre, while comics was selected for that of an entertainment–driven one. A Wikipedia article about rapid eye movement (REM) behavior disorder (at http://en.wikipedia.org/wiki/Rapid_eye_movement_behavior_disorder) and an article about Lady Dorma, a comic character (at http://en.wikipedia.org/wiki/Lady_Dorma) were selected. As of 9 April 2010, the versions of 2 April 2010 and 5 January 2010 were the latest versions of the REM and Lady Dorma articles respectively, which were used as the bases for creating the screens of the experiment. A few modifications were made for the experiment in order to make the two articles comparable with respect to peripheral cues. The modified articles had neither a warning message nor an image, and had the same number of subheadings.

Procedure. The participants were directed to the study’s Web site via a written URL included in a solicitation e–mail. The participants were randomly assigned to one of the four screens of Wikipedia (two versions per each genre) by a computerized program. The participants would view one of the following versions:

REM 1: An REM Wikipedia article with a higher number of citations (a total of six citations) (N=12).
REM 2: An REM Wikipedia article with a lower number of citations (a total of one citation) (N=14).
Lady Dorma 1: A Lady Dorma Wikipedia article with a higher number of citations (a total of four citations) (N=18).
Lady Dorma 2: A Lady Dorma Wikipedia article with a lower number of citations (a total of one citation) (N=15).

They were instructed to read the article they were viewing. Then all participants were directed to a questionnaire that they completed online. Once directed to the questionnaire, the participants were not able to view the article again. In addition, verification behavior was examined experimentally in the following manner: the participants were asked whether they wanted to check with another source if they were unsure about the believability of the article they had read. Clicking on the “yes” option was considered evidence that they verified the information.
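The random assignment to one of the four screens can be sketched in a few lines. The condition labels below are hypothetical stand–ins for the four screens; the study’s own “computerized program” is not publicly documented, so this is only an illustrative sketch.

```python
import random

# Hypothetical labels for the four experimental screens (2 genres x 2 cue levels).
CONDITIONS = ["REM_high", "REM_low", "LadyDorma_high", "LadyDorma_low"]

def assign_condition(rng=random):
    """Assign an arriving participant to one of the four screens at random."""
    return rng.choice(CONDITIONS)

participant_screen = assign_condition()
```

Simple random assignment like this yields the unequal cell sizes (12, 14, 18, 15) seen in the study; block randomization would instead balance them.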

The survey instrument. The survey instrument was developed or modified based on relevant literature (Agosto, 2002; Cassidy, 2007; Gaziano and McGrath, 1986; Gigerenzer, 2002; Gigerenzer and Selten, 2002; Hilligoss and Rieh, 2008; Lim, 2009; Selten, 2002; Simon, 1955; Simon, 1997a; Simon, 1997b; Todd, 2002; Tseng and Fogg, 1999; Tsfati and Cappella, 2005). Table 1 presents the major variables of the survey.

 

Table 1: Major variables.
Note: *See Table 3 for the means of different genres and peripheral cues.
**Reversed scores were used to obtain a grand mean and a reliability coefficient.
(Means and standard deviations are on a 7–point scale.)

Credibility (N=58): α=.835, grand mean: 5.167*
    This article is reasonably accurate. (M=4.83, SD=.994)
    The information in this article is verifiable elsewhere. (M=5.05, SD=1.176)
    This article is reliable. (M=4.66, SD=1.236)
    This article includes major facts of the topic. (M=5.17, SD=.994)
    This article presents views fairly and without bias. (M=5.28, SD=.894)
    This article is plausible. (M=5.64, SD=.788)
    This article is believable. (M=5.55, SD=1.079)

Satisficing (N=58): α=.906, grand mean: 5.722
    Most of the time I am satisfied with Wikipedia information that is good enough for my needs. (M=5.72, SD=.894)
    Most of the time I am satisfied with reasonably accurate Wikipedia information. (M=5.45, SD=1.142)
    Most of the time I am satisfied with the speed of finding information in Wikipedia. (M=6.09, SD=.942)
    Most of the time I am satisfied with Wikipedia, which saves me time in finding information. (M=5.91, SD=1.189)
    Most of the time I am satisfied with Wikipedia, which saves me effort in finding information. (M=5.71, SD=1.243)
    Most of the time I am satisfied with Wikipedia, which demands less effort in finding information than other information sources. (M=5.71, SD=1.170)
    Most of the time I like Wikipedia, which simplifies my choices of Web information sources. (M=5.47, SD=1.246)

Aspiration (N=58): α=.781, grand mean: 5.345
    I look for good enough information in Wikipedia most of the time. (M=4.98, SD=1.291)
    Moderately accurate information in Wikipedia would be acceptable to me. (M=3.98, SD=1.660)
    I expect speedy access to information in Wikipedia. (M=5.93, SD=.989)
    I expect to spend less time on finding information in Wikipedia than in other free Web sources. (M=5.86, SD=1.099)
    I expect to save time when I use Wikipedia, compared to using other free Web sources. (M=5.74, SD=1.305)
    I expect to make less of an effort using Wikipedia than using other free Web sources. (M=5.57, SD=1.244)

Professor’s endorsement (N=57): α=.829, grand mean: 5.035
    Most of my professors prohibit me from using Wikipedia.** (M=3.035, SD=1.636)
    Most of my professors are positive about Wikipedia as an information source. (M=3.351, SD=1.469)
    Most of my professors allow me to use Wikipedia for my academic work. (M=2.719, SD=1.485)

Peer experience (N=58): α=.540, grand mean: 5.511
    My friends or peers use Wikipedia. (M=6.21, SD=.853)
    I feel comfortable using Wikipedia because my friends or peers use it. (M=4.71, SD=1.522)
    My friends or peers have said that they find useful information from Wikipedia. (M=5.62, SD=1.057)

Wikipedia as one of the top Web sources (N=59): α=.603, grand mean: 4.441
    Wikipedia is one of the few top sources I look at when I search for information on the Web. (M=4.46, SD=1.745)
    I prefer to look at Wikipedia rather than trying out a new Web information source that I am unfamiliar with. (M=4.42, SD=1.754)

Reasons for not verifying information (N=11):
    Because I DON’T look for the best information in Wikipedia anyway. (M=4.55, SD=1.753)
    Because I use Wikipedia to have an idea of the topic of my interest most of the time. (M=5.73, SD=.786)
    Because I use Wikipedia to obtain background information most of the time. (M=5.73, SD=.647)
    Because I DON’T use Wikipedia for serious purposes anyway. (M=4.82, SD=1.537)
    Because the accuracy of information is NOT the most important reason for my use of Wikipedia. (M=4.82, SD=1.662)
    Because the overall content is good enough for my needs. (M=5.55, SD=.934)
    Because I DON’T have enough time for verifying information. (M=3.91, SD=1.640)
    Because checking with other sources takes time. (M=5.27, SD=1.009)
    Because checking with other sources requires my mental effort. (M=5.09, SD=1.300)
    Because I DON’T want to make any further effort in dealing with other information sources. (M=4.73, SD=1.618)
    Because regardless of the accuracy of the information, I would use it anyway. (M=3.18, SD=1.537)
    Because convenience is one of the most important reasons for my using Wikipedia most of the time. (M=5.18, SD=1.168)
    Because accessibility is one of the most important reasons for my using Wikipedia most of the time. (M=5.27, SD=1.272)
    Because the usefulness of information is more important than the accuracy of information in using Wikipedia. (M=4.73, SD=1.272)
    Typically, I just DON’T make further effort in verifying information of any information source. (M=3.80, SD=1.619)
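The reliability coefficients (Cronbach’s α) reported in Table 1 can be reproduced from raw item responses with a short script. The responses below are made–up illustrations, not the study’s data; the formula is the standard α = k/(k−1) · (1 − Σ item variances / total variance).

```python
import statistics

def cronbach_alpha(responses):
    """Cronbach's alpha for a respondents-by-items matrix.

    responses: list of rows, one row of item scores per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of row totals)
    """
    k = len(responses[0])
    item_vars = [statistics.variance(col) for col in zip(*responses)]
    total_var = statistics.variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 7-point responses from five respondents to three items.
demo = [
    [5, 6, 5],
    [4, 4, 5],
    [7, 6, 6],
    [3, 4, 3],
    [6, 5, 6],
]
print(round(cronbach_alpha(demo), 3))  # -> 0.9
```

The α=.540 for the peer experience scale in Table 1 falls below the conventional .70 threshold this statistic is usually judged against, which is worth bearing in mind when interpreting that index.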

 

 

++++++++++

Findings

The findings were organized into three subsections and by the research questions. The first subsection presents the characteristics of participants and the answer to RQ1. The second subsection presents the results of the experiment, corresponding to RQ2 through RQ5. Along the way, the results of the hypothesis testing are reported. Finally, the third subsection answers the other exploratory research questions.

Participants and peripheral cue use

Participants. Table 2 presents the characteristics of participants.

 

Table 2: Participant characteristics.
Gender: Female, N=30 (51.7%); Male, N=28 (48.3%)
Age: Under 20, N=24 (41.4%); 20–21, N=24 (41.4%); 22 or older, N=10 (17.1%)
Race: Asian, N=10 (16.9%); Caucasian, N=44 (74.6%); Hispanic, N=1 (1.7%); Mixed, N=2 (3.4%); Other/non–response, N=2 (3.4%)
Major: Humanities and Arts, N=10 (17%); Social Sciences, N=19 (32.2%); Natural and Applied Sciences, N=19 (32.2%); Undecided, N=14 (23.7%)
School year: First year, N=19 (32.8%); Sophomore, N=12 (20.7%); Junior, N=17 (29.3%); Senior, N=8 (13.8%); Other, N=2 (3.4%)

 

RQ1. What peripheral cues do students use in their credibility judgments of Wikipedia?

The respondents reported using the following features or taking certain actions when they were uncertain about the believability of a Wikipedia article (listed from most to least used): scanned the length of an article (88.1 percent of the participants), scanned the list of contents (78 percent), scanned the references (76.3 percent), checked a warning message (67.2 percent), scanned or clicked on external links (66.1 percent), scanned the number of citations (50.8 percent) and clicked on references (45.8 percent). A small percentage of the respondents reported using other features, such as the history of edits (13.6 percent) or a discussion page (10.2 percent). It is not surprising that the participants widely used the length of an article and the lists of contents and references when they were uncertain about the believability of a Wikipedia article. On the other hand, it is worth noting that a high percentage of participants used external links. However, neither a discussion page (a highly content– or argument–oriented information cue, in which users can scrutinize the merits of the content of an article) nor the history of edits was widely used. This may be because: 1) participants might not have been aware of these features; or 2) reviewing a discussion page in particular demands additional cognitive effort that users tend not to exert unless they need to, as dual–process theories assume (Chen and Chaiken, 1999; Petty and Wegener, 1999).

The results of the experiment

RQ2. Do peripheral cues influence credibility judgments of Wikipedia?

The index mean of credibility for a high number of citations was higher than that for a low number of citations. However, the peripheral cue (a high versus low number of citations) did not make a statistically significant difference in the credibility judgments of the articles. Therefore, H1 is not supported. However, the peripheral cue had different effects within the genres: the mean difference between a high and a low number of citations within the health genre (REM article) was significant (t(24) = 2.36, p = .027), while there was no significant difference within the comics genre (Lady Dorma article). This result suggests that the peripheral cue needs to be reexamined employing larger and more diverse samples.
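The within-genre comparison above is an independent-samples t-test. A minimal pooled-variance implementation is sketched below; the scores are hypothetical stand–ins, not the study’s data, so the resulting t value will not match the reported t(24) = 2.36.

```python
import math

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic and degrees of freedom,
    comparing two groups (e.g., high vs. low citation conditions)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)  # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2  # t, df

# Hypothetical credibility index scores (NOT the study's data):
high = [38, 41, 36, 40, 39]
low = [33, 35, 34, 32, 36]
t, df = independent_t(high, low)
```

With the study’s unequal and small cell sizes (14 vs. 12), a Welch (unequal-variance) t-test would be a more conservative alternative to the pooled form shown here.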

RQ3. Does genre influence credibility judgments of Wikipedia?

There was no effect of genre on credibility. Therefore, H2 is not supported. Table 3 presents the means of credibility across genre and peripheral cues.

 

Table 3: Credibility across genre and peripheral cue.
Notes: Means are index means; the corresponding grand mean and standard deviation on a 7–point scale for each category are presented in parentheses.

                 Health                             Comics                             Total
           M             SD            N      M             SD            N      M
Low        33.79 (4.83)  3.66 (.371)   14     36.40 (5.2)   5.32 (.439)   15     35.14 (5.02)
High       38.58 (5.51)  6.52 (.342)   12     36.23 (5.18)  4.26 (.431)   18     37.18 (5.32)
Total      36.00 (5.14)  5.62 (.321)   26     36.31 (5.19)  4.69 (.432)   33     N = 59

 

RQ4. Do the effects of peripheral cues on credibility judgments differ according to genre?

The effect of a peripheral cue on credibility judgments did not differ significantly according to genre, so H3 is not supported. However, the mean values show an interesting pattern: a high number of citations was rated more credible than a low number of citations within health information, while there was little difference in credibility judgment between a high and a low number of citations within the comics genre (Table 3). Figure 1 presents these patterns. The test result (F(1, 55) = 3.622, p < 0.062, MSE = 88.77, partial η² = 0.062) suggests that the interaction between peripheral cues and genre should be reexamined with larger and more varied samples.
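As a consistency check, the partial eta squared of a one–degree–of–freedom effect follows directly from the F ratio and its degrees of freedom, and the reported values agree:

```latex
\eta_p^2 \;=\; \frac{F \cdot df_1}{F \cdot df_1 + df_2}
        \;=\; \frac{3.622 \times 1}{3.622 \times 1 + 55}
        \;=\; \frac{3.622}{58.622} \;\approx\; 0.062
```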

 

Figure 1: Interaction between genre and peripheral cues.

 

RQ5. Do students verify information? If they do not, why not?

In the experiment, 57.6 percent of the respondents verified information when they were uncertain about the believability of the article they had read. In addition to the experiment, students’ verification behavior was examined through a self–reported survey: 20.3 percent of the respondents reported that they had quit looking for further information in the past, meaning that 79.7 percent reported verifying information (Table 4). Although self–reports were higher than actual actions (79.7 percent versus 57.6 percent), the difference was not statistically significant. The experiment shows that over half of the respondents took the additional action of verifying information. In other words, to some degree this study’s finding supports other studies demonstrating a discrepancy between what Internet users say and what they actually do regarding information verification (Flanagin and Metzger, 2007; Iding, et al., 2009), although this study did not confirm that discrepancy statistically. Some possible reasons for this inconsistency are discussed in the Discussion section.

 

Table 4: Verification in experiment and survey.

                             Survey
                        Yes        No         Total
Experiment  Yes    N    29         5          34
                   %    49.2%      8.5%       57.6%
            No     N    18         7          25
                   %    30.5%      11.9%      42.4%
            Total  N    47         12         59
                   %    79.7%      20.3%      100%
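The article does not name the test used to compare self–reported and observed verification. One plausible reading (an assumption, not the authors’ stated method) is a chi–square test of association on the Table 4 cell counts, which likewise fails to reach significance:

```python
from scipy.stats import chi2_contingency

# Cell counts from Table 4: rows = verified in the experiment (yes/no),
# columns = reported verifying in the survey (yes/no).
observed = [[29, 5],
            [18, 7]]

# Chi-square test of association; scipy applies Yates' continuity
# correction by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # not significant at alpha = .05
```

Because the two measures are paired responses from the same students, a marginal–homogeneity test such as McNemar’s, which weighs only the discordant cells (5 and 18), would be the other natural reading.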

 

This study attempted to understand non–verification behavior by employing the theory of bounded rationality. A set of t–tests was performed to examine differences between the verification (57.6 percent, N=34) and non–verification (42.4 percent, N=24) groups in satisficing level, and in the difference between satisficing and aspiration levels, which answers the second part of the above question. The results show that the non–verification group displayed a higher level of satisficing than the verification group (t = -2.163, p < 0.035). Another t–test compared the two groups on the difference between satisficing and aspiration levels: the non–verification group displayed a higher level of satisficing relative to its aspiration level regarding Wikipedia than the verification group, but this difference was not statistically significant. These results may suggest that the theory of bounded rationality is only partially applicable to non–verification behavior. Further interpretations are offered in the Discussion section.

In addition, this study attempted to understand students’ non–verification behavior in an exploratory manner. That is, the students who reported that they had quit looking for further information were asked to provide their reasons for not verifying information about which they were unsure. Eleven out of 12 such students provided reasons. The two top reasons for not verifying information were using Wikipedia to obtain background information and using it to get an idea of a topic. Other highly rated reasons were: overall good enough content, easy accessibility, the time needed to check with other sources, convenience, and the mental effort needed to check with other sources. The two lowest–rated reasons (i.e., those with which the respondents disagreed) were using Wikipedia regardless of accuracy and typically making no further effort to verify information (see Table 1). The result regarding accuracy seems consistent with Lim’s (2009) study, which demonstrated that students did not use Wikipedia blindly. In addition, the respondents tended to be neutral on the reason of looking for the best information in Wikipedia (mean of 4.55 on a 7–point scale).

Other findings

Professors’ endorsement of Wikipedia, peers’ experience with Wikipedia and satisficing with Wikipedia.

This study shows that students’ instructors advised them not to use Wikipedia: the respondents reported an overall low endorsement of Wikipedia on the part of their professors. Interestingly, professors’ discouragement of Wikipedia was not related to students’ use of Wikipedia (see Table 1). This result appears consistent with Head and Eisenberg’s (2010) finding that most students use Wikipedia anyway; they simply do not tell their professors that they use it and avoid citing it in their papers. Similarly, students reported observing their friends’ or peers’ use of, and positive experience with, Wikipedia. Further, those who observed their peers’ use of Wikipedia were more likely to report Wikipedia as one of their top Web sources. There were positive correlations between peers’ experiences and Wikipedia use; between peers’ experiences and academic use of Wikipedia; and between peers’ experiences and satisficing with Wikipedia. Table 5 presents the results.

 

Table 5: Correlations.
Notes: Pearson correlation (p–value); Spearman’s rho (p–value) in italics for academic use and the other variables; *: p < .05.

                           Professor's    Peer           Top Web                        Academic       Satisficing
                           endorsement    experience     source          Use            use            with Wikipedia
Professor's endorsement    1              -.002 (.988)   .053 (.696)     -.100 (.470)   .134 (.325)    .011 (.934)
Peer experience                           1              .474* (.000)    .404* (.002)   .428* (.001)   .683* (.000)
A top Web source                                         1               .381* (.003)   .196 (.141)    .614* (.000)
Use                                                                      1              .197 (.141)    .453* (.000)
Academic use                                                                            1              .224 (.094)
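The starred coefficients in Table 5 can be sanity–checked from r and the sample size alone, since a Pearson r maps to a t statistic with n − 2 degrees of freedom. A sketch, assuming the full survey sample of N = 59 underlies each coefficient (an assumption; the article does not report pairwise Ns):

```python
import math

from scipy import stats


def pearson_p(r, n):
    """Two-sided p-value for a Pearson correlation r over n paired observations."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return 2 * stats.t.sf(abs(t), df=n - 2)


# Peer experience x Wikipedia use from Table 5: r = .404, reported p = .002.
p = pearson_p(0.404, 59)
print(f"p = {p:.4f}")  # close to the reported .002
```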

 

Academic use of Wikipedia

Finally, 51.7 percent of students tended not to use Wikipedia for academic purposes, while the remaining respondents sometimes (32.8 percent) or often (25.5 percent) used Wikipedia for academic work. Among students who had ever used Wikipedia for academic work, the overwhelming majority (90.6 percent) reported using it at an early stage.

 


Discussion

This study provides a number of implications for credibility research, educators and library practice. First, peripheral cues, measured as the number of citations, did not affect credibility judgments across the two genres. However, there was a relationship between peripheral cues and credibility judgments within the health genre, although this result calls for cautious interpretation. That is, students judged a Wikipedia article with a high number of citations as more credible than one with a low number of citations only within the health genre. This result suggests that further reexamination of peripheral cues in Wikipedia is needed. Additionally, this result has implications for both the Wikipedia community and educators. Wikipedia contributors need to make better use of peripheral cues that may help readers correctly evaluate the credibility of Wikipedia, at least for an information–driven genre. Educators need to develop a holistic approach employing peripheral cues that guides Wikipedia readers in evaluating information with which they are unfamiliar.

Further, an exploratory question regarding the use of other peripheral cues provides some useful insights for credibility research and library practice. When the respondents were uncertain about the credibility of a Wikipedia article, they tended to look at peripheral cues that they could easily process, and tended not to examine cues such as a discussion page, which demands more cognitive effort. Further research is needed to examine under which conditions users tend to use certain peripheral cues. The respondents’ use of peripheral cues can also be interpreted as the use of fast and frugal heuristics. Among the various peripheral cues, the length of an article was heavily used, which is not surprising. In addition, a high percentage of the respondents reported using external links. This result has practical implications for library and educational practice. College libraries can promote their own sources, or suggest other useful Web sources, by actively inserting them into relevant Wikipedia articles; for instance, the University of Washington Libraries use this strategy to reach out to students (Lally and Dunford, 2007). Educators themselves can introduce quality external Internet sources by posting them to relevant Wikipedia articles as external links. In addition, library educators can use the external–links feature for educational purposes: for instance, students in user education or reference courses can contribute external links to Wikipedia as course projects.

Second, there was no significant difference between the two genres regarding credibility judgment. Although more evidence is needed, this result may suggest that Wikipedia readers pay more attention to the source (Wikipedia) than to the message (the individual article) when they judge credibility. This may imply that source credibility still matters despite the blurring distinction between source and message credibility in social information sources. However, further empirical studies employing larger and more varied samples are needed before a firm conclusion on genre can be drawn.

Third, despite the lack of a significant interaction between peripheral cues and genre, it is worth noting the p–value (p < 0.062) and the interesting pattern between peripheral cues and genre presented in Figure 1, which suggests that the pattern be reexamined. It appears that the number of citations tended to have a greater impact on credibility judgments of an information–driven genre than of an entertainment–driven one. Although this result needs cautious interpretation, given the non–significant result at α = 0.05 and the smaller difference in citation numbers in the comics genre (entertainment–driven) than in health (information–driven), it can be interpreted to mean that a peripheral cue such as citations influences credibility judgments of an article on serious subjects, but may not influence those of an article on non–serious ones. Further studies employing larger and more varied samples are needed to reexamine this issue.

Fourth, in the experiment, over half of the respondents verified information that they questioned. Although the self–report figure was higher than the observed one (79.7 percent versus 57.6 percent), the difference was not statistically significant. One possible reason is that the experiment directly asked respondents whether they wanted to verify information, which may have induced conscious verification and thus higher verification rates than would occur in a natural setting. An observational study, or a different experimental method that avoids prompting conscious action, might yield different verification findings. Nonetheless, the experiment offered useful data on non–verification behavior in the following respects. The non–verification group displayed a significantly higher level of satisficing than the verification group. In addition, the data show a pattern in which the higher the respondents’ level of satisficing with Wikipedia relative to their aspiration level, the less they verified information: those who found satisficing information in Wikipedia that met or exceeded their aspiration level were less likely to make the further cognitive effort of checking another source, although this result was not statistically significant. These results can be interpreted to mean that non–verification behavior may be partially understood within the framework of bounded rationality. Further research employing larger and more varied samples is needed to reexamine whether the theory of bounded rationality applies to non–verification behavior.

In addition, the survey data from an exploratory question about reasons for not verifying information provide some other possible explanations of non–verification behavior. The participants tended to use Wikipedia for obtaining background information or initial ideas, for its good enough information, and for its easy accessibility. On the other hand, participants tended to disagree that they typically made no further verification effort, although their disagreement may reflect social desirability. This result may suggest that non–verification behavior can be understood as rational human behavior, which requires further research.

Finally, students are not discouraged from using Wikipedia, despite their professors’ general discouragement of it. Interestingly, students’ observation of their peers’ experiences with Wikipedia was correlated with their use of Wikipedia, their consideration of Wikipedia as one of their top Web sources, and their satisficing with Wikipedia. Although further empirical studies are needed, the results appear consistent with recent studies demonstrating that, for the Net generation, peer endorsement is becoming more important than formal authorities in the acceptance of information sources in networked environments (Flanagin and Metzger, 2008; Ito, et al., 2009). Further research is needed to examine whether and how social endorsement plays out in students’ credibility judgments of social information sources.

In addition, the majority of students tended not to use Wikipedia for academic purposes, and those who did tended to use it at an early stage of their academic work. This result is consistent with Head and Eisenberg’s (2010) study, which demonstrated that Wikipedia was used at the beginning of the research process. Moreover, this study shows that students use Wikipedia as a complementary source, which is also consistent with Head and Eisenberg’s (2010) finding that the majority of students use Wikipedia but that it is not the only source to which they turn. When uncertain about the believability of a Wikipedia article, the majority of respondents reported checking with other sources they trust, including library sources.

 


Conclusions

The major findings of this study include the following. Peripheral cues were not related to credibility judgments across genres, although they affected credibility judgments within the health genre. Genre was not related to credibility judgment. The effect of peripheral cues appeared to matter more for the health genre than for the entertainment genre, but this effect was not statistically significant, suggesting further research. With respect to verification behavior, although self–reported verification was more frequent than verification in the experiment, the difference was not statistically significant. Finally, the non–verification group displayed a higher level of satisficing than the verification group, and the higher the respondents’ level of satisficing with Wikipedia relative to their aspiration level, the less they verified information, although the latter result was not statistically significant. These patterns may indicate that non–verification behavior can be explained within the framework of the theory of bounded rationality, which suggests further research.

The research hypotheses were not supported at the chosen Type I error rate. That is, the researchers took a five percent risk of wrongly supporting the research hypotheses, which is common in the social sciences; with a higher risk (e.g., 10 percent), some research hypotheses would have been supported. Nonetheless, this study contributes to credibility research concerning Wikipedia in the following respects. First, it revealed that students did indeed use various peripheral cues of Wikipedia when they were uncertain about the believability of its articles. Second, it made a first attempt to understand non–verification behavior and offered some new insights by employing the theory of bounded rationality, despite only partially confirming the theory. It also introduced a useful framework, fast and frugal heuristics, for understanding information decision–making in Web 2.0 environments, where Internet users face an abundance of information of widely varying quality. Third, the study has practical usefulness, suggesting ways of promoting library sources and introducing quality sources to students and Wikipedia users. Finally, the exploratory data revealed that peers’ experiences with Wikipedia were correlated with students’ use of Wikipedia and their consideration of Wikipedia as one of their top Web sources. This suggests that social endorsement through peers may be more important than formal authorities, such as professors, for user–generated information sources such as Wikipedia in networked environments, which calls for further research.

This study has certain limitations, and a few suggestions for further research have emerged from it. First, the experimental conditions were less than ideal. The study used two existing articles, with minor modifications to make them comparable; even so, the comparability of the two articles across the two genres was less than desirable, owing to the practical difficulty of selecting two articles equivalent in peripheral cues. Further research should re–examine the effects of genre and peripheral cues on credibility judgments by creating comparable articles rather than using existing ones. Second, the selection of the two genres was limited: only one topic (one article) per genre was selected, on the basis of comparable length. It may be useful to compare the two genres using several topics with the same Wikipedia quality ratings. The selection of articles for the current experiment also focused on whether the topics represented an information–driven or an entertainment–driven genre rather than on whether they were controversial; in fact, researchers suggest that topics for credibility research need to be controversial enough to raise questions about credibility (Hu and Sundar, 2010). Further research should examine the effects of the same independent variables (peripheral cues and genre) on credibility judgments using more controversial topics than the current ones. Third, this study examined only the number of citations as a peripheral cue; other cues, such as a warning message, images or external links, need to be examined. Further, this study did not take into account the quality of the citations, which may lead to different credibility judgments; the effect of citation quality on credibility judgments merits further research. Fourth, this study examined users’ verification behavior by directly asking whether they wanted to verify information. An experiment employing indirect methods might yield different findings; for example, providing an option for verification without asking a direct question would be a more effective way to examine verification behavior.

It appears that the theory of bounded rationality, including fast and frugal heuristics, may be applicable to non–verification behavior, despite the failure to confirm the theory statistically with this particular sample. Further research is needed to further integrate bounded rationality and fast and frugal heuristics into the understanding of credibility judgments of other user–generated information sources. Finally, the exploratory data of the current study appear to show the importance of peer experience in students’ acceptance of Wikipedia. Further research is needed to examine the relationship between peer endorsement and credibility judgments by specifying peer or social endorsement.

 

About the authors

Sook Lim is an associate professor in Library and Information Science at St. Catherine University. She studies human information behavior in Web 2.0 environments. She has published articles in the Journal of the American Society for Information Science and Technology, the Journal of Academic Librarianship, and Library & Information Science Research. For further information, visit her Web site at http://sooklim.org.

Christine Simon is finishing her master’s degree in Library and Information Science at St. Catherine University. Before entering the library science program, she worked as a software developer building engineering and business software. She is interested in all aspects of science and technology librarianship, systems librarianship and corporate libraries. Currently she serves as a co–leader of the Student Chapter of the Special Libraries Association at St. Catherine University.

The authors wrote the methodology and findings sections for a general audience. Please contact the first author for further information regarding the two sections.

 

Acknowledgements

This study was funded by a Faculty Research and Scholarly Activities Grant at St. Catherine University. We would especially like to thank Dr. Minjung Park at the University of California, Berkeley for her great help with the data collection for this study. We would also like to express our appreciation to Nicole Pankiewicz and the instructors at the University of Minnesota, Twin Cities who agreed to their students’ participation in this study.

 

Notes

1. Flanagin and Metzger, 2008, p. 8.

2. Rieh, 2010, p. 1,338.

3. Simon, 1997a, p. 88.

4. Gigerenzer and Selten, 2002, p. 3.

5. Todd, 2002, p. 53.

6. Todd, 2002, p. 57.

7. Gigerenzer, 2002, p. 48.

 

References

Denise Agosto, 2002. “Bounded rationality and satisficing in young people’s Web–based decision making,” Journal of the American Society for Information Science and Technology, volume 53, number 1, pp. 16–27. http://dx.doi.org/10.1002/asi.10024

Cheryl C. Bracken, 2006. “Perceived source credibility of local television news: The impact of television form and presence,” Journal of Broadcasting & Electronic Media, volume 50, number 4, pp. 723–741. http://dx.doi.org/10.1207/s15506878jobem5004_9

Michael Byron, 1998. “Satisficing and optimality,” Ethics, volume 109, number 1, pp. 67–93. http://dx.doi.org/10.1086/233874

William P. Cassidy, 2007. “Online news credibility: An examination of the perceptions of newspaper journalists,” Journal of Computer–Mediated Communication, volume 12, number 2, at http://jcmc.indiana.edu/vol12/issue2/cassidy.html, accessed 30 July 2010.

Serena Chen and Shelly Chaiken, 1999. “The heuristic–systematic model in its broader context,” In: Shelly Chaiken and Yaacov Trope (editors). Dual–process theories in social psychology. New York: Guilford Press, pp. 73–96.

Gunther Eysenbach, 2008. “Credibility of health information and digital media: New perspectives and implications for youth,” In: Miriam J. Metzger and Andrew J. Flanagin (editors). Digital media, youth, and credibility. Cambridge, Mass.: MIT Press, pp. 123–154.

Andrew J. Flanagin and Miriam J. Metzger, 2008. “Digital media and youth: Unparalleled opportunity and unprecedented responsibility,” In: Miriam J. Metzger and Andrew J. Flanagin (editors). Digital media, youth, and credibility. Cambridge, Mass.: MIT Press, pp. 5–28.

Andrew J. Flanagin and Miriam J. Metzger, 2007. “The role of site features, user attributes, and information verification behaviors on the perceived credibility of Web–based information,” New Media & Society, volume 9, number 2, pp. 319–342. http://dx.doi.org/10.1177/1461444807075015

Cecilie Gaziano and Kristin McGrath, 1986. “Measuring the concept of credibility,” Journalism Quarterly, volume 63, number 3, pp. 451–462. http://dx.doi.org/10.1177/107769908606300301

Gerd Gigerenzer, 2008. “Bounded and rational,” In: Gerd Gigerenzer. Rationality for mortals: How people cope with uncertainty. New York: Oxford University Press, pp. 3–19.

Gerd Gigerenzer, 2002. “The adaptive toolbox,” In: Gerd Gigerenzer and Reinhard Selten (editors). Bounded rationality: The adaptive toolbox. Cambridge, Mass.: MIT Press, pp. 37–50.

Gerd Gigerenzer and Reinhard Selten, 2002. “Rethinking rationality,” In: Gerd Gigerenzer and Reinhard Selten (editors). Bounded rationality: The adaptive toolbox. Cambridge, Mass.: MIT Press, pp. 1–12.

Frances J. Harris, 2008. “Challenges to teaching credibility assessment in contemporary schooling,” In: Miriam J. Metzger and Andrew J. Flanagin (editors). Digital media, youth, and credibility. Cambridge, Mass.: MIT Press, pp. 155–179.

Alison J. Head and Michael B. Eisenberg, 2010. “How today’s college students use Wikipedia for course–related research,” First Monday, volume 15, number 3, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2830/2476, accessed 30 July 2010.

Brian Hilligoss and Soo Young Rieh, 2008. “Developing a unifying framework of credibility assessment: Concept, heuristics, and interaction in context,” Information Processing and Management, volume 44, number 4, pp. 1,467–1,484.

Yifeng Hu and Shyam S. Sundar, 2010. “Effects of online health sources on credibility and behavioral intentions,” Communication Research, volume 37, number 1, pp. 105–132. http://dx.doi.org/10.1177/0093650209351512

Benjamin Hughes, Jonathan Wareham and Indra Joshi, 2010. “Doctors’ online information needs, cognitive search strategies, and judgments of information quality and cognitive authority: How predictive judgments introduce bias into cognitive search models,” Journal of the American Society for Information Science and Technology, volume 61, number 3, pp. 433–452.

Marie K. Iding, Martha E. Crosby, Brent Auernheimer and Barbara E. Klemn, 2009. “Web site credibility: Why do people believe what they believe?” Instructional Science, volume 37, number 1, pp. 43–63. http://dx.doi.org/10.1007/s11251-008-9080-7

Mizuko Ito, Heather Horst, Matteo Bittanti, danah boyd, Becky Herr–Stephenson, Patricia G. Lange, C.J. Pascoe, and Laura Robinson, 2009. Living and learning with new media: Summary of findings from the digital youth project. Cambridge, Mass.: MIT Press.

Jakob D. Jensen, 2008. “Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists’ and journalists’ credibility,” Human Communication Research, volume 34, number 3, pp. 347–369. http://dx.doi.org/10.1111/j.1468-2958.2008.00324.x

Kirsten A. Johnson and Susan Wiedenbeck, 2009. “Enhancing perceived credibility of citizen journalism Web sites,” Journalism & Mass Communication Quarterly, volume 86, number 2, pp. 332–348. http://dx.doi.org/10.1177/107769900908600205

Soojung Kim, 2010. “Questioners’ credibility judgments of answers in a social question and answer site,” Information Research, volume 15, number 1, at http://InformationR.net/ir/15-2/paper432.html, accessed 30 July 2010.

Ann M. Lally and Carolyn E. Dunford, 2007. “Using Wikipedia to extend digital collections,” D–Lib Magazine, volume 13, numbers 5/6, at http://www.dlib.org/dlib/may07/lally/05lally.html, accessed 30 July 2010.

David Lankes, 2008. “Trusting the Internet: New approaches to credibility tools,” In: Miriam J. Metzger and Andrew J. Flanagin (editors). Digital media, youth, and credibility. Cambridge, Mass.: MIT Press, pp. 101–121.

Sook Lim, 2009. “How and why do college students use Wikipedia?” Journal of the American Society for Information Science and Technology, volume 60, number 9, pp. 2,189–2,202.

Miriam J. Metzger, 2007. “Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research,” Journal of the American Society for Information Science and Technology, volume 58, number 13, pp. 2,078–2,091.

John Newhagen and Clifford Nass, 1989. “Differential criteria for evaluating credibility of newspapers and TV news,” Journalism Quarterly, volume 66, number 2, pp. 277–284. http://dx.doi.org/10.1177/107769908906600202

Richard E. Petty and Duane T. Wegener, 1999. “The elaboration likelihood model: Current status and controversies,” In: Shelly Chaiken and Yaacov Trope (editors). Dual–process theories in social psychology. New York: Guilford Press, pp. 41–72.

Stephen A. Rains and Carolyn D. Karmikel, 2009. “Health information–seeking and perceptions of Web site credibility: Examining Web–use orientation, message characteristics, and structural features of Web sites,” Computers in Human Behavior, volume 25, number 2, pp. 544–553. http://dx.doi.org/10.1016/j.chb.2008.11.005

Marc–André Reinhard and Siegfried L. Sporer, 2010. “Content versus source cue information as a basis for credibility judgments: The impact of task involvement,” Social Psychology, volume 41, number 2, pp. 93–104. http://dx.doi.org/10.1027/1864-9335/a000014

Soo Young Rieh, 2010. “Credibility and cognitive authority of information,” In: Marcia J. Bates and Mary N. Maack (editors). Encyclopedia of library and information science. Third edition. New York: CRC Press, pp. 1,337–1,344.

Soo Young Rieh and Brian Hilligoss, 2008. “College students’ credibility judgments in the information seeking process,” In: Miriam J. Metzger and Andrew J. Flanagin (editors). Digital media, youth, and credibility. Cambridge, Mass.: MIT Press, pp. 49–72.

Soo Young Rieh and David R. Danielson, 2007. “Credibility: A multidisciplinary framework,” Annual Review of Information Science and Technology, volume 41, pp. 307–364. http://dx.doi.org/10.1002/aris.2007.1440410114

Reinhard Selten, 2002. “What is bounded rationality?” In: Gerd Gigerenzer and Reinhard Selten (editors). Bounded rationality: The adaptive toolbox. Cambridge, Mass.: MIT Press, pp. 13–36.

Herbert A. Simon, 1997a. “The psychology of administrative decisions,” In: Herbert A. Simon. Administrative behavior: A study of decision–making processes in administrative organizations. Fourth edition. New York: Free Press, pp. 92–117.

Herbert A. Simon, 1997b. “Rationality in administrative behavior,” In: Herbert A. Simon. Administrative behavior: A study of decision–making processes in administrative organizations. Fourth edition. New York: Free Press, pp. 87–91.

Herbert A. Simon, 1979. “Rational choice and the structure of the environment (1956),” In: Herbert A. Simon. Models of thought. New Haven, Conn.: Yale University Press, pp. 20–28.

Herbert A. Simon, 1955. “A behavioral model of rational choice,” Quarterly Journal of Economics, volume 69, number 1, pp. 99–118. http://dx.doi.org/10.2307/1884852

Carmen Stavrositu and S. Shyam Sundar, 2008. “If Internet credibility is so iffy, why the heavy use? The relationship between medium use and credibility,” CyberPsychology & Behavior, volume 11, number 1, pp. 65–68. http://dx.doi.org/10.1089/cpb.2007.9933

Peter M. Todd, 2002. “Fast and frugal heuristics for environmentally bounded minds,” In: Gerd Gigerenzer and Reinhard Selten (editors). Bounded rationality: The adaptive toolbox. Cambridge, Mass.: MIT Press, pp. 51–70.

Zakary L. Tormala and Richard E. Petty, 2004. “Source credibility and attitude certainty: A metacognitive analysis of resistance to persuasion,” Journal of Consumer Psychology, volume 14, number 4, pp. 427–442. http://dx.doi.org/10.1207/s15327663jcp1404_11

Shawn Tseng and B.J. Fogg, 1999. “Credibility and computing technology,” Communications of the ACM, volume 42, number 5, pp. 39–44. http://dx.doi.org/10.1145/301353.301402

Yariv Tsfati and Joseph N. Cappella, 2005. “Why do people watch news they do not trust? The need for cognition as a moderator in the association between news media skepticism and exposure,” Media Psychology, volume 7, number 3, pp. 251–271. http://dx.doi.org/10.1207/S1532785XMEP0703_2

Zuoming Wang, Joseph B. Walther, Suzanne Pingree, and Robert P. Hawkins, 2008. “Health information, credibility, homophily, and influence via the Internet: Web sites versus discussion groups,” Health Communication, volume 23, number 4, pp. 358–368. http://dx.doi.org/10.1080/10410230802229738

Barbara Warnick, 2004. “Online ethos: Source credibility in an ‘authorless’ environment,” American Behavioral Scientist, volume 48, number 2, pp. 256–265. http://dx.doi.org/10.1177/0002764204267273

C. Nadine Wathen and Jacquelyn Burkell, 2002. “Believe it or not: Factors influencing credibility on the Web,” Journal of the American Society for Information Science and Technology, volume 53, number 2, pp. 133–144. http://dx.doi.org/10.1002/asi.10016

 


Editorial history

Received 27 January 2011; revised 10 March 2011; revised 13 March 2011; accepted 15 March 2011.


“Credibility judgment and verification behavior of college students concerning Wikipedia” by Sook Lim and Christine Simon is licensed under a Creative Commons Attribution–NonCommercial–NoDerivs 3.0 Unported License.

Credibility judgment and verification behavior of college students concerning Wikipedia
by Sook Lim and Christine Simon.
First Monday, Volume 16, Number 4 - 4 April 2011
http://firstmonday.org/ojs/index.php/fm/article/view/3263/2860





A Great Cities Initiative of the University of Illinois at Chicago University Library.

© First Monday, 1995-2017. ISSN 1396-0466.