
Playing science? Environmentally focused think tanks and the new scientific paradigm by Kimberly Douglass and Sarah Tanner



Abstract
Although research published by think tanks is generally studied for its contributions to policy discourses, this study finds that think tank–authored studies also affect scientific scholarly communications. Think tanks clearly represent political interests. However, this study shows that their exclusion from scientific rhetoric is not a matter of their failing to meet the community’s standards; it is a matter of ideology, which helps maintain a socially constructed boundary between scientific and political domains. The scientific community’s instinct to construct this boundary remains strong despite the emergence of a more inclusive scientific paradigm. This study takes an empirical view of the dispersion of research products generated by think tanks throughout scientific discourses. It does so through a type of bibliometrics called publication analysis, which looks at the distribution of an author’s or institution’s research products over time. This paper uses the number of articles found in scholarly publications as a proxy for productivity and as a measure of academia’s acceptance of ideas developed in those papers. Additional metrics include impact factor and Eigenfactor scores of journals where think tank–authored papers are published and the journals’ rankings within their respective disciplines.

Contents

Introduction
1. Background
2. Literature review
3. Methods
4. Analysis and results
5. Discussion
Conclusions

 


 

Introduction

This paper develops one of the many story lines in a broader narrative about how modern science is conducted. It opens a discussion about the contributions that environmentally focused think tanks make to the scientific record. The scientific record, in the abstract, is the “collective wisdom of hundreds of thousands of authors” [1]. However, it is also a collective, physical record. This record was once limited to peer–reviewed articles and books, but it is expanding to include peer–reviewed data, Web posts, blogs, and other media as well (Abbott, 2009).

A think tank is “a specific kind of organization, namely a private group, usually but not always privately funded, producing arguments and data aimed at influencing specific sectors of public policy” [2]. Think tanks are known for their applied work, e.g., policy testimony, white papers, proceedings papers, abstracts, and reports related to public policy. However, the growing complexity of the work done by environmental think tanks and the premium that the contemporary science community places on applied science suggest that think tanks matter in scientific as well as policy domains (Abelson, 2009).

Our study demonstrates that scholarship produced by different types of think tanks indeed contributes to the scientific record, even under a strict understanding of the scientific record. This study highlights inattention in scholarly communication to the roles think tanks play in scientific domains, despite the complexities (related to society and science) of the physical world. Since boundaries between science and other cognitive domains are unsustainable constructions (Gieryn, 1983), the institutions themselves become the boundaries. However, these substitutions weed out well–informed contributors, while forcing the scientific community to contend with representatives of commercial interests masquerading as academics.

 

++++++++++

1. Background

Due to advances in technology, a number of sophisticated tools, such as metadata systems, technical cyberinfrastructures, and best practices, are coming online to assist researchers in developing, documenting, and storing their data for the advancement of science. These include tools for data archiving, documentation, and replication. However, continuing advancement requires program and technical developers to understand researchers’ needs. While discussions about researchers’ technical needs appear to be aimed largely at scholars in academia, academics make up only one group within global communities of research and practice (National Science Foundation, 2007). The focus by cyberinfrastructure developers on knowledge development in academic circles suggests that other members of epistemic communities, such as government officials, government managers, policy entrepreneurs, and policy analysts (Abelson, 2009), are undervalued and potentially underserved creators of new knowledge. It is important for institutions, developers, and researchers themselves to understand the contributions a range of groups make to knowledge development, as well as the research environments in which they make those contributions. For example, in what scholars label the “Fourth Paradigm of Science” [3], the status of applied research is currently elevated in ways unseen in previous paradigms. To understand non–academic researchers’ contributions to this more inclusive model of science, it makes sense to examine the contributions of those institutions that have historically depended upon applied research to achieve their goals.

 

++++++++++

2. Literature review

No think tank engages in a discrete set of activities (Mead, 1985; Stone, 2004; 2007) because their approaches vary according to their intended audiences. Therefore, rather than referring to a think tank prototype, it is more useful methodologically to examine think tank typologies. Abelson (2009) points out, however, the risk of misrepresenting think tanks in typologies because of overlap in the ways different types of think tanks conduct their business. Just the same, a well–conceptualized schema must be used in order to move the discussion along.

  a. Typology of think tanks

Table 1 provides an overview of the types of think tanks examined in this study. The spectrum of organization types represented here ranges from those believed to be more engaged in original research to those believed to be more engaged in analysis and synthesis of existing research (Stone, 2007). Typologies represented elsewhere include think tanks that “contract/consult, [operate for] profit/political party, act as agents of governmental organizations” [4]. Table 1 is an important piece of this study because it applies a unifying framework (Abelson, 2009) and helps fortify the “conceptual weaknesses” [5] that have created barriers to the study of think tanks.

 

Table 1: Typology of think tanks.
(Types are ordered from those believed to be more engaged in original research to those believed to be more engaged in analysis and synthesis of existing research; each characterization below applies to a range of adjacent types along this spectrum.)

UC: University Based Research Centers
SC: Synthesis Centers
PI: Policy Research Institutes [6]
NGO: Advocacy Organizations with Research Arms [7]

Characterizations from the literature, ordered roughly along the same spectrum:
Address issues relevant to society and engage a range of communities, including policy–makers [8]
Policy Focused University Institute [9]
“Scholarly Ink” [10]
“Academic university without students” [11]
“Policy relevant” [12]
“Policy Research Institute” [13]
“Advocacy Organization with a Research Arm” [14]
“Think and do tank, which implements or execut[es] community programmes, policy trials, evaluat[es] programmes, monitor[s], and so forth” [15]
“combine a strong policy, partisan or ideological bent with aggressive salesmanship” [16]
“marketing and repackaging; emphasis on producing brief reports and gaining access to the media” [17]
“Policy enterprise” [18]
“Policy enterprise” [19]; “Advocacy” [20]

 

     b. University based research centers

Tables 2 through 5 provide details about the types of organizations listed in Table 1. These organizations are all based in the United States. The think tanks listed in Table 2 are connected to universities. They were drawn from the “Top Thirty Environmental Think Tanks” annual index of a broad range of international think tanks [21]. They were evaluated based on a number of criteria, including their ability to serve as crosswalks from academics to policy–makers to the public.

 

Table 2: University based research centers.
Center | Research focus
Earth Institute, Columbia University
“Brings together the people and tools needed to address some of the world’s most difficult problems, from climate change and environmental degradation, to poverty, disease and the sustainable use of resources. It is a blend of scientific research, education, and practical solutions to help build a path towards sustainability” [22]
Program on Energy and Sustainable Development (PESD), Stanford University
“Draws on the fields of economics, political science, law, and management to investigate how the production and consumption of energy affect human welfare and environmental quality” [23]

 

     c. NSF–funded synthesis centers

The synthesis centers described in Table 3 implement the ideals and values of interdisciplinary basic and applied science encouraged by the philosophy of the Fourth Paradigm (Abbott, 2009). The synthesis centers are a new kind of organization, funded by the National Science Foundation (NSF) and others beginning in 1995. It is important to note that the synthesis centers do not show up in “traditional” think tank schemas (Abelson, 2009), although they are funded by NSF for the explicit purpose of scientific collaboration among a range of stakeholders. They fit several conceptualizations of think tanks because including policy–makers in collaborative processes, and influencing them, is important to their missions.

 

Table 3: NSF–funded synthesis centers.
Center | Research focus
NCEAS
National Center for Ecological Analysis and Synthesis Center, University of California, Santa Barbara
“Supports cross–disciplinary research that uses existing data to address major fundamental issues in ecology and allied fields and encourages the application of science to management and policy” [24]
NESCent
National Evolutionary Synthesis Center, Duke University
“Promotes the synthesis of information, concepts and knowledge to address significant, emerging, or novel questions in evolutionary science and its applications” [25]
NIMBios
National Institute for Mathematical and Biological Synthesis, University of Tennessee
“Develops creative solutions to today’s complex biological problems; endeavors to collaborate mathematics and biology; evaluates the ‘scientific stories’ we tell to better understand nature and project the implications of our actions on the biological system on all scales” [26]

 

     d. Stand–alone political/policy institutes

Table 4 lists stand–alone (separate from academic institutions) think tanks in the United States. Like the list of university research centers, the list was informed by the “Top Thirty” index.

 

Table 4: Stand–alone political/policy institutes.
Center | Research focus
Consultative Group on International Agricultural Research (CGIAR)
“a global partnership that unites organizations engaged in research for a food secure future” [27]
EarthWatch
“Supports scientific field research related to sustainable development conducted by leading scientists in a broad range of disciplines, from habitat management to health care. Earthwatch’s Research Program provides support where funding is typically limited, to scientists from developing countries, women in science, and long–term monitoring projects” [28]
Resources for the Future (RFF)
“Conducts independent research rooted primarily in economics and other social sciences on environmental, energy, natural resource and environmental health issues; focuses on energy and climate, regulating risks, transportation and urban land, the natural world, and health and environment” [29]
World Resources Institute (WRI)
“Addresses global resource and environmental issues such as global climate change, sustainable markets, ecosystem protection, and environmentally responsible governance; measures success in the form of new policies, products, and practices that shift the ways governments work, companies operate, and people act” [30]
WorldWatch Institute
“Focuses on the challenges of climate change, resource degradation, population growth, and poverty by developing and disseminating data and innovative strategies for achieving a sustainable society” [31]

 

     e. Non–governmental organizations (NGOs) with research arms

Table 5 lists organizations that stand in contrast to the university–based research centers. While the organizations listed in the table are known for their advocacy work, they also have research arms (Stone, 2007). These organizations were selected because of their affiliations with a coalition of environmental think tanks [32]. However, this is not an exhaustive list of such organizations.

 

Table 5: Non–governmental organizations (NGOs) with research arms.
Center | Research focus
Environmental Defense Fund (EDF)
“Takes a multi–disciplinary approach working with businesses, government, and communities to help preserve the natural systems on which life depends; focuses on the biosphere, including climate, oceans, ecosystems, and health” [33]
National Audubon Society
“Attempts to conserve and restore natural ecosystems, focusing on birds, other wildlife, and their habitats for the benefit of humanity and the Earth’s biological diversity” [34]
Nature Conservancy
“Works around the world to protect ecologically important lands and waters for nature and people” [35]
World Wildlife Fund (WWF)
“Protects and restores species and their habitats; strengthens local communities’ ability to conserve the natural resources they depend upon; transforms markets and policies to reduce the impact of the production and consumption of commodities; mobilizes hundreds of millions of people to support conservation” [36]

 

     f. Operational categories

The cases studied here are summarized in Table 6.

 

Table 6: Summary table of subjects.
Organization | Year founded
NIMBios — University of Tennessee, Knoxville | 2008
NESCent — Duke University | 2004
Program on Energy and Sustainable Development (PESD) — Stanford University | 2001
Earth Institute — Columbia University | 1995
NCEAS — University of California, Santa Barbara | 1995
World Resources Institute | 1982
WorldWatch Institute | 1974
Consultative Group on International Agricultural Research (CGIAR) | 1971
EarthWatch | 1971
Environmental Defense Fund | 1967
World Wildlife Fund | 1961
Resources for the Future | 1952
Nature Conservancy | 1951
National Audubon Society | 1905

 

     g. Expertise

Think tanks seek to influence policy decisions. Policy–makers’ need for legitimacy in the decisions they make is increasingly extending the work of environmental think tanks into science–related domains [37]. The need for legitimacy is acknowledged in the popularity of claims of evidence–based policy making (Bertelli and Wenger, 2009). For good or bad, legislators may use the authority of “[S]cience” to wage political battles (Sarewitz, 2004). They have wide latitude in choosing scientific claims to bolster their positions because objectivity is not the merit that legislators, and therefore think tanks, seek. Both find legitimacy in the credibility of the information provided by think tanks. That credibility comes when their language speaks to defensible realities. Think tanks establish their credibility in a number of ways that are not necessarily related to the scientific method: they hire experts, create academic–looking journals, and market the credentials of their researchers (Stone, 2007; Mirowski, 2008; Medvetz, 2010).

     h. Specialization

The variability in the intellectual products generated by think tanks appears to be related to their need to establish credibility among an ideologically diverse spectrum of policy–makers. Think tanks employ a number of legitimizing strategies that also hold implications for their contributions to the scientific record. McGann identifies a trend toward specialization of think tanks since the 1970s. This trend is fostering “specialist [or] boutique think tanks” [38]. McGann attributes this increasing specialization to the demand for analysts who can provide information to legislators about increasingly complex issues.

Increased specialization suggests a demand for more concrete evidence to support the claims think tanks make, thus carving out room for scientifically based analysis. However, the productivity of think tanks has been evaluated based largely on its contribution to policy discussions. The separate but complementary roles think tanks often play in policy development as research units and as advocacy groups [39] have been documented in a body of scholarly research that appropriately measures effectiveness of think tanks by their influence on policy processes.

The intellectual products generated by entities working for think tanks are intended to be “agenda setting” material for public policy [40]. As a result, however, understanding of what a think tank is and what it does has evolved around that focus (Rodgers, 1989). However, participation by think tanks in larger epistemic communities, which include academic researchers, suggests that their reach extends far beyond immediate policy arenas and that conceptual treatment of think tanks should extend as far (Tofthagen and Fagerstrøm, 2010; Rodgers, 1989).

The increasingly science–related work that environmental think tanks do makes questions about its scientific value salient. Think tanks are involved in basic research, analysis of secondary data, synthesis of previous findings, and/or branding their policy findings as science–based. However, segments of the research about think tanks treat all think tanks as equal in function. While this view renders the work done by think tanks irrelevant to discussions about the production of scientific knowledge, there are important differences in the substance and style of their various products (Stone, 2007). Think tanks participate in the publication of white papers, reports, proceedings papers, press releases, briefs, as well as “scientific validation” [41]. In this paper, the validation process of think tanks is measured by the number of publications in science and social science journals and the reputations of those journals. These publications represent output (McGann, 2007) and help measure the receptivity of the intellectual community to the organization’s ideas. As measures of output and receptivity, these publications help quantify how think tanks contribute to the scientific record.

     i. Scientific context

The rigid dichotomy between science and other thought domains that preserved scientific authority in the past no longer serves the status quo. The need for increasingly complex understandings of our physical world imposes new standards of scientific credibility and relevance (Keller, 2010). The new scientific paradigm achieves credibility through the construction of an inextricable link with technology (computational science and related tools). The new paradigm achieves relevance through the rhetoric about interdisciplinary approaches to science, which suggests holistic, methodologically inclusive strategies.

The new scientific paradigm elevates the status of applied science. However, in doing so, it necessarily reinforces the construct of applied science as something separate from basic, “real” science. This type of equivocation allows industry groups to effectively market “science” that supports their interests. They merely exploit the cognitive boundaries that traditionalists construct between science and other domains of knowledge production. This bright line is difficult to erase when it becomes inconvenient, as when industry effectively dons the trappings of scientific authority. It is also challenging for traditionalists to discount such authors’ legitimacy when science itself is a construction bred by a range of political agendas, whether the agenda is to secure more funding for research or to engage in social experiments (Gieryn, 1983).

 

++++++++++

3. Methods

Recent developments in science and public policy, as well as developments in the ways both are conceptualized, suggest that sociologists of science should take a closer look at the outputs of environmental think tanks to measure their contributions to the scientific record. This study examines the contributions of a range of environmental think tanks: research synthesis centers funded by the NSF (and others); university research institutes; policy/political institutes; and advocacy groups (referred to here as non–governmental organizations or NGOs) with research arms.

This study uses publication analysis to gauge the level at which literature generated by think tanks has penetrated scientific discourses. It adds empirical insight to a body of work about think tanks that has focused primarily on theoretical development (Mead, 1985; Medvetz, 2010) and is therefore conceptually vulnerable (Tofthagen and Fagerstrøm, 2010). Since think tank researchers are often actors in “policy or epistemic communities,” which also include government officials and academics who develop scientific studies for decision–making processes, their influence extends beyond policy domains [42].

The literature review suggests that scholarship produced by think tanks contributes to the scientific record. It further suggests that the size and intellectual value of these contributions within scientific communities are greatest among the synthesis centers and the university research centers and smaller among the policy research institutes and the NGOs.

This study provides an empirical view of the dispersion throughout scientific discourses of the research products generated by think tanks. It does so through a type of bibliometrics called publication analysis, which looks at the distribution of an author’s or institution’s research products over time (Koskinen, et al., 2008). This paper uses the number of articles found among scholarly publications as a proxy for productivity [43] and as a measure of academia’s acceptance of the ideas developed in the articles (McGann, 2007). Additional measures include Impact Factor (IF) scores and Eigenfactor (EF) scores of the 137 journals where think tank–authored papers were published, and the journals’ rankings within their respective disciplines.
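To make this procedure concrete, the following is a minimal sketch of the tabulation step in publication analysis, assuming article records have already been exported from a citation database; the field names and sample values are hypothetical, not the actual study data.

```python
# Minimal sketch of the tabulation step in publication analysis; field
# names and sample values are hypothetical, not the study's actual data.
from collections import Counter

records = [
    {"institution": "NCEAS", "year": 2007, "journal": "Ecology Letters"},
    {"institution": "Earth Institute", "year": 2007, "journal": "Nature"},
    # ... one dict per article returned by an address-field search
]

# Total articles per institution in the study window: the productivity proxy.
output = Counter(r["institution"] for r in records if 2001 <= r["year"] <= 2010)

# Articles per institution per year: the per-year counts behind trend views
# such as Figure 2.
by_year = Counter((r["institution"], r["year"]) for r in records)

for org, n in output.most_common():
    print(f"{org}: {n} articles (2001-2010)")
```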

  a. Scope

The findings for this study are based upon records drawn from over 8,200 science journals and over 2,900 social science journals that were published between 2001 and 2010 [44]. The beginning year reflects what McGann [45] describes as the “6th Wave [of think tanks characterized by] Globalization and the War on Terror,” because globalization of environmental problems factors into the complexities that require think tanks to specialize [The period prior was the “Conservative War of Ideas (1980–2005)”] [46].

     b. Empirical tests

  1. Researchers conducted an abstract–only (no full text) search of Web of Science [47] using the address field to identify works in which a think tank researcher was involved or the think tank was an authorized publisher of the article. The address field indicates an institutional affiliation. Once the system returned records, researchers for this study randomly checked individual records to ensure their relevance to the research questions asked in this study. Researchers then noted the publication titles and the IF and EF scores for the publications.

    IF and EF scores serve as proxies for the influence of journal titles within a specific time period; they indicate the relative value of journals. As the standard bearer, IF considers the average number of citations of articles within a journal over a two–year period [48] (a worked sketch of this computation follows this list). However, both IF and EF scores were used in this study because EF paints a more accurate picture of receptivity by the scientific community; it controls for self–citations, while giving more weight to more prestigious journals [49]. It also paints a more accurate picture of connectedness and centrality because it places each journal in only one category, unlike IF, which may list journals in multiple categories. In addition, EF adjusts for the way different disciplines count citations [50].

  2. Upon identifying journals where the think tank products were published, the researchers of this study also collected IF and EF data about the ISI Web of Science categories in which those journals fell. Examples of the categories include Ecology, Evolutionary Biology, and Multidisciplinary Sciences. These categories “indicat[e] a general area[s] of science or the social sciences” [51].
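The two–year Impact Factor described in step 1 reduces to a simple ratio. The sketch below restates that standard definition in code with invented numbers; the Eigenfactor algorithm, an iterative computation over the whole citation network, is not reproduced here.

```python
def two_year_impact_factor(citations_to_prev_two_years: int,
                           citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in year Y by a journal's
    items published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Invented example: 1,200 citations in 2010 to 300 citable items published
# in 2008-2009 gives an impact factor of 4.0.
print(two_year_impact_factor(1200, 300))  # 4.0
```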

 

++++++++++

4. Analysis and results

This study examines the contributions think tanks make to the scientific record by examining the products of four different types of think tanks: NSF–funded synthesis centers (SC); university research centers (UC) and political institutes (PI) identified by the Global Go To Think Tank Index; and, advocacy groups with research arms (NGOs), identified through their affiliation with the EarthShare coalition. The research products generated by these organizations were used as indicators of each institution’s contributions and the collective contributions of these different types of think tanks.

The number of articles to which think tanks contributed was used to indicate output of academic scholarship. To yield a more nuanced understanding of this output, researchers also looked at the number of articles generated over time (2001–2010). However, building an analysis solely on the number of articles ignores political context. For example, policy entrepreneurs may defy the philosophy of science by developing books and articles void of scientific merit to create skepticism about scientific evidence (Jacques, et al., 2008; Mirowski, 2008). Such products may be the results of a type of science–for–hire brokered among policy–makers, policy entrepreneurs, and academics, who market scientific authority as a proxy for science itself (Mirowski, 2008).

Since they indicate entrenchment of authors, articles, journals, and institutions in the scientific community, bibliometric tools can help mitigate (but not eliminate) the effect of deliberate disinformation campaigns (Koskinen, et al., 2008). Therefore, in addition to output, this study also looked at the prestige of the journals where these articles appear. The relative prestige of journals can be assessed along several dimensions. The dimensions used in this study are Impact Factor, Eigenfactor, rank of a journal within its ISI category (or categories), and the prestige of the category itself.
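One of these dimensions, rank within an ISI category, can be expressed as the “top X percent” figures reported in the results below. A sketch with hypothetical scores:

```python
# Sketch: express a journal's rank within its ISI category as a "top X
# percent" figure, as in the results below. Scores here are hypothetical.
def category_percentile(journal: str, category_scores: dict) -> float:
    """Percent of the category scoring at or above this journal (rank 1 = best)."""
    ordered = sorted(category_scores.values(), reverse=True)
    rank = ordered.index(category_scores[journal]) + 1
    return 100.0 * rank / len(category_scores)

ecology = {"Journal A": 15.253, "Journal B": 4.267, "Journal C": 1.9}
print(f"Journal B: top {category_percentile('Journal B', ecology):.0f} percent")
```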

  a. Research and productivity

The Web of Science database returns information on a number of products: articles, editorial material, proceedings papers, reviews, letters, meeting abstracts, book reviews, and book chapters [52]. Among these products, articles published in journals are used the most by the academy to measure scholarship and award tenure (Perlmutter, 2012). They are used to measure scholarship in this study as well.

As shown in Figure 1, among the four groupings of think tanks, university research centers generated the highest average number of articles (213) from 2001 to 2010. It is important to note, however, that the university research center grouping consisted of only two centers. The range between the number of journal articles generated by Earth Institute and the number generated by PESD, the two university research centers, was quite large, 421.

 

 
Figure 1: Average number of articles generated by think tanks (2001–2010).

 

While the range in the number of articles generated by NGOs (486) was slightly larger than that of the university research centers, both ranges suggest sizeable disparities in the number of articles produced by individual organizations. However, the average number of articles generated by individual NGOs was smaller (180) than the average number generated by university centers. The majority of the NGO articles were generated by two organizations, Nature Conservancy (486) and the World Wildlife Fund (198); the next highest number of articles was generated by the National Audubon Society (a distant 36). Researchers from Environmental Defense Fund (EDF) generated no articles that could be found in the Web of Science database.

As shown in Figure 1, political institutes produced the third highest average number of articles, an average of only 25 compared to the NGOs’ average of 180. Among the political institutes, the World Resources Institute produced the largest number of articles (63) and Resources for the Future produced the fewest, five. As a group, synthesis centers were the least prolific among the think tanks (average of 15 articles per center). It is important to reiterate, however, that the synthesis centers are relatively new. Even so, NCEAS (created in 1995), NESCent (created in 2004), and NIMBios (created in 2008) each generated more articles than the NGO Environmental Defense Fund (EDF) (zero documents in this database), established 45 years ago.

Among the synthesis centers, NCEAS generated the largest number of articles (found in this database), 28. A clear majority of the articles generated by the university centers came from the Earth Institute, which generated 423 articles compared to two by PESD. Among the articles generated by political institutes, 63 were generated by WRI, 39 by CGIAR, and 24 by EarthWatch. Although RFF was founded in 1952, only five articles appeared in the database from this organization. Finally, Nature Conservancy was the most prolific among the NGOs, with 486 articles. WWF generated 198 articles. The number of articles generated by the National Audubon Society and EDF, 36 and zero, respectively, fell well below the average number (180) generated by organizations in this category.

Looking at the proliferation of articles over time provides another perspective. As shown in Figure 2, although the number of articles generated by synthesis centers was low (in this database), article production among the university centers, NGOs, and political institutes on average showed similar growth patterns. Also, while publications by university centers increased overall, their productivity had begun to decline in 2010.

 

 
Figure 2: Average number of articles generated annually by category (2001–2010).

 

Among the synthesis centers, only NCEAS had established a clear pattern of increasing article proliferation since its inception in 1995. The pattern was unclear for NIMBios, which had begun only two years prior to the end of the study period. Among the university research centers, the growth pattern was largely set by the Earth Institute, because it was so prolific and because PESD produced so few articles that appeared in this database. Among the political institutes, WRI generated the largest number of articles. However, the number of articles produced by this institute showed no clear trend. Article proliferation was even more sporadic for WorldWatch Institute, RFF, EarthWatch, and CGIAR. In contrast, the number of articles generated by NGOs increased over time, because the large number of articles generated by Nature Conservancy (486) increased over time. Conversely, the number of articles generated by the National Audubon Society and the World Wildlife Fund plateaued over time. As mentioned earlier, the Environmental Defense Fund generated no articles that showed up in the Web of Science database.

     b. Relative contributions

The previous section provides evidence that researchers acting on behalf of think tanks contribute to the scientific record. However, because articles generate reciprocal value for the publications in which they appear, a more nuanced view of productivity is possible. As shown in Figure 3, although synthesis centers generated few articles in their short lifespans, the average impact factor score of the journals where they published (3.763) was higher than those of the other groupings of organizations: NGOs (2.848), political institutes (2.538), and university research centers (2.508). Two of the highest average IF scores were garnered by the journals where the synthesis centers NCEAS (4.267) and NIMBios (3.763) published. The other highest averages belonged to the journals where the Earth Institute (a university research center) (3.818) and Nature Conservancy (an NGO) (3.346) published.

 

 
Figure 3: Average impact factor score of journals in which organizations published (2001–2010).

 

Table 7 compares the highest impact factor scores for journals classified in the most frequently occurring ISI categories. While all the group averages were several points below the highest IF scores of journals in the most frequently occurring ISI categories, these averages were still higher than those of about 89 percent of the journals in the Web of Science (science and social science) database. Looking at the most frequently occurring category (as opposed to all categories in which a journal appears) helped equalize the number of categories reflected in IF scores and the number reflected in Eigenfactor scores because IF scores reflect a journal’s classification in multiple categories, whereas EF scores are computed using only one category [53].

 

Table 7: Highest impact factor scores — Most frequently occurring ISI categories (2001–2010).
Type of organization | Most frequently occurring ISI category | Highest impact factor score in that category | Average impact factor among organizations
UC | Meteorology & atmospheric sciences | 5.309 | 2.508
SC | Ecology | 15.253 | 3.763
PI | Environmental sciences | 9.488 | 2.538
NGO | Ecology | 15.253 | 2.848

 

Figure 4 shows that, when based upon Eigenfactor scores, the political institutes appear to have published in higher impact journals (0.0710) than the other organizations represented. The average EF scores of the other groupings were 0.0617 for synthesis centers, 0.0613 for NGOs, and 0.0518 for university centers. Two of the highest average EF scores belonged to the journals where two synthesis centers, NCEAS (.0822) and NIMBios (.0993), published. Three of the other highest averages represented the journals where the World Wildlife Fund (an NGO) (.0952), the Earth Institute (a university research center) (.0993), and the World Resources Institute (a political institute) (.0933) published.

 

 
Figure 4: Average Eigenfactor score of journals where organizations published (2001–2010).

 

The Eigenfactor scores led to a different ranking of the think tank groups than that established by Impact Factor scores. Average EF scores place the journals where political institutes publish ahead of synthesis centers, and NGOs ahead of university research centers.

As shown in Table 8, the highest Eigenfactor score within a category (.18924) occurred among political institutes, whose highest impact ISI category was environmental sciences. The next highest score (.09051) occurred among synthesis centers and NGOs, whose highest impact ISI category was ecology. The lowest score (of the high scores) (.08375) occurred among university research centers, whose highest impact category was meteorology and atmospheric sciences.

 

Table 8: Highest Eigenfactor scores — Most frequently occurring ISI categories (2001–2010).
Type of organization | Most frequently occurring ISI category | Highest Eigenfactor score in that category | Average Eigenfactor among organizations
UC | Meteorology & atmospheric sciences | 0.08375 | 0.0518
SC | Ecology | 0.09051 | 0.0617
PI | Environmental sciences | 0.18924 | 0.0710
NGO | Ecology | 0.0951 | 0.0613

 

Among the journals in the Web of Science — Science database, the Eigenfactor scores supported higher rankings (than impact factor) for the journals where organizations published. The average EF scores for the ISI categories represented here were higher than the average EF scores of 96 percent of the journals in the database.

Based on impact factor scores, the journals where synthesis centers published ranked the highest in their respective ISI categories (top 27 percent). University centers published in journals that had the next highest average ranking (top 29 percent). Journals for political institutes and NGOs ranked in the top 30 percent and 38 percent, respectively. The highest ranked journals were those where NESCent (a synthesis center), NCEAS (a synthesis center), and the Earth Institute (a university research center) published; they ranked in the top 23, 25, and 26 percent of their ISI categories, respectively.

 

 
Figure 5: Average ISI category ranking (by impact factor score) of journals where organizations publish (2001–2010).

 

Based on average Eigenfactor scores of the journals where they publish, synthesis centers (top 26 percent) ranked the highest. Next were journals where political institutes and university research centers published (top 38 percent), and where NGOs published (top 47 percent). With regard to category rankings, findings based on EF scores were relatively consistent with those derived from Impact Factor scores. Individual organizations that published in journals ranking highest within their respective ISI categories include NESCent (a synthesis center) (top 15 percent), Earth Institute (a university research center) (top 22 percent), NCEAS (a synthesis center) (top 27 percent), World Resources Institute (a political institute) (top 28 percent), The National Audubon Society (an NGO) (top 29 percent), and the World Wildlife Fund (an NGO) (top 29 percent). Within the university research center grouping, the journal rankings of the Earth Institute were offset by the low journal rankings of PESD. Within the NGO grouping, the journal rankings of The National Audubon Society and World Wildlife Fund were offset by the absence of work by the Environmental Defense Fund.

 

 
Figure 6: Average ISI category ranking (by Eigenfactor score) of journals where organizations publish (2001–2010).

 

Beyond articles and journal rankings, another indicator of impact on the emerging scientific paradigm is multidisciplinarity. The groups examined in this study published in journals that appeared in an average of 1.56 to 1.74 different disciplinary categories, a narrow range. The highest number of categories covered by one journal represented in this study was six. This information is available only through the impact factor computation, because IF can place a journal in more than one category. A number of categories, such as multidisciplinary sciences and agricultural–multidisciplinary, build in multidisciplinarity. Four of the synthesis center articles, 22 of the university research center articles, 10 of the political institute articles, and 22 of the NGO articles were published in journals that fell into such categories.
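A sketch of this multidisciplinarity measure, again with hypothetical journal names and category assignments: count the distinct ISI categories attached to each journal (possible only under the IF scheme, which allows multiple categories per journal) and flag journals classified in explicitly multidisciplinary categories.

```python
# Sketch of the multidisciplinarity measure; journal names and category
# assignments are hypothetical.
journal_categories = {
    "Journal A": {"Ecology"},
    "Journal B": {"Multidisciplinary Sciences"},
    "Journal C": {"Ecology", "Marine & Freshwater Biology", "Oceanography"},
}

MULTI = {"Multidisciplinary Sciences", "Agriculture, Multidisciplinary"}

# Average number of distinct categories per journal.
avg_breadth = sum(len(c) for c in journal_categories.values()) / len(journal_categories)

# Journals that fall into a built-in multidisciplinary category.
multi_journals = [j for j, c in journal_categories.items() if c & MULTI]

print(f"Average categories per journal: {avg_breadth:.2f}")
print("Journals in multidisciplinary categories:", multi_journals)
```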

 

++++++++++

5. Discussion

  a. Output and receptivity of ideas

Between 2001 and 2010, think tanks were institutional authors or publishers of over 1,300 journal articles, most of which were affiliated with university research centers and NGOs. Think tanks clearly make contributions to the scientific record, even under a strict definition of the scientific record. Based on the standards of scientific authority, the findings in this study demonstrate that regardless of their initial reasons for conducting research (e.g., generating political influence), think tank researchers produce work that holds value in interdisciplinary, scientific environments. Think tanks have historically been portrayed more as consumers of science than as producers of science, a status automatically devoid of scientific authority. However, the articles found in the Web of Science — Science database construct a starkly different narrative.

In terms of overall number of publications, university research centers, then NGOs, then political institutes, then synthesis centers were the most prolific. While the expectation was that NGOs would be the least prolific in this database because of their focus on advocacy work, two of the most prolific organizations were NGOs, Nature Conservancy and the World Wildlife Fund.

Knowing where organizations’ products were published offers additional insight. Impact factor and Eigenfactor scores were used in this study to indicate the reputations in the scientific community of the journals where articles appeared. These metrics help mitigate (but not eliminate) the influence of journals created for the sole purpose of diverting readers away from scientific evidence. Average IF scores and journal rankings within their ISI categories support the expectation that, among the four groups, synthesis centers published the most in reputable journals. Unexpectedly, however, NGOs had the next highest average IF scores, followed by political institutes and then university research centers. According to average EF scores, political institutes, then synthesis centers, then NGOs, then university research centers published in the most reputable journals, a different order than suggested by IF scores.

As discussed above, the factor scores and category rankings of the journals where an organization publishes provide some information about its disciplinary reach. Beyond their respective disciplines, the organizations’ reach extends further through the journals’ classification in high impact ISI categories. Their category scores place the journals represented here among the top quarter of the journals identified through the Web of Science.

     b. The metrics

Beyond the stated goals of this paper, some additional methodological insights emerged that provide further commentary on the sources of scientific authority. The findings here underscore the need to use multiple metrics to understand sociological dynamics, such as acceptance by the scientific community. For example, IF scores and Eigenfactor scores led to different rankings within their ISI categories for the same journals. This difference may be the effect of IF’s classifying journals in multiple ISI categories and EF’s classifying journals according to just one category. EF controls for the citation of articles across disciplines that may not relate directly to the topic (Bergstrom, 2007). Also, think tank researchers may not engage in basic research regularly. As a result, they may not repeatedly build upon the same discourse in the academic literature, which would provide them opportunities to reference their own work. Consequently, IF may understate the contributions made by such institutions and further discount their scientific contributions.

 

++++++++++

Conclusions

  a. The scientific record

While a new scientific paradigm is taking shape, it is struggling against the force of inertia and rhetoric about scientific authority. Developers of tools for the new paradigm have already been informed by a clear demarcation between science and other cognitive domains, as their tools have been aimed at academics. Both think tank researchers and developers of research infrastructure would be well served if developers acknowledged the needs of researchers working in these types of organizations and recognized the potential for think tanks to generate research products, such as primary data.

The findings here demonstrate that while the primary objective of think tanks is to seek influence in policy domains, as by–products of their legitimizing strategies they also produce work that holds some value in scientific knowledge development. For example, NGOs demonstrated impact through volume (output), while political institutes, through competitive impact factor and Eigenfactor scores and rankings, demonstrated the community’s receptivity. While these types of organizations may be known for engaging in synthesis and distillation of scientific information in one research area, they may be generating scientific knowledge in other areas.

The range of think tanks represented here, synthesis centers, university institutes, policy institutes, and advocacy organizations with research arms, appear on average to contribute to knowledge development in the sciences. However, contributions within the groupings vary. Based on the metrics used in this study and the mission of the synthesis centers, knowledge developed by synthesis centers that is published in highly regarded journals is likely to surpass that of organizations in the other categories. The concept of a synthesis center allows the scientific community to cling to the authority and credibility of scientific tradition while maintaining relevance in a world that is demanding a more nuanced understanding of our complex physical environment.

Synthesis centers do not fall into the commonly accepted frameworks of think tanks. However, they adhere to Pigliucci’s (2010) definition, which describes think tanks as entities seeking to influence policy. While synthesis centers engage policy–makers in scientific endeavors in ways that benefit the knowledge development process, the hope is that participating policy–makers will be influenced by that knowledge or use that knowledge to appeal to other policy–makers.

     b. Conceptual reach

The concept of the think tank is elastic in ways that have served it poorly. This condition is not unrelated to the rhetoric surrounding scientific authority. While a number of classifications could accurately capture some reality, the variability of these models could further delay understanding of the nature of think tanks.

Although the organizations under examination in this study represent a broad spectrum of organizations that might be called think tanks, generalization is limited. First, the small number of cases is but a fraction of the organizations that call themselves or are referred to as think tanks. Also, records returned about these cases were limited to material in Web of Science. In addition, organizations such as EDF that had no articles in this database may be productive in other venues not studied here. These venues may adhere to other important standards, such as societal impact, not measured here.

Although its holdings are large, the Web of Science cannot fully account for the universe of publications that could inform the topics covered here. Just the same, the researchers took care to identify all the available cases that offer some valuable insight. As a general note, media such as books, which may also contribute to the scientific record, are not considered in this study. Therefore, the contributions of all the organizations examined here are underestimated in some way. However, this does not diminish the findings of this study, which establish at least a baseline of scholarly activity among different types of think tanks.

This paper has examined the contributions that think tanks make to the scientific record. While some portion of their contributions may consist of deliberate misinformation, scientific tools such as metadata and documentation could create opportunities for others to review an organization’s data, opportunities to ascertain whether or not data even exists, and/or opportunities to collaborate with other institutions. These opportunities are separate issues from the political agendas of think tanks.

     c. Future research

Given the inattention to think tanks’ relationships with scientific domains, this line of inquiry is fertile ground for future study. Future research will employ citation metrics and theoretical frameworks from public relations to further gauge the visibility of organizations within the scientific arena and how this visibility factors into their broader marketing strategies. Additional studies will also employ network analysis to describe the institutional connections, created through their work, among the organizations identified in this study. These studies may further demonstrate that even if their work carries clear biases, think tanks are natural inhabitants of both the current and emerging constructions of the scientific domain. Finally, the measures employed here lend themselves to examinations of specific areas of environmental studies, such as climate change, where policy organizations such as the Intergovernmental Panel on Climate Change (IPCC) have managed to balance both credibility and relevance (Keller, 2010).

 

About the authors

Dr. Kimberly Douglass is an Assistant Professor with the School of Information Sciences at the University of Tennessee. Douglass’ research interests include the interface between science and policy, as well as science information, e–government, and information as a commodity. She recently co–authored a manuscript that explores the culture of science and authored an article that examines methodological approaches to teaching information ethics in Africa. Douglass was a post–doctoral research associate for NSF–funded DataONE. She has also worked for the Tennessee Department of Environment and Conservation and the Tennessee Comptroller’s Office. Douglass earned a Ph.D. in political science from the University of Tennessee. She earned a Bachelor of Arts and Sciences and a Master’s of Public Administration from Tennessee State University.
E–mail: kdougla2 [at] utk [dot] edu

Sarah Tanner is a graduate student pursuing a Master’s of Science degree in information science at the University of Tennessee. She received her Bachelor of Arts in history at the University of Tennessee in 2010. In 2012, she worked as a research assistant in the School of Information Sciences under Dr. Kimberly Douglass and currently works as a graduate assistant for the Great Smoky Mountain Regional Project, Research Services at the University of Tennessee. Tanner is also working as a graduate assistant in the Special Collections at UTK’s Hodges Library. Her professional interests include archives, records management, and special collections.

 

Acknowledgements

We thank the following individuals for their assistance in the development of this paper: Dr. William Michener, Dr. Rachel Fleming–May, Reid Boehm, and Kerri Courter.

 

Notes

1. Bergstrom, 2007, p. 314.

2. Pigliucci, 2010, p. 24. This paper uses the phrase “think tank” and researchers who write for think tanks interchangeably. When referencing researchers, the assumption is that they act as agents of the institutions under examination.

3. Bell, 2009, p. xi.

4. McGann, 2007, p. 21.

5. Tofthagen and Fagerstrøm, 2010, p. 21.

6. Stone, 2007, p. 275.

7. Stone, 2007, p. 262.

8. Bell, 2009, p. xi.

9. Stone, 2007, p. 264.

10. Stone, 2007, p. 262.

11. McGann, 2007, p. 21; Abelson, 2009.

12. Stone, 2007, p. 275.

13. Ibid.

14. Stone, 2007, p. 262.

15. Stone, 2007, p. 271.

16. Abelson, 2009, p. 10.

17. Ibid.

18. McGann, 2007, p. 21.

19. Stone, 2007, p. 275.

20. McGann, 2007, p. 21.

21. McGann, 2012, p. 46. While the RAND Corporation (http://www.rand.org/) and the Pew Center for Global Climate Change appear on this list, RAND is a corporation and, in a number of ways, functions differently from these organizations. Pew is now the Center for Climate and Energy Solutions (C2ES; http://www.c2es.org/). The recent changes in the organization’s focus and name could result in some records being overlooked.

22. “Mission: Solutions for Sustainable Development,” at http://www.earth.columbia.edu/article/view/1791, accessed 3 April 2012.

23. “About PESD,” at http://pesd.stanford.edu/docs/about_pesd, accessed 3 April 2012.

24. “Overview,” at http://www.nceas.ucsb.edu/overview, accessed 3 April 2012.

25. “About the Center,” at http://www.nescent.org/about, accessed 3 April 2012.

26. “Introduction from the Director,” at http://www.nimbios.org/about, accessed 3 April 2012. A fourth synthesis center, National Socio–Environmental Synthesis Center (SESYNC) was founded only in 2011 and did not have a publication record during the years covered by this study; see http://www.sesync.org/, accessed 30 August 2012.

27. “Who We Are,” at http://www.cgiar.org/who-we-are, accessed 30 August 2012.

28. “What We Do,” at http://www.earthwatch.org/aboutus/whatwedo/, accessed 3 April 2012.

29. “About RFF,” at http://www.rff.org/About_RFF/Pages/default.aspx, accessed 3 April 2012.

30. “About WRI,” at http://www.wri.org/about/wri-history, accessed 3 April 2012.

31. “WorldWatch Research and Programs,” at http://www.worldwatch.org/programs, accessed 3 April 2012.

32. “EarthShare is a national non–profit and federation or an umbrella group of environmental charities that’s worked for more than 20 years to connect people and organizations with effective ways to support critical environmental causes,” at http://www.earthshare.org/about-earthshare.html, accessed 28 June 2012.

33. “Our Mission and History,” at http://www.edf.org/about/our-mission-and-history, accessed 3 April 2012.

34. “About Us,” at http://www.audubon.org/about-us, accessed 3 April 2012.

35. “About Us,” at http://www.nature.org/aboutus/index/htm, accessed 3 April 2012.

36. “Who We Are: About WWF,” at http://worldwildlife.org/who, accessed 3 April 2012.

37. This statement is absent of any judgments about the methodological merits of the processes that lead to scientific claims.

38. McGann, 2007, p. 67; Stone, 2007.

39. Pigliucci, 2010, p. 26.

40. Rich, 2004, pp. 154–155.

41. Stone, 2007, p. 272.

42. Abelson, 2009, p. 57.

43. Koskinen, et al., 2008, p. 137.

44. The ending year reflects the most recent year ISI Impact Factor and Eigenfactor data were available at the time data was analyzed.

45. McGann, 2007, p. 26.

46. Ibid.

47. “Web of Science is a database that provides researchers, administrators, faculty, and students with access to the world’s leading citation databases.” At http://thomsonreuters.com/products_services/science/science_products/a-z/web_of_science/, accessed 28 June 2012.

48. http://admin-apps.webofknowledge.com/JCR/help/h_impfact.htm, accessed 4 June 2012.

49. http://eigenfactor.org (ranking and mapping scientific knowledge).

50. http://admin-apps.webofknowledge.com/JCR/help/h_subjinfo.htm.

51. ISI, or Institute for Scientific Information, http://wokinfo.com/about/whatitis/, accessed 28 June 2012.

52. (a) Missing data and (b) inconsistent returns. (a) For the synthesis centers, no impact factor or Eigenfactor scores were found for three out of 73 rows of data identified. No impact factor or Eigenfactor scores were found for eight out of the 164 rows of data for the university centers. Of the political organizations reviewed, 13 out of 168 rows of data were missing impact factor and Eigenfactor scores, and 12 out of the 361 rows of data for non–governmental organizations did not have impact factor or Eigenfactor scores. (b) Using the same parameters, the database yielded different results in different queries. The differences (when they occurred) across queries in the number of abstracts returned were about one to three records.

53. http://admin-apps.webofknowledge.com/JCR/help/h_subjinfo.htm.

 

References

Mark R. Abbott, 2009. “Scientific infrastructure: A new path for science?” In: Tony Hey, Stewart Tansley, and Kristen Tolle (editors). Fourth paradigm: Data–intensive scientific discovery. Redmond, Wash.: Microsoft Research, pp. 111–116, at http://research.microsoft.com/en-us/collaboration/fourthparadigm/contents.aspx, accessed 17 September 2012.

Donald E. Abelson, 2009. Do think tanks matter? Assessing the impact of policy institutes. Montreal: McGill–Queen’s University Press.

Gordon Bell, 2009. “Foreword: Microsoft researcher,” In: Tony Hey, Stewart Tansley, and Kristen Tolle (editors). Fourth paradigm: Data–intensive scientific discovery. Redmond, Wash.: Microsoft Research, pp. xi–xv, at http://research.microsoft.com/en-us/collaboration/fourthparadigm/contents.aspx, accessed 17 September 2012.

Carl Bergstrom, 2007. “Eigenfactor: Measuring the value and prestige of scholarly journals,” College and Research Libraries News, volume 68, number 5, pp. 314–316.

Anthony M. Bertelli and Jeffrey B. Wenger, 2009. “Demanding information: Think tanks and the U.S. Congress,” British Journal of Political Science, volume 39, number 2, pp. 225–242. http://dx.doi.org/10.1017/S0007123408000410

Thomas F. Gieryn, 1983. “Boundary–work and the demarcation of science from non–science: Strains and interests in professional ideologies of science,” American Sociological Review, volume 48, number 6, pp. 781–795. http://dx.doi.org/10.2307/2095325

Peter J. Jacques, Riley E. Dunlap, and Mark Freeman, 2008. “The organisation of denial: Conservative think tanks and environmental skepticism,” Environmental Politics, volume 17, number 3, pp. 349–385. http://dx.doi.org/10.1080/09644010802055576

Ann C. Keller, 2010. “Credibility and relevance in environmental policy: Measuring strategies and performance among science assessment organizations,” Journal of Public Administration Research and Theory, volume 20, number 2, pp. 357–386. http://dx.doi.org/10.1093/jopart/mup001

Johanna Koskinen, Matti Isohanni, Henna Paajala, Erika Jääskeläinen, Pentti Nieminen, Hannu Koponen, Pekka Tienari, and Jouko Miettunen, 2008. “How to use bibliometric methods in evaluation of scientific research? An example from Finnish schizophrenia research,” Nordic Journal of Psychiatry, volume 62, number 2, pp. 136–143. http://dx.doi.org/10.1080/08039480801961667

James G. McGann, 2012. “The Global Go To Think Tanks Report 2011: The leading public policy research organizations in the world” (18 January), at http://www.gotothinktank.com/global-%E2%80%9Cgo-to-tanks-leading-public-policy-research-organizations-world/, accessed 17 September 2012.

James G. McGann, 2007. Think tanks and policy advice in the U.S.: Academics, advisors and advocates. New York: Routledge.

Lawrence M. Mead, 1985. “Science versus analysis: A false dichotomy,” Journal of Policy Analysis and Management, volume 4, number 3, pp. 419–422. http://dx.doi.org/10.2307/3324195

Thomas Medvetz, 2010. “‘Public policy is like having a Vaudeville act’: Languages of duty and difference among think tank–affiliated policy experts,” Qualitative Sociology, volume 33, number 4, pp. 549–562. http://dx.doi.org/10.1007/s11133-010-9166-9

Philip Mirowski, 2008. “The rise of the dedicated natural science think tank,” discussion paper (July), New York: Social Science Research Council.

National Science Foundation. Cyberinfrastructure Council, 2007. “Cyberinfrastructure vision for the 21st century,” NSF 07–28 (March), Arlington, Va., at http://www.nsf.gov/pubs/2007/nsf0728/index.jsp, accessed 17 September 2012.

David D. Perlmutter, 2012. “Good deeds that are most punished, part 3: Research,” Chronicle of Higher Education (29 April), volume 58, number 35, pp. A27–A28, and at http://chronicle.com/article/Good-Deeds-That-Are-Most/131707/, accessed 17 September 2012.

Massimo Pigliucci, 2010. “Science by think tank: The rise of think tanks and the decline of public intellectuals,” Skeptic, volume 16, number 1, pp. 19–27.

Andrew Rich, 2004. Think tanks, public policy, and the politics of expertise. New York: Cambridge University Press.

Beth L. Rodgers, 1989. “Concepts, analysis and the development of nursing knowledge: The evolutionary cycle,” Journal of Advanced Nursing, volume 14, number 4, pp. 330–335. http://dx.doi.org/10.1111/j.1365-2648.1989.tb03420.x

Daniel Sarewitz, 2004. “How science makes environmental controversies worse,” Environmental Science & Policy, volume 7, number 3, pp. 385–403. http://dx.doi.org/10.1016/j.envsci.2004.06.001

Diane Stone, 2007. “Recycling bins, garbage cans or think tanks? Three myths regarding policy analysis institutes,” Public Administration, volume 85, number 2, pp. 259–278. http://dx.doi.org/10.1111/j.1467-9299.2007.00649.x

Diane Stone, 2004. “Introduction: Think tanks, policy advice and governance,” In: Diane Stone and Andrew Denham (editors). Think tank traditions: Policy research and the politics of ideas. Manchester: Manchester University Press, pp. 1–16.

Randi Tofthagen and Lisbeth M. Fagerstrøm, 2010. “Rodgers’ evolutionary concept analysis — A valid method for developing knowledge in nursing science,” Scandinavian Journal of Caring Sciences, volume 24, supplement s1, pp. 21–31. http://dx.doi.org/10.1111/j.1471-6712.2010.00845.x

 


Editorial history

Received 20 July 2012; revised 4 September 2012; accepted 4 September 2012.


Copyright © 2012, First Monday.
Copyright © 2012, Kimberly Douglass and Sarah Tanner. All rights reserved.

Playing science? Environmentally focused think tanks and the new scientific paradigm by Kimberly Douglass and Sarah Tanner
First Monday, Volume 17, Number 10 - 1 October 2012
http://firstmonday.org/ojs/index.php/fm/article/view/4176/3327
doi:10.5210/fm.v17i10.4176




