First Monday

Cyberinfrastructure and Patent Thickets: Challenges and Responses by Gavin Clarkson


I: Introduction
II: Background
III: Strategic Responses to Patent Thickets
IV: Cyberinfrastructure and Patent Reform
V: Conclusion



I: Introduction


A common misconception exists among many scientists and engineers that a patent gives the owner the right to a particular technology. In fact, a patent does not guarantee the right to make or do anything. Instead, a patent gives the patent owner the right to exclude others from making, using, or selling anything that embodies the technology covered by the patent. When multiple organizations own patents that are collectively necessary to practice a particular technology, their rights form a “patent thicket” (Clarkson, 2005b), defined by Shapiro as “a dense web of overlapping intellectual property rights that a company must hack its way through in order to actually commercialize new technology.” [1]

The problem of patent thickets has recently caught the attention of much of the scientific and engineering community in a number of technological arenas, and a recent Federal Trade Commission (FTC) report notes that in certain industries the large number of issued patents makes it virtually impossible to search all the potentially relevant patents, review the claims contained in each of those patents, and evaluate the infringement risk or the need for a license (FTC, 2003). For the software industry the report cites testimony about the hold–up problems and points out “that the owner of any one of the multitude of patented technologies constituting a software program can hold up production of innovative new software.” [2] For many firms, the only practical response to this problem of unintentional and sometimes unavoidable patent infringement is to file hundreds of patents each year so as to have something to trade during cross–licensing negotiations or in response to a lawsuit alleging infringement. In other words, the only rational response to the large number of patents in a given field may be to contribute to it.

Additionally, while patent thickets can pose significant impediments to innovation even if every patent in the thicket is a validly issued patent that meets the statutory requirements of novelty, utility, and non–obviousness, the current patent system allows legions of low–quality or invalid patents to pass through the system unchallenged, exacerbating the patent thicket problem and burdening ongoing innovation [3].

As described by the National Science Foundation (NSF), the term cyberinfrastructure describes the new research environments in which advanced computational, collaborative, data acquisition, and management services are available to researchers through high–performance networks (NSF, 2007). Many of these areas may be characterized by cumulative innovations and multiple blocking patents, the existence of which can have the perverse effect of stifling innovation rather than encouraging it.

The typical response to patent thickets involves licensing strategies of one form or another. Cross–licensing is generally the preferred mechanism to clear thickets when the number of firms involved is small. Another alternative is the threat of “mutually assured destruction.” Finally, there are two forms of patent pooling. The first type, the traditional patent pool, is an organizational structure where multiple firms aggregate patent rights for a given device or technology into a package for licensing among themselves and/or to the outside world (Clarkson, 2005b). A more recent type of pool, the “defensive” patent pool, attempts to create a space where member firms have freedom to innovate.

While cyberinfrastructure development may be hampered by patent thickets and a flood of patents of questionable validity, cyberinfrastructure–enabled “virtual organizations” also have the potential to ameliorate these problems. This article presents a survey of responses to patent thickets. The first group involves efforts either to keep questionable patents from ever issuing or to remove them from patent space after they have issued — in particular, the “Peer–to–Patent” project, also known as “Community Patent Review.” Proposed by Professor Beth Noveck (2006) and subsequently incorporated into a pilot project by the U.S. Patent and Trademark Office (USPTO), Peer–to–Patent will use distributed online communities to assist in the review of patents for novelty and obviousness, enabling a virtual community of practice in a field to suggest prior art to the patent examiner. Its success will depend on the ability to leverage developments in cyberinfrastructure in the areas of Computer Supported Collaborative Work (CSCW) and information retrieval. This article also suggests extending Peer–to–Patent into the realm of patent reexamination and post–grant opposition, mechanisms that can remove invalid patents once they have been issued.



II: Background

One major difference between the patent space and cyberinfrastructure is the degree of specificity. While most cyberinfrastructure development efforts will involve increasing levels of technological specificity, much of patent space defies any effort at specificity. Although one of the primary purposes of the patent system is to increase technological disclosure, often the strategy pursued by patent applicants is to claim as much as possible and disclose as little as possible. Even if the minimum requirements of disclosure are satisfied, the boundaries of any given patent are often ill–defined.

Such a lack of specificity would be problematic even if there were only a small number of patents. Given that the USPTO issues almost 200,000 patents each year, however, each of which may have numerous citations to prior art and may contain thousands of words, the vastness of patent space is quite daunting. In terms of size, the USPTO generates approximately 2GB of structured patent data and 7.8GB of unstructured patent data [4] each year. Therefore, any attempt to analyze and comprehend the topology and interconnectedness of patent space will almost certainly have to be based on information technology. Similarly, given that the USPTO issues new patent grants each week, any such system would have to be able to update its patent database and provide incremental analysis capability on a weekly basis to address the fast evolving nature of the patent data.
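The weekly update requirement described above amounts to a simple incremental–ingest pattern. A minimal sketch (the record layout and dates here are hypothetical) shows the core logic: keep a watermark of the last issue date processed, and on each weekly run ingest only grants issued after it.

```python
import datetime

# Sketch of weekly incremental ingest of patent grants.
# The record layout and dates are hypothetical illustrations.

indexed_through = datetime.date(2007, 5, 1)   # watermark saved from the last run

# This week's grant feed: some records may predate the watermark
# (e.g., corrected reissues) and should be skipped.
weekly_grants = [
    {"patent": "7,000,001", "issued": datetime.date(2007, 4, 24)},
    {"patent": "7,000,002", "issued": datetime.date(2007, 5, 8)},
    {"patent": "7,000,003", "issued": datetime.date(2007, 5, 8)},
]

# Ingest only grants newer than the watermark, then advance it.
new_grants = [g for g in weekly_grants if g["issued"] > indexed_through]
indexed_through = max(g["issued"] for g in new_grants)

print(len(new_grants), indexed_through)  # 2 new grants; watermark now 2007-05-08
```

Incremental analysis over such a feed is what keeps the database current without reprocessing the full multi-gigabyte corpus each week.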

The process of patent search, however, has not progressed very much even with the advent of searchable patent databases. Although automated, the patent search process itself has not been reengineered and thus remains functionally similar to the patent search process developed in the nineteenth century (Wherry, 1995). Although there are several types, the most common patent searches include patentability (novelty) searches, validity searches, clearance or freedom–to–operate searches, and state–of–the–art searches.

The information necessary for each of these types of searches often includes patent databases as well as non–patent materials such as academic journals, product catalogs, and even Web pages. Professional patent searchers must comb through not only USPTO databases but also dozens of bibliographic databases in general and specialized disciplines to ensure the comprehensiveness of patent searches. The search tools available on the Internet are generally insufficient for a thorough patent search. Most Web–based tools are limited to simple Boolean searches or single taxonomies that are inadequate to address the many ambiguities and complex relationships that are present in patent documents (Spangler, et al., 2002).
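The limits of simple Boolean matching can be made concrete with a small sketch (the patent abstracts below are invented). A Boolean AND query returns only documents containing every query term, while even a basic TF–IDF cosine ranking surfaces partially matching documents in relevance order:

```python
import math
from collections import Counter

# Toy corpus of hypothetical patent abstracts.
docs = {
    "pat_a": "method for streaming compressed video over a packet network",
    "pat_b": "apparatus for video compression using discrete cosine transform",
    "pat_c": "chemical process for synthesizing a polymer coating",
}

def tokenize(text):
    return text.lower().split()

def boolean_and(query, corpus):
    """Classic Boolean search: return docs containing ALL query terms."""
    terms = set(tokenize(query))
    return [d for d, text in corpus.items() if terms <= set(tokenize(text))]

def tfidf_rank(query, corpus):
    """Rank docs by cosine similarity in a TF-IDF vector space."""
    n = len(corpus)
    df = Counter(t for text in corpus.values() for t in set(tokenize(text)))
    def vec(text):
        tf = Counter(tokenize(text))
        return {t: tf[t] * math.log(n / df[t]) for t in tf if t in df}
    def cos(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = (math.sqrt(sum(x * x for x in u.values()))
                * math.sqrt(sum(x * x for x in v.values())))
        return dot / norm if norm else 0.0
    q = vec(query)
    return sorted(corpus, key=lambda d: cos(q, vec(corpus[d])), reverse=True)

# Boolean AND finds only exact conjunctions; ranked retrieval also
# surfaces the partial match (pat_b) ahead of the irrelevant one.
print(boolean_and("video streaming", docs))
print(tfidf_rank("video streaming", docs))
```

Even this ranking handles only term overlap; it would still miss the synonym and taxonomy ambiguities that Spangler, et al. (2002) identify, which is why more sophisticated tooling is needed.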

The difficulties inherent in the process of patent searching and patent thicket detection coupled with the nebulous nature of the patent space present significant challenges. Shapiro notes that firms in the semiconductor industry “find it all too easy to unintentionally infringe on a patent in designing a microprocessor, potentially exposing themselves to billions of dollars of liability and/or an injunction forcing them to cease production.” [5] Collaborative research between the University of Michigan School of Information and IBM–Almaden, sponsored by the National Science Foundation (Patent Cartography, IIS 0534903), is exploring ways to leverage technology to improve both the ease and efficacy of patent searching, particularly by non–expert searchers.
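As a rough sketch of the network–analytic idea behind thicket detection (a simplified stand–in, not the Patent Cartography project's actual metric), one can measure how densely a set of patents is interconnected by prior–art citations; a cluster approaching full interconnection is a candidate thicket:

```python
from itertools import combinations

# Hypothetical prior-art citations among five patents in one technology
# area. An undirected edge means one patent cites the other as prior art.
citations = {
    ("P1", "P2"), ("P1", "P3"), ("P2", "P3"),
    ("P3", "P4"), ("P2", "P4"), ("P4", "P5"),
}
patents = {p for edge in citations for p in edge}

def density(nodes, edges):
    """Share of possible patent pairs connected by a citation:
    1.0 would mean every patent is linked to every other."""
    pairs = list(combinations(sorted(nodes), 2))
    connected = sum(1 for a, b in pairs
                    if (a, b) in edges or (b, a) in edges)
    return connected / len(pairs)

# 6 of the 10 possible pairs are linked: a fairly dense cluster.
print(f"{density(patents, citations):.2f}")  # 0.60
```

Real measurement would of course need the full citation graph and a principled density threshold, but the sketch shows why the problem is computational at USPTO scale.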

On their own, patent thickets can pose significant impediments to innovation, even if every patent in the thicket is a validly issued patent that meets the statutory requirements of novelty, utility, and non–obviousness. Unfortunately, the patent system is allowing legions of low–quality or invalid patents to pass through the system unchallenged (Jaffe and Lerner, 2004), exacerbating the patent thicket problem and thwarting ongoing innovation. Underpaid and overwhelmed patent examiners are struggling under the burden of 350,000 patent applications per year and a backlog nearing 700,000. That backlog is increasing as the USPTO currently issues only some 200,000 patents each year.

The patent system has not responded to the changing business, technology, and legal realities of the last two decades and as such does not provide the support needed to foster the innovation and fairness in intellectual asset management that the system was built to ensure. Multiple patents have been granted for the same invention, and patents have been awarded for inventions discovered previously (Jaffe, et al., 2004). Jay Kesan states: “It is widely recognized that the Patent Office grants overly–broad patents because it has deficient knowledge of the relevant prior art, especially in high technology areas with significant nonpatent prior art.” [6] Beth Noveck characterizes the dilemma facing patent examiners as the “goldilocks” problem: too little prior art, too much prior art, and none of it just right (Noveck, 2006). In searching for prior art to deny claims in a patent application, the examiner sometimes turns up nothing. While the patent may sound familiar, often she cannot find other written material that actually teaches how to effect the claims of the patent. Alternatively, she is inundated with related prior art but has trouble winnowing the material and finding art that is relevant and useful for the examination process in the time allotted to review an application. Even if she can find pertinent art, she may still have trouble knowing whether the patent is an obvious or non–obvious inventive leap over the combined prior art references from the perspective of one practicing in that area.



III: Strategic Responses to Patent Thickets

A. Cross–licensing

Even if every issued patent met the statutory requirements of novelty, utility, and non–obviousness, the problem of patent thickets would still remain. When the total number of owners of the conflicting intellectual property rights is small, the response to the patent thicket problem has often been to cross–license (Grindley and Teece, 1997; Teece, 1998, 2000). As Shapiro (2000) notes, cross–licenses are commonly negotiated when each of two companies has a patent portfolio that covers the other’s products or processes. The relative size of each firm’s patent portfolio often dictates whether or not additional compensation is due. In the case of royalty–free cross–licensing, which usually occurs when the portfolios are comparable in size and scope, each firm is free to compete, both in designing its products without fear of infringement and in pricing its products without the burden of a per–unit royalty due to the other. Given the objective of freedom to operate or freedom of action, a cross–license may include fixed fees or running royalties as well as various field–of–use or geographic restrictions. When a large number of firms are involved, the more practical alternative is “mutually assured destruction” (MAD). That is, each firm is potentially infringing at least one of the other firms’ patents, and if one firm filed suit for patent infringement, it would face a similar onslaught of litigation, potentially resulting in the effective shutdown of both firms. While a royalty–free cross–license and mutually assured destruction may ultimately produce the same result in terms of freedom to operate, the cross–licensing approach provides a higher degree of certainty.

B. Patent Pools
 1. Traditional Patent Pools

When the patents in a given technology space are owned by multiple firms, the transaction costs of cross–licensing between all of the parties can be prohibitive, and additional economic barriers exist such as hold–ups and double marginalization (Viscusi, et al., 2000) (each firm with a necessary patent extracts the maximum possible license fee resulting in a royalty stacking situation that prices the ultimate product potentially beyond the point of profitability). In response to these challenges, firms have attempted to solve the multi–party patent thicket problem by constructing patent pools. In the traditional patent pool, multiple firms aggregate patent rights into a package for licensing among themselves and/or to the outside world (Clarkson, 2005b). Usually, each firm assigns or licenses its individual intellectual property rights to a specific entity that in turn exploits the collective rights by licensing, manufacturing, or both. Different licensing arrangements are then available, depending on whether the licensee is a member of the pool and how the resulting royalties are subsequently distributed among the members of the pool.
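The double–marginalization problem can be made concrete with a toy calculation (all figures hypothetical): when each holder of a necessary patent independently demands its own per–unit royalty, the stacked royalties can exceed the product's entire margin, while a single pooled package license keeps the product viable.

```python
# Toy royalty-stacking illustration; all numbers are hypothetical.
# Ten patent holders each demand a $4 per-unit royalty on a device
# with a $50 retail price and a $15 manufacturing cost.
price, cost = 50.0, 15.0
holders, royalty_each = 10, 4.0

# Independent licensing: royalties stack across all holders.
stacked = holders * royalty_each
margin_stacked = price - cost - stacked   # negative: product is unprofitable

# Pool licensing: one package royalty negotiated collectively.
pooled = 12.0
margin_pooled = price - cost - pooled

print(margin_stacked)  # -5.0 -> stacked royalties price out the product
print(margin_pooled)   # 23.0 -> a one-stop pool license keeps it viable
```

The stacked case is the hold–up scenario in which the product is priced beyond profitability; the pooled case shows why aggregating the rights can be pro–competitive.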

While aggregating necessary technologies into a licensing package reduces transaction costs and eliminates the problems associated with hold–ups and double marginalization, the antitrust and intellectual property regimes were frequently in tension for most of the twentieth century, with patent pooling often facing rather aggressive antitrust enforcement even in situations where the pool was pro–competitive. The USPTO has advocated patent pooling as a solution to the anti–commons problem, particularly in biotechnology (Clark, et al., 2000). Recent work by this author demonstrated the judicial importance of exploring technological interrelationships between patents when considering the legality of patent pooling arrangements (Clarkson, 2005a). That study examined 101 cases of patent pool litigation between 1900 and 1970 and demonstrated that although the patent thickets underlying the respective patent pools were infrequently examined, their examination was potentially quite important for antitrust survival.

A primary challenge under the current antitrust regime is how to limit pool membership to patents that are “essential” for a given technology. In the case of technology standards, one viable path was forged by the formation of a pool based on the MPEG–2 standard for compactly representing digital video and audio signals for consumer distribution. Given that there were nearly 90 U.S. patents and hundreds of corresponding international patents, and given that these patents were held by dozens of firms, cross–licensing was not an option. Therefore, the organizations and individuals involved with MPEG–2 agreed on the need to address the technological risk and develop an innovative way to provide easy, reasonable, fair, nondiscriminatory access to patent rights (Clarkson, 2005a). Otherwise, the market risk associated with the difficulty of gaining access to a large enough body of the necessary MPEG–2 patents would jeopardize the interoperability and implementation of digital video.

With MPEG’s blessing, an MPEG–2 intellectual property rights Working Group was formed to explore the possibility of establishing a licensing entity to make access to the necessary MPEG–2 patent rights easily available. Essential patent holders ultimately granted this third–party neutral, MPEG LA, the right to offer package licenses for virtually all of the patents essential to the MPEG–2 standard. The determination of essentialness was a laborious manual process entailing literally hundreds of hours of high–priced attorney time. Ultimately, however, the effort succeeded, as the United States Department of Justice issued a business review letter allowing the MPEG patent pool to proceed. Following the standards–based path forged by the MPEG pool protagonists, additional pools were successfully formed. Sales of devices based in whole or in part on patent pool technologies are at least $US100 billion per year [7].

In contrast, however, aggressive antitrust enforcement recently dismantled a patent pool for Photorefractive Keratectomy (PRK, or laser eye surgery) that was not based on an international technology standard. Professor Josh Newberg was an FTC litigator on that case, and he later wrote that the pool in question might actually have been pro–competitive and that the FTC litigation involving the PRK patent pool either ignored or failed to detect the underlying laser eye surgery patent thicket (Newberg, 2000), but by the time his article was published, the costs of the decision to dissolve the pool had already been incurred.

An examination of the respective patent spaces surrounding both the MPEG and the PRK pool reveals that both pools were coincident with patent thickets (see Figures 1 and 2), and thus the primary difference between these two pools was that the MPEG pool was based on an ITU technical standard (Clarkson, 2004). That standard served as an alternate method of determining the existence of a patent thicket, as the standards document itself allowed for an objective assessment of essentialness of the patents in the pool. No such technological standard existed for PRK, however, and the lack of an objective methodology for assessing the existence of an underlying patent thicket proved fatal to the PRK pool. Had the FTC examined the technological interrelationships between the pooled patents, would they have found evidence of an underlying patent thicket, given that the PRK pool was ultimately vindicated as pro–competitive? A comparison of results for both the MPEG and PRK pools suggests they would have.



Figure 1: Visualization of the MPEG–2 (Compressed Digital Video) patent thicket (Clarkson, 2004).


Figure 2: Visualization of the PRK (Laser Eye Surgery) patent thicket (Clarkson, 2004).


In either case, however, appropriate cyberinfrastructure development may facilitate a greater level of pool formation. If advanced textual analysis techniques can be developed to automate, or at least improve the efficiency of, determinations of essentialness, the cost of pool formation would drop dramatically. Additionally, if such systems could provide determinations of essentialness for collections of patents that are not based on standards, perhaps by using reference models or other technical specifications, pro–competitive patent pools could proliferate into additional domains.
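As a sketch of what such automation might look like (an assumed approach for illustration, not a description of MPEG LA's actual essentialness review), one could flag patents whose claim language overlaps heavily with the text of a standard and route only those candidates to attorney review:

```python
# Hypothetical essentialness screening: flag patents whose claim terms
# overlap heavily with a standard's text. All texts are invented.
standard_text = ("decode the quantized discrete cosine transform "
                 "coefficients of each macroblock in the video bitstream")

claims = {
    "patent_x": "a decoder that inverse quantizes discrete cosine transform "
                "coefficients for each macroblock of a video bitstream",
    "patent_y": "a surgical laser calibrated for corneal reshaping",
}

def jaccard(a, b):
    """Term-set overlap between two texts (0 = disjoint, 1 = identical)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

THRESHOLD = 0.3  # arbitrary cutoff for routing a patent to human review
for pid, claim in claims.items():
    score = jaccard(standard_text, claim)
    verdict = "candidate-essential" if score > THRESHOLD else "unrelated"
    print(pid, round(score, 2), verdict)
```

A production system would need far richer matching (claim parsing, synonym handling, per–claim rather than per–patent scoring), but even crude screening of this kind could cut the hundreds of attorney hours the MPEG–2 process required.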

 2. Defensive Patent Pools

Whereas traditional patent pools are usually built around specific technologies, a recent variation on this theme may provide significant benefit to the development of cyberinfrastructure without running afoul of the antitrust regime or requiring the existence of technological standards. Defensive patent pools are designed to give organizations freedom to innovate in a given technological space when that space may have intellectual property entanglements from multiple sources.

One particularly good example is the Open Invention Network (OIN). Formed to defend the Linux operating system, OIN acquires patents and makes them available royalty–free in exchange for an agreement that the licensee will not assert its own patents against Linux. This form of patent pool is a fairly recent development and may prove quite beneficial for the development of certain elements of cyberinfrastructure. In many respects, OIN’s approach combines elements of both MAD and cross–licensing.

Defensive patent pools can primarily be distinguished from traditional patent pools in that defensive pools are intended to give licensees freedom to innovate and operate in a wide area of the patent space. In contrast, traditional pools are more tightly focused on specific technological implementations, usually of technology standards.



IV: Cyberinfrastructure and Patent Reform

A. Community Patent Review

The Peer–to–Patent project stems from a core theoretical insight: the key problem with the current patent process is that access to prior art is mired in outdated technologies. Instead of the collective intelligence available in online distributed communities, we continue to trust in bureaucratic routine that often fails.

As envisioned, the system would enable a community of practice in a given area of art to suggest prior art to the patent examiner. Although this proposal began as an academic exercise, several large technology firms quickly identified its potential, beginning with IBM and followed by such firms as Computer Associates, General Electric, HP, Microsoft, and Red Hat.

As Noveck notes:

Technology presents us with the opportunity to move away from a conception of isolated administrative expertise. The Internet has already enabled large–scale collaborative processes like open source programming and the creation of the Wikipedia encyclopedia. Peer–to–Patent envisions leveraging collaborative expertise whereby we connect the know–how of a large, trained and dedicated governmental staff with legal expertise to the wisdom of experts with deep scientific, subject–matter expertise. Using communication technology, we can create a new mechanism for distributed decision making on a large scale that separates legal from scientific decisions. With procedures in place to distribute but interconnect these two forms of expertise, we can create new legal mechanisms to serve the public interest and reform, not only the patent process, but administrative decision making more broadly. The idea of scientific citizen juries, blue ribbon panels or advisory committees is not new. The suggestion to use newly available social reputation software — think Slashdot karma or eBay reputation points — to make such panels big enough, diverse enough and democratic enough to assist the patent examiner is new.

We have arrived at a unique moment in history when five factors converge to make this kind of community patent reform (and collaborative expertise) proposal possible: first, the state of patenting has become so problematic as to meet with almost universal opprobrium; second, patent applications are published after eighteen months independent of grant, making it possible to consider open peer review; third, peer review is widely practiced in the public sector (e.g. EPA, NIH, [and, of course,] NSF); fourth, we finally have the social reputation and social networking technology to make peer review possible on this scale; and, fifth, we have the expertise with such endeavors as Wikipedia, Slashdot, Yahoo Answers, Linux, Apache and many more such collaborative decision–making systems, both online and off, to be able to design and construct a new legal institution that brings the wisdom of experts to bear. [8]

In addition to the collaborative search, the Peer–to–Patent system would also virtually convene panels of experts having skill in the art to assist the examiner. There is no doubt that, unlike other volunteer peer review projects in academia or online communities such as Slashdot, a peer review system for patents implicates large fortunes and vicious competition. Such an institution must be designed to harness enlightened self–interest to drive participation.
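One simple way to sketch the “Slashdot karma”–style mechanism Noveck invokes (the names, reputation weights, and threshold below are entirely hypothetical, not Peer–to–Patent's actual design) is reputation–weighted triage of community prior–art submissions, so that only the strongest references reach the examiner:

```python
# Hypothetical reputation-weighted triage of prior-art submissions
# for a Peer-to-Patent-style community review.

# Reputation accumulated by each community member (hypothetical).
reputation = {"alice": 9.0, "bob": 2.0, "carol": 5.0}

# (prior-art reference, members who endorsed it as relevant)
submissions = [
    ("IEEE paper, 1998", ["alice", "carol"]),
    ("product manual, 2001", ["bob"]),
    ("usenet post, 1995", ["bob", "carol"]),
]

def score(voters):
    """Sum the endorsers' reputations; unknown voters count zero."""
    return sum(reputation.get(v, 0.0) for v in voters)

TOP_N = 2  # forward only the strongest references to the examiner
ranked = sorted(submissions, key=lambda s: score(s[1]), reverse=True)
for ref, voters in ranked[:TOP_N]:
    print(ref, score(voters))
```

This addresses the “too much prior art” half of the goldilocks problem: the community does the winnowing, and the examiner sees a short, pre–ranked list rather than an undifferentiated pile.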

B. Patent Reexamination

The impact of a cyberinfrastructure–enabled virtual organization such as Peer–to–Patent need not be limited to the patent examination process. Such an approach could also be useful in evaluating patents after they have issued. The current process for such evaluations, patent reexamination, begins when the USPTO is asked to review an issued patent because someone has raised questions about the validity of the patent in light of previously unconsidered prior art and/or questions regarding the scope of the patent claims (see Merges 1999; Graham, et al., 2002). The patent reexamination procedure was originally put in place as part of the Government Patent Policy Act of 1980, more commonly known as the Bayh–Dole Act, partially in response to the growing number of patents held invalid based upon prior art. The act created an ex parte process of patent reexamination intended to provide a low–cost means of resolving certain questions of validity as an alternative to litigation. As part of the American Inventors Protection Act of 1999 [9], Congress created a separate inter partes reexamination procedure that allowed for greater third–party participation. Either the patent holder or a third party may seek an ex parte reexamination of a patent; inter partes proceedings, however, may be initiated only by third parties. Additionally, the basis for reexamination is statutorily limited to certain types of prior art, in particular previously issued patents and printed publications (Merges, 1999). According to data gathered by the USPTO, patent owners generated approximately 34 percent of the reexamination requests between 2002 and 2006, and third parties generated approximately 65 percent of the requests during the same period (USPTO, 1997). The remaining requests are made by the Commissioner of the USPTO (See Table 1).


Table 1: Patent Reexamination, USPTO.
Source: USPTO (1997).

                                        2002   2003   2004   2005   2006   Average
Ex parte requests filed, total           272    392    441    524    511       428
   By patent holder                      121    136    166    166    129       144    34%
   By third party                        140    239    268    358    382       277    65%
   Commissioner ordered                   11     17      7      0      0         7     2%
Inter partes request determinations        5     20     25     57     47
   Requests granted                        5     18     25     54     43
   Requests denied                         0      2      0      3      4


Within three months of receiving a reexamination request, the USPTO makes a determination of whether a substantial “new question of patentability” exists. If a new question exists, the USPTO orders a reexamination of the patent, but if the USPTO determines that there is no substantial new question, that determination is final and non–appealable. On average, the USPTO grants reexamination to 89 percent of the requests (Stacy, 1997). Once the USPTO grants a request for reexamination, the patent owner has the opportunity to respond to the new questions of patentability raised by the request. If a third party filed the reexamination request and the patent owner files a response, the reexamination rules allow the requester to file a reply. This reply, however, is a third–party requester’s last opportunity to present arguments to the USPTO regarding the challenged patent’s validity. The USPTO conducts the remainder of the reexamination with the patent owner in much the same fashion as an initial examination. At the conclusion of the reexamination, including any appeals, the original patent’s claims will be canceled, affirmed, or amended. Note, however, that only the patent owner may appeal any ruling; third parties have no rights of appeal in ex parte reexaminations.

As shown in Table 2, data compiled by the PTO indicates that the patent owner generally receives a reexamination certificate (Stacy, 1997), which heightens the presumption of validity of the patent in question. Graham, et al. (2002) found similar results. Additionally, the patent owner may be able to take advantage of certain strategic opportunities during the reexamination process, particularly if it becomes aware of either previously unknown prior art or evidence of infringement by a third party. The reexamination process gives the patentee the opportunity to amend its claims and add new claims. Therefore, while the patentee may not broaden the scope of its claims, the patentee may amend the claims to (1) make them patentable in view of the prior art, including the new prior art cited by the challenger, and (2) make them more clearly cover the challenger’s allegedly infringing product (Krebs and Bohner, 2004).


Table 2: Patent Reexamination Outcomes.
Source: Stacy (1997).

                                     Owner       Third Party   Commissioner
                                     Requester   Requester     Initiated      Overall
All claims confirmed                   19%          28%           19%           24%
All claims cancelled                    8%          13%           13%           11%
Certificate issued with
at least one claim change              73%          59%           68%           65%


The same cyberinfrastructure that can be used to support Peer–to–Patent can also be leveraged for patent reexaminations. The Electronic Frontier Foundation’s (EFF) Patent Busting project is an attempt to mobilize an online community to locate prior art that would invalidate certain patents. One example of a target for EFF and its community is Acacia Technologies’ U.S. Patent 5,132,992 for sending and receiving streaming audio and video over the Internet [10], which EFF describes as a “laughably broad” patent that “would cover everything from online distribution of home movies to scanned documents and MP3s.” Similar complaints have been lodged against Blackboard’s U.S. Patent 6,988,138 covering centralized course management systems.

The cyberinfrastructure could be utilized for post–grant opposition proceedings, which currently exist in the European context but not in the U.S. Unlike reexamination, opposition proceedings must be initiated within a specified period of time after a given patent has issued. The utility of a collective memory of a particular technological space to support post–grant oppositions has already had a dramatic impact on the freedom to innovate in at least one industry. Noted patent scholars Bronwyn Hall and Dietmar Harhoff have written about an effort by the German chemical industry to create a private museum of all of the technology that their industry had developed over the last 150 years (Hall and Harhoff, 2006; Harhoff, 2006). Those firms successfully used this collection of prior art in opposition proceedings to effectively clear the landscape of questionable patents and thereby give firms freedom to innovate without fear of being sued for infringement based on patents of questionable validity.

A major policy challenge for any development of cyberinfrastructure–enabled virtual organizations to support examination, reexamination, or post–grant opposition, is the reality that many participants may be reluctant to contribute if their participation can be used as evidence in subsequent infringement litigation. Damages for willful infringement are substantially higher, and at least one scholar has suggested that “rational ignorance” may be a strategically optimal approach in the current environment (Lemley, 2001).



V: Conclusion

While the problem of patent thickets presents a significant impediment to the effective development of cyberinfrastructure, participants in those efforts are not without options. As this article has shown, elements of the cyberinfrastructure such as virtual organizations can be mobilized to alleviate some of the problems posed by patent thickets. The Peer–to–Patent system and the reexamination process can be used to exclude or eliminate patents of questionable validity. When patent thickets are composed of validly issued patents that meet the statutory requirements of novelty, utility, and non–obviousness, however, the options discussed in this article are legal rather than technical in nature. The choice between patent pooling and cross–licensing is often a function of the number of firms that have ownership claims to a particular section of patent space; the more firms that are involved, the more likely the choice is to move toward patent pooling or MAD. Although antitrust agencies have been extremely hostile toward traditional patent pools in the past, modern pools based on technology standards have fared well. In addition, the defensive patent pool model promises to open up areas for innovation without the same antitrust concerns that hover over traditional patent pools. In either case, those involved in the development of cyberinfrastructure need to be cognizant of the patent landscape and of the options available should they encounter a patent thicket, and cyberinfrastructure development may in fact facilitate additional pool formation if appropriate patent analysis and data management capabilities emerge.


About the author

Gavin Clarkson is an assistant professor in the School of Information at the University of Michigan. He also has simultaneous appointments at the Law School and in Native American Studies.
E–mail: gsmc [at] umich [dot] edu



1. Shapiro, 2000, p. 120.

2. U.S. FTC, 2003, chapter 2, p. 3.

3. Jaffe and Lerner, 2004, p. 21; Noveck, 2006, p. 20.

4. Structured patent data, often referred to as “front page” data, includes fields such as inventor, patent citations, technology class, and abstract. Unstructured patent data includes images and full–text data such as patent claims.

5. Shapiro, 2000, p. 121.

6. Kesan, 2002, p. 767.

7. Estimate based on annual worldwide sales of PCs, DVD players, DVDs, MP3 players, and Digital Video Cameras, each of which contains one or more technologies from a patent pool.

8. Noveck, 2006, p. 6.

9. Public Law 106–113 (29 November 1999). That Act was subsequently amended by the Intellectual Property and High Technology Technical Amendments Act of 2002, Public Law 107–273 (2 November 2002).




J. Clark, J. Piccolo, B. Stanton, and K. Tyson, 2000. “Patent Pools: A Solution to the Problem of Access in Biotechnology Patents?” USPTO White Paper.

G. Clarkson, 2005a. “Blunt Machetes for the Patent Thicket: 100 Years of Patent Pools and Antitrust Enforcement,” University of Michigan Working Paper.

G. Clarkson, 2005b. “Patent Informatics for Patent Thicket Detection: A Network Analytic Approach for Measuring the Density of Patent Space,” paper presented at the Academy of Management, Honolulu.

G. Clarkson, 2004. “Objective Identification of Patent Thickets: A Network Analytic Approach,” Harvard Business School Doctoral Thesis.

R. Foltz and T. Penn, 1990. Understanding patents and other protection for intellectual property. Cleveland, Ohio: Penn Institute.

S. Graham, B. Hall, D. Harhoff, and D. Mowery, 2002. “Post–Issue Patent ‘Quality Control’: A Comparative Study of U.S. Patent Re–examinations and European Patent Oppositions,” University of California, Berkeley, Department of Economics, Working Paper Series, number 1046.

P. Grindley and D. Teece, 1997. “Managing intellectual capital: Licensing and cross–licensing in semiconductors and electronics,” California Management Review, volume 39, number 2, pp. 8–41.

C. Hall and D. Harhoff, 2006. “Intellectual Property Strategy in the Global Cosmetics Industry,” University of California at Berkeley Working Paper.

D. Harhoff, 2006. “The Battle for Patent Rights,” In: C. Peeters and B. v. Pottelsberghe de la Potterie (editors). Economic and management perspectives on intellectual property rights. New York: Palgrave Macmillan.

D. Hopkins, 2003. “The Seven Steps: Basic Novelty Patent Searching,” Science & Technology Libraries, volume 22, numbers 1/2, pp. 23–38.

L. Horn, 2003. “Alternative approaches to IP management: One–stop technology platform licensing,” Journal Of Commercial Biotechnology, volume 9, number 2, pp. 119–127.

A. Jaffe and J. Lerner, 2004. Innovation and its discontents: How our broken patent system is endangering innovation and progress, and what to do about it. Princeton, N.J.: Princeton University Press.

J. Kesan, 2002. “Carrots and Sticks to Create a Better Patent System,” Berkeley Technology Law Journal, volume 17, p. 763.

W. Konold, B. Tittel, D. Frei, and D. Stallard, 1989. What every engineer should know about patents. Second edition. New York: Dekker.

R. Krebs and H. Bohner, 2004. “Pre–litigation Strategies: Patent Reexamination,” Intellectual Property and Trade Regulation Journal, volume 4, number 1.

M. Lemley, 2001. “Rational Ignorance at the Patent Office,” Northwestern University Law Review, volume 95, number 4, p. 1495.

R. Merges, 1999. “As Many as Six Impossible Patents Before Breakfast: Property Rights for Business Concepts and Patent System Reform,” Berkeley Technology Law Journal, volume 14, p. 577.

J. Newberg, 2000. “Antitrust, Patent Pools, and the Management of Uncertainty,” Atlantic Law Journal, volume 3, p. 1.

B. Noveck, 2006. “‘Peer to Patent’: Collective Intelligence, Open Review and Patent Reform,” Harvard Journal of Law and Technology, volume 20, number 1.

National Science Foundation (NSF), 2007. NSF’s Cyberinfrastructure Vision for 21st Century Discovery. Washington, D.C.: National Science Foundation.

C. Shapiro, 2000. “Navigating the Patent Thicket: Cross Licenses, Patent Pools, and Standard–Setting,” Innovation Policy and the Economy, volume 1, pp. 119–150.

S. Spangler, J. Kreulen, and J. Lessler, 2002. “MindMap: Utilizing Multiple Taxonomies and Visualization to Understand a Document Collection,” Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS’02), volume 4, p. 102.

W. Stacy, 1997. “Reexamination Reality: How Courts Should Approach a Motion to Stay Litigation Pending the Outcome of Reexamination,” George Washington Law Review, volume 66, p. 172.

D. Teece, 2000. Managing Intellectual Capital: Organizational, Strategic, and Policy Dimensions. New York: Oxford University Press.

D. Teece, 1998. “Capturing value from knowledge assets: The new economy, markets for know–how, and intangible assets,” California Management Review, volume 40, number 3, pp. 55–79.

U.S. Patent and Trademark Office (USPTO), 2007. Performance and Accountability Report: Fiscal Year 2006.

W. Viscusi, J. Vernon, and J. Harrington, Jr., 2000. Economics of regulation and antitrust. Third edition. Cambridge, Mass.: MIT Press.

T. Wherry, 1995. Patent searching for librarians and inventors. Chicago: American Library Association.




Copyright ©2007, First Monday.

Copyright ©2007, Gavin Clarkson.

Cyberinfrastructure and Patent Thickets: Challenges and Responses by Gavin Clarkson
First Monday, volume 12, number 6 (June 2007).