First Monday

Cyberinfrastructure, Institutions, and Sustainability by Christopher J. Mackie

Cyberinfrastructure (CI) projects offer great opportunities for U.S. higher education, but also pose significant, long–term sustainability challenges. This paper suggests four general strategies for overcoming those challenges, and poses a range of questions that CI proponents should consider, in the interest of generating CI that can support global academic leadership while remaining sustainable even after NSF funding ends.


The Price of Success
Thinking about Institutional Sustainability




The NSF vision for next–generation cyberinfrastructure (CI) is brimming with possibilities and opportunities for U.S. higher education. It will be both exciting and challenging to push back the frontiers of high–performance computing, to develop more flexible, powerful, and productive virtual organizations, and to integrate scholars more easily and productively into purposive collaborative communities for research and education. The number of projects that could benefit from CI is large, and the number of virtual organizations that could be created around CI projects is potentially limited only by the time of the participants. Over the next five to ten years, we could easily see dozens of CI projects of various sizes coming to fruition.

I foresee only one potential problem. What if they all work?

I don’t mean to be flippant; in fact, the possibility of the arrival of dozens of large–scale CI projects onto college campuses over the next decade has quite serious implications for institutional planning. Nor do I intend what follows as opposition to, or even criticism of, NSF’s CI initiative; to the contrary, my organization is deeply and happily engaged with the task of supporting the arrival of CI on college and university campuses in the most efficient, expeditious fashion possible. We have enjoyed cooperating so far, and will continue to cooperate, with NSF in the development of higher education’s capacity to support the CI projects being considered now. We expect a large percentage of the CI projects currently under consideration to produce technology that will be of substantial value to researchers and educators.

But I am mindful of John Madden’s observation, upon watching a football player being pummeled by his enthusiastic teammates after a touchdown, [1] that “in sports, don’t do anything great unless you’re prepared to survive the celebration.” That seems like equally good advice for CI planning. This paper will discuss some of the ways that successful CI might pummel higher education institutions, and suggest some questions that everyone engaged with the production of CI could usefully be asking in order to ensure that we all survive the eventual celebration and are able to sustain the resulting infrastructure.

I am Associate Program Officer for the Program in Research in Information Technology (RIT) of The Andrew W. Mellon Foundation. RIT is a funder of open–source software development projects targeted to the not–for–profit community; over the past six years, under the leadership of Mellon Vice–President and Program Officer Ira Fuchs, the program has achieved a substantial track record [2] in creating sustainable, distributed, collaborative development projects for higher education, research libraries, and other domains. The funded projects range from stand–alone educational technology applications, to IT interface specifications tailored to the needs of the academic mission, to frameworks that support research, teaching, and administration. This experience has acquainted us with several factors that can make IT infrastructure projects sustainable or unsustainable. Lately, we have begun to focus on the challenge of providing a middleware layer to support cyberinfrastructure in various aspects of the academic mission, another project in which sustainability is a key concern. I draw on this diverse experience to shape the balance of my discussion; however, the opinions expressed here are my own, and do not necessarily reflect those of my colleagues or the Mellon Foundation.



The Price of Success

Imagine a future in which several dozen CI projects reach the end of their NSF funding cycles at approximately the same moment; in institutional terms, a ‘moment’ might be a year or two. These projects have pushed back frontiers in their respective subject areas; they now encapsulate large quantities of valuable data and other intellectual property (works in progress, tools developed along the way, and so on). Their participants have used the productivity engendered by the CI to advance within their disciplines, and have structured their ongoing work around it. Some have achieved prominence in their fields based on the work performed within the CI. Others are engaged in continuing intellectual combat for which the CI is both weapon and armor. Still others have specialized in some aspect of the CI itself, becoming more engaged with the technological underpinnings of scholarly production than with the product.

Now the funding runs out. The event is hardly unexpected — but what will be the result? We can safely assume that the faculty (and students) invested in the CI will clamor for its continuation. NSF may or may not agree to continue funding for some, but it seems highly likely that many or most projects will need to shift for themselves. Some research–centric CI projects in particularly lucrative fields will find revenue streams in public–private partnerships, but only a few will find enough revenue to become entirely self–sustaining. The rest will be competing for a relatively small pool of external funding, or will be approaching the institutions that house them for ongoing support. External funders, in turn, will most likely require institutional matches or contributions as a condition for funding. One way or another, it seems likely that most CI projects will end up knocking on the doors of their institution’s central administration for continued sustenance.

I assume, for the sake of this discussion, that in addition to their productivity during their funding cycle, most or all of these CI projects could continue to provide a valuable intellectual return if they continue operation after NSF funding ends. I further assume that external funding resources will be sufficient to keep only a small portion of them afloat after NSF funds run out; in fact, the more successful the NSF’s CI initiative, the greater the number of projects that will need post–NSF funding, and the smaller the proportion of the need that will be covered externally. Consequently, I expect the majority of projects to approach institutions for support, by asking the institution to take over the operation of the CI project and absorb it into the institution’s other IT infrastructure. Therefore, all of the questions I will ask in the balance of this paper reduce in the end to a single question:

What can we do to ensure that CI projects are assimilable as easily and inexpensively
as possible into institutional IT infrastructure once their NSF support is concluded?

A few basic facts about IT operations in higher education may help to suggest the scope of the challenge. While higher education institutions are more likely to have IT developers on staff than is any other type of not–for–profit organization, the percentage of higher education institutions with development capabilities is dwindling rapidly. Even today, the number of institutions that could perform the programming required to absorb an externally designed and funded CI project into infrastructure is small; five to ten years from now, it will be smaller still. Also, higher education IT operations run on a comparative shoestring, even at the wealthiest institutions. Large portions of the budget are consumed by services that must be funded, such as network and e–mail access, or by core administrative systems that are effectively untouchable because the institution needs them in order to continue to run. Sizeable sums go to helpdesk and support staffs that help users navigate the institution’s often wildly heterogeneous IT environment. These staffs can be re–purposed or laid off, but not without outcry from the user community and some loss in institutional productivity. Moreover, the existing IT infrastructure on most campuses is overwhelmingly administrative, not academic, so IT operations would need to absorb not just incremental new technologies, but whole new domains of responsibility for which they lack trained staff.

Some staff might come with the project, but most staff would not be suitable: IT operations require different skills and practices than does IT research and development. Finding new staff is difficult, especially if the new CI requires new skills that are in demand in the private sector: higher education institutions cannot generally match for–profit IT salaries. For institutions in rural areas and those in labor markets that are weak in technical skills, qualified staff may not be available at any price. Keep in mind, too, that CI generally runs — and in high–performance contexts, it must run — 24 hours/day; consequently, it may require one or more staff per shift, three or more staff per day.

Assuming qualified staff can be hired, the question arises of how to deploy them. The CI projects serve academic departments, but IT is a central–administrative service: should the new CI support staff be seconded to the departments and continue supporting the CI as–is, or should the institution attempt to centralize provision of CI, eliminate redundancies across CI projects, and achieve some economies of scale? Each strategy has costs as well as benefits; each utilizes certain resources effectively and under–utilizes others.

In short, not only will institutions have to absorb CI projects on a one–time, transitional basis, but also they will need to keep sustaining that infrastructure for the indefinite future. Both efforts would require substantial infusions of human and capital resources. If enough CI projects arrive close enough together seeking funding, even the largest, wealthiest institution might not be able to absorb them. Smaller, poorer institutions might not be able to accommodate even one or two such projects.

It is difficult to predict the consequences. One possibility is Sophie’s choice, where institutions decide to support CI selectively. If CI has the kind of positive impact on academic productivity that we all hope it will, this could be tantamount to appointing one portion of the faculty as first–class citizens and the rest as permanent second–class citizens within their respective disciplines. Similarly, legislators could face uncomfortable decisions between concentrating CI in a few institutions or diffusing it very thinly across the state’s public higher education system. The effects on disciplinary sociology and campus politics could be powerfully corrosive. In the very worst case — one I do not expect, but would still prefer to plan against and thereby forestall — the CI initiatives that were intended to improve the productivity of U.S. higher education and form the basis for continued global leadership would instead contribute to a stratification and consequent paroxysmal upheaval in U.S. higher education that would have exactly the opposite effect.

It is possible to do a great deal to prevent outcomes such as this, but in order to do so it is important that everyone involved with the creation of CI projects — funders, grantees, and advisers — ask some crucial questions about the relationship between CI projects and the institutions that will eventually need to sustain them.



Thinking about Institutional Sustainability

What can we do to ensure that CI projects are assimilable as easily and inexpensively
as possible into institutional IT infrastructure once their NSF support is concluded?

There are at least four general strategies for reducing the eventual costs of absorption of CI projects into institutional infrastructure:

  1. Bring the right people to the table

  2. Seek common foundations

  3. Embrace openness and standards

  4. Merge projects prospectively

All are complementary, but all are also substitutable to some degree. For instance, if one cannot get all of the right people to the table at the beginning, one can use common foundations to bring them in by proxy; if one cannot achieve common foundations, then achieving common, open standards is a useful fallback position; and even if projects cannot agree on standards ex ante, it may still be possible to merge them later, based on individual, ad hoc but compatible, technology choices they have made.

Bring the Right People to the Table

As I review the results of the past six years of Mellon/RIT activities, and the histories of our distributed, collaborative, open–source software development projects, the primary lesson I draw is what I call the “Field of Dreams Fallacy”, best summarized as “If you build it, they will not come.” Many sustainability failures can be traced to failures to bring the right stakeholders into the process early enough, or at all. Others can be traced to a failure to bring diverse, representative types of stakeholders into the process. Both types of omission introduce economic and psychological costs and barriers into the project that can be exceedingly difficult to overcome.

Notwithstanding the rhetoric of common cause that one hears frequently in open–source software circles, persuading one institution to adopt open–source software written by another institution is often surprisingly difficult. Inevitably, there are idiosyncrasies in the software which, while individually small, can cumulate into a set of obstacles demanding enough effort to overcome — especially for software of the complexity of CI software, or of most infrastructural software — that it is likely to seem easier and cheaper to build the software oneself, or buy it, than to adopt software written elsewhere. Moreover, social–psychological factors also obtrude: “not invented here” syndrome, ego/creativity drives, and other incentives push against adoption and in favor of re–inventing the wheel. These psychological forces can be particularly powerful if the individual in question feels snubbed or unrepresented by the institution or group that developed the software, and stronger still if the individual has a competitive relationship with the institution(s) perceived to “own” the project.

Mellon–funded projects counteract these pressures by means of collaboration. When multiple institutions are involved in the development process from the beginning, they are forced to address institutional idiosyncrasies among themselves before releasing the product to the rest of the world. The code will still retain whatever quirks those institutions share in common, so diversity of institutional partners is highly desirable: the more diverse the participants, the less the overlap in quirks or other institution-specific characteristics, and the more seamless the adoption by other institutions will be. Diversity of partners also increases the representativeness of the partnership, which pays psychological dividends as well: if an institution sees a peer institution among the core developers, it is more likely to perceive the project as built by “us” rather than “them.” Finally, collaborative development reduces the perception that any single institution owns the project, and thereby reduces barriers to adoption based in institutional egotism or competitiveness.

If we built it, we’re already there.

Q 1. How can CI projects bring together the stakeholders most likely to reduce resistance (because of their representativeness and diversity) to later adoption of the project by other institutions?

I do not pretend to an exhaustive familiarity with NSF CI projects; I have seen or been told the details of only a few. However, in that admittedly limited sample, I have noticed the omission of two sets of stakeholders, both of which I believe are critical to the long-term sustainability of CI projects, and both of which should therefore be involved in the projects from their inception. I am referring to institutional leaders, especially CIOs but also Provosts, and commercial vendors.

The most effective way to ensure that a CI project can be absorbed into institutional infrastructure is to have the individual who will be responsible for that absorption involved in the project from the beginning. Nobody has more powerful incentives to make sure that the project is developed in ways that will ensure its eventual low–cost absorption into infrastructure and sustainability than the person who will eventually be held responsible for that transition. The logical candidate for such representation is usually the CIO or his or her technical delegate, although at campuses with strong Provosts and weak IT organizations, the Provost’s chief technical adviser might be a better choice. Note that representation from the Computer Science faculty is not an adequate substitute for representation from the IT organization. CS and IT are two nations divided by a common programming language: they have different goals, values, dialects, religious beliefs, and cultures. CS needs to be at the table to represent the innovative aspects of IT that will drive back the frontiers via CI; IT needs to be there to instill the virtues of reliability, maintainability, and cost effectiveness. Involving them both in the project may not be pleasant, because the two groups will probably interact within the project much the way they do outside it: their relationship on most campuses is one of persistent, low–level conflict. Persuading them to find common ground will not be easy, but I can think of nothing more likely to enhance the sustainability of a software development project than the establishment of a modus vivendi between those two poles of the information technology value–compass.

Q 2. Does the CI project include a powerful institutional stakeholder from the IT operation or the Provost’s Office, as well as from Computer Science?

The case for including commercial vendors is almost entirely symmetrical with the case for including CIOs. Recall that most institutions do not have in–house software development capabilities; they rely on vendors for their IT services. Consequently, any given CI project is going to need vendor participation to reach most of its potential institutional audience. Vendors also serve, like CIOs, to address issues of cost–efficiency and sustainability. They will have to charge for support services, and to bear the cost of providing them; one can rely on them to spot software that costs more to maintain than they think they can charge, and to work within the project to eliminate such code. Vendors committed to an open–source, services–based business model are particularly valuable, because they can contribute substantial amounts of code back to the project, particularly in areas (such as “productizing” the project) where other team members may not have strong incentives to participate. On the other hand, vendors bring to the project one potential cost that CIOs do not; namely, their profit motives can create conflicts of interest.

RIT’s projects make aggressive and effective use of vendors — most, but not all, of them open–source, services–based vendors — as part of the sustainability model. At present, there are more than a dozen vendors active on Mellon–supported projects; most sell services around more than one project. Those vendors range from IBM™, Sun™, and Oracle™ to vertical–market specialists like rSmart™ and Unicon™. All of them have found customers in the community and appear to be making money. Several have served in governance capacities as well; for instance, a representative of rSmart currently serves on the Board of Directors of the Sakai™ project, and the Kuali™ project also holds a position on its Board for a vendor representative. These vendor involvements have proved beneficial for the projects so far; nevertheless, I would recommend that CI projects pay careful attention to conflict of interest issues when designing representation and governance policies vis–à–vis vendors. I suspect you will find that good vendors are as eager as you are to avoid even the appearance of any conflict of interest, lest it taint their relationships with the community and their customers.

Finally, it should be noted that most of the post–funding sustainability models for Mellon projects involve software foundations that offer memberships and commercial partnerships in order to generate revenue to sustain the project. Because the projects are open–source, membership is voluntary, and only a fraction of institutional users of the products become members. In this model, the commercial partners are a key revenue source. They pay fees to be recognized as “approved vendors,” and their fees constitute a considerable portion of all project revenues — enough, in some cases, to support the project entirely. CI projects that are looking ahead to sustainability might think hard about that example — and consider that revenues from vendors could actually begin even before the initial NSF funding ends.

Q 3. What can CI projects do to attract and involve vendors even in the early stages of project development? How can they utilize those vendors effectively to smooth the post-funding transition?

Seek Common Foundations

Projects that share the same underlying architecture, design, and supporting infrastructure (e.g., middleware) are easier and cheaper to absorb than projects that make incompatible assumptions at every point. The more commonalities they share, the less expensive is the absorption.

Architecture is the most abstract commonality that two projects can share, and its effect on absorption is the least determinate. Depending on the architecture selected, and on how it is implemented, commonality may translate into substantial savings or no savings at all. Of the major architectural approaches available today, the one that promises the greatest savings is service–oriented architecture, or SOA. SOA offers many ways that institutions could realize savings in the implementation of infrastructure, from the reduction of redundancy to the easy adaptability and customization of tools built using an SOA perspective. In part because of these features, and in response to increasing institutional demand for SOA–based infrastructure, the projects RIT plans to fund next are leaning strongly toward SOA approaches. However, despite some significant traction in the for–profit arena, SOA in higher education is as yet only a promise. This will make SOA a difficult architectural choice to rally CI projects around — but the alternatives, as they say, are worse.

Q 4. Could CI projects converge on a common architectural approach or standard? Would it be possible to converge on SOA?

Design and software engineering are the next levels of commonality. Software–developer communities each have distinctive dialects and idioms; many also have different practices relating to coding standards, documentation, commit and integration practices, and higher–level issues such as development methodology. Two projects built using the same methodologies will be easier to manage together than two projects using drastically different methodologies (although a single methodology can conceal as many variations as does a single religious denomination, so the actual gains may be attenuated). Of the various methodologies popular at present, “agile programming” has a large following and is arguably the best–suited of the commercial methodologies for a research programming environment. This matters because the operational programmers in the IT department are far more likely to accept a commercially validated methodology, given its emphasis on the reliability and maintainability of the finished product. Agile programming also has several other virtues that may be of significant immediate or long–term value in the distributed, collaborative development world of CI projects: earlier deliverables; systematic use of unit tests; continuous builds; and so on.
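As a minimal illustration of the unit–testing discipline mentioned above, consider the following sketch. The function and its values are entirely hypothetical; the point is only the practice of exercising boundary cases on every build:

```python
# Minimal sketch of the unit-testing practice common to agile methodologies.
# The workflow-step function below is hypothetical, purely for illustration.

def normalize_job_priority(priority):
    """Clamp a scheduler priority to the range the CI queue accepts (0-9)."""
    return max(0, min(9, int(priority)))

# Unit tests exercise each boundary case; in an agile shop these run on
# every commit as part of the continuous build.
def test_normalize_job_priority():
    assert normalize_job_priority(5) == 5      # in-range value passes through
    assert normalize_job_priority(-3) == 0     # values below range clamp to 0
    assert normalize_job_priority(42) == 9     # values above range clamp to 9

test_normalize_job_priority()
print("all tests passed")
```

A suite like this, run automatically on every commit, is precisely what gives operational IT staff confidence that research code can be maintained after handoff.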

Q 5. Could CI projects embrace agile programming or some other software engineering methodology systematically, to at least some degree? Are there particular aspects of agile programming or any other methodology whose systematic adoption by CI projects would substantially improve post–NSF–funding sustainability?

The deepest, most efficient level of common foundation is the use of common platforms and components. When two projects share components, they are sharing code, eliminating redundancy altogether over the domain of that common code. When the shared code is the platform code, as in a shared Web application server, shared operating system, or shared enterprise service bus, [3] the savings can be even greater, because the shared platform constrains many other design and implementation decisions, leading to convergence even in portions of the two projects that do not share code.
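To make the idea of shared platform code concrete, here is a hedged sketch, with a simple in-process publish/subscribe class standing in for a real shared platform component such as an enterprise service bus. The class and topic names are invented for illustration:

```python
# Hypothetical sketch: two CI projects sharing one platform component
# (an in-process message-bus stand-in) instead of each writing its own.
from collections import defaultdict

class SharedBus:
    """A minimal publish/subscribe bus standing in for a shared platform."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

# Both projects build on the same bus, so neither maintains its own
# transport code, and their messages are mutually intelligible.
bus = SharedBus()
received = []
bus.subscribe("simulation.done", received.append)   # project A listens
bus.publish("simulation.done", {"run_id": 17})      # project B announces
print(received)  # → [{'run_id': 17}]
```

Because both projects depend on the one component, every design decision the component constrains is made once, for both, rather than twice and differently.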

As part of our exploration of SOA solutions for higher education, Mellon is engaged in discussions with a number of institutional consortia around the development of an entire suite of shared components — a SOA middleware layer that we are calling “scholarly middleware.” The components under discussion include an enterprise service bus, workflow and business rules engines, and a user–interface layer that would permit institutions to protect institutional values such as usability, accessibility, branding, and compliance with laws and policies, while still devolving a great deal of autonomy out to departments and end–users for the customization of the actual user interface. The institutions involved face many of the same incentives and tradeoffs as will the CI projects, and they have concluded that joint development of these shared tools is the most cost–effective way forward. It will be at least two to three years before most of these tools are available in a mature release, although working alpha and beta versions will be available much sooner. CI projects that choose to use, and possibly even to contribute to, these tools will be able to reduce the amount of time and effort they spend on ancillary IT support and therefore to focus more of their resources on the substantive challenges they are tasked to meet.

Of those tools, the one with the most profound impact on sustainability would be the enterprise service bus, because two or more SOA applications built on the same bus are highly likely to be easy to absorb, while the cost of moving an application to a different bus (absent some special considerations regarding standards, which I will raise in the next section) is relatively high.

Q 6. Could CI projects embrace the use of a common platform, or at least common middleware such as an enterprise service bus?

Use of the same workflow engine or user interface layer pays dividends similar to those of a common bus, and for similar reasons, but the scope of the long–term technical dividends is diminished because workflow and the user interface do not impose as many constraints on the application as a whole. On the other hand, the scope of the near and long–term productivity dividends may be even greater, because unlike the enterprise service bus, the workflow and user interface components come into direct contact with the researchers, educators, and students who use the CI. By standardizing on common components, people would be able to learn how to use a given component on one project and apply that learning across several. The net gain in productivity and efficiency would not accrue to any single project, but would be shared by all — and if CI becomes widespread, the productivity gains could be substantial.

Q 7. Could CI projects embrace common components, such as widely shared workflow engines or user–interface engines?

One point worth making about convergence around any of these components is that, if the components in question are open–source, then the convergence has secondary benefits as well. The use of one of these components by multiple projects will multiply the number of eyes turned toward the component’s performance, and will tend to accelerate enhancement of the component with new features, better performance, and/or greater reliability, depending on the needs of the projects that use it. Thus, the convergence of multiple projects makes each project better off than it would be if it had followed its individual interest narrowly.

Embrace Openness and Standards

As much as the selection of common foundations would enhance short–term productivity and long–term sustainability in CI projects, it is a difficult challenge to meet. The most appealing technologies around which to standardize are only now coming into focus, and will not be ready until the present generation of CI projects is well underway. Moreover, asking projects to standardize on, say, a programming methodology imposes costs that many project teams will themselves be poorly prepared to absorb.

Fortunately, there is a fallback position. Some portion of the savings that would have been realized by standardization on common foundations can be preserved by the careful adoption of open standards for interoperability and standardized interfaces. These two approaches do not permit the deep fusion of infrastructure that common foundations permit, but they do permit two CI projects to interact and interface at multiple points with minimal effort, reducing costs and increasing the chances that productive synergies will arise in the interfaces between the two.

Interfaces and standards are deeply technical and detail–oriented issues. The potential candidates are so numerous that I cannot review them all and, without a specific project example at hand, I do not have an obvious subset to highlight. Instead, I will note that the use of interfaces and open standards pays dividends in many other ways beyond the sustainability issue that is my primary focus here.

For instance, RIT is currently using interface definitions and open standards as a way to manage the chicken–and–egg problems associated with the development of the scholarly middleware tools mentioned in the previous section. It will be some years before those tools are available, and each tool requires a large number of technical decisions, any or all of which may prove in retrospect to be incorrect or counter–productive. The first project that RIT funded was a set of interface definitions, the Open Knowledge Initiative (OKI), designed to support the academic mission. Those interfaces, as well as newer technologies that complement and in some cases supersede them, are at the core of the planning process for the new tools. By using interfaces at every juncture, the institutions planning the middleware tools hope to protect themselves against both erroneous decisions and the external march of technology. Every component — enterprise service bus, workflow engine, user interface — will be connected to every other via a standardized interface definition. Should one of the components need to be replaced, either due to problems or because another, better component has become available, it can be “plugged” into place behind the interface definition, with minimal disruption to the larger project.

Moreover, the interfaces are already defined, so that the tools can be built separately in confidence that they will work together once they are all complete and connected. This allows the tools projects to proceed separately — and allows still other projects to build as if the tools were already available. Use of interface standards such as these would permit CI projects to start building today and still be ready to use the upcoming middleware tools when they arrive in a year or two. Equally important, they would let any two CI projects share components of their own; for instance, a particularly good workflow engine, or a set of services that multiple projects need but only one needs to build.
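The “plug behind an interface” idea can be sketched in a few lines. This is an illustrative toy, not any of the actual interface definitions described above; the interface name and both engines are invented:

```python
# Hedged sketch: components written to an agreed interface definition can be
# swapped with minimal disruption to the callers that depend on them.
from abc import ABC, abstractmethod

class WorkflowEngine(ABC):
    """The agreed interface: every engine, present or future, honors it."""
    @abstractmethod
    def run(self, steps):
        ...

class SimpleEngine(WorkflowEngine):
    def run(self, steps):
        return [f"ran {s}" for s in steps]

class ParallelEngine(WorkflowEngine):  # a later, "better" replacement
    def run(self, steps):
        return [f"ran {s} (parallel)" for s in sorted(steps)]

def execute_pipeline(engine: WorkflowEngine, steps):
    # Callers depend only on the interface, never on a concrete engine,
    # so replacing the engine requires no change here.
    return engine.run(steps)

print(execute_pipeline(SimpleEngine(), ["fetch", "analyze"]))
print(execute_pipeline(ParallelEngine(), ["fetch", "analyze"]))
```

Nothing in `execute_pipeline` changes when the engine is replaced, which is exactly the protection against erroneous decisions and the march of technology that the interface-first strategy aims to provide.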

One example of an interoperability standard is BPEL (“Business Process Execution Language”), a language for describing the kind of work performed by workflow engines [4]. Few if any engines use BPEL internally, but many use it to import and export workflows to and from other engines, so that two or more differently designed engines — each speaking BPEL but using a different internal language — can coordinate workflows and operate together. In practical terms, even if it is not feasible to require two or more CI projects to use the same workflow engine, it might be practical (because less constraining) to require that any workflow engine they use “speak BPEL.” Such agreement would be particularly valuable around workflow, because there are at least 40 research–centric workflow engines available in the open–source world. Without some agreement on interoperability, the resulting Babel of workflow languages might seriously impair the ability of CI projects to work together, let alone integrate with the rest of an institution’s infrastructure post–funding.
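The kind of interoperability BPEL provides can be shown with a toy sketch. A real BPEL document is XML and vastly richer than this; here a plain tuple of task names stands in for the interchange format, and the two engines are invented for illustration. The point is that each engine keeps its own internal representation yet can hand workflows to the other through the shared format:

```python
# Toy interchange format: a tuple of task names, standing in for a BPEL document.

class EngineA:
    """Hypothetical engine that stores workflows internally as a
    semicolon-delimited string."""

    def __init__(self):
        self.internal = ""

    def load(self, steps):
        # Import from the shared format into the internal language.
        self.internal = ";".join(steps)

    def export(self):
        # Export from the internal language back to the shared format.
        return tuple(self.internal.split(";"))


class EngineB:
    """Hypothetical engine that stores workflows internally as a
    mapping of step number to task name."""

    def __init__(self):
        self.internal = {}

    def load(self, steps):
        self.internal = {i: step for i, step in enumerate(steps)}

    def export(self):
        return tuple(self.internal[i] for i in sorted(self.internal))


# A workflow authored in one engine moves losslessly to the other via the
# shared format, despite their entirely different internal languages.
a = EngineA()
a.load(("submit", "review", "publish"))

b = EngineB()
b.load(a.export())
```

Neither engine needs to know anything about the other’s internals — exactly the property that would let CI projects keep their preferred engines while still requiring only that each “speak BPEL.”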

Q 8. Could CI projects agree on interoperability and interface standards in any aspects of their designs?

Merge Projects Prospectively

Suppose that none of the previous strategies is feasible: no comprehensive congregation of stakeholders, no common foundations, and no standards. There may still be opportunities to achieve efficiencies that arise from the coincidental alignments of independent choices by CI project teams. If two or more teams make very similar foundational technological choices, the benefits of those serendipitous alignments could be realized if the projects were encouraged to merge their foundations into a common infrastructure. Merging reduces the absolute number of CI projects that institutions will have to deal with post–funding, which reduces the magnitude of the challenges that institutions will face. It need not mean that one project loses its identity: the shared underpinnings are what matter most to institutions in terms of absorption and sustainability, so the two projects could keep independent and self–governed research or educational application layers, sharing only the deeper infrastructure that is never used directly by participants.

Generalized slightly, to the point where four or five diverse projects are brought together to merge their technological underpinnings into a single infrastructure to which they each connect by means of open standards and interfaces, this process becomes more–or–less the model that RIT uses to fund our own infrastructure projects. We can attest to its effectiveness, and if NSF wanted to grow a shared infrastructure out of its CI initiatives but did not care to use the components being planned by the institutions currently in discussion with RIT, this might be an effective way to accomplish that goal. Even if the development of full–fledged infrastructural products is not the goal, however, NSF or other funders could do a great deal to prepare CI projects prospectively for institutional adoption by encouraging mergers of this sort to reduce the quantity, redundancy, and incompatibility of infrastructure that those institutions must someday face and absorb.

Q 9. As CI projects mature, can we identify projects whose technology choices are similar enough that we could abstract out those choices into a common infrastructure? Could that common infrastructure then be generalized and used as a foundation for still more projects?




Helping CI projects to make the transition from project to institutional funding will not be easy. If the thinking and planning for the transition are left until the end of the projects, the chances of success will drop substantially. The right time to think about the transition to post–NSF funding is now, as the projects are being conceptualized and funded. I cannot say which combination of the strategies outlined above would produce the best results — though none of them is mutually exclusive of, or harmful to, the others. Nor can I say which of these strategies would be easiest to implement around the NSF CI initiatives. The answer may be more contextual than universal, so that different strategies fit some projects better than others.

To return to John Madden’s observation about “surviving the celebration”: if the post–funding transition to institutional support is not planned for adequately, institutions attempting to adopt CI might be in for quite a post–touchdown pummeling. The sheer number of CI projects will place serious financial pressure on institutions — and the less compatible the various CI projects are, the faster those costs will grow. The end result will be more institutional pain around CI than is necessary, and less institutional uptake of CI than is desirable. It is in all our interests to start thinking now about how we can transform that vision into one in which CI, as it matures, moves seamlessly into the fabric of institutional IT infrastructure, whence it can continue its productivity–enhancing mission in a cost–effective and sustainable fashion.


About the author

Christopher J. Mackie is Associate Program Officer for the Program in Research in Information Technology (RIT) of The Andrew W. Mellon Foundation.
E–mail: cjm [at] mellon [dot] org



1. In the film The Replacements.

2. There are nearly two dozen of these projects, funded directly through RIT or in consultation with other Mellon programs and other funders. They range in total capitalization from US$1 million up to US$40 million and beyond; for the larger projects, RIT’s funding is only a small fraction of that total. In only one case did we fund a project for more than three years. All of the projects are still operating.

3. An enterprise service bus is a foundational technology — loosely equivalent to an operating system — for service–oriented architecture (SOA). It is basically a messaging system, although commercial vendors usually add security and access controls, maintenance and monitoring tools, development and testing tools, and a bundle of other ancillary tools that are useful in keeping an SOA–based infrastructure running smoothly and reliably.

4. “Workflow” is the term of art used in software development for a sequence of tasks that must be executed in series or in parallel in order to complete a larger task. An example of a workflow is the journal publication process, where the sub–tasks include steps like submitting the article, selecting reviewers, distributing the article to the reviewers, aggregating the reviews, reaching a decision, copy editing, and publication. A workflow engine automates as many of these steps as possible and governs the passing of tasks among people and computers as the larger task moves toward completion.




Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License

Cyberinfrastructure, Institutions, and Sustainability by Christopher J. Mackie
First Monday, volume 12, number 6 (June 2007).