Economic productivity in the Knowledge Society
First Monday


Abstract
Economic productivity in the Knowledge Society: A critical review of productivity theory and the impacts of ICT by Ilkka Tuomi

According to several widely publicized and influential studies, information and communication technologies (ICTs) were a major source of productivity growth during the 1990s in many developed countries. The diffusion of ICTs has been argued to permanently change the rate of sustainable economic growth, and they have frequently been described as core technologies of the emerging knowledge–based economy.

This paper examines critically the concepts and methods of ICT productivity studies. It concludes that current analytical techniques do not allow quantification of the productivity impacts of ICTs.

ICT productivity studies are problematic for three main reasons. First, the current measures of economic output miss essential parts of output in knowledge–based economies. Second, productivity calculations measure inputs and outputs in ways that are conceptually and empirically problematic. Third, the theoretical models that have been used to analyze the impacts of ICTs often make assumptions that may be unrealistic; for example, they require that innovation can be neglected as a competitive factor and as a source of growth.

Although economic studies are only partially able to grasp the significance of ICTs, these technologies are transforming the foundations of economy and society. A number of important conceptual, methodological and empirical issues need to be studied. To fully analyze the socio–economic impacts of ICTs we may need a new "productivity paradigm."

Contents

Introduction
Why does productivity matter?
ICT as the driver of the New Economy
Disappearing consensus
Productivity and technical change in the neoclassical theory
Growth contributions from ICTs
The logic of growth contribution calculations
Empirical and conceptual limits of the Oliner–Sichel study
Price indices as a source of growth
Productivity paradox or paradigm anomaly?
ICTs as contextual and composite resources
Research challenges
Conclusion

 


 

++++++++++

Introduction

Common experience tells us that information and communication technologies (ICTs) are critically important for the modern economy. Firms are increasingly dependent on the effective use of ICTs, which play a fundamental role in the ongoing socio–economic transformation. It is therefore important to understand what we know about the economic impacts of ICTs and where the limits of our current knowledge lie.

There exists a large and rapidly growing literature on the macroeconomic and industry–level impacts of ICTs. One might therefore expect that today we have a solid body of well–known and uncontested facts about ICT growth and productivity effects. This is not the case. A detailed review of existing studies reveals important conceptual and empirical challenges that are relevant beyond ICT–related studies as well.

For example, empirical studies on the economic impacts of ICTs rely heavily on price indices that convert nominal prices and production into "real output" by adjusting for quality changes in products. A closer look at the sources of quality adjustments in computing prices reveals that such adjustments are conceptually ambiguous, as they make the value of money dependent on technical progress. In effect, economic output is translated into "real output" using technical characteristics that impute the assumed economic growth by default, which points to a circular logic in the treatment of output value, growth, and productivity. Without quality adjustments, the semiconductor and computer industries have expanded relatively slowly during the last decades, as rapidly dropping nominal prices have to a large extent cancelled out the growth in volume. Much of the measured productivity growth depends on the very rapid technical improvements in semiconductors, which price indices translate into growth of real outputs and investments. Theoretically consistent quality adjustments also lead to prices that are not additive across product categories or time, indicating that the conventionally adopted approaches are conceptually inadequate. It is therefore not clear how much economic growth there has been during the last decade, and to what extent that growth can be associated with ICTs.
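The circularity can be made concrete with a stylized calculation (the numbers below are hypothetical, not drawn from the studies discussed here): flat nominal sales combined with a rapidly falling quality–adjusted price index yield rapid measured growth in "real output."

```python
# Stylized illustration: hedonic deflation turns flat nominal sales
# into rapid measured "real output" growth. Numbers are hypothetical.

nominal_sales = [100.0, 100.0, 100.0]  # computer sales in current dollars, flat
price_index = [1.0, 0.5, 0.25]         # quality-adjusted prices halve each year

# Deflating nominal sales by the quality-adjusted index:
real_output = [s / p for s, p in zip(nominal_sales, price_index)]

print(real_output)  # [100.0, 200.0, 400.0] -- measured "real output"
                    # doubles yearly although nominal sales never change
```

The entire measured growth here comes from the technical characteristics embedded in the deflator, which is exactly the dependency the paragraph above describes.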

Economists have extensively discussed alternative explanations of the apparently limited impact of ICTs on growth and productivity during the last two decades. This discussion has made visible important measurement problems, and it has also highlighted potentially important conceptual challenges in the conventional ways of measuring growth and productivity in the knowledge society. The present paper argues that these challenges are particularly visible in ICT–intensive industries, but that they also require new approaches for understanding productivity and growth. The famous "Solow paradox" can be interpreted as an indication of a need for a new productivity paradigm.

A conceptualization of productivity that would allow substantial analysis of the impacts of ICT seems to require a reconsideration of the links between growth and development. At the organizational level, ICTs are composite goods that consist of hardware, software, skills, systems integration, operational support, and infrastructure. Beneficial deployment of ICTs requires incremental innovation and the reconfiguration of existing investments and resources. This makes it difficult to isolate the impact of specific investments in the ways that typical growth accounting frameworks would require. ICTs themselves facilitate rapid recombination of existing investments, accelerating creative destruction within organizations and the overall economy, and making structural, contextual, and institutional factors increasingly important for growth. Conventional approaches in business accounting and national accounts treat some of the associated costs as consumption and some as investments, and they often remain blind to the historical and social investments that are needed for the productive use of ICTs. Research on knowledge management has highlighted the systemic nature of ICT deployments and the interdependent elements that make ICT investments productive. As accurate measurement of investments in "ICT products" is critical for productivity studies, it is important that we conceptualize these products in an appropriate way. A multidimensional and holistic conceptualization of ICTs allows researchers to address the different complementary elements that are needed to make ICTs productive.

In the knowledge society, the traditional concept of productivity faces both old and new challenges. The motivation for productivity measurement was to understand the sources and development of economic growth and welfare, and this will remain a central issue in the knowledge society. The knowledge society, however, also makes visible productivity problems that are theoretically and empirically difficult yet central, and which require careful consideration.

For example, although investments in research and development have been studied extensively by productivity researchers since the 1960s, knowledge creation is still only very inadequately accounted for. It has been estimated that U.S. business investments in intangibles — not including human capital accumulated in the household sector — could have been about US$1 trillion annually in recent years. This is almost as much as recorded business fixed investment, about US$1.2 trillion in 2001 [1]. Productivity is measured as the efficiency of producing outputs from inputs, but if we miss important outputs and inputs, our efficiency measures and policy recommendations easily become misleading.

What should be included among the inputs and outputs when analyzing production efficiency remains an open question, with major socio–economic and political consequences. An important economic debate has centered on "externalities." These include unaccounted negative externalities, such as pollution, depletion of non–renewable resources, and unsustainable use of renewable resources such as ocean fisheries, but also positive externalities, such as knowledge, scientific advances, socio–economic institutions, infrastructure, and social capital. Measures of economic output that try to correct for traditionally unmeasured or mismeasured inputs lead to different productivity estimates than the traditional approaches. Sometimes the differences can be considerable [2].

Macroeconomic concepts such as growth and productivity are aggregate concepts and typically, by definition, understood to be independent of history and of the structural factors of production. As a consequence, the macroeconomic concept of productivity is conceptually blind to structural dimensions of productive activity. This is an important limitation when we try to understand networked and knowledge–based economies. Productive knowledge is often context dependent, situated, and embedded in configurations of material and symbolic artifacts. Contrary to the traditional idea of abstract, context–independent knowledge, recent research on organizational knowledge has emphasized the importance of tacit and location–specific knowledge. Similarly, researchers have increasingly emphasized the importance of "social capital" that is embedded in historically accumulated reputations, systems of trust, and social networks. Firm–level productivity often depends on historically developed informal social networks that extend beyond firm boundaries, and regional productivity often depends on international and inter–institutional networks of knowledge [3]. The contextual view on knowledge and productivity implies that productivity differences are often rooted in the position of social and economic agents in relation to material, social, and cognitive resources.

For example, various studies have reported the task productivities of professional software programmers to vary by at least an order of magnitude. To a large extent, programming productivity depends on specific knowledge about program libraries, collaborative work practices, and the configuration of development tools into effective programming environments. Local improvements in such complex systems do not necessarily lead to productivity improvements. In fact, they easily lead to productivity losses, for example through the destruction of accumulated human capital.

Historically, the most influential ICT–related productivity studies have been based on neoclassical growth models. Neoclassical theory understands the economy to be in, or close to, equilibrium. The conceptual starting point in this framework is that resources are allocated optimally. Efficiency improvements therefore come from outside the economy, as external "shocks" or "manna from heaven" that remain without explanation.

A truly networked and knowledge–based society does not easily accommodate these neoclassical abstractions. It is a mix of market transactions, business firms, and social networks that extend beyond organizational boundaries and markets of economic exchange. Decision–makers in firms, for example, may not always be able to reinvest the outputs generated in innovation networks that only partially remain within organizational boundaries. Markets may see only a glimpse of all the social interactions that actually generate outputs and productive capital. Markets and firms, therefore, may in general be unable to allocate resources optimally.

This introduces a qualitative, structural, and social dimension to the problem of productivity, which becomes increasingly visible as networked information and communication technologies diffuse across societies and economies. Challenges for current theoretical approaches are increasingly visible, but no consensus exists about the right ways to formulate the core issues.

Although much research on productivity and its relationship with economic growth is available today, large uncharted terrains remain outside the currently known domains of productivity research. It is therefore important to try to outline the area within which current research sheds light, as well as to see the limits of our current knowledge. To see where we stand, we have to step outside the best–explored areas and across disciplinary boundaries.

This study therefore attempts a challenging task. The present paper tries to clarify what we know today about the productivity impacts of ICTs. This requires that we understand the basic theoretical concepts that underlie economic research on ICT and productivity. It also requires, however, that we discuss the particular ways in which these theoretical concepts are implemented in empirical research. For example, when economists talk about "technical change" or "constant prices," we have to understand what they actually mean by these concepts and where the data used to measure them come from.

The second part of the study, to be published separately due to space considerations, then tries to provide some tentative ideas and elements of a broader productivity framework that could complement current ICT productivity frameworks.

This paper is organized as follows. The next three short sections will revisit the motivations for studying productivity and discussions on the "new economy" that argued that ICTs had an important role in the productivity revival in the second half of the 1990s. The basic concepts that underlie productivity research are then described in an attempt to clarify what concepts such as "productivity" and "technology" actually mean for economists working in this area. I will then move on to describe how the macroeconomic impacts of ICT are studied. For this I use an influential and representative study by Oliner and Sichel (2000). Walking through its procedures and architecture allows us to see the machinery of scientific knowledge production in operation, and to understand the theoretical and practical assumptions that produce the outcomes. This reconstruction of a scientific knowledge creation machine is followed by a discussion that evaluates the robustness of the reported empirical results both from conceptual and empirical points of view. In particular, I will show that the main results of neoclassical growth accounting studies critically depend on the way ICT price indices are handled in these studies. In a sense, we try to see whether the neoclassical productivity research machine works and whether it produces what we think it should produce.

The answer is to some extent negative, as productivity researchers know. This opens a question whether the problematic issues require just minor improvements in theory and practice, or whether they indicate a need for more profound revision of theory and research on ICT productivity impacts. I will explore this question by focusing on the famous Solow productivity paradox, reviewing known explanations for it. Based on the earlier sections, I will then draw up a list of open and potentially important research challenges.

The overall result of this discussion is that there are conceptual and empirical problems in the current approaches to productivity research. Some of these appear to be quite fundamental, pointing to a need for alternative approaches for understanding ICT and productivity. The paper ends by summarizing the main results and by making concluding remarks.

 

++++++++++

Why does productivity matter?

Policy–makers use productivity studies to understand how productivity and economic growth could be increased. Although the links between welfare, economic output and productivity are complex in practice and in theory — requiring discussion on social distribution of wealth for example — conceptually the idea is simple. If productivity increases, other things equal, aggregate economic welfare increases. As Paul Krugman once put it, productivity is not everything, but it is almost everything.

Productivity measurement has also become important for monetary and fiscal policy. Productivity trends are used to forecast potential economic growth and, for example, tax revenues. If labor income grows faster than labor productivity, the expected result is inflation. Productivity measurement, therefore, is used in the difficult act of balancing unemployment and inflation. Long–term productivity growth is commonly viewed as the speed limit for sustainable economic growth.
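The wage–productivity balance can be sketched in terms of unit labour cost, using hypothetical growth rates: when hourly labor income grows faster than output per hour, the labor cost embedded in each unit of output rises, which feeds into prices.

```python
# Stylized illustration: unit labor cost as an inflation indicator.
# All rates are hypothetical.

wage_growth = 0.05          # hourly labor income grows 5% per year
productivity_growth = 0.02  # output per hour grows 2% per year

# Growth of labor cost per unit of output:
ulc_growth = (1 + wage_growth) / (1 + productivity_growth) - 1

print(round(ulc_growth, 4))  # 0.0294 -- roughly 3% cost-push
                             # pressure on prices
```

This is why forecasts of productivity growth matter for monetary policy: the same wage growth is inflationary or benign depending on the productivity trend it is measured against.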

By analyzing productivity developments in different firms, industries, and across regions and countries, we can try to find productivity bottlenecks and locate opportunities for improvement. Firm– and industry–level comparisons are important, for example, for effective firm–level decision–making and for understanding the real benefits of information technology. International comparisons of productivity trends, in turn, have often been used as sources of policy recommendations. Recently, for example, productivity researchers have argued that the institutional flexibility in the European countries should be increased if they want to catch up with the U.S. growth rates [4].

 

++++++++++

ICT as the driver of the New Economy

In the Knowledge Society, an important question is how information and communication technologies (ICTs) impact growth and productivity. During the last decades, discussions about the information society and the knowledge–based economy have pointed out the increasing importance of information for economic growth. The more recent discussions on the "new economy" centered on the question of whether information and communication technologies have irreversibly changed the productivity growth rate of the economy. If that were the case, accelerated investments in ICT could lead to increased economic growth and productivity. This could imply a new balance between inflation and labor income growth. For example, Jorgenson and Stiroh (2000) argued in their influential article "Raising the speed limit" that ICT had, indeed, altered the speed of productivity growth.

On the other hand, Jorgenson and others also emphasized that the ICT productivity story has been closely connected to the advances in information technology, and in particular semiconductors [5]. If technical advances in these areas slow down, productivity growth will slow down, and perhaps turn negative. In particular, Jorgenson [6] argued that the rapid productivity increase in the U.S. during the second half of the 1990s resulted to an important extent from the accelerated product cycles in the semiconductor industry. Due to exceptional competitive conditions in these years, the traditional three–year semiconductor product introduction cycle was temporarily compressed into two years [7]. If we want to understand the future of productivity, we therefore need to understand the drivers of innovation in ICT production [8].

A closer study of developments in semiconductor, computing, and communication technologies shows, however, that these developments have also to a large extent been driven by innovative users of the technologies. The productivity potential associated with ICT is not created by technical advances per se. Instead, productivity opportunities become articulated and realized when technologies are taken into use. This means that economically important innovations are not simply technical advances; they are always also social innovations. To understand the productivity potential of ICT, we therefore also have to understand the social learning processes that underlie the adoption of new technologies.

In the Knowledge Society, economists are now starting to ask how learning, competencies, social and economic networks, and, for example, social capital, trust, and reputation should fit into the picture. Although the picture is not yet clear, it is becoming important to ask what we currently know about the productivity impacts of ICT.

 

++++++++++

Disappearing consensus

Macroeconomic studies on the effects of information and communication technologies have received considerable attention during the recent decade. These studies almost invariably refer to Robert Solow’s 1987 remark that computers can be seen everywhere except in the productivity statistics. This observation has become known as the productivity paradox. Despite a general belief in an ongoing information technology revolution, productivity growth appeared to have been slowing down since the early 1970s in the U.S. and in many other developed countries.

Firm–level studies on ICT productivity impacts in the 1990s revealed a different picture, in which computers had major positive productivity effects [9]. These studies, along with historians of economy and technology, also noted that productivity effects become visible only with a delay, after firms have made organizational changes and acquired the skills and experience that open the gates through which productivity benefits flow. Investment in ICT can become productive only after organizations have adjusted their operations to take advantage of the productivity potential of new technologies and after complementary investments have been made. According to this view, the productivity impacts of ICT should be seen only with a delay, and the impact would be contingent on complementary changes in organizational practices. Investment in ICTs, in itself, does not guarantee productivity growth.

After firms have had sufficient time to experiment with the possibilities of ICT and to adjust their operations to take advantage of its productivity potential, firm–level productivity impacts could be expected to become visible also at the aggregate level. Indeed, during the last years several macroeconomists have claimed that this has been the case. Towards the end of the 1990s, a widely accepted view emerged that the productivity paradox had been a temporary event and that computers and other ICTs had finally become one of the main drivers of economic and productivity growth [10]. More specifically, the rapid price declines in computing equipment and semiconductors seemed to be an important driver in the process. As Jorgenson [11] noted, "despite differences in methodology and data sources, a consensus is building that the remarkable behavior of IT prices provides the key to the surge in economic growth." Gordon summarized the consensus:

"... by 1999–2000 a consensus emerged that the technological revolution represented by the New Economy was responsible directly or indirectly not just for the productivity growth acceleration, but also the other manifestations of the miracle, including the stock market and wealth boom and spreading of benefits to the lower half of the income distribution. In short, Solow’s paradox is now obsolete and its inventor has admitted as much." [12]

More accurately, the emerging consensus seemed to be that ICT investments had increased productivity growth in durable goods manufacturing in the second half of the 1990s, and in particular in ICT equipment manufacturing. Some researchers (e.g., Van Ark, et al., 2003; Oliner and Sichel, 2002; Nordhaus, 2001; Colecchia and Schreyer, 2002) argued that the impact of ICT could also be seen in the ICT using sectors, at least in the U.S. Jorgenson and Stiroh, however, labelled popular accounts of widespread impacts of ICT the "phlogiston theory of new economy":

"... the evidence already available is informative on the most important issue. This is the ‘new economy’ view that the impact of information technology is like phlogiston, an invisible substance that spills over into every kind of economic activity and reveals its presence by increases in industry–level productivity growth across the U.S. economy. This view is simply inconsistent with the empirical evidence." [13]

ICT–producing and –using industries seemed to play very different roles in different countries, indicating that the level of ICT investments and the widespread use of ICTs were not directly related to growth or productivity [14]. Furthermore, the discussion on methodological and measurement problems started to erode the consensus. Vijselaar and Albers (2002), for example, pointed out that the observed differences between U.S. and EU productivity studies were to an important extent created by the different methods of calculating computer prices, and by the simple fact that the U.S. happened to have a bigger ICT sector than most of the EU countries. They also noted that the data did not point in the direction of significant positive spillover effects of ICT investment on the rest of the economy in the euro area, since overall productivity growth apparently had slowed down in the second half of the 1990s. Although some researchers claimed that the Solow paradox had been solved, Gordon, for example, argued that:

"These results imply that computer investment has had a near–zero rate of return outside of durable manufacturing. This is surprising, because 76.6 percent of all computers are used in the industries of wholesale and retail trade, finance, insurance, real estate, and other services, while just 11.9 percent of computers are used in five computer–intensive industries within manufacturing, and only 11.5 percent in the rest of the economy ... Thus, three–quarters of all computer investment has been in industries with no perceptible trend increase in productivity. In this sense the Solow computer paradox survives intact for most of the economy ... ." [15]

Gordon’s argument was based on his analysis of long–term productivity trends and cyclical productivity effects. Although the relevance of cyclical factors was widely accepted, no clear consensus existed about their importance. Gordon (2000:65) [16], for example, maintained that it was possible that computers had shown their biggest productivity impact already before the 1990s.

Researchers also noted that industry–level studies do not show any consistent influence of ICTs. O’Mahony and Vecchi (2002), for example, found that standard econometric approaches show negative impacts of ICT on output and productivity growth, arguing that this was caused by aggregating industries where ICTs have different effects. They also noted that most econometric studies on ICTs implicitly have assumed that the impacts of ICTs have remained the same during the last decades.

In general, researchers now agree that several important conceptual and empirical issues still require clarification. On closer inspection, the apparent consensus about productivity trends seemed to be built on research footnotes that explained methodological and data limitations. The bursting of the Internet bubble also called into question the sustainability of the productivity patterns seen at the end of the 1990s. If there was a consensus, it started to fall apart. The U.S. Congressional Budget Office stated in its report on ICT productivity impacts that "In contrast to the unanimity about the effects of computer hardware manufacturing, no consensus exists yet on the degree to which computer use has boosted total factor productivity growth" [17]. As Mahadevan [18] put it: "Expert opinion is solidly divided on the IT–productivity debate. One view is that the IT–productivity paradox exists, and the other that there is no such paradox."

So what, exactly, was the paradox? It centered on total factor productivity growth, which the economic literature associates with "technical advance" and the overall efficiency of the economy. The long–term trend in total factor productivity can be interpreted as the change in the economy’s underlying productive capability when the inputs remain the same [19]. This somewhat complex sentence describes the reason why economists call the productivity paradox a paradox. Despite astonishing advances in technology, including ICT, total factor productivity grew more slowly after about 1974 than during the previous decades. This was in stark contrast with Solow’s classic calculations in the 1950s, which showed that in the last century most economic growth had been generated by increases in total factor productivity. After about 1974, however, technological advances seemed to lose their economic importance. This paradox is visible in Figure 1, which shows the surprising disappearance of the total factor productivity residual in productivity growth.

Figure 1: Sources of labor productivity growth in the U.S., 1960–1994. [20]

No one has been able to explain exactly why total factor productivity growth slowed down after 1973. Two decades later, however, the problem seemed to go away. Between 1995 and 1999, labor productivity grew in the U.S. at nearly double the average pace of the preceding 25 years. According to neoclassical productivity studies, the main sources of productivity growth were increased investments in ICT and unexplained improvements associated with technical progress and ICT use. Productivity growth was also strongly concentrated in ICT–producing industries and perhaps also in industries that used ICT extensively. Although it still remained unclear why the productivity paradox had originally appeared, it seemed increasingly clear that information technology had made it disappear. The question then remained whether this was a temporary event, or something that we should take into account in economic forecasting and policy–making.

It is impossible to understand discussions about the productivity impacts of ICT without a clarification of the basic concepts that are used in this discussion. Economists use productivity–related concepts in very specific ways that often create confusion among non–specialists. In particular, when we try to assess the current state of knowledge about the impacts of ICT, it is necessary to understand what economists actually mean by terms such as production efficiency, total factor productivity, real output, and technical change. By clarifying these terms, we can try to see how far the traditional approaches can take us, and where we need to move beyond the traditional interpretations of these terms.

 

++++++++++

Productivity and technical change in the neoclassical theory

There are many alternative ways to measure economic efficiency [21]. Labor productivity is a "single–factor" productivity measure, which can be computed either from gross output or from value added. When labor productivity is measured as value added per hour worked, or per person employed, productivity becomes closely related to average per capita income [22]. This is the main reason why labor productivity is understood to be a central concept in economics.

Labor productivity can also be measured from gross output. When measured from gross output, labor productivity simply gives the ratio of output to labor inputs.

In general, changes in labor productivity reflect many different types of changes in the production process. For example, when more capital is used in the production process, the relative amount of labor in the total production inputs decreases and the measured labor productivity increases. In addition to such "capital deepening," labor productivity can also increase when existing production equipment is used more efficiently, for example, during economic growth periods, when the utilization of production capacity typically increases. It is therefore important to note that labor productivity, despite its name, does not measure workers’ efficiency or labor–related changes in production. As a single–factor productivity measure, it measures all changes in output using labor inputs as a reference point [23].
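The capital deepening effect can be sketched with a standard textbook Cobb–Douglas production function (used here purely for illustration, with hypothetical numbers): doubling the capital stock raises measured labor productivity even though workers and technology are unchanged.

```python
# Stylized illustration: "capital deepening" raises measured labor
# productivity with no change in technology (A) or labor input (L).
# Textbook Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha)

def output(A, K, L, alpha=0.3):
    return A * K**alpha * L**(1 - alpha)

A, L = 1.0, 100.0
lp_before = output(A, K=100.0, L=L) / L  # output per unit of labor, K = L
lp_after = output(A, K=200.0, L=L) / L   # capital stock doubled

print(round(lp_before, 6))  # 1.0
print(round(lp_after, 6))   # 1.231144 -- about 23% higher, although
                            # workers and technology are identical
```

This is why a rise in labor productivity, on its own, says nothing about whether workers, technology, or simply the capital–labor ratio changed.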

It is also useful to note that although macroeconomic labor productivity is often understood to reflect task productivity, today we do not know how they are related [24]. The concept of labor productivity is perfectly agnostic about the causes of output growth. In recent years, an important source of labor productivity improvements in the ICT using sectors, for example, has been the increased importance of self–service. As more work is allocated to customers, whose labor is not compensated, the measured labor productivity increases [25].

As noted, productivity can also be measured using value added instead of gross output. Value–added productivity indicators are common in studies that try to isolate sources of productivity within and across specific industries or firms. Instead of measuring total gross output, they subtract intermediate inputs and measure as output only the incremental production that occurs within the measured process. For example, the total sales of computer manufacturers do not necessarily tell much about industry output or labor productivity; a better measure of produced output is the value added in the process [26]. Subtraction of intermediate inputs is necessary, for example, when productivity differences between specific industries are studied. This is typically the case in research on the sectoral impacts of ICT. Productivity changes in the computer assembly and retail industries, for example, need to be distinguished from productivity changes in the semiconductor and software industries if we want to understand the real sources of productivity growth.

Most macroeconomic analyses of the productivity impact of ICTs are based on the neoclassical growth accounting framework originally proposed by Solow in 1957. Solow’s calculations indicated that over 80 percent of labor productivity growth remained unexplained by increases in labor and capital inputs [27]. The difference between the observed growth rate and the theoretical growth rate generated by increases in labor and capital inputs became subsequently known as the "Solow residual." The residual reflects economic growth that is left unexplained after increases in labor and capital inputs are taken into account. As was noted above, it has commonly been interpreted as technical progress.

The Solow residual is closely related to the concept of total factor productivity (TFP). In Solow’s original formulation, output growth is completely determined by changes in labor and capital inputs, and the change in total factor productivity. In Solow’s framework, economic output is represented as:

Total output = TFP * F(L,K);

where the "production function" F describes how labor, L, and capital, K, are converted into total economic output, and a "total factor productivity" multiplier describes changes in the overall efficiency of the conversion process.
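In growth–rate form, this relationship can be written out explicitly. The following sketch assumes, purely for illustration, a Cobb–Douglas form for F with labor elasticity α; the general argument in the text does not depend on this functional form:

```latex
% Illustrative special case: Cobb-Douglas production function
Y = A\,F(L,K), \qquad F(L,K) = L^{\alpha} K^{1-\alpha}
% Taking logarithms and differentiating with respect to time:
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A}
  \;+\; \alpha\,\frac{\dot{L}}{L}
  \;+\; (1-\alpha)\,\frac{\dot{K}}{K}
```

Here the term for the growth rate of A is the growth rate of total factor productivity; once the input terms are measured, it is what remains as the Solow residual.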

The concept of multifactor productivity (MFP) emerges when labor and capital are separated into many qualitatively different types and these different types of labor and capital are explicitly added to the production function. In practice, TFP and MFP are used synonymously [28].

Solow’s residual appears when one studies the growth rates of inputs and outputs. Without changes in total factor productivity, increases in labor and capital inputs directly determine the growth of total output. Under the assumptions of full competition, perfect and efficient allocation of resources, constant returns to scale, and independence of total factor productivity from the relative composition of capital and labor, the rate of change in the total factor productivity multiplier becomes the Solow residual. As the list of conditions indicates, the link between TFP growth and the Solow residual is not trivial. It is, however, almost trivial in the neoclassical framework, which typically takes these conditions as starting points [29].
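
As a minimal numerical illustration of this link, the residual can be computed from observed growth rates once income shares are known. All figures below are invented for illustration; they are not data from any study:

```python
# Minimal sketch of extracting the Solow residual under the neoclassical
# assumptions: output elasticities equal income shares, and constant
# returns to scale make the capital share 1 minus the labor share.
# All growth rates and shares below are invented for illustration.

def solow_residual(g_output, g_labor, g_capital, labor_share):
    """Output growth left unexplained by share-weighted input growth."""
    capital_share = 1.0 - labor_share  # constant returns to scale
    explained = labor_share * g_labor + capital_share * g_capital
    return g_output - explained

# Illustrative annual growth rates (as fractions):
residual = solow_residual(g_output=0.04, g_labor=0.01, g_capital=0.03,
                          labor_share=0.7)
print(round(residual, 3))
```

If any of the listed conditions fails, the number this computes is still "a residual," but it can no longer be read as the growth rate of the TFP multiplier.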

In contrast to labor productivity, total factor productivity is a combined measure of production efficiency. The prominence of total factor productivity in ICT productivity studies is largely related to the fact that it is associated with technical progress [30]. In fact, the economic literature often states that total factor productivity represents the current level of technology. This is a common source of misinterpretation. Total factor productivity does not measure technology or technical progress; instead, it measures all those factors that are not explicitly taken into account when we describe the way in which the economy turns inputs into outputs.

In other words, the growth rate of total factor productivity, or the Solow residual, incorporates all those elements of output change that do not result directly from increased capital or labor inputs. The residual collects productivity effects that are not modeled, as well as those that are mismeasured or modeled incorrectly. Strictly speaking, therefore, the Solow residual is a measure of our ignorance [31]. Much of the research on productivity has in fact tried to improve economic growth models to get rid of the residual. Economists know this well, but they still frequently make the claim that the residual reflects the "current level of technology." In fact, this usage is so common in the research literature that it has become the definition of the economic concept of "technical advance." As this confusing terminology has often led to misunderstandings in the interpretation of ICT productivity studies, it is important to note that there is no empirical or conceptual link between technological change and the residual. The residual will show productivity effects that are unrelated to technology, such as earthquakes, global warming, international trade agreements, new diseases, the aging of the population, sunspot activity, forest fires, and wars [32]. If we remember that "technical advance" does not have anything to do with technical advances as they are understood by scientists or engineers, the confusion goes away.

The importance of the Solow residual lies in the fact that it filters out the growth impacts of pure labor and capital increases. Savings that are profitably reinvested in production by hiring workers or by investing in capital equipment and materials lead to growth in production. The Solow residual accounts for growth that is not explained by such increases in inputs. New tools and production capital obviously embed technology, and when workers are given more capital in the form of better tools, production technology, and plants, such capital deepening increases labor productivity. If it is fully paid for, however, it does not change total factor productivity. The "current level of technology" that is captured in the Solow residual represents "costless technological progress" or, more accurately, "costless improvements in productive efficiency," and excludes technological progress that is embedded in capital investments. The Solow residual therefore is, for neoclassical economists, a "free lunch" [33].

In fact, productivity researchers often point out that the "technological progress" represented by the Solow residual does not imply technological progress. It can simply mean more efficient ways to organize work, better institutional structures, or good weather. Economic historians have also frequently pointed out that the reverse is true as well: technological advances do not imply changes in productivity [34]. New technologies that become important in many domains of economic activity often require lengthy adoption processes and institutional change before they become visible in the macroeconomic measurements [35].

Solow originally distinguished only two types of inputs, labor and capital. In Solow’s model, capital included only fixed tangible capital. Denison (1962) and Griliches (cf. Griliches, 2000) extended this model to include several different types of labor, and Jorgenson (1963) and Jorgenson and Griliches (1967) extended it further to include different types of capital [36]. Such additions try to accommodate the fact that workers have different levels of education and skills, and that different types of capital may have very different productivity characteristics over time. Researchers therefore now commonly talk about multifactor productivity instead of total factor productivity.

More recent productivity studies have also used models that view technical progress as an inherent part of economic growth, and not just an externally given "current level of technology." Such endogenous growth models have, for example, emphasized research and development spillovers, network effects, and changes in the technical quality of products. As more sophisticated models are developed and more parameters are used to describe the economic translation from inputs to outputs, the unexplained component of economic growth shrinks, at least in theory. A perfect description of the ways in which knowledge, organizational and institutional factors, and improved work tools and methods influence work outputs would therefore imply that the Solow residual disappears [37]. On the other hand, the residual already largely disappeared in many developed countries in the 1970s, and this was precisely the source of the productivity paradox. As Griliches [38] noted: "First we wanted to get rid of the residual, now we want it back!"

It is important to realize that the most influential studies on ICT productivity impacts are based on neoclassical models [39]. They typically start from the assumption that economic actors allocate their resources optimally, price their products in fully competitive markets, and maximize their profits in an economic equilibrium. This theoretical set–up has a somewhat paradoxical consequence: by definition, productivity cannot be increased by improved use of current technologies. If the economic actors really are perfectly rational, and if the economy really is in a state of allocative equilibrium, as the conventional mathematical treatment in this framework requires, the actors cannot become more productive. The drivers of efficiency improvement cannot be economic in this framework; instead, they have to be exogenous shocks that push the economic system from outside, keeping it on its growth path. Neither can the neoclassical framework describe the productivity impact of qualitatively new products and technologies (Hulten, 2000; Pakes, 2002). As Schreyer [40] notes: "equilibrium concepts may be the wrong tools to approach the measurement of productivity change, because if there truly was equilibrium, there would be no incentive to search, research and to innovate, and there would be no productivity growth."

This is an interesting challenge if indeed innovation is important for growth in the Knowledge Society. Although productivity studies often start from the neoclassical assumptions, innovation researchers commonly adopt the Schumpeterian approach to economics of innovation, which, in contrast, starts from the idea that innovators and entrepreneurs systematically create disequilibrium and extraordinary profits through innovation. So, although researchers working in the neoclassical framework have made important contributions in the economics of innovation, the applied theoretical framework is conceptually limited in its capacity to describe innovative processes and technical change.

Some productivity studies, therefore, adopt a radically different perspective. Whereas traditional neoclassical studies assume that all firms operate with optimal efficiency, defined by a "current level of technology," some recent studies conceptually divide production efficiency into two components. First, the unobservable theoretical "production frontier" gives the maximum output that could be achieved using the available inputs. Actual firms rarely operate at this theoretically optimal production frontier. This leaves room for, for example, bad management decisions and work practices, as well as institutional and historical constraints. In actual organizations, production efficiency is usually only a fraction of the theoretically possible maximum [41]. This fraction characterizes the "technical efficiency" of the firm in question [42]. Productivity change, therefore, has two components: one related to a "current maximal level of productivity" and another related to the "productive efficiency" of the firms (Färe, et al., 1994; Mahadevan, 2002).

In this framework, the theoretically best current productivity can be approximately found by studying the most efficient producers. For example, if two firms have exactly the same inputs, and one produces twice as much as the other, the productivity of the worse producer is half of the more efficient one. By studying the existing inputs and outputs of real producers, one can find those input combinations that would lead to maximal outputs. The maximal outputs can then be used to define the best current theoretically possible efficiency of production. The actual technical efficiency of real firms can then be defined as their distance from that maximal level of production. As the maximal level of production is also known as the production frontier, this productivity analysis framework is therefore often called "frontier analysis" [43].
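
The two–firm comparison above can be sketched in a few lines. The firms and figures below are hypothetical, and the frontier is approximated crudely by the best observed output per unit of input (real frontier analysis uses more sophisticated estimation):

```python
# A minimal sketch of the frontier idea: among observed producers, take
# the best output achieved per unit of input as an approximation of the
# production frontier, and measure each firm's technical efficiency as
# its distance from that frontier. All firms and numbers are invented.

firms = {
    "A": {"inputs": 10.0, "output": 100.0},
    "B": {"inputs": 10.0, "output": 50.0},   # same inputs, half the output
    "C": {"inputs": 20.0, "output": 180.0},
}

# Approximate frontier: the best observed output-to-input ratio.
best_ratio = max(f["output"] / f["inputs"] for f in firms.values())

for name, f in firms.items():
    frontier_output = best_ratio * f["inputs"]   # maximal feasible output
    efficiency = f["output"] / frontier_output   # technical efficiency
    print(name, round(efficiency, 2))
```

Firm B, with the same inputs as firm A but half the output, gets a technical efficiency of one half, exactly as in the text's example.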

In the traditional neoclassical "production function" analysis, changes in TFP result from "costless progress" associated with technical progress or advances in knowledge. In the production "frontier analysis" framework, productivity changes also result from more efficient use of existing technologies and knowledge. As advances in technology can also lead to short– or long–term losses of technical efficiency, technical progress can even decrease TFP. Furthermore, in this framework, technical efficiency can depend on organizational processes and the quality of management. Managers can, for example, over–invest in new technologies [44].

 


Growth contributions from ICTs

In the neoclassical framework, ICTs influence labor productivity in three different ways. First, other things being equal, more ICT production means more total output; and when ICT manufacturers learn to produce more powerful computers without increasing their economic inputs, this learning is recorded as total factor productivity growth in the ICT producing sector. Second, for industries that use ICTs as end users, ICTs are capital investments; the increasing use of ICT capital in the user industries implies capital deepening and labor productivity growth. Third, if the user industries become more efficient because of their new ICT investments, for example by accelerating knowledge creation or by decreasing coordination and transaction costs, this can also lead to increases in productivity.

The importance of these effects depends on the importance of ICTs in the economy. The overall effect may be small if ICT production and investments represent only a small fraction of the total economy. In fact, this has been the case until recently. In the EU, ICT manufacturing and services account for less than six percent of GDP, Ireland and Finland being the most ICT manufacturing intensive countries [45]. The value added in ICT manufacturing and services reached about 8.6 percent of the Finnish GDP in the year 2000, whereas it was about 8.1 percent in the U.S. [46] The role of ICT investments, however, has been growing during the last decades, and as a result ICTs are now starting to have a considerable impact on economic growth. In OECD countries, the contribution of ICT capital to GDP growth increased from about 16 percent to about 20 percent from the first to the second half of the 1990s [47]. The impact has been most pronounced in the U.S., where the level of ICT investments has been relatively high for several decades.

Although the visible effects of ICT use at the level of aggregate national economies have been small until recently, ICT production has been very important for total factor productivity growth since the early 1980s. This is because technical developments in computer technologies have been extremely rapid. Technical improvements have doubled the capacities of computer memory and disk storage and halved microprocessor feature sizes roughly every two or three years during the last three decades [48]. When such improvements have not led to price increases, they are recorded as increases in total factor productivity. Although the total volume of ICT production has been a relatively small part of the total economy in most countries, its extremely rapid growth rates have made it important for TFP growth. According to the U.S. Department of Commerce, ICT manufacturing and services grew by an average of 22 percent per year and was responsible for an average of 29 percent of the country’s overall real economic growth during the 1996–99 period [49]. In fact, Oliner and Sichel [50] estimate that in the 1974–95 period, ICT production contributed about half of the TFP growth in the U.S.

In practice, productivity calculations make a simplified assumption that ICT production consists of computer manufacturing, software production, and telecommunication equipment manufacturing. Often the studies have focused on computer manufacturing, arguing that telecommunication equipment prices and software are not well measured in the current statistics. Most of what we know about economy–wide ICT productivity impacts comes from such studies. It is therefore useful to see how such a simplified analysis can be conducted. In the next section, I will use the widely quoted study by Oliner and Sichel as an example to discuss the basic logic of dissecting economic growth into its various elements, including ICT. This discussion allows us to evaluate the robustness of those theoretical and empirical assumptions that underlie our current knowledge on ICT productivity impacts, and point out important areas where we need further analysis and research.

 


The logic of growth contribution calculations

Oliner and Sichel (2000) calculate the contributions of ICT to output growth using five inputs: computer hardware, computer software, communication equipment, other capital, and labor hours. In addition, they adjust labor inputs for labor quality changes. In the neoclassical growth accounting framework, this leads to a simple equation that describes the total output growth rate as a weighted sum of input growth rates.

The choice of appropriate weights is a central question in the neoclassical framework. If we had only one type of input, the output growth rate would equal the growth rate of the input plus the total factor productivity residual. In the case of several inputs, the weights are called output elasticities. They are multipliers that describe how much the output grows for an incremental increase in a specific type of input.

Output elasticities cannot be directly measured, as we cannot freely experiment with the inputs and outputs of the economy. Solow, however, pointed out in his landmark 1957 article that the neoclassical assumptions can be used to estimate the elasticities. In fact, he showed that output elasticities equal the income shares earned by each type of input. This is because in the neoclassical market equilibrium, each input is paid its marginal productivity. Economic actors invest in different types of capital and labor until the last marginal investment cannot increase profit. In this equilibrium, adding, for example, one hour of work should increase the total output by exactly the cost of one work hour. The output elasticity of labor therefore equals the share of total income paid to labor. Similarly, the output elasticities of different types of capital equal the "rents" that should be paid for the marginal investment. These rents are also called "the user cost of capital" [51].
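
Taken together with the weighted–sum equation, this gives the basic accounting identity used in growth contribution calculations. A minimal sketch follows; the input categories mirror the Oliner–Sichel setup, but every income share and growth rate is invented for illustration and is not a figure from the study:

```python
# Sketch of the growth-accounting identity: output growth equals the sum
# of income-share-weighted input growth rates plus the multifactor
# productivity residual. All numbers are hypothetical.

income_shares = {"hardware": 0.02, "software": 0.02, "comm_equipment": 0.02,
                 "other_capital": 0.24, "labor": 0.70}   # sum to 1.0
growth_rates  = {"hardware": 0.30, "software": 0.15, "comm_equipment": 0.08,
                 "other_capital": 0.03, "labor": 0.02}

# Contribution of each input = its income share times its growth rate.
contributions = {k: income_shares[k] * growth_rates[k] for k in income_shares}

g_output = 0.045  # observed output growth (illustrative)
mfp_residual = g_output - sum(contributions.values())

for k, v in contributions.items():
    print(f"{k}: {v:.4f}")
print(f"MFP residual: {mfp_residual:.4f}")
```

Note how a fast-growing input with a tiny income share (hardware, here growing 30 percent a year) still contributes only modestly to total growth; this is why ICT contributions were small until ICT income shares grew.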

To calculate the total income earned by a specific type of input, we need to estimate the rate of return and the amount of capital that generate this income. Firms do not always rent their capital from the market; instead, they invest in capital and receive income from their investments over a period of time. Capital rents therefore cannot be directly observed, and we have to estimate them.

Capital stocks that are used in productivity calculations have to reflect economic services generated by the investments. Such "productive capital stocks" do not equal the market value of accumulated capital. For example, although an old computer may have very little market value, it may still generate economic services that reflect a large productive value. On the other hand, old computers may be unable to run new programs and therefore their value may depreciate even when they do not deteriorate much physically. Productive stocks are therefore calculated by correcting investment costs for depreciation and price changes. Such calculations require quite sophisticated modelling of time–dependent characteristics of productive value for each type of capital and detailed price indices. The results, however, are readily available, at least for the U.S. economy. In other countries, researchers typically use the U.S. price and volume indices as the starting point.
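
A highly simplified sketch of such a calculation follows, in the spirit of the perpetual inventory method: each year's stock is the previous stock net of depreciation, plus new real investment obtained by deflating nominal investment with a price index. The depreciation rate, investment figures, and price index are all hypothetical, and real calculations use far more detailed age–price profiles:

```python
# Simplified "perpetual inventory" sketch of a productive capital stock:
# stock_t = stock_{t-1} * (1 - depreciation) + nominal investment deflated
# by a (quality-adjusted) price index. All figures are invented.

def productive_stock(nominal_investments, price_index, depreciation_rate):
    stock = 0.0
    for invest, price in zip(nominal_investments, price_index):
        real_investment = invest / price        # deflate by the price index
        stock = stock * (1 - depreciation_rate) + real_investment
    return stock

# Three years of constant nominal ICT investment with rapidly falling
# quality-adjusted prices (index normalized to 1.0 in year one):
stock = productive_stock([100.0, 100.0, 100.0], [1.0, 0.7, 0.49],
                         depreciation_rate=0.3)
print(round(stock, 1))
```

The falling price index makes each later dollar of investment count for more "real" capital, which is exactly why the quality adjustment of ICT prices matters so much for the measured capital stocks.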

Contributions of the different types of investment to total economic growth can be calculated by multiplying the growth rate of each type of capital by its weight, which equals its income share, as was noted above. When the neoclassical assumptions are valid, income shares can be estimated using the cost of capital. According to the neoclassical assumptions, in equilibrium each type of capital has to earn the same net rate of return. If this were not the case, investments could be reallocated for increased profit, and by definition this cannot happen in equilibrium. To generate the same net rate of return, however, the different types of capital have to generate very different amounts of gross return. The gross return has to cover three different factors. One is the loss of productive value due to wear, tear, and obsolescence. The second accounts for the loss or gain that results from the change in the price of the asset over time. The third is the net rate of return that the capital would earn if it were invested on the market, perhaps with adjustments for taxes. Computers, in particular, have a relatively short investment lifetime and their prices drop rapidly, so their gross rate of return has to be high. Oliner and Sichel estimate that computer investments depreciate by about 30 percent per year and that computer prices drop by about 30 percent per year. This means that computer investments must earn about 60 percentage points above the net rate of return in the economy, which Oliner and Sichel assume to be four percent.
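
The arithmetic behind the last step can be sketched directly, using the rounded figures quoted above (a back–of–the–envelope version of the user cost reasoning, not the study's exact formula, which also handles taxes):

```python
# Back-of-the-envelope user cost of computer capital, using the rounded
# figures quoted in the text: ~30% annual depreciation, ~30% annual price
# decline, and an assumed economy-wide net rate of return of 4%.

net_rate_of_return = 0.04  # assumed net return for all capital
depreciation = 0.30        # annual loss of productive value
price_decline = 0.30       # annual capital loss from falling asset prices

# Gross return a computer investment must earn to match the net rate:
gross_return = net_rate_of_return + depreciation + price_decline
print(f"required gross return: {gross_return:.0%}")
```

The required gross return of roughly 64 percent is why a dollar of computer capital earns a far larger income share, per dollar of stock, than a dollar of buildings or machinery.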

Once we have estimates of the productive capital stocks for the various years and corresponding estimates of the income they earn, the income share of each type of capital stock can be calculated by dividing the income it earns by the total income of the economy. Oliner and Sichel further assume that the income share of labor is what remains after the incomes earned by the different capital stocks are subtracted from the total income. This, then, allows them to say how much each type of input has contributed to the growth rate of the economy. The rest is the total factor productivity residual. The resulting numbers are shown in Table 1.

 

Table 1: Contributions to Growth of U.S. Non–farm Business Output, 1974–99 (Oliner and Sichel, 2000, Table 1).

                                       1974–90   1991–95   1996–99
1. Growth rate of output                  3.06      2.75      4.82
   Contributions from:
2.   Information technology capital       0.49      0.57      1.10
3.     Hardware                           0.27      0.25      0.63
4.     Software                           0.11      0.25      0.32
5.     Communication equipment            0.11      0.07      0.15
6.   Other capital                        0.86      0.44      0.75
7.   Labor hours                          1.16      0.82      1.50
8.   Labor quality                        0.22      0.44      0.31
9. Multifactor productivity               0.33      0.48      1.16

 

From Table 1, one can easily see that ICT was a major factor in the extremely rapid growth of the U.S. economy in the second half of the 1990s. An even more important factor was the growth of labor inputs. The decline in the importance of labor quality indicates that the labor markets grew by also employing people who were not categorized as highly skilled labor. To avoid misinterpretations, one should, however, note that, strictly speaking, labor quality does not measure the level of skills in productivity studies. Instead, the labor force is categorized using a combination of characteristics that cluster workers into groups with similar levels of labor costs. Labor quality therefore varies with, for example, the average level of formal education, industry of employment, age, and gender. "Labor quality" in productivity studies can often be interpreted as "wage category." The visible decline in the labor quality contribution in the 1996–99 period can therefore also be interpreted as a "trickle–down" in the labor market. More generally, decreasing labor productivity can sometimes indicate that the members of society broadly benefit from economic growth. In the neoclassical framework, of course, wage categories are assumed to perfectly reflect the marginal productivities of different worker categories, so that labor costs and labor quality are more or less synonymous [52].

As was noted above, the growth impact of ICTs may also become visible in the total factor productivity residual, which Oliner and Sichel call multifactor productivity. In the above table, the growth contribution of ICT capital reflects the increasing investments in ICT capital by user industries. To analyze the impact of ICT production, we have to dissect the total factor productivity residual into components that originate from ICT manufacturing industries and from other sources. To achieve this, Oliner and Sichel decompose the economy into three segments: semiconductor manufacturing, computer manufacturing, and the rest. They further assume that the total factor productivity growth rate of the total economy, shown in Table 1, is a weighted sum of TFP growth rates of these three sectors. For the weights, they use the gross output share of each sector.

To implement the decomposition, Oliner and Sichel need estimates of the sectoral total factor productivity growth rates. Here they make a common and theoretically interesting assumption: that price decreases in computer and semiconductor manufacturing result from improvements in total factor productivity. The basic idea is that if the semiconductor industry, for example, is able to continuously cut prices without losing its profitability, and if the input prices for the industry do not decrease, the source of the output price decline has to be better productivity. Output price decline therefore becomes a simple measure of total factor productivity increases in the industry. For researchers, this has the major benefit that output prices can be observed directly.
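
The decomposition under these assumptions can be sketched in a few lines. The sector shares, price declines, and aggregate TFP figure below are invented for illustration; they are not the estimates from the study:

```python
# Sketch of the sectoral TFP decomposition: aggregate TFP growth is
# treated as a gross-output-share-weighted sum of sectoral TFP growth
# rates, and ICT-sector TFP growth is inferred "dually" from relative
# output price declines (input prices assumed flat). Numbers invented.

sectors = {
    # name: (gross output share, relative output price growth per year)
    "semiconductors": (0.010, -0.25),
    "computers":      (0.012, -0.18),
}
aggregate_tfp_growth = 0.010  # economy-wide TFP growth (illustrative)

# Dual approach: a sector's TFP growth is the negative of its relative
# output price change, weighted by its gross output share.
ict_contribution = sum(share * -price for share, price in sectors.values())
other_sources = aggregate_tfp_growth - ict_contribution

print(round(ict_contribution, 5), round(other_sources, 5))
```

Even with gross output shares of only one or two percent, price declines of 20 to 25 percent a year make the ICT sectors account for a large slice of aggregate TFP growth, which is the qualitative pattern Oliner and Sichel report.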

Using this procedure, Oliner and Sichel find that computer and semiconductor manufacturing have indeed been major sources of total factor productivity growth. In the U.S., they accounted for about half of TFP growth in the 1974–90 and 1991–95 periods, and about two–fifths in 1996–99.

 


Empirical and conceptual limits of the Oliner–Sichel study

The Oliner and Sichel study is an exemplary piece of academic work, and it has been extremely influential in recent productivity discussions. It is, however, also clear that it leaves several interesting questions open. In the procedure described above, the neoclassical assumptions and the accurate measurement of inputs and outputs are of fundamental importance. In this section I discuss points that have particular relevance in the context of the Oliner–Sichel study, and for studies that have adopted similar approaches to ICT productivity analysis.

The first point is that the study assumes constant returns to scale in production. This is a common assumption in productivity studies, but it is quite unintuitive for semiconductor and software production. The rapid declines in semiconductor prices result, to a large extent, from large economies of scale in semiconductor manufacturing. In fact, the most important semiconductor industry products, such as memory chips and microprocessors, are quite similar to packaged software, in that additional copies can be produced at low cost. The assumption of constant returns to scale might therefore be particularly misleading in studies of ICT productivity impacts.

Oliner and Sichel use price and capital stock estimates provided by the U.S. Bureau of Economic Analysis (BEA) and the Bureau of Labor Statistics (BLS). Although these estimates are based on extensive studies, they are also known to have conceptual and empirical problems. One important conceptual problem is that the underlying price estimation models are not able to account for qualitative changes in the use of computers. Although the price indices that provide the basis for calculating ICT investments and capital stocks are commonly thought to model quality change, in reality they are only able to measure improvement in existing qualities [53]. Oliner and Sichel therefore implicitly assume, for example, that personal computers from different decades can be categorized as members of the same class. Empirically, however, it seems probable that the use of broadband–connected PCs with advanced audio and video capabilities is essentially different from the use of 1980s PCs. The price indices that underlie the Oliner and Sichel study have theoretical difficulties in crossing such qualitative discontinuities. It is therefore possible that the existing capital stocks and the user costs of ICT are over– or underestimated. As the neoclassical framework does not say anything about discontinuous qualitative change, we cannot say much about the size of the resulting estimation errors in general. One could, however, speculate that as the importance of innovative activities grows in the economy, these discontinuities become increasingly important. Although they may have been relatively invisible when production was largely based on mass–produced goods, they perhaps need to be taken into account in knowledge– and innovation–based economies.

An important and well–known empirical problem in the ICT price indices is that there is no accurate information about software output. Average labor costs are therefore used to derive the price index for in–house developed software [54]. The price index for custom–developed software is then derived by averaging these in–house development cost estimates with estimates for prepackaged software [55]. Prepackaged software indices, in turn, are based on spreadsheet and word processing software prices. A conceptual problem with this approach is that it measures output using labor input as a proxy for much of the produced software output [56]. With this assumption there can be no changes in computer programmer and systems analyst productivity [57]. Empirically, this is a troublesome assumption, as it is often noted that professional software programmers can have very large task productivity differences, on the order of 1 to 20.

In their analysis of the growth contributions of ICT production, Oliner and Sichel use the assumption that price declines reflect productivity improvements. This assumption is called the "dual approach" for measuring total factor productivity growth (cf. Jorgenson and Stiroh, 2000; Barro, 1998) [58]. Instead of trying to measure TFP changes directly, this approach simply uses prices, and assigns all price change to TFP improvements. Implicitly, this procedure relies on a model of economic activity that does not necessarily closely reflect the realities in the semiconductor, software, and computer industries.

The underlying idea of the dual approach is that declines in relative ICT prices reflect productivity growth in the ICT sector. The semiconductor industry, for example, has continuously been able to produce more outputs with the same inputs. Without productivity increases that compensate the price declines, the industry would have gone bankrupt. This intuition is supported by the common sense observation that the semiconductor industry has been amazingly successful in inventing new product generations that have almost exponentially increased computing capability without increasing prices.

The public awareness of dizzying technical progress in ICTs does not, however, immediately translate into economic progress. Strictly speaking, the price developments in computers and semiconductors do not show much decrease. If we simply look at the product prices and use the dual approach of measuring TFP growth through price declines, the result is that there has not been much TFP growth in the computer or semiconductor industries.

In nominal terms, prices have remained relatively stable for new ICT products. Hard disk drive prices per sold unit show a constant decline [59], but until recent years, microprocessor prices have been relatively constant at introduction [60]. The median price of desktop computers sold in the U.S. has been about US$2000 since the 1970s, although recently the nominal prices have declined [61]. As there obviously have been amazing technical developments in the semiconductor and computer industries, price indices are used to correct the nominal prices. This has led to a rapid decrease in the estimated "real prices" of ICT products. A large fraction of the growth that is measured in the national accounts and related to ICT originates from these corrections. This is particularly visible in the U.S., where prices and investment stocks are extensively adjusted for quality. It is therefore important to understand whether ICT prices have actually been measured correctly. Indeed, this is a critical element in the ICT productivity story and therefore deserves a closer look.

 

++++++++++

Price indices as a source of growth

Simple price indices can be used for homogenous products, such as wheat and sugar. Many products, however, change across years. Price researchers therefore try to develop price indices for carefully controlled homogenous product groups, which can then be aggregated to generate, for example, consumer price indices. A common approach is to use "matched model" price indices, where the price of exactly the same or a closely similar product is compared across time.

Matched model indices have been used for computer products, but they have important deficiencies. In practice, new computer models and versions emerge frequently, and completely new functionality is regularly introduced in ICT products. Only a few product models exist on the market long enough for matched model price indices to be reliably developed for them. As the matched model approach cannot account for new products for which historical data does not exist, it also overweights old products whose prices may develop differently from current ones.

The hedonic method tries to overcome these problems by creating statistical estimates of the value of product characteristics. For example, price researchers can observe the prices of ten different PCs that are otherwise similar except that they have hard disks that are of different capacity. Using the prices and the data on hard disk capacity, the researchers can fit a statistical model that estimates the current market value of hard disk capacity. By extending this approach to multiple technical characteristics — for example, processor clock speed, amount of random access memory, and other similar characteristics — the researchers can develop a mathematical model that describes how the different technical parameters influence the price.

If we plug a specific product specification into the hedonic equation, it tells us how much that product would cost. Hedonic equations, therefore, can also give prices for products that do not actually exist on the market. Specifically, they can tell us what the price of a product model would have been if exactly the same model had been on the market a year earlier. The price difference can then simply be used to derive a price index for that specific product. More generally, the same method can be used to derive price indices for products that consist of "bundles" of constant technical characteristics. The estimated price change of the constant bundle of product characteristics, therefore, gives a "quality–adjusted" or "constant–quality" estimate of price change.
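The procedure just described can be sketched in a few lines of code. This is a minimal illustration, assuming a semi–log specification; the PC configurations, characteristics, and prices are entirely invented:

```python
import numpy as np

# Hypothetical observations: each row is (hard disk GB, RAM GB, clock GHz)
# for otherwise similar PCs, with the observed market price of each.
X = np.array([
    [250, 2, 2.0],
    [500, 2, 2.0],
    [250, 4, 2.0],
    [500, 4, 2.6],
    [750, 4, 2.6],
    [750, 8, 3.0],
], dtype=float)
prices = np.array([600, 650, 680, 820, 870, 1050], dtype=float)

# Semi-log hedonic specification: log(price) = b0 + b . characteristics.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, np.log(prices), rcond=None)

def hedonic_price(disk_gb, ram_gb, clock_ghz):
    """Predicted price of a bundle of characteristics -- including
    configurations that do not actually exist on the market."""
    return float(np.exp(coef @ np.array([1.0, disk_gb, ram_gb, clock_ghz])))

# Price an unobserved "constant-quality" bundle.
print(round(hedonic_price(500, 8, 2.6), 2))
```

Repeating the estimation on a later period's data and pricing the same bundle with both sets of coefficients would give the constant–quality price change described above.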

Computers have been important for measured growth because computer prices have been aggressively adjusted for quality improvements. In other ICT products and services the adjustments have been much less prominent. This can be seen in Figure 2, which shows the U.S. price indices for computers, communications, software, and other products using the year 1996 as the base year. These indices are commonly used as the starting point in international ICT productivity studies. To put it very simply, the reason for the rapid productivity growth in the second half of the 1990s is the rapid decline in computer price indices, shown in the figure. In neoclassical productivity studies, this decline becomes extremely influential. This is because it affects both the growth rate of quality–adjusted productive assets and the user costs that multiply the growth of these assets. Further, as the user cost is calculated by multiplying the total quality–adjusted volume of productive assets with its gross rate of return, which itself depends on the rate of quality–adjusted depreciation and revaluation of the assets, the quality adjustments effectively enter the calculation to the third power.
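This multiplying effect can be illustrated with a toy growth–accounting calculation, using a simplified Hall–Jorgenson user cost (real rate plus depreciation minus asset–price revaluation); every number below is invented for illustration:

```python
def ict_growth_contribution(stock_growth, real_rate, depreciation,
                            price_change, asset_value, total_income):
    """Toy growth-accounting contribution of one asset class.

    Simplified Hall-Jorgenson user cost per dollar of asset: real rate
    plus depreciation minus asset-price revaluation.  A rapidly falling
    quality-adjusted price RAISES the user cost (capital losses must be
    recouped), and quality adjustment also RAISES measured stock growth,
    so the adjustment enters the contribution several times over.
    """
    user_cost_rate = real_rate + depreciation - price_change
    income_share = user_cost_rate * asset_value / total_income
    return income_share * stock_growth

# Without quality adjustment: modest stock growth, mild price decline.
plain = ict_growth_contribution(0.05, 0.04, 0.15, -0.05, 100.0, 1000.0)
# With quality adjustment: fast "real" stock growth, steep price decline,
# high quality-adjusted depreciation.
adjusted = ict_growth_contribution(0.35, 0.04, 0.30, -0.30, 100.0, 1000.0)
print(plain, adjusted)
```

With these invented inputs, the quality adjustment changes the measured growth contribution by more than an order of magnitude, even though the underlying asset value and income are the same.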

Figure 2: Price indices used to adjust the productive value of different products.

The reason why productivity studies identify ICTs as the driver of productivity improvements, therefore, is that neoclassical growth accounting studies allocate productivity growth to those sectors where productive assets grow fast and where price declines are rapid. Productive computer assets, in turn, have been growing fast because the quality–adjusted prices that are used to measure the discrepancy between the actual market value of computers and their assumed "productive value" have been declining rapidly. The contribution of ICTs to labor productivity has been large because the rapid increase of ICT assets — or, more exactly, computer assets — is translated in this theoretical framework into capital deepening. Similarly, as the framework associates the total factor productivity growth rate with the speed of price declines, it should not be a surprise that industries where price indices decrease rapidly become important for productivity growth. As such, these results follow purely from the mechanics of growth accounting, and nothing in this framework differentiates ICTs from any other products with similar price dynamics.

Without quality adjustments, the story would be quite different. First, the amount of computer capital would be only a fraction of what productivity studies now assume it to be. Second, the growth rate of computer capital would have been much slower. This can be seen in Figure 3. The value of U.S. computing assets has roughly doubled over the two decades since the 1980s, while growth in the 1990s was relatively modest. The estimated value of productive assets that generate computing services, however, grew extremely rapidly in the second half of the 1990s. This rapid growth, in fact, has been the main source of research results that show that ICTs became important for economic growth and productivity improvements in the 1990s.

Figure 3: Computer assets in the U.S. Market value vs. value used in productivity studies.

The fundamental question, then, is whether the hedonic price indices measure correctly the productive value of computers. If productivity increase is conceptually independent of improvements in technical parameters, there is no obvious reason why hedonic indices would correctly estimate productive computer assets, or why the "dual approach" would, in fact, measure total factor productivity change.

The hedonic price indices do not directly measure productivity. Instead, they measure a set of technical characteristics that empirically seem to be correlated with price, and which conceptually are associated with the production of valuable outputs. For microprocessors, for example, price indices are constructed by characterizing different microprocessor chips using their clock speed, internal bus bandwidth, existence of multimedia capabilities, and other similar characteristics. An important factor for semiconductors is also the time that has passed since the introduction of the chip [62]. Using the hedonic approach, the semiconductor industry output, therefore, becomes represented as a bundle of technically defined characteristics, such as the total volume of megahertz and megabits, for example, that the industry has produced in a year.

To answer this question, it is necessary to understand what actually drives technical improvement in, for example, semiconductors. If the prices drop because of productivity improvement, price indices could be used as proxies for productivity. If, on the other hand, prices drop for reasons that are essentially unrelated to productivity improvements, the dual approach does not work.

It is also critical that the technical characteristics are productive and valuable for the buyers. For example, if semiconductor prices drop because old chips are being substituted by new ones, this does not necessarily lead to real output growth. From the productivity point of view, new chips reflect growth only to the extent that they lead to more valuable products.

The neoclassical theory assumes that market prices are determined by the productive value of products. In this framework, if producers pay for the production of some characteristics, they have to be productive, and as the buyers are fully rational, they pay exactly the productive value of the characteristic. In other words, the theory requires that price changes, in fact, are directly associated with productivity changes.

The problem, however, is that the nominal prices of these products actually do not drop as much as we believe they should. The conventional way to deal with this problem is to assume that the improvements in technical parameters can be interpreted as growth of output. Although the market prices do not drop, the argument goes, the buyers get more for their money. Furthermore, this adjustment is made in a way that exactly explains the observed price changes by imputing a component of quality change that makes neoclassical economists happy.

In a simplified way, the hedonic approach could, for example, count the number of transistors on a chip, and when more transistors are shipped, this would by definition mean more output. The fact that this output growth would not be captured by nominal sales would then be corrected using price indices that lead to "real" growth numbers that multiply nominal dollars until the output seems to measure the number of transistors shipped, instead of the chips that contain these transistors. The difference between sold chips and sold transistors would then be measured as an increase in total factor productivity.
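The transistor–counting caricature can be put in numbers. This is a minimal sketch with invented sales figures, showing how a constant–quality deflator turns flat nominal sales into doubling "real" output:

```python
# Hypothetical two-year comparison: the same number of chips sold at the
# same nominal price, but transistor counts double (a "quality" improvement).
chips_sold = [1_000_000, 1_000_000]            # units in year 1, year 2
unit_price = [500.0, 500.0]                    # nominal dollars per chip
transistors_per_chip = [10_000_000, 20_000_000]

nominal_output = [n * p for n, p in zip(chips_sold, unit_price)]

# Constant-quality price index: price per transistor, normalized to 1.0
# in year 1 -- it halves even though the chip price is unchanged.
price_per_transistor = [p / t for p, t in zip(unit_price, transistors_per_chip)]
price_index = [x / price_per_transistor[0] for x in price_per_transistor]

# "Real" output deflates nominal sales with the quality-adjusted index.
real_output = [n / i for n, i in zip(nominal_output, price_index)]

print(nominal_output)   # nominal sales are flat
print(real_output)      # "real" output doubles
```

The gap between the two series, the doubling of "real" output against flat nominal sales, is exactly the difference between counting transistors and counting chips described above.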

Of course, transistors are rarely sold separately these days. Most transistors, indeed, are on semiconductor chips that use transistors in multiple different ways, for example, for building memory cells and microprocessor logic. Improvements in technical functionality depend on technical characteristics, such as the size and speed of transistors and internal data bus bandwidths, but also on the skillful organization of the transistors on the chips and on innovative designs. In fact, the processing power of a microprocessor depends to a large extent on the software that uses the underlying hardware capabilities. The compiler — that converts high–level programs into code that actually runs on the microprocessor and uses its transistors — is often the main source of processing power. Even a relatively technically unsophisticated microprocessor can easily beat state–of–the–art chips if the latter use compilers that are not skillfully optimized for the underlying hardware capabilities.

"Task level productivity" or "processing power" of a microprocessor, therefore, cannot be captured simply by looking at the technical characteristics of the chip. The technical characteristics make sense only in a context that combines hardware capabilities, software architectures, and specific types of applications [63]. As long as this configuration remains stable, we may be able to forget the operating environment of the microprocessor and focus simply on the microprocessor itself. Empirically, however, the configurations change often. As was noted, in such environments price indices become theoretically difficult to define.

In general, one should therefore ask in what sense total factor productivity increase could be associated with technology improvements. In the neoclassical productivity framework, total factor productivity was associated with the overall efficiency of the economic process. As was noted above, although total factor productivity often is interpreted as "the level of technology," economists are well aware of the fact that it is not in any direct way associated with technology. The dual approach, however, makes the link between "level of technology" and total factor productivity in a different way. It becomes a definitional link. The efficiency increases when technological change in products becomes interpreted as growth in real output.

Hedonic price indices often lead to confusing interpretations because they mix two essentially different concepts of technology. In the economic framework where these indices are used, "technical advance" is purely an economic parameter that is independent of any technical considerations. Hedonic indices, on the other hand, rely on an independent concept of technical advance that is deeply rooted in knowledge of engineering processes and the uses of technical products. The information that is needed to derive hedonic indices is not modeled in economic theory because the basic historical assumption of economics was that it can be an autonomous domain of study, where technical, social, or, for example, ethical sources of values do not have to be considered. In this sense, hedonic indices import into economic theory information about values that the theory itself assumed to be irrelevant. Strictly speaking, consistent use of hedonic indices, therefore, would also require that we revise the economic theory of value.

Of course, if we want to say something about the relative impact of ICT industries, we also need to make similar output corrections in other industries. If we only adjust one industry and get high growth rates for it, our productivity studies pick up this specific industry and show that it has been an important source of growth. This, in fact, is what typically happens in ICT productivity studies. A broad and consistent adjustment for quality in economic outputs would make ICTs less prominent sources of growth.

A fundamental question is to what extent the technical improvements really reflect conventional ideas about output growth. As was noted, the price indices, and therefore also the accounted output and investment growth, implicitly assume that, for example, increased microprocessor clock speed implies economic growth. End users, however, are not necessarily interested in technical characteristics per se. If they were, technical improvement could be expected to lead to increasing demand. Although demand for semiconductors is very cyclical, it seems, however, to have been surprisingly independent of technical improvements over the years (Gordon, 2000). The apparent demand growth exists to a large extent because the "real output" is corrected by the price indices. In other words, we get growth because the conceptual system that we use to measure growth puts it where we want to see it.

In a simplified way, the use of quality–adjusted price indices essentially implies that smaller transistors mean growth. Strictly speaking, quality–adjusted indices measure the decline of prices in the ICT products, but this becomes interpreted as an increase in the real value of the new products that replace earlier products, or as efficiency improvement when old product generations are dumped on the market at rapidly declining prices. The semiconductor industry has continuously been able to ship memory chips that have smaller transistors and microprocessors that have higher clock speeds, but the jump from purely technical characteristics to economics requires the crossing of an interesting conceptual boundary. Why, indeed, do we think that smaller transistors imply more economic output, but smaller cars, for example, do not? Why do we count the increasing millions of transistors on a chip, instead of counting the chips?

The reasons, of course, are complex, and price indices are derived using many technical characteristics, not only the size or number of transistors on a chip. It is important to note that the arguments for making specific quality adjustments, however, cannot in any conventional sense be economic arguments. They are arguments about the usefulness and value of technical characteristics in specific uses of technology. To be able to generate a list of potentially valuable technical and functional characteristics, we need to specify a particular way and context of using the product and ask engineers and business managers about the alternative ways to produce these functionalities within currently known constraints.

If the uses and the constraints for production are stable, these contextual factors may sometimes be taken for granted and they do not have to be explicitly described. Conceptually, the dual approach would seem to be best justified in industries that compete on product price in relatively perfect markets, where innovation does not matter. Assuming that firms would try to retain or grow their profits in such a setting, they would attempt to squeeze more outputs from a given input. This, indeed, could reasonably be called production efficiency improvement.

In the ICT industry, the setting is, however, quite different. Firms compete by introducing new innovative products and product variations. "Products," therefore, are only ambiguously defined [64]. The economic meaning of products is continuously reinvented by the users, and as the uses evolve continuously, existing value systems become reconfigured. A large fraction of the total profits is usually generated in the first months after the introduction of a new product, when it has limited competition and when the product price can be high. As soon as competitors enter the new product category, prices start to decline extremely rapidly. For example, to cover the development costs, semiconductor manufacturers sell their products with very high margins at the beginning of the product lifecycle and try to keep the product price above the manufacturing cost by effective use of scale effects [65]. Intel’s microprocessor chips, for example, were typically introduced at prices between US$600 and US$1000 during the 1990s. When the chips were discontinued, their prices had usually fallen to under US$100. The manufacturing cost for the chips has typically been much lower. Aizcorbe [66] notes that for the Pentium I chips, which were introduced in the first quarter of 1994 at US$1000, the manufacturing cost by the fourth quarter of the same year was about US$53 per chip [67]. Productivity increases would probably be reflected in the decline of manufacturing costs, but the link between product price and productivity is not clear when the product price is fifteen times the manufacturing cost. All this indicates that innovation and qualitative change are central in this industry.

Neoclassical assumptions normally require that producers operate in markets where producers have no influence on prices, and where the production of one additional product costs as much or more than the first exemplar of the product. Oliner and Sichel, and many other influential studies, implicitly assume that all firms are perfect users of ICT, allocate their resources with perfect economic rationality, and realize without delay all possible productivity opportunities. Furthermore, due to difficulties in data collection, they start from the assumption that price changes perfectly reflect total factor productivity changes. Theoretically, this might be so if the markets were perfect and the competitive environment were in equilibrium. It is, however, not clear whether such a competitive equilibrium would conceptually make sense, or whether it would be a contradiction in terms in industries that compete through innovation [68].

The link between economic growth and technical change is also not trivial because quality adjustments are necessarily based on a retrospective selection of quality parameters. For example, power consumption became important with the introduction of portable PCs, but it is not included in the microprocessor price index calculations in the U.S. In fact, Berndt, et al. (2000) show that the emergence of portable PCs created a discontinuity in price indices for PCs around 1987. The historical contingency of the relevance of technical characteristics means that, potentially, we should re–evaluate historical price indices whenever new product functionalities or uses emerge. Intel, for example, has recently moved to describe its microprocessors in terms of MIPS (millions of instructions per second) per watt, reflecting increasing problems with power consumption [69]. Strictly speaking, the accumulated ICT capital stocks should now be computed anew, as we understand that important parameters, such as investments in chip cooling and computer room air conditioning, were not taken into account.

In general, it is not conceptually clear what we mean by constant prices when technological change becomes the main source of price changes. This makes the value of money dependent on qualitative advances in technical parameters, and the conventional theoretical economic frameworks move to a new uncertain terrain [70]. This terrain is particularly shaky in the domain of ICT, where measured price changes have been driven by extremely rapid improvements in semiconductor quality [71].

Indeed, if we do not correct for quality improvements in semiconductors, both overall growth and productivity numbers look quite different [72]. Much of the productivity increase goes away if we roll back the effect of quality improvements in investment stocks and consumption. For example, the global sales of semiconductors do not show any obvious trend after the beginning of the 1990s when measured in current dollars, as can be seen from Figure 4. In year 2003, the world semiconductor market was worth 166 billion U.S. dollars, or some 11 percent more than in year 1995, without adjusting for inflation. If we account for the fact that there are now more computers in the world than a decade ago, and that annual sales in the semiconductor industry therefore increasingly replace rapidly decaying computer investments, it seems possible that new semiconductor demand has slowed down [73]. Although there was a clear upsurge in semiconductor sales during the last years of the 1990s, this was perhaps a temporary peak. Of course, if we correct for quality improvements and treat replacement sales as new sales, the picture looks quite different. This, indeed, is what happens when national accounts and productivity studies convert the actual sales numbers into "real" investments. The point, however, is that the picture crucially depends on the accuracy of the quality adjustments. It is possible, for example, that the rapid increases in semiconductor quality result from the fact that demand for computing has saturated, and that competition has forced semiconductor firms to aggressively cut their prices to keep the total volume from shrinking [74]. Neoclassical productivity analysis cannot easily shed light on such issues, as it says nothing about the causal sources or drivers of productivity change.

Figure 4: World semiconductor sales, 1982–2002.
Data: World Semiconductor Trade Statistics (WSTS).

Accurate estimates for the impact of quality corrections in price indices are difficult to make, as multiple potential sources of error should be considered simultaneously. Landefeld and Grimm (2000), for example, have argued that quality adjustments cannot explain any important part of the rapid GDP growth in the U.S. in the 1995–99 period. This result is based on their estimate of the size of the errors created by the quality adjustment methods [75]. As the selling prices of typical computers have declined about five to nine percent annually, and as the hedonic price indices used for computers have declined about 33 percent annually during this period, one could estimate that the difference created by quality adjustment could be perhaps 25 percent per year. If such a potential error were in fact realized when computer assets are calculated in national accounts, the impact on GDP growth would be relatively small. This is because the share of computers in total assets is small. Landefeld and Grimm estimate that the use of hedonic quality adjustments could have contributed no more than one quarter of a percentage point to the average annual 4.14 percent GDP growth over the 1995–99 period.
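The rough arithmetic behind this estimate can be reproduced. The decline rates are the approximate figures cited above; the computer share of GDP used below is a hypothetical round number chosen only to make the orders of magnitude visible:

```python
# Annual price declines (approximate figures cited in the text).
hedonic_decline = 0.33    # hedonic computer price index, per year
nominal_decline = 0.08    # typical selling prices, mid-point of 5-9 percent

# Potential annual "quality" wedge between the two measures.
wedge = hedonic_decline - nominal_decline    # roughly 25 percentage points

# The impact on GDP growth scales with the (small) share of computers in
# final output; this share is a hypothetical round number.
computer_share_of_gdp = 0.01
gdp_growth_effect = wedge * computer_share_of_gdp

# Wedge per year, and its GDP-growth impact in percentage points.
print(round(wedge, 2), round(gdp_growth_effect * 100, 2))
```

Even a 25 percent annual measurement wedge thus translates into only a fraction of a percentage point of aggregate growth, which is why the adjustments matter far more for ICT-focused studies than for GDP totals.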

On the other hand, the main result of growth accounting studies was that the rapid growth in the second half of the 1990s was strongly associated with ICT production. Although quality adjustments may have little importance for overall growth, they have central importance for studies that focus on ICTs.

One way to see this is to compare ICT asset growth rates, measured in their historical–cost values as they would be recorded in company reports, with the growth rates of the quality–adjusted productive assets that are used in productivity calculations [76]. In the 1990–2000 period, computer and peripheral equipment assets grew 5.3 times faster when measured in quality–adjusted quantities than when measured in their historical–cost values. For software and communications equipment, where quality adjustments play a smaller role, the growth in historical–cost values was almost exactly equal to the growth of quality–adjusted productive assets. This can be seen by comparing Figure 5 and Figure 6.

As the figures show, in historical–cost values, computers and peripheral equipment assets grew three–fold in the 1980–90 period and they doubled in the 1990–2000 period. There was growth, as we could expect. Quality–adjusted quantity, however, grew 10.5 and 10.7 times during these periods. Software stocks increased about five– and three–fold in historical values, and communication equipment about three– and two–fold, respectively. In quality–adjusted values, software stocks increased about three–fold and communication equipment stocks about two–fold in both periods.

Figure 5: ICT assets in the U.S. historical–cost value, 1980–2001.
Data: Bureau of Economic Affairs (BEA).

Simply by looking at the historical–cost values, one can see that software became more important than hardware in year 1991, but that communication equipment still represents about 1.6 times bigger assets in the U.S. than software. Software assets and investments have also grown faster than hardware since the early 1980s. This picture, however, turns upside down after quality adjustments. Computer and peripheral equipment assets grew three times faster than software in the 1990–2000 period, when measured using productive stocks. In historical–cost terms, however, software assets increased twice as fast as hardware.

Figure 6: ICT productive assets in the U.S., 1996 dollars, 1980–2001.
Data: Bureau of Labor Statistics (BLS).

In current–cost value, which attempts to measure the replacement value of these stocks, computer and peripheral assets have been relatively stable. The replacement or market value of computers and peripheral equipment assets was close to US$100 billion for the 1987–98 period in the U.S., peaking at about US$150 billion in year 2000. The replacement value of software assets was US$345.5 billion, communication equipment US$499 billion, and computers and peripheral equipment US$138.6 billion in 2001.

Quantity indices try to measure the "physical–volume" or the "real" stocks of assets by taking into account quality change. The differences between quantity indices of assets and historical values of the same assets therefore reflect the impact of quality adjustments. Depreciation rates, however, also play a role. In ICT investments, the expected lifetime is short and depreciation rates are high. The average age of computer and software stocks in the U.S. was less than two years in year 2001 [77]. To a large extent this was because computers and software are assumed to depreciate extremely quickly. In this sense, the U.S. economy has perhaps wasted money by investing heavily in rapidly depreciating stocks that lose their value in just a couple of years. If all computers in the U.S. had been thrown into a garbage can in year 2000, the ongoing investment rate would have been sufficient to rebuild these assets in about two years. On the other hand, this movement towards rapidly decaying investments has generated growth opportunities, as more production is needed to compensate for the increasing consumption of capital. This rapid decay of many ICT products makes them somewhat of a borderline case between capital and consumption goods. Indeed, some hard disk industry managers have complained that they are in the fish business, as product prices constantly drop about one percent per week and products on the shelf begin to stink [78].

For communication equipment, depreciation rates are estimated to be much smaller than those for computers and software. This is reflected in the fact that the average age of communication equipment investments in the U.S. was about five years in year 2001. The reasons for the slow decay of communications equipment are to some extent historical. Depreciation includes declines in value that result from wear, tear, accidental damage, and obsolescence, but also from aging. If incumbent telecom operators were to depreciate their assets at computer and software rates in their accounts, many would probably go bankrupt, as the accelerated depreciation would destroy their profits and assets.

Accurate price indices are critically important for productivity studies, and there has been extensive theoretical and empirical research on ICT price indices during the last two decades. As was noted above, it is, however, not clear what we measure with price indices. Theoretically correct price and volume indices have to be calculated by "chaining" changes from one time period to the next. This implies that prices start to measure the value of the particular type of good, and this value becomes incompatible with the values of other goods. Chained indices can correctly measure year–to–year changes in the prices of the goods they cover, but they also become independent of the price changes of goods they do not cover. "Microprocessor money," "hard disk memory money," and, for example, "car money" become different in this treatment. As a result, quality–adjusted euros or dollars can no longer be added in the traditional sense. Quality–adjusted values become particularly incompatible when the quality adjustments are large, and when we move away from the time period that acts as the base year used for defining the original values. In technologies such as computers, where the value of investments decays in just a few years because of the introduction of new innovative technologies, quality adjustment of price indices is therefore a theoretically interesting challenge.
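Chaining can be sketched as follows. Each link compares adjacent periods only, and the links are multiplied together; the quality–adjusted price series below is invented:

```python
def chained_index(period_prices):
    """Chain a price index from period-to-period price relatives.

    Each link compares period t with t-1 only; the chained index is the
    running product of the links.  The resulting series is self-contained:
    it tracks one good's own price history and cannot be meaningfully
    added to the chained series of a different good.
    """
    index = [1.0]
    for prev, cur in zip(period_prices, period_prices[1:]):
        index.append(index[-1] * (cur / prev))
    return index

# Hypothetical quality-adjusted "microprocessor money": a 30 percent
# decline every period compounds to a small fraction of the base value.
mpu_prices = [100, 70, 49, 34.3]
print(chained_index(mpu_prices))
```

Running the same function on a stable–priced good would return an index near 1.0 throughout, which is exactly why summing the two deflated series mixes incommensurable units of "money."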

Quality adjustments look natural in products such as computers or semiconductors, but conceptually we also have to jump from the world of physical characteristics to the world of economics when we start to use statistical regressions to derive hedonic prices for products. The fact that quality money is not additive indicates that the basic concepts of economic theory are no longer valid.

In sum, the reality of the semiconductor industry does not easily accommodate the assumptions of the dual approach, which identifies total factor productivity increases with price decreases. Nor does it fit well with the assumptions of the neoclassical growth accounting framework. Hedonic adjustments lead to quite profound questions about the ways in which value and capital investments are conceptualized in ICT productivity studies. As a result, the findings of the Oliner and Sichel study, which crucially depend on the dual approach, the neoclassical equilibrium assumptions, and quality–adjusted price indices, do not necessarily reflect the productivity impacts of ICT well. Faster processors or bigger MIPS ratings do not necessarily mean more productivity, at least in a world where processors may be idle over 90 percent of the time.

 

++++++++++

Productivity paradox or paradigm anomaly?

When we analyze existing ICT productivity studies, it becomes clear that their results depend both on broad conceptual assumptions and on the details of data collection methods. As Kuhn (1970) and other historians of science have shown, existing research paradigms can always be extended by adding new explanatory variables. This process of continuous improvement expands the boundaries of current theories without any fixed limit. With a sufficient number of crystal spheres and good computers, the Ptolemaic model of the solar system is as accurate as the best Keplerian models. However, as Kuhn also noted, the approaching breakdown of a theoretical paradigm is often indicated by persistent anomalies that remain unexplained in the present theoretical framework. The Solow paradox, the mysterious disappearance of the total factor productivity residual after 1973 and the invisible impact of ICTs, is a potentially interesting example of such an anomaly. If it truly is a paradox, the theoretical approaches used to discuss productivity may require rethinking. If, on the other hand, the paradox can be sufficiently explained within the current framework, no fundamental revision is needed. It is therefore interesting to see how productivity researchers associated with the neoclassical approach have explained the paradox.

Triplett (1999) has reviewed several explanations for the paradox. The first explanation is that ICT is only a small fraction of GDP and national accounts. Brynjolfsson (1993), for example, calculated that IT investments in the U.S. contributed perhaps no more than 0.06 percent to aggregate GDP in the 1980s [79]. In fact, it was only around 1982 that annual investments in computers and peripheral equipment started to exceed investments in farm tractors in the U.S. [80] As was discussed above, if ICT production and investments are relatively invisible parts of the total economy, their productivity impact should also be quite invisible [81].
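The small-share argument rests on simple arithmetic: in growth accounting, a factor's contribution to output growth is roughly its income share times its own growth rate. A sketch with assumed numbers (not figures from Brynjolfsson's study) makes the point:

```python
# Illustrative arithmetic behind the "small share" explanation. Both
# numbers below are assumptions for the sketch, not data from the paper.

ict_share = 0.02    # ICT capital income as a share of GDP (assumed)
ict_growth = 0.20   # annual growth of ICT capital services (assumed)

# Contribution to aggregate output growth = share x own growth rate.
contribution = ict_share * ict_growth
print(f"{contribution:.3%}")
```

Even with a very rapid 20 percent growth rate, a two percent share yields only about 0.4 percentage points of aggregate growth, which is why a small ICT share implies a nearly invisible aggregate impact.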

The second explanation is that the methods used to measure computer investments and the costs of ICT may mismeasure the value of these investments. As was pointed out above, if we measure ICT investments in nominal current dollars, they have increased only relatively slowly [82]. The picture changes radically when we adjust computer prices for quality improvements.

ICT production may also be difficult to measure, for example because productivity studies that use national data may not be able to see all the productivity impacts of the international division of labor. Another example, which is more difficult to handle within the neoclassical framework, is open source software. Open source software now forms over half of the core software on the Internet, but it often has no transaction price [83]. It therefore remains invisible and unaccounted for in national accounts and investment stocks.

The third possible source of the productivity paradox is that computers are often used in economic sectors where productivity is not easy to measure. Output in services, in general, is difficult to measure. As the service sector of the economy grows, measured productivity growth may slow down simply because qualitative improvements in services are underestimated.

The fourth explanation is that ICT impact is poorly captured in economic statistics. Software firms have, for example, invested heavily in the usability of their products, and these qualitative improvements could be interpreted as increased consumption. Increased product variety may also imply more valuable products for their users, and variety of choice itself could be valuable. Such qualitative improvements are not necessarily captured in current output measures. Although systematic mismeasurement of quality improvements may cancel out in calculations of productivity growth rates, Brynjolfsson and Hitt [84], for example, have argued that computers are associated with an increasing degree of mismeasurement. According to Brynjolfsson and Hitt, this is likely to lead to increasing underestimates of productivity and economic growth.

The fifth explanation is that the effective use of ICT requires learning and adjustment costs, and the impact of ICT should become visible only with delay. As the widespread use of ICT is only a relatively recent phenomenon, according to this explanation the productivity paradox is a temporary phenomenon, and will go away soon.

The sixth explanation could be called the "Dilbert theory of the productivity paradox." This is a more serious conceptual challenge for productivity research. According to Dilbert (cartoon of 5 May 1997), as quoted by Triplett (1999), "the total time that humans have waited for Web pages to load ... cancels out all the productivity gains of the information age." In fact, this theory could be extended to a Schumpeterian model, where the extremely rapid obsolescence of ICT leads to negative productivity impacts through destructive creation. In this model, advances in technology might push Dilbert into a cyberspace singularity where he has to go shopping for new computers faster than his computer seller’s e–commerce Web pages become visible [85].

The final explanation is that there is no paradox. Although there are all kinds of new products around us, and the world seems to be transforming into a "new economy" driven by ICT, in economic terms this revolution may be an illusion. The Solow residual measures the growth rate of economic efficiency, and it grows only if output grows faster than inputs. In other words, to have an impact on the productivity growth rate and the Solow residual, the rate of "technical advance" would have to be increasing. Although it is common to talk about the increasing speed of technical change, historical data do not necessarily support this view. It is clear that more new products are now introduced every year than before. The rate of change may, however, have remained quite stable for centuries. Triplett quotes Diewert and Fox (1999), who pointed out that the growth in the number of products in the average grocery store had actually fallen in the 1972–94 period from its 1948–72 level [86].

Brynjolfsson (1993) added an important point to this discussion by highlighting that decision–makers in business firms can also make bad decisions. They can, for example, over–invest in ICTs. If economists are unable to accurately measure the benefits of ICT investments, why would an average manager be a better decision–maker? [87] In fact, a 1995 study by the Standish Group estimated that 32 percent of corporate IT projects in the U.S. were abandoned before completion, at a cost of $81 billion. A U.K. study by OASIG estimated in 1996 that 40 percent of software projects were totally abandoned before completion and that a further 25 percent were substantially truncated and simplified during implementation (cf. Ewusi–Mensah, 2003).

Brynjolfsson and his colleagues also noted that if the main impact of ICT investment is to redistribute profits among competing firms, investments do not necessarily lead to an increase in total industry output. The competitive impact of ICTs is conceptually independent of the aggregate productivity impact (Hitt and Brynjolfsson, 1994). In fact, much of the ICT investment in the 1990s was justified and motivated from the perspective of competitive advantage. Large firms invested heavily, for example, in business intelligence, data mining, market information, intellectual property management, and executive information systems. Investments in ICT can also be defensive investments that are necessary to avoid deterioration of competitive positions. The competitive impact of such systems may have been considerable, but they have not necessarily increased total output, except perhaps by generating demand for ICT products, software, and consulting.

Such competitive strategic use of ICTs, indeed, could partly explain why ICT manufacturing saw major productivity improvements in the 1990s, when total factor productivity stagnated in most industries. Non–ICT industries may simply have given some of their profits to ICT industries [88].

In the second half of the 1990s, productivity grew impressively in the U.S. and also in other countries (Colecchia and Schreyer, 2002; Jorgenson, et al., 2003). This recent productivity revival has switched the focus of productivity paradox explanations somewhat. Steindel and Stiroh (2001) point out four important questions that remain open within the neoclassical growth framework.

First, there is the question of whether the productivity revival in the second half of the 1990s was purely cyclical [89]. During upswings in the economy, firms tend to invest more, and this leads to capital deepening and an increase in labor productivity. Total factor productivity is also pro–cyclical. As the economy grows, firms use their existing production capacity more efficiently. This typically becomes registered as an increase in the TFP growth rate. It is quite clear today that the growth rate in the second half of the 1990s was unsustainable, and therefore it is possible that much of the productivity revival is, in fact, unrelated to the long–term productivity impacts of ICT. As Steindel and Stiroh note, this is a particular challenge to the neoclassical productivity framework, which assumes that the economy is in equilibrium. For example, correct estimates of capital services are critically important in this framework, but they are calculated using the theoretical assumption that prices perfectly reflect the marginal products of different types of capital goods, which is clearly not true, at least over short time periods. In particular, it is not easy to argue that ICT investment decisions in the second half of the 1990s reflected perfect economic rationality [90].

The second issue highlighted by Steindel and Stiroh is the question why almost all productivity effects were seen in high–tech manufacturing [91]. As was pointed out above, the observed increase in TFP during the second half of the 1990s was strongly focused on ICT manufacturing industries in the U.S. [92] As Steindel and Stiroh note, the neoclassical productivity framework is by definition unable to answer this question. In this framework, total factor productivity change is defined as the residual growth that cannot be attributed to any known causes.

The third issue is why the measured productivity growth outside ICT manufacturing has been slow or negative. A common explanation for this is that service productivity growth is underestimated, as qualitative improvements in services are rarely visible in national accounts [93].

The fourth issue highlighted by Steindel and Stiroh is the estimation of sustainable productivity growth rates. Recent projections of economic growth have sometimes been based on the rapid productivity growth rates of the second half of the 1990s. If, for example, the slow rates of total factor productivity growth of the 1980s would be used instead, the estimates of future growth would look quite different.

Basically, the neoclassical explanations for the productivity paradox, then, argue that the theoretical apparatus used for measuring and analyzing productivity works, but the data, or our interpretations of it, might be inaccurate. It is, however, important to keep in mind the actual logic of growth accounting calculations. The growth accounting equations only output what we put in. If the starting point is that the equations exactly describe the contributions of different sources of growth, changes in the contributions of the various growth factors exactly reflect the changes we make in measuring these different factors.

This is important for two reasons. First, growth accounting studies typically rely very heavily on the validity of the theory used. As the data that would be needed for empirical studies are usually not available, researchers in practice use the assumption that the theory is empirically accurate to switch to alternative data that are available and to fill in missing data sets. For example, because the weights needed to add growth factors together are not measurable, researchers assume that all income is distributed exactly according to the marginal productivities of the different productive factors. Furthermore, because the marginal productivities of many capital investments are not available, researchers assume that firms adjust their capital stocks in real time, based on perfect knowledge about the future revenues generated by different investments. Missing empirical data are therefore filled in by defining how they should look if the theory were correct. These results are then used to describe what has happened in the actual economy. Obviously, there can be no empirical test that could falsify the theory or its results if we start from the assumption that the theory is, in fact, correct. As long as the theory defines what productivity is, and what kinds of jumps from theory to the empirical world are allowed, the measured productivity growth is exactly what the theory tells it to be.
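The circularity described above follows directly from the accounting identity itself. A minimal sketch, with invented growth rates and an assumed capital income share, shows that the TFP "residual" is simply whatever the measured inputs leave unexplained, so changing how an input is measured mechanically reallocates the residual:

```python
# A minimal sketch of the growth accounting identity. All numbers are
# illustrative assumptions, not estimates from any study. TFP growth is
# defined as the residual: output growth minus share-weighted input growth.

def tfp_residual(g_output, g_capital, g_labor, capital_share=0.3):
    labor_share = 1.0 - capital_share  # constant returns to scale assumed
    return g_output - capital_share * g_capital - labor_share * g_labor

# Same economy, two measurements of capital growth.
# With unadjusted capital growth of 4%, the residual is positive:
print(tfp_residual(0.04, g_capital=0.04, g_labor=0.02))
# Quality-adjusting ICT prices raises measured capital growth to 12%,
# and the residual turns negative, with no change in the real economy:
print(tfp_residual(0.04, g_capital=0.12, g_labor=0.02))
```

Nothing about the economy differs between the two calls; only the measurement of one input changes, yet the "unexplained" productivity growth swings from positive to negative, which is the sense in which the equations only output what we put in.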

It therefore also becomes very important that we do not leap from these theoretically derived results back to the empirical world without proper caution. For example, if we multiply the value of computer services by ten, we simply reallocate some of the total output toward computers, and the share of productivity that becomes associated with computers grows. In this framework, there is no causal connection between the different growth components and common sense "productivity" concepts [94]. If, for example, Barbie dolls represented a share of investments in the economy similar to that of computers, the studies would show that Barbie dolls have been a major source of capital deepening and thus of labor productivity growth. If the quality–adjusted prices of Barbie dolls were mismeasured, Barbies would also show up in total factor productivity growth. Although we might quickly protest that computers are obviously more productive than Barbie dolls, there is nothing in the theoretical structure of the growth accounting framework that could be used to argue for this [95]. Similarly, if we had corrected the price indices for cars and transport equipment as aggressively as they have been adjusted for computers, productivity studies would reveal that cars were a major driver of productivity in the 1990s in most developed countries. In this sense, ICT, in fact, is the "story" that underlies the productivity miracle of the 1990s. This is because, in the neoclassical growth accounting framework, any investment goods or services whose prices are deflated as fast as semiconductors, and which become obsolescent as fast, become a disproportionately important factor of productivity growth, in particular if the adjustments are made only in this specific industry.

Computers, however, are special partly because of their rapid decay. Steindel and Stiroh, for example, argue, following the U.S. Council of Economic Advisers, that the capital deepening component of labor productivity growth could be sustained in the future as well. This is because computer prices drop rapidly, so the current productive ICT assets can be maintained and increased with small nominal growth in investment spending. The logic here is that because we do not need much money to keep investing in ICT, capital deepening could probably go on. On the other hand, if the average age and lifetime of these assets is about three years, their total depreciation grows rapidly as the accumulated stock increases. Capital deepening requires that we invest more each year than all historically accumulated ICT stocks lose to decay. The sustainability of capital deepening as a source of labor productivity growth therefore depends greatly on how quickly ICT products become obsolescent.
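The stock dynamics behind this argument can be sketched with the standard perpetual-inventory recursion K' = (1 - delta)K + I. The depreciation rates and investment flow below are assumptions chosen for illustration:

```python
# Sketch of why rapid obsolescence makes capital deepening hard to sustain.
# Depreciation rates and the investment flow are illustrative assumptions.

def capital_path(investment, delta, years, k0=0.0):
    """Perpetual-inventory capital stock under constant real investment."""
    k, path = k0, []
    for _ in range(years):
        k = (1 - delta) * k + investment
        path.append(k)
    return path

slow_decay = capital_path(investment=100, delta=0.05, years=10)  # structures-like
fast_decay = capital_path(investment=100, delta=0.35, years=10)  # computer-like

# With constant investment the stock converges to I / delta. The
# fast-decaying stock is already close to its ceiling of 100/0.35 (about
# 286) after ten years, so further deepening requires ever-growing gross
# investment just to offset depreciation of the accumulated stock.
```

Under these assumed numbers, the slowly decaying stock is still far from its ceiling of 2,000 and keeps deepening, while the computer-like stock has nearly flattened out, which is the tension the paragraph above describes.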

Steindel and Stiroh also highlight the open issue of why productivity growth has been concentrated in high–tech manufacturing. As was shown above, an important source of growth in the ICT sector has been the way price indices are used. There is actually not much productivity increase in "high–tech manufacturing" as a whole. The productivity increase is strongly focused on the semiconductor industry, for the reasons discussed above. Another potential reason, however, is that ICT manufacturing industries are exceptionally global. The biggest productivity increases are often seen in industries that make extensive use of international outsourcing of production. Some recent studies have argued that total factor productivity differences depend to a large extent on international terms of trade and productivity flows across countries [96].

ICT producing industries and several ICT using industries form tightly connected international networks, where intermediate products and services flow constantly across industry and country borders. This creates major challenges for correctly allocating productivity changes among industries and national accounts. Productivity researchers have conventionally focused on national data, describing productivity developments in national "industries" that do not always exist in practice. Comparative assessments of productivity developments, and of the impact of ICTs, therefore remain difficult to interpret. Productivity researchers may have been measuring the impact of globalization instead of ICT [97].

ICT–related industry sectors also typically rely on labor compensation schemes that produce large errors in studies that use traditional sources for labor compensation data. When ICT developers are hired with stock options, for example, the recorded average income may be a minor part of the actual compensation [98]. Similarly, the average hours that are used to measure labor productivity may only inadequately reflect actual working hours. In the second half of the 1990s, these effects were clearly visible in the ICT industry. Their impact on productivity statistics is, however, unclear [99].

One important conceptual challenge in productivity studies is the apparent empirical mismatch between firm–level and macroeconomic productivity impacts. From the perspective of industry practitioners, international trade, financial services, semiconductor design, scientific research, and the production of movies, for example, are obviously much more efficient when we have telephones, computers, and data networks. On the other hand, we do not know how to link such task–level efficiency with macroeconomic measures of efficiency. As David (1999) has noted, there is a conceptual disconnect between our views on productivity at these different levels. Although it appears intuitively clear that computers allow us to be more productive at work, it is not clear what the macroeconomic effects of improved task productivity are.

In interpreting empirical research on ICT productivity impacts, it is also important to note that the neoclassical productivity framework is by definition unable to account for "non–market" factors that may influence productivity [100]. For policy–makers, a particularly relevant invisible factor is policy itself. Although it is in principle possible to incorporate policy–related variables in the neoclassical framework, conceptually this often contradicts the basic neoclassical assumptions [101]. In fact, productivity changes sometimes result more from policy choices than from technological progress. Studies on productivity developments in the ICT manufacturing sector usually do not, for example, take into account the semiconductor trade agreements between Japan and the U.S., which led to price increases at the end of the 1980s [102]. Instead, such policy changes become registered as changes in TFP, and are sometimes interpreted as productivity impacts of technical advances in ICT.

 

++++++++++

ICTs as contextual and composite resources

If we look beyond traditional growth accounting for possible explanations of the invisibility of ICT productivity impacts, one important factor can be found in the way we have conceptualized ICT products. The accounting for ICT investment has mainly focused on ICT equipment purchasing costs, neglecting complementary costs of organizational change, skills, and system operation. If direct software and hardware costs account for less than 20 percent of the cost of new information system deployment, as for example Brynjolfsson and Hitt [103] suggest, the true cost of and investment in ICTs is perhaps considerably higher than commonly estimated. Most ICT–related investments in firms and society have therefore perhaps been left unaccounted for. More fundamentally, we do not know today how costs and benefits should be allocated in systems of production that consist not only of interoperating technical components and software applications, but also of non–technical elements, such as accumulated human and social capital and organizational routines and work practices. For example, studies on electronic commerce and collaboration show that investment in learning and trust may be crucially important in realizing the productivity potential of ICT. Such investments are often accounted as consumption and expense, if at all.

From the user and investor point of view, ICT investments could be described as "composite resources." ICT hardware, which is commonly the focus of economic studies, has no productive value as such. It has to be combined with other resources before its productivity potential can be released. Furthermore, the optimal ways to combine ICT with other resources depend on historically accumulated stocks of material, institutional, and cognitive resources. For example, the return on investments in electronic commerce systems for end users depends on institutional and social factors such as the reliability and trustworthiness of the social infrastructure, the enforceability of contracts, uncorrupted dispute resolution systems, and the level of adult literacy. This makes ICT productivity impacts context dependent. For productivity studies this is a challenge, as some elements of the context are not accounted for in conventional economic accounting or theory.

It is also quite clear that the common use of the terms ICT or IT mixes several qualitatively different types of technology. This is a problem for studies that try to isolate productivity impacts in different industries based on their "ICT intensity." Transaction processing systems are quite different in their organizational impact from, for example, computer–aided design or computer–mediated communication systems. Similarly, manufacturing planning, process control, and logistic support systems are different from market segmentation, data mining, and management decision support systems. For example, integrated supply chain management systems may enable efficient utilization of globally distributed production networks, leading to extraordinary productivity increases in national accounts, whereas manufacturing data management systems may allow handling an increasing number of product variations, leading to challenges in accounting for quality changes. Point–of–sale data collection systems can be combined with data mining and logistics systems to deliver goods to supermarkets and malls, but only in countries where consumers frequently drive to the mall. Indeed, between the U.S. and the EU, one of the biggest productivity growth differences in the last decade can be seen in retail trade [104]. Catching up in this area might require building six–lane freeways, opening up old city centers so that SUVs do not get stuck at street corners, and building kitchens with fridges that can store a couple of gallons of milk and half a dozen boxes of cereal. Such investments have traditionally not been captured as ICT investments.

Conceptually, an "ICT product," or an "ICT investment," is therefore not very easy to define. A simplified general description could look like the one in Figure 7. For productivity analysis, it is important to note that some elements of the composite are traditionally counted as consumption, whereas others are counted as investments. In software, for example, pre–packaged software is nowadays increasingly counted as investment, whereas the treatment of in–house and tailor–made software varies. Outsourcing of infrastructure may make it intermediate consumption, whereas parts of in–house infrastructure technologies are usually counted as investments, and therefore appear in the estimates of ICT stocks. The media used to store bits may be counted as investment but the work that generated those bits as consumption, and the resulting bits may remain completely outside any accounting system. This conceptual challenge, of course, also makes it difficult to correctly estimate returns on specific hardware or software investments. Existing firm– and industry–level estimates of ICT productivity impacts may, therefore, give a rather misleading view.

Figure 7: The basic ICT product composite.

On a more detailed level, the composite nature of "ICT products" shows up in price indices. As the benefits of ICT products generally depend on resource combinations in which the accounted hardware represents only a minor cost, the price developments of ICT hardware may in fact reflect complex shifts in users’ costs for the underlying resources. For example, when network infrastructure becomes widely available, the returns on investment in computer–mediated communication applications may change. From the point of view of an organizational decision–maker, the value of the accumulated stocks of ICT capital can, for example, rapidly increase. Such contingent, contextual, and essentially historical changes make theoretically solid estimates of income shares difficult to obtain. This is important because the accuracy of income share estimates is crucial for neoclassical growth accounting and for much of the existing research on ICT productivity impacts.

 

++++++++++

Research challenges

The above discussion highlights that there are important and interesting research challenges in the economic theory of productivity. Griliches was probably right when he noted:

"Real explanations will come from understanding the sources of scientific and technological advances and from identifying the incentives and circumstances that brought them about and that facilitated their implementation and diffusion ... . This leads us back to the study of the history of science and technology and the diffusion of these products, a topic that we have left largely to others." [105]

In general, policy–related discussions probably need to bring together elements not only from economic theory, but also from innovation research, organization theory, and history. Social and political theory is, of course, also needed if we keep in mind that ICTs can be fully understood only in a broader context of socio–economic transformation, and the possibility that values cannot be reduced to prices.

The discussed challenges for ICT productivity are summarized in Table 2.

 

Table 2: ICT productivity research challenges.

Measurement  
  Intangible assets; e.g., knowledge creation; process, relational, structural, and social capital
  International production networks; e.g., allocation of value-added and productivity gains
  Labor quality–adjustment; worker productivity vs. compensation; development of productive competences, e.g., informal learning in communities of practice
  Labor costs; e.g., stock-based compensation, outsourcing
  Non–market transactions; e.g., open source
Conceptual  
  ICTs as composite and combinatorial resources
  Incompatibility of networked innovation economy and neoclassical assumptions
  Lack of causal models in the neoclassical productivity theory
  Conceptual coherence of quality–adjusted price indices
  Missing links between task productivity and aggregate economic productivity
  Mismeasurement of outputs in sustainable knowledge society; "GDP vs. GPI"
  Missing link between output growth and socio–economic development
Empirical  
  Profit distribution; productive vs. predatory ICT use
  Differential impacts of different types of ICT use; e.g., data processing, computing, computer–mediated communication, knowledge management systems
  Optimal investment "recipes" and important contextual factors
  Current levels of technical efficiency; location of the current production frontiers
  Drivers of ICT productivity impact; sources and sustainability of innovation in the semiconductor industry

 

The open challenges of productivity measurement, of course, do not mean that ICTs have no productivity impacts. Indeed, the core question at the firm–level, as well as for policy–makers, is how to realize the productivity potential of ICTs. In other words, we should ask what it takes to make ICT investments productive. In asking this question, we should, however, keep in mind that the current productivity theories do not necessarily conceptualize productivity in the best possible way.

 

++++++++++

Conclusion

In this paper we have outlined and discussed several different ways to understand the productivity impact of information and communication technologies. Much of the current knowledge about the impact of ICTs on productivity and growth is based on the neoclassical growth theory, and in particular the neoclassical growth accounting framework. We started by recalling the fundamental motives for generating knowledge about productivity, which were intrinsically connected with political and economic concerns about wealth and welfare generation. We introduced the main concepts that economists use to discuss productivity and described in some detail one influential study that has analyzed the growth impact of ICTs in the U.S. in recent decades. Using this example, we then tried to assess the robustness of current theoretical frameworks and the validity of their empirical outcomes by critically reflecting them against the practical realities of ICT industries and ICT use. In particular, we described how computer price indices became the main source of growth in the second half of the 1990s, and argued that hedonic methods may require rethinking, as they introduce technical and engineering considerations into the core of economic theory. The analysis was complemented by a review of proposed explanations of the Solow productivity paradox. This discussion led to a set of questions that appear important for further conceptual and empirical research as we try to gain a better understanding of the impacts of ICT.

Although productivity, by definition, seems to be a concept that has meaning only within specific economic theoretical frameworks, its conceptual relevance was founded on ideas about economic and social development. Therefore we should also see the impacts of ICT in the broader context of economic and social development.

The economic impact of ICTs is often described as a modern industrial revolution that is leading us to a new information age. If ICTs are creating a structural transformation profound enough to be labeled a "revolution," current productivity statistics probably should not be expected to capture it. This is because productivity fundamentally measures only improvement. Technical efficiency can be measured only as long as the underlying process does not change qualitatively. Radical innovations break this possibility of measuring progress as the improvement of something previously existing. This is clearly visible at the level of task productivity, where new objectives require new measurement systems, but the same problem exists at the level of national economies and at the level of the global economy. Thus, if ICTs really were creating a "socio–economic revolution," one indication of this would be precisely that current productivity measures are strongly in conflict with our everyday experience of the importance of the ongoing change.

New important areas of economic growth are small before they become big. Often their emergence means that old industries disappear. Such periods of creative destruction rarely become clearly visible in aggregate statistics that have been aligned with the needs of the historical structures of the economy. If the emerging growth drivers are really important, their emergence makes a qualitative difference, and the old measurement systems mainly capture the decay of old industries. In fact, some researchers have argued that new socio–economic paradigms first become visible at the aggregate level mainly through increasing inequality and unemployment [106]. During such periods, as Griliches noted, we may learn more from studies on the history and diffusion of technologies than from neoclassical growth accounting studies.

A fundamental problem with the conventional neoclassical approaches is that they cannot easily be used to describe innovative activities. Innovations create new consumption and investment opportunities "ex nihilo," by expanding the economic space. They create growth opportunities where economic opportunities did not exist before. In this sense, innovative products are qualitatively different from traditional resources. Whereas traditional resources are limited, and therefore usually lead to diminishing rates of return, innovative products are created in a process that continuously refines their uses and meaning. Innovation is not a scarce resource. Neoclassical economic theory was specifically tailored to model worlds where resources were scarce, and therefore innovation and knowledge remained external to the theory. Technical advances were seen in this framework as "manna from heaven." In today’s economy, however, innovation is not manna from heaven. It is one of the most important factors of the modern economy and the core driver of developments in information and communication technology. The modern economy necessarily remains incomprehensible if innovation is not taken into account.

Here it probably is not sufficient to extend the neoclassical framework by internalizing some of the costs of innovative activity in the growth models. The "new growth theories" have basically tried to follow this route. It is, in principle, possible to argue that research and development expenditures generate intellectual capital and intangible assets that should be taken into account in productive capital stocks, and that knowledge spillovers from innovators to the rest of society, together with network effects among product users, lead to positive returns on investment. Or it can be pointed out, as Baumol (2002) has done, that neoclassical theory may describe innovative activities surprisingly well when these become routine. Empirically, it is clear that such things are important. In economic theory, however, the fundamental problem is not improved accounting for investments. The core problem is that the source of growth lies in intentional disequilibrium. When innovation becomes important, it cannot simply be regarded as a perturbation in an aggregate economy that is close to equilibrium. This is one of the main theoretical challenges when we move from the economics of scarce resources to the knowledge–based economy. The knowledge–based economy is fundamentally an innovation–based economy, and this becomes particularly visible in the ICT producing industries.

It is not a trivial task to create economic theories that are compatible with the concept of innovation. It is, however, not impossible, either. For example, one might try to link the theory of value with theories of social practice, and make social change and technical change aspects of the same process of co–evolution. This approach would imply that the value of new products would be understandable only in the context of their use, which in turn could meaningfully be described at the level of social practices or activity systems [107]. Value would be created by incrementally enabling new social practices and by expanding the space of possible socio–economic activity. This approach would require that we replace the concept of abstract utility with a socially grounded concept of value. Such an approach could be linked with the capability–based model of economic development (Sen, 2000). When the scope of social practice expands or becomes more efficient, we could say that the new products and technologies that make this possible have created value.

This would also mean that innovation should fundamentally be conceptualized as a demand side phenomenon. Innovations become real only when they are taken into use. Often this means that innovation occurs when social practices change [108]. In innovative industries, of course, the demand and supply sides cannot be separated, as the practice of technology producers is closely linked with both technology and demand creation. The focus on innovative demand, however, would probably show that the current quality adjustment methods used in national accounting are misleading. In particular, the hedonic price indices are based on technical characteristics that describe new technologies from the supply side. They conceptualize quality improvements as improvements of technical parameters, such as processor clock speed, internal bus bandwidth, and the number of transistors on a chip. In practice, users typically are not interested in such parameters, nor do they understand what they are. Instead, users value technological products from the demand side; their valuation depends on the context of use. For example, users may be interested in knowing whether a computer has e–mail capability, whether it can be used to play music and videos, whether its screen resolution is good enough for displaying pages of text that resemble the ones that come out of the printer, and whether there are friends and colleagues who can give advice if the system does not work. Such use characteristics are only loosely connected with the underlying technical characteristics. On the other hand, when we try to understand the productivity impacts of new technologies, the demand side is clearly relevant. Hedonic models that focus on technical characteristics might indicate that computers are now a hundred times more valuable in constant prices than they were in the late 1980s.
The users, however, might say that for the applications that existed then, current computers are, for example, five times more valuable. For applications that did not exist, such as the World Wide Web, hedonic models allocate the created value to improvements in computer characteristics, instead of allocating it to the web applications, content, and networks. As a result, the estimated "real" growth rates of ICTs become extremely high, and growth accounting studies find that almost all growth in the modern economy results from ICT investments.
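The supply–side bias of hedonic adjustment can be illustrated with a small numerical sketch. The code below, with entirely hypothetical machines and prices, fits a simple one–characteristic hedonic function to one period's prices and then imputes quality–adjusted prices for the next period. It is only a toy version of the hedonic imputation idea, not the actual BLS or BEA procedure, and clock speed stands in for the full set of technical characteristics.

```python
import math

# Toy hedonic imputation sketch (hypothetical data). A log-log price
# function is fitted on period-1 machines, then used to predict what
# period-2 machines "should" cost given their characteristics. The
# ratio of observed to predicted prices is the quality-adjusted index.
period1 = [(100, 1000.0), (150, 1300.0), (200, 1600.0)]  # (clock MHz, price)
period2 = [(300, 1500.0), (400, 1700.0)]

# One-regressor OLS of ln(price) on ln(speed), closed form.
xs = [math.log(s) for s, _ in period1]
ys = [math.log(p) for _, p in period1]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
den = sum((x - mx) ** 2 for x in xs)
b = num / den
a = my - b * mx

# Geometric mean of observed/predicted price ratios in period 2.
ratios = [p / math.exp(a + b * math.log(s)) for s, p in period2]
index = math.prod(ratios) ** (1 / len(ratios))
print(round(index, 2))  # below 1.0: quality-adjusted prices fell
```

Because period–2 speeds are much higher, the imputed prices exceed the observed ones, and the index registers a steep "real" price decline even though nominal prices barely moved; whether that decline reflects value actually experienced by users is exactly the question raised above.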

The current economic research on growth and productivity extends over a vast range of literature, and it is probably beyond the reach of any single person. As always, the more we know, the more we understand the limits of our knowledge. Paradoxically, the results of the current study can be summarized by saying that we know very well that information and communication technologies have central importance for economic development and productivity, but we know very little about it. ICT may be the answer, but we perhaps have to find out what the question actually was. Perhaps current productivity theories force us to conceptualize productivity in a way that is simply inadequate for the types of questions that we originally tried to answer.

One important result of the present study, however, is quite clear. Neoclassical growth accounting studies have been widely used to argue that ICTs became the main drivers of productivity growth in many developed countries in the 1990s. The previous sections can probably be summarized by stating that this claim does not have solid scientific support. Growth accounting studies have too many conceptual and empirical problems, and they rely on theory that does not seem to be appropriate for studying ICT impacts. If we argue that growth accounting studies reveal something essential about the impacts of ICTs, the justification has to be based on some intuitive knowledge about the expected research results, which enables us to decide that the errors made in growth accounting studies do not really matter in practice.

It may be that growth accounting studies actually produce correct numbers. It may also be that the measurement errors do not matter, or that, for example, the prices of semiconductors exactly reflect total factor productivity growth in the semiconductor industry, instead of supply and demand, as economists usually would claim. Given the theoretical and empirical limitations of current studies, this would, however, be quite surprising. In fact, it would apparently require that the different errors made in estimating and modeling productivity and ICT impacts would in some rather miraculous way cancel each other.

Perez (2002) has argued that the appearance of new key technologies, such as semiconductors and computers, can lead to broad changes in techno–economic paradigms. The existing societal and institutional structures often constrain the possibilities for creating economic benefits from new key technologies. Only after the social and institutional structures become aligned with the requirements of productive uses of the new technology does the impact of technology become visible in the overall economy. Such changes in techno–economic paradigms have historically been associated with profound changes in the systems of education, law, financing, organizing, and management, as well as in the geographical location of centers of material and knowledge production.

The emergence of a new techno–economic paradigm can often be detected by increases in innovative activity. When new technologies become real in a technical sense, their possibilities are actively explored and experimented with. Abundant financing flows become directed to such promising new areas of economic growth.

In the Perez model, the emergence of a new techno–economic paradigm leads to a gold rush where unrealistic expectations and irrational exuberance dominate. Innovation opens up new domains for economic activity and creates transient monopolies that can lead to extraordinary returns on investment. Investors are interested in extraordinary profits, and they willingly fund entrepreneurs who try to develop new economic domains. Although the potential may be there, perceptions of it are often wrong, and the institutional structures may make its realization impossible. The real potential of new technological opportunities is revealed only gradually, through trial and error. In this model, new techno–economic paradigms arrive with a bubble and crash.

The bubbles and crashes in information and communication technologies can be interpreted as a sign that these technologies are starting to become important. Although ICTs are perhaps exceptionally protean technologies that can be reused and reinvented in continuously new forms, economically speaking they have started to become visible only in the last two decades. It is therefore quite probable that the social and economic transitions that move us towards the knowledge society are mainly ahead of us. End of article

 

About the author

Ilkka Tuomi is a researcher at the European Commission’s Joint Research Centre, Institute for Prospective Technological Studies, Seville, Spain.
E–mail: Ilkka.Tuomi@cec.eu.int.

 

Acknowledgements

The views expressed in this report are intended to promote discussion and research. They do not represent the views of the Joint Research Centre, the Institute for Prospective Technological Studies, or the European Commission. An earlier version of this paper was published as a background report for the European Commission’s Socio–Economic Expert Group on Information Society Technologies. The author would like to thank Jean–Claude Burgelman, René van Bavel, Andries Brandsma and Paul Duguid for useful comments.

 

Notes

1. Corrado, et al. (2002).

2. For example, Daly and Cobb (1989) developed an Index of Sustainable Economic Welfare (ISEW) that corrected GDP for environmental damage, crime, and defensive costs that are needed to overcome negative effects of consumption. They argued that the real growth in the U.S. economy had been relatively flat since the late 1960s, and decreasing in the 1980s. Similar results have been obtained for the U.K., Sweden, Germany and other countries (cf. Jackson and Stymne, 1996). Other attempts to develop measures for socio–economic development that go beyond GDP include Nordhaus and Tobin’s (1971) Measure of Economic Welfare (MEW), the Genuine Progress Indicator (GPI) that was derived from Daly and Cobb’s work, and UNDP’s Human Development Index (HDI).

3. The role of inter–organizational social networks in knowledge–based organizations has been discussed extensively in the literature on communities of practice (e.g., Brown and Duguid, 2000; Tuomi, 1999). Duguid (2003) has recently pointed out important challenges for economic conceptualizations of tacit and socially embedded knowledge. The importance of inter–institutional factors for regional economies has been emphasized, for example, by Kenney (2000) and Saxenian (1994). Saxenian (1999) has also highlighted the importance of international social networks in ICT innovation and production. For a discussion on social capital and its relation to ICTs and economic theory, see Tuomi (2004).

4. See, for example, O’Mahony and van Ark (2003).

5. This rapid advance in semiconductors is commonly described as "Moore’s Law." It is named after Gordon Moore, who in 1965 noted that the number of components on semiconductor chips had been doubling annually. Moore later became one of the founders of Intel. Moore’s Law is commonly quoted to say that the number of transistors on chips doubles every 18 months, or that the cost of computing halves about every 18 months. These versions of Moore’s Law, as well as most other existing versions, are both historically and empirically incorrect (Tuomi, 2002b).
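The quantitative gap between the different "versions" of the law is large, as a simple compounding calculation shows (idealized arithmetic only; as noted, the historical doubling times were neither constant nor exactly these values):

```python
# Compounded growth over one decade implied by two common
# formulations of Moore's Law (idealized exponential doubling).
annual_doubling = 2 ** 10                      # doubling every 12 months
eighteen_month_doubling = 2 ** (10 * 12 / 18)  # doubling every 18 months

print(annual_doubling)                 # 1024-fold increase
print(round(eighteen_month_doubling))  # roughly 100-fold increase
```

Over ten years the two formulations differ by an order of magnitude, which is one reason why the choice of doubling period matters so much for quality–adjusted price estimates.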

6. Jorgenson, 2000; Jorgenson, et al., 2003, p. 465.

7. The reasons include the Asian currency crisis, rapid diffusion of multimedia applications and the Web, increased competition in the microprocessor industry, and other factors (cf. Tuomi, 2003).

8. For example, in the last decade, advances in semiconductor technology have to a large extent been based on industry coordination. The industry actors have created detailed roadmaps that have allowed equipment producers, manufacturers, designers, and customers to synchronize their activities and reduce uncertainties. Similar coordination mechanisms underlie successful open source software initiatives (Tuomi, 2002a). The mobile communications industry tries to use a somewhat similar approach in its Open Mobile Alliance initiative, but it is not yet clear whether collaboration in the application domain operates in the same way as in the semiconductor industry. Some open source software projects have been very effective in their coordination also in application development, but this has been because of specific interactions between core technologies and the informal socialization of developers into shared value systems. Both the semiconductor industry and open source projects create their roadmaps around relatively well–defined core technologies. In the wireless arena, several potentially competing future architectures make collaboration and coordination more difficult.

9. An influential contribution was Brynjolfsson and Hitt, 1996.

10. E.g., Oliner and Sichel, 2000; Jorgenson and Stiroh, 2000; Nordhaus, 2001; Colecchia and Schreyer, 2002; Pilat and Wyckoff, 2003; Jorgenson, et al., 2003; Van Ark, et al., 2002.

11. Jorgenson, 2000, pp. 1–2.

12. Gordon, 2002, pp. 4–5.

13. Jorgenson and Stiroh, 2000, pp. 33–34.

14. E.g., OECD, 2003, Chapter 2.

15. Gordon, 2000, p. 57.

16. Op. cit., p. 65.

17. U.S. Congressional Budget Office, 2002, p. 3.

18. Mahadevan, 2002, p. 64.

19. Total factor productivity and the Solow paradox are discussed in more detail in the following sections.

20. Baily, 1996. The picture is reconstructed from Griliches, 2000, Chapter 5.

21. The alternative productivity measures are discussed in Schreyer, 2001.

22. Measured labor productivity, however, also depends on changes in workforce participation, employment, labor quality, average working hours, and income and profit distribution. In economic downturns, labor productivity, for example, can increase if unemployment increases rapidly enough. This has probably occurred in the U.S. during the last few years.

23. In fact, in the neoclassical growth theory labor productivity is a rather uninformative concept. Its growth should measure change in labor compensation, and nothing more. Although the capital deepening component of measured labor productivity is often understood to reflect worker productivity, it is conceptually independent of worker productivity in this framework.

24. When twice as many widgets are produced per labor hour, the reason may be better machinery, better ideas how to design them, better work organization and division of labor, improvements and cost reductions in the intermediate products and services, reduction of quality problems and waste, errors in counting the hours, more motivated workers, reductions in the work force, better understanding of what actually needs to be done, increasing demand for widgets, outsourcing, and other similar factors. The macroeconomic labor productivity simply measures the economic value of output over the economic value of labor input.

25. Some of the productivity improvements may flow to customers through lower prices. These price changes, however, are used to adjust the sales to get estimates of "real output" and to compensate the apparent price drops in national output statistics and productivity calculations. Sometimes, however, the reason for productivity growth can simply be the fact that firms outsource their production to their customers. A more accurate treatment might consider, for example, IKEA and Home Depot as producers of intermediate goods, and the consumer household as the final producer.

26. The shifts in use of intermediate inputs relative to capital and labor over time may, however, mean that gross output can be a better starting point for productivity comparisons across industries (cf. Bartelsman and Doms, 2000, p. 8). For a discussion of the different uses of alternative productivity measures, see Schreyer (2001).

27. Earlier studies have argued for an even higher fraction of unexplained labor productivity growth. In 1954, Fabricant had calculated that about 92 percent of labor productivity growth remained unexplained in the 1870–1950 period in the U.S. (Griliches, 2000, Table 1).

28. A conceptual footnote may be useful here for some readers. Total factor productivity is in this formulation independent of the amount of capital and labor inputs, but can change with time. Economists also call it a "Hicks–neutral" efficiency parameter. Hicks–neutrality means that total factor productivity changes are independent of the relative composition of labor and capital. When changes in total factor productivity are interpreted as technical change, Hicks–neutrality implies that technical innovation improves the marginal productivities of labor and capital equally. One should note, however, that this is usually not a valid assumption, if history matters. When not only the aggregate amounts of capital but also the way they have accumulated matter, the economic system becomes "path dependent." Neoclassical economics is here similar to classical physics, where many important phenomena can be described in a history–independent fashion. The dynamics of planets, for example, depend on their relative positions and velocities, but not on the way they got to where they are (mathematically, the dynamics can be described by a potential function). As soon as history and structure matter, such descriptions become impossible. Neoclassical economics could therefore be characterized as a search for a "Newtonian model" of society, where transactions have no memory, where actors are not relational actors and where the world has no topology. This is one source of difficulties in describing network economies using the neoclassical concepts.
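The Hicks–neutral formulation described above can be written out explicitly; the following sketch uses the standard Cobb–Douglas special case with constant returns to scale:

```latex
% Hicks-neutral production function, Cobb-Douglas special case
Y(t) = A(t)\, K(t)^{\alpha} L(t)^{1-\alpha}
% Taking logarithms and differentiating with respect to time
% gives the growth accounting decomposition:
\frac{\dot{Y}}{Y} = \frac{\dot{A}}{A}
  + \alpha \frac{\dot{K}}{K}
  + (1-\alpha) \frac{\dot{L}}{L}
% Total factor productivity growth is then measured as the residual:
\frac{\dot{A}}{A} = \frac{\dot{Y}}{Y}
  - \alpha \frac{\dot{K}}{K}
  - (1-\alpha) \frac{\dot{L}}{L}
```

Because A(t) multiplies the whole production function, a change in A raises the marginal products of K and L in the same proportion, which is exactly the Hicks–neutrality assumption discussed in this note.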

29. Strictly speaking, the assumption of constant returns to production scale is necessary only when independent estimates for return to capital are not available. Typically this is the case (cf. Hulten, 2000, p. 12). Productivity researchers also typically study growth rates, instead of actual levels of productivity. Therefore researchers often talk about TFP with the assumption that their readers understand that they are in fact talking about changes in the TFP growth rate.

30. The idea that the efficiency changes in the economic system are associated with technical change has been commonly shared by economists since the 1930s. It seems that Jan Tinbergen was the first to study the differences in economic efficiency across countries, in 1942, based on the assumption that the differences are created by technical developments (cf. Griliches, 2000, Table 1.1).

31. Abramovitz had noted this already in 1956. Historically, the concept of residual emerged in the early 1960s (cf. Griliches, 2000: Chapter 1; Hulten, 2000).

32. In practice, national accounting tries to remove exceptional events in the way productivity statistics are handled. An important example — which seems often to have been missed in ICT productivity research — is that the U.S. national accounts have excluded so-called "Y2K" expenditures from software investments and therefore also from the accounted ICT capital stocks (cf. Parker and Grimm, 2000, p. 4). The impact of wars is partially filtered out from the growth statistics in the U.S. Their impact on international growth and productivity comparisons is difficult to estimate also because government investments are often counted as consumption. The large learning effects in ICT production mean that government consumption has an impact on quality–adjusted prices, which become recorded as TFP growth. The U.S. "war on terror," for example, is very ICT–intensive, so that the impact on ICT productivity comparisons may be relatively large in the first half of the current decade.

33. Conceptually, free lunches, of course, are not exactly compatible with the neoclassical assumptions. This contradiction in the neoclassical growth theory is an indication of the fact that "technology" and innovation are something essentially unexplainable in this framework.

34. David (1999; 1991), for example, has emphasized that the economic impact of electric motors became clearly visible in total factor productivity some forty years after Edison introduced the first central generators for electric lighting in 1881.

35. See, e.g., Freeman, et al., 1982; Perez, 2002; 1985; Rosenberg and Steinmueller, 1982; David, 1991; Helpman and Trajtenberg, 1998.

36. Jorgenson and Griliches, in fact, were able to explain away the Solow residual in 1967. Inspired by Denison’s critique, they later revised their estimates and found that about half of the residual could be explained by quality changes in capital and labor. Griliches (2000, p. 23) later noted that "there was a certain youthful recklessness in the Jorgenson–Griliches paper that announced, with a ‘Look Ma! No hands!’ attitude, an almost complete ‘explanation’ of the residual based on correcting various measurement errors in the standard ways of doing things."

37. One reason for the importance of the Solow residual for economists has been the fact that it can be used to argue that technical progress can lead to continuous economic growth. "Technology," in this sense, is the external factor that lets the economy escape the fate of diminishing profits and limited growth.

38. Griliches, 2000, p. 76.

39. There is also important research that has tried to understand productivity drivers by studying longitudinal data on firms and business establishments. This research has shown that there are very large variations in productivity across firms, which may depend on management, ownership, human capital, ICT use, and regulation (cf. Bartelsman and Doms, 2000). The findings of this microlevel research are to some extent incompatible with the assumptions of macrolevel research. If the conventional assumptions of perfect competition, efficient allocation of resources, and pricing at marginal cost were valid, productivity differences should not exist between firms.

40. Schreyer, 2001, p. 116.

41. In white–collar environments, value–adding work has sometimes been estimated to be about 20 percent of total work time; see e.g., Quevedo, 1991. The idea that radical redesign could lead to order–of–magnitude improvements in production led in the early 1990s to a widespread interest in business process re–engineering (e.g., Hammer and Champy, 1993). As many re–engineering projects failed, organizational developers, however, learned that many organizational activities that appear to be waste, slack and chaos actually are highly productive and important for knowledge–based organizations (Nonaka, 1988; Nonaka and Takeuchi, 1995). For example, I have argued (Tuomi, 1999) that strategic allocation of slack may sometimes be the most productive investment in innovative organizations.

42. Strictly speaking, productive efficiency can also be lost because the used resources are badly allocated. This element of efficiency is usually called allocative efficiency.

43. In practice, this is done using linear programming methods and theoretical frameworks such as Data Envelopment Analysis. The frontier analysis approach has its roots in operations research, but it is also increasingly applied in economics and policy analysis (e.g., Färe, et al., 1994; Mahadevan, 2002; ten Raa and Mohnen, 2002; Steering Committee for the Review of Commonwealth/State Service Provision, 1997).
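The intuition behind frontier analysis can be sketched in a few lines. The example below (hypothetical production units) uses only a single input and a single output, so efficiency reduces to comparing output/input ratios against the best observed ratio; the full Data Envelopment Analysis models referenced above generalize this to many inputs and outputs via linear programming.

```python
# Simplified frontier-efficiency sketch (hypothetical data).
# Each unit's efficiency is its output/input ratio relative to
# the best ratio observed in the sample; full DEA generalizes
# this comparison using linear programming.
units = {"A": (10, 20), "B": (5, 15), "C": (8, 8)}  # name: (input, output)

ratios = {name: out / inp for name, (inp, out) in units.items()}
frontier = max(ratios.values())
efficiency = {name: r / frontier for name, r in ratios.items()}

for name, e in sorted(efficiency.items()):
    print(name, round(e, 3))  # 1.0 marks the best-practice frontier
```

Unit B defines the frontier here; A and C are measured as producing only a fraction of what best practice would allow from their inputs, which is the notion of technical efficiency this note refers to.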

44. Indeed, Morrison and Berndt (1991) found in their influential study that, on average, every dollar spent on IT in the U.S. manufacturing industry delivered only about US$0.80 of value on the margin in the mid–1980s. It is not clear whether average returns and economic decision–making rationality have been better or worse in the recent years.

45. According to Van Ark, et al. (2003, Table 1), ICT manufacturing and services were 5.9 percent on average in the EU in year 2000, and 10.1 percent in the 1995–2000 period in the U.S. The U.S. Department of Commerce calculates the ICT manufacturing and services to be about seven percent in the 1996–1999 period.

46. Cf. Nordic Information Society Statistics 2002 (Nordic Council of Ministers, 2002). About 8.8 percent of the total number of persons employed in the private sector were in ICT manufacturing and services in the Nordic countries in the year 2000. This number exceeded 10 percent in Finland in 2002.

47. Ahmad, et al., 2004, p. 78.

48. Many different technical characteristics can be used to describe technical progress in the semiconductor and computer industries. Different types of semiconductors have very different rates of improvement, and these rates have not historically been constant. Computer memory and in particular hard disk technologies have shown the most rapid developments. The areal density of hard disk drives increased at about a 30 percent annual rate between 1957–91, at about a 60 percent rate between 1992–97 (McKendrick, et al., 2000, p. 17), and at about 100 percent in recent years (Morris and Truskowski, 2003). The overall capacity, however, has been growing more slowly, as the seek time has been improving only about 12 percent and the transfer rate about 40 percent per year. The cost of stored bits has roughly halved on hard disk and flash memory every year in recent years, and electronic storage is now cheaper than paper (Grochowski and Halem, 2003). PC microprocessors increased their component counts and clock speeds rapidly in the last years of the 1990s, but the qualitative impact of these parameters is unclear (Tuomi, 2002b). The rate of technical improvements in microprocessors is expected to slow down in the next years (ITRS, 2001).

49. Digital Economy 2002, U.S. Department of Commerce, at http://www.esa.doc/pdf/DE2002r1.pdf.

50. Oliner and Sichel, 2000, p. 17.

51. Strictly speaking, economists define elasticities as percentage changes to make them independent of the units used. For example, the output elasticity of labor is defined as the percent change of output per percent change of labor input. This means that elasticities equal the income shares, i.e., the relative proportions of income earned, instead of actual income in absolute terms. More generally, the rate of growth in productivity calculations also refers to percentage change, not to absolute growth of input and output variables. This scaling makes growth independent of the actual size of the economy and the dimensions that are used to measure it. It also means that the variables in growth equations are expressed as logarithms, and that instead of describing changes of growth in some absolute dimensions, the equations describe changes of logarithms of output and inputs. Mathematically, this is because a difference scaled by the current value of the variable equals the difference of the logarithm of the value, i.e., dx/x = d ln(x). In these logarithmic variables, the different input terms of growth equations separate into terms that can simply be added together, so that their "contributions" to growth become independent and can easily be studied.
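The additivity of logarithmic growth contributions can be checked numerically. The sketch below uses made–up numbers and an assumed Cobb–Douglas aggregate with income shares 0.3 and 0.7:

```python
import math

# Numerical illustration (made-up numbers): with logarithmic growth
# rates, input contributions separate additively. Assume a
# Cobb-Douglas economy Y = A * K**0.3 * L**0.7.
alpha = 0.3
A1, K1, L1 = 1.00, 100.0, 50.0   # period 1: TFP, capital, labor
A2, K2, L2 = 1.02, 110.0, 51.0   # period 2

Y1 = A1 * K1 ** alpha * L1 ** (1 - alpha)
Y2 = A2 * K2 ** alpha * L2 ** (1 - alpha)

g_Y = math.log(Y2) - math.log(Y1)                 # output growth, d ln(Y)
contrib_K = alpha * (math.log(K2) - math.log(K1))
contrib_L = (1 - alpha) * (math.log(L2) - math.log(L1))
tfp = math.log(A2) - math.log(A1)                 # Solow residual

# The contributions sum exactly to output growth.
assert abs(g_Y - (contrib_K + contrib_L + tfp)) < 1e-12
```

In absolute units the inputs would not decompose this cleanly; it is precisely the logarithmic scaling that makes the "growth contribution" of each input a separate, addable term.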

52. In the neoclassical productivity theory, trickle–down usually represents a failure in the correct distribution of income. If the economic system is in equilibrium, workers are paid what they deserve, by definition. Changes in relative wages are conceptually possible in this framework only if their source is in marginal productivity changes. In the neoclassical world, strikes don’t pay, women get lower salaries because they produce less, cultural and institutional factors do not matter, and you can tell your boss what you think. Your salary only reflects your productivity.

53. Although these price indices are normally called quality–adjusted price indices, a more accurate description would be "quality–improvement" price indices. Existing quality–adjusted indices do not capture true innovation, which changes the meaning of the measured technical parameters. In other words, hedonic models may work when innovation is incremental, but they break down when innovation is radical. I will discuss hedonic indices in more detail in the following section.

54. In calculating investments in own account software, i.e., software that is developed in–house, BEA uses the reported numbers of programmers and computer systems analysts, and assumes that half of their labor costs are related to creating new software–related capital stocks. Investments are measured as the sum of production costs, which include employee compensation and the costs of intermediate inputs. The estimates are based on the numbers of programmers and computer systems analysts in different industries, engaged in the production of non–embedded software or software produced for sale. If the employment in these job categories exceeds 0.2 percent of total employment in the industry, the rest is assumed to be developing software for sale or software that is bundled with products. The adjusted estimates are then multiplied by 0.5, with the assumption that about half of the work time goes to minor upgrades, maintenance, and other tasks that are not counted as investments (cf. Grimm, et al., 2002, p. 7).
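The accounting logic of the own–account estimate can be sketched with hypothetical aggregates (the actual BEA procedure works from detailed industry–level employment data and applies the 0.2 percent threshold described above, which this toy version omits):

```python
# Simplified sketch of the own-account software investment estimate
# described above (hypothetical dollar amounts, not BEA data).
# Production cost = employee compensation + intermediate inputs;
# half of it is counted as new investment, the rest as minor
# upgrades, maintenance, and other non-investment work.
programmer_compensation = 80e9   # assumed, US$
intermediate_input_costs = 20e9  # assumed, US$
production_cost = programmer_compensation + intermediate_input_costs

INVESTMENT_SHARE = 0.5           # BEA's "half of work time" assumption
own_account_investment = production_cost * INVESTMENT_SHARE
print(own_account_investment)
```

The point of the sketch is how sensitive the result is to the fixed 0.5 share: the measured software capital stock scales directly with an assumption about work time that is not itself observed.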

55. Custom software price indexes are calculated using a 75 percent weight for changes in business in–house software development index ("own–account software") and 25 percent weight for prepackaged software price index (cf. Parker and Grimm, 2000, p. 17).

56. Software was recognized as fixed investment in the October 1999 revision of the U.S. national accounts. The price indexes for prepackaged software are based on modeling quality improvements in spreadsheet and word processing software in the U.S., but the practices for accounting for software vary widely across countries. The growth numbers would look different if, for example, computer games (with some US$17–20 billion in annual sales, or about 10 percent of all software sales) were included in hedonic estimates. Games are typically counted as consumption, but estimates of their real price also affect output estimates. For ICT productivity research, computer games are interesting because they represent products that are particularly "hedonic," with product cycle times measured in months, and which often rely on "quality" as the main competitive factor.

57. Grimm, et al., 2002, p. 19.

58. According to Triplett (1996, p. 124), this approach was first proposed by Copeland and Martin in 1938.

59. Cf. McKendrick, et al., 2000, and footnote 36 above.

60. Cf. Aizcorbe, 2002a.

61. Cf. Berndt, et al., 2000.

62. Age is an important factor in price index studies, although it is conceptually unclear how it should be included in hedonic models. In principle, hedonic characteristics should only include characteristics that are valuable for the user from a productivity point of view and costly for producers to produce (Triplett, 1996). Academic studies typically use pooled data that include age as one parameter. The producer price indexes developed by the U.S. Bureau of Labor Statistics use cross–sectional data without age corrections. The BLS producer price indexes are used to calculate the investment stocks that underlie most of the macroeconomic productivity studies in the U.S. Most international studies also rely on the BLS ICT price deflators. Aizcorbe (2002a) notes that the Bureau of Economic Analysis (BEA) aggregate index for semiconductors falls 15–30 percent faster in 1997 and 1998 than the producer price index developed by the BLS. The price index developed by Aizcorbe, in turn, falls 15–20 percentage points faster than the BEA index.

63. Chwelos (2003) has developed hedonic laptop computer indices that take interactions between hardware components into account, using results from computer benchmark studies. He finds the rather surprising result that there has apparently been no significant performance interaction between laptop hardware and operating systems in the 1990s. Partly this could be because computer performance benchmarks are notoriously unreliable; computer scientists often regard them essentially as marketing tools. The classical MIPS rating, for example, is often translated as "Meaningless Instructions Per Second." Partly the reason could also be that application suites are compiled with compilers that are already optimized for the dominant system platforms and can take their technical characteristics into account.

64. I have earlier (Tuomi, 2002a) argued that products should actually be conceptualized as parallel socially founded uses and "carriers" of social practice. In this approach, products can be associated with multiple parallel social ways of using them, and they change when the underlying social practices change. Technical product characteristics may "sediment" some of these changes, but they do not always do so.

65. Since the early 1960s, semiconductor firms have aggressively discounted their expected learning into product prices. Bob Noyce, for example, sold the first integrated semiconductor chips at Fairchild well below their then–current manufacturing costs, believing that larger volumes would cut the costs rapidly. Noyce went on to found Intel, which made this phenomenon a core of its strategy. This "falling ahead" strategy has become central in much of the semiconductor industry (cf. Tuomi, 2002b).

66. Aizcorbe, 2002b, p. 11.

67. These costs include labor and material costs plus depreciation of the equipment and part of the building. They do not include R&D costs, which from the manufacturer’s pricing point of view are sunk costs (cf. Aizcorbe, 2002b, footnote 15).

68. For example, Oliner and Sichel and, following them, van Ark, et al. (Van Ark, Melka, et al., 2002) assume that competitive forces have not changed profit margins in the semiconductor industry and that price changes are directly related to total factor productivity changes. Oliner and Sichel base their assumption on the aggregate profit development of some U.S. semiconductor firms. In general, these assumptions are probably inaccurate in the ICT industry, where structural changes have been extensive during the last decades (cf. Langlois and Steinmueller, 1999; Langlois, 2002; Bresnahan and Malerba, 1999). There are only a few computer microprocessor manufacturers in the world (including Intel, AMD, Motorola, and IBM), and, in PC microprocessors, Intel’s share is about four–fifths of the world market. Intel is now the dominant semiconductor manufacturer, but its profits have fluctuated considerably during its history. In 1985, for example, Intel’s losses were more than its book value (Langlois and Steinmueller, 1999, pp. 47–48). As noted above, Intel’s margins in its microprocessor business were about 90 percent in the early 1990s (Aizcorbe, 2002a).

69. Of course, the MIPS characteristic has not been a theoretically very robust foundation for price indexes, as it is a rather fictive construct in modern processor architectures (cf. Tuomi, 2003; 2002b). Empirically, performance–based and technical characteristics may lead to similar price indexes (as, for example, Chwelos (1999) has shown) simply because people often value technology using technical criteria. When computer manufacturers sell their products based on clock speed ratings, clock speed becomes relevant. The actual performance implications of clock speed, however, often remain unclear, as understanding them requires extensive studies of computer architecture. Furthermore, the economic value of globally accumulated processor clock cycles is unclear, because it is quite probable that processors stay idle, on average, about 90 percent of the time.

70. In a simplified way, utilitarian economic theories start from the philosophical assumption that prices reflect value (or utility), and that no other discussion about values is needed. Price deflation implies that prices need to be "corrected" to get at real value. It is, therefore, not obvious that the current national accounts are conceptually compatible with the neoclassical theory, for example. For ICT–related products, a sociologically oriented analysis, along the lines of Simmel (1990), might provide some useful insights. This would lead to a discussion of value and capital theory, which is beyond the scope of the present paper.

71. A related problem is that rapid price changes make it impossible to construct meaningful price and investment indexes. As Diewert (2003, p. 6) notes, normal index number theory cannot be applied if the economy is experiencing a hyperinflation. The semiconductor industry shows the same problem in reverse: its price developments can be described as hyperdeflation (Tuomi, 2003). In the 1985–1994 period, quality–adjusted price indexes for Intel 80x86 microprocessors have been estimated to decline at about a 27 percent annual rate in the U.S. (Grimm, 1998). On average, prices per unit of memory declined 32 percent per year during the 1978–2000 period (U.S. Congressional Budget Office, 2002, p. 18). In the 1990s, microprocessor prices declined at about a 52 percent compound annual rate on average, and quality–adjusted prices dropped at over 60 percent compound rates during the last years of the decade (Aizcorbe, et al., 2002; Aizcorbe, 2002a). Controlling for the impact of exchange rates, the price decline rate for PCs in Germany has been estimated at about 30 percent per annum in the 1985–1994 period (Moch, 2001).
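The cumulative effect of such rates is easy to check by compounding them. The sketch below (the function name is mine, purely for illustration) uses the annual decline rates cited in this footnote and shows why "hyperdeflation" strains normal index number theory.

```python
# Compound the annual quality-adjusted price declines cited in the text
# to see how little of the initial price level remains after some years.

def remaining_price(annual_decline, years):
    """Fraction of the initial price left after a constant annual rate of
    decline, e.g. annual_decline=0.52 for a 52 percent yearly drop."""
    return (1.0 - annual_decline) ** years

# Intel 80x86 microprocessors: ~27 percent per year over 1985-1994
print(f"27% over 9 years:  {remaining_price(0.27, 9):.4f}")

# Microprocessors in the 1990s: ~52 percent compound annual decline
print(f"52% over 10 years: {remaining_price(0.52, 10):.6f}")

# Memory: ~32 percent per year over 1978-2000
print(f"32% over 22 years: {remaining_price(0.32, 22):.2e}")
```

A 52 percent compound decline leaves well under a thousandth of the initial quality–adjusted price after a decade; this is the mirror image of the hyperinflation case that Diewert discusses.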

72. One should note that there are different ways to make quality adjustments (Triplett, 1996). The hedonic approach leads to price indices that are different from traditional "matched model" indices, but the differences are not huge (Aizcorbe, et al., 2000). Both explicitly and implicitly quality adjusted price indices, however, have conceptual difficulties with truly hedonic products, such as oysters and Bordeaux wine, and for products where innovation and product substitution are important, such as fashion and cars. In fact, ICT price indices probably should explicitly separate "hedonic value" from productive value to arrive at accurate measures for productive assets.

73. In fact, the trend growth rates of the global semiconductor industry seem to have slowed down since the early 1960s if sales are adjusted for overall economic growth (cf. Tuomi, 2003, Figure 4). If we subtract U.S. GDP growth rates from worldwide semiconductor sales growth rates, semiconductor growth was about 20 percentage points above overall growth in the early 1960s, whereas it was about 1.6 percentage points in the 1995–2002 period. This calculation is, of course, a rough estimate, as global GDP has not grown at the speed of U.S. GDP, as the available semiconductor sales data may be inaccurate, and also because semiconductors now represent more than a negligible fraction of total growth.

74. This possibility has been discussed by Gordon (2003; 2000, pp. 60–66).

75. The U.S. estimates for investments and fixed assets use five different methods for quality adjustment. Explicit quality adjustment uses the ratio of the producer’s cost of the new product to that of the old product. If, for example, a new microprocessor chip costs the producer US$60 and the previous version also cost US$60, there is no quality adjustment. The overlap method uses the ratio of prices when both the old and the new products are available on the market. If the old chip sells for US$500 when the new one sells for US$800, the quality improvement is worth US$300. The "matched model" method tries to find products that are similar to the old ones, and registers their price drop as quality improvement. If a chip that cost US$800 last year costs US$600 this year, the quality adjustment would be US$200. Fourth, hedonic quality adjustment tries to characterize products as bundles of technical characteristics and to measure statistically the theoretical value of each characteristic. For example, microprocessor chips can be described as bundles of transistor counts, clock speed, etc., and the observed prices of existing chips can be used to derive theoretical values of increases in the modeled characteristics. One could, for example, come up with the result that one gigahertz of processor speed adds US$100 to the price. This result can then be used to adjust observed prices so that processors with different clock speeds become comparable. Finally, direct price comparison is used when quality change is assumed to be unimportant. European countries have mainly used matched model indexes in their national accounts for ICT products, whereas the U.S. has increasingly used hedonic methods. In the U.S., hedonic price indexes are now used for about 18 percent of GDP, with the biggest impact on ICTs.
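The arithmetic behind three of these methods can be sketched with the hypothetical chip prices used above (the function names are mine, for illustration only):

```python
# Sketches of three quality-adjustment methods, using the hypothetical
# chip prices from the text.

def overlap_adjustment(old_price, new_price):
    """Overlap method: while old and new products sell side by side,
    the price gap is attributed to quality improvement."""
    return new_price - old_price

def matched_model_adjustment(last_year_price, this_year_price):
    """Matched-model method: the price drop of a comparable product is
    registered as quality-adjusted price decline."""
    return last_year_price - this_year_price

def hedonic_comparable_price(price, ghz, baseline_ghz, usd_per_ghz=100):
    """Hedonic method: strip out the estimated value of extra clock
    speed (here US$100 per gigahertz) to make processors comparable."""
    return price - usd_per_ghz * (ghz - baseline_ghz)

print(overlap_adjustment(500, 800))            # US$300 quality improvement
print(matched_model_adjustment(800, 600))      # US$200 quality adjustment
print(hedonic_comparable_price(900, 3.0, 2.0)) # US$800 at the 2 GHz baseline
```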

76. Productivity calculations use capital stock values that are different from the ones used in national accounts. This is because in productivity calculations one needs estimates of the service flows that different assets generate, instead of their market values. In the U.S., the Bureau of Labor Statistics (BLS) calculates these productive stocks. The price indexes enter the calculations through corrections of the costs and depreciations of past investments. When, for example, quality–adjusted computer prices drop quickly, a US$1000 PC this year may embed twice as much "computing power" as a US$1000 PC three years ago. Past investment costs are therefore adjusted so that their value reflects the actual computing capabilities of the purchased computers. The resulting capital stock estimates are called capital service stocks or productive stocks.

77. The average age of assets is calculated by averaging the remaining value of past investments. It can be calculated either using historical costs or current costs that reflect price changes. Using current–cost valuation of past assets, the average age of U.S. computer and peripheral equipment assets was 1.6 years.
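The averaging works roughly as follows: each past investment’s remaining value serves as the weight for its age. A minimal sketch with made-up vintages (the numbers are purely illustrative, not BLS data):

```python
# Value-weighted average age of an asset stock, sketched with
# hypothetical vintages. Rapid depreciation concentrates the remaining
# value in recent vintages, which keeps the average age low.

def average_age(vintages):
    """vintages: list of (age_in_years, remaining_value) pairs."""
    total_value = sum(value for _, value in vintages)
    return sum(age * value for age, value in vintages) / total_value

stock = [(0, 100.0), (1, 60.0), (2, 30.0), (3, 10.0)]
print(average_age(stock))  # 0.75 years
```

With current–cost valuation, rapidly falling quality–adjusted prices shrink the weights of older vintages even further, which is why computer assets show such low average ages.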

78. McKendrick, et al., 2000, p. 30.

79. This estimate is extremely rough; it simply makes the point that, economically speaking, computers should not have been very visible in the national accounts. In the U.S., the nominal share of computers and peripheral equipment was about 1–1.4 percent of GDP in the 1980–2000 period (cf. Gordon, 2000, p. 51; U.S. Congressional Budget Office, 2002, Table 2–2). Oliner and Sichel estimated that the income share of computers was 1.0 percent of total income in the 1974–90 period in the U.S. (Oliner and Sichel, 2000, Table 1). In the EU, the share of ICT investment has been lower (Van Ark, Melka, et al., 2002), although differences in national accounting systems make exact comparisons difficult.

80. Katz and Herman, 1997, Table 3.

81. The impact also greatly depends on the relative levels of ICT imports and domestic production (Colecchia and Schreyer, 2002).

82. In the U.S., the computer and software share of total private investment grew about 3.4 percent annually in current–dollar terms during the 1959–2002 period, speeding up to about 5.4 percent after 1980 (Tuomi, 2003). In nominal terms, software investments started to exceed computer hardware investments in 1990. According to BEA data, in 2002 software investments in the U.S. were US$182 billion and investments in computers and peripheral equipment US$74 billion.

83. Important open source programs include sendmail, bind, Apache, Perl, FreeBSD, Linux, MySQL, and the GNU gcc compiler (cf. Tuomi, 2002a). They are now commonly distributed commercially, and also as components of commercial products, but the prices mainly cover distribution costs and value–added services. Many other important software systems are nowadays also distributed without a transaction price, including Java, the Adobe Acrobat reader, and Web browsers.

84. Brynjolfsson and Hitt, 2000, pp. 41–42.

85. A more positive view of the effects of rapid obsolescence is, of course, that we continuously get better ICTs. Indeed, one may also interpret the Dilbert productivity problem in a positive sense. If technology advances roughly exponentially, the fact that Dilbert has to wait for his Web page to open does not necessarily have long–term productivity impacts. Assuming the validity of a common version of "Moore’s Law," Gottbrath, et al. (1999) noted that if a computational problem can be solved in less than 26 months, we get it solved fastest by starting the computation now. If, on the other hand, the computation is more complex, it is more productive to wait for better computers, and to do nothing. If the problem is complex enough to require 41.2 months of computation today, we could slack for one year, buy a new computer, and still have the computation finished over three months earlier. The more complex our problems become, the more productive it is to go to the beach and read Dilbert.
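Gottbrath et al.’s numbers can be verified directly. If computing speed doubles every 18 months (a common reading of "Moore’s Law"), a computation needing T months today finishes at w + T·2^(−w/18) when started after waiting w months, and waiting never pays for problems shorter than 18/ln 2 ≈ 26 months. A sketch:

```python
import math

def completion_time(months_today, wait, doubling=18.0):
    """Months until a computation finishes if we wait `wait` months for
    faster hardware, assuming speed doubles every `doubling` months."""
    return wait + months_today / 2 ** (wait / doubling)

# Below this problem size, starting the computation now is always fastest.
print(f"threshold: {18.0 / math.log(2):.1f} months")  # about 26 months

# The 41.2-month problem: slack for a year, then compute.
print(f"start now:   {completion_time(41.2, wait=0):.1f} months")
print(f"wait a year: {completion_time(41.2, wait=12):.1f} months")
# Waiting a year finishes the job over three months earlier.
```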

86. There were about 2,200 products in the average grocery store in the U.S. in 1948, 9,000 in 1972, and 19,000 in 1994, according to these estimates. Today we may be overwhelmed with consumer choice, but growth was in fact considerably faster in the earlier period: the number of products roughly quadrupled in 1948–72 but only about doubled in 1972–94. I have discussed accelerating change in ICTs and conceptual problems in measuring it in Tuomi (2003) and in my "Response to Kurzweil" at http://www.jrc.es/~tuomiil/moreinfo.html. In fact, a constant rate of innovation leads to an increasing number of innovations when the size of the economy grows, as it has been doing for the last centuries. Most of this growth can be explained by population growth. It should also be clear that common sense cannot tell us much about the rate of technical change, simply because no single observer can have a very accurate picture of it. In practice, of course, we only count and note change that is relevant for us. The "increasing speed of change" therefore mainly reflects our increasing socio–economic interdependencies, which also become increasingly transparent in the information society. The fact that society is becoming more modern, however, does not mean that technological change is accelerating.

87. In fact, business surveys typically find that over fifty percent of IT projects fail to achieve their original objectives. Perhaps a reasonable rule of thumb is that about a third of the projects are cancelled before they produce results, about a third are transformed into new projects with new objectives and budgets, and perhaps a third actually deliver results that are related to the original goals. Although the failure of IT projects would be a rather obvious explanation of the productivity paradox for information systems professionals, for economists the starting point is that investments are made by rational agents. In economic theory, the problem is about making a choice; in business practice, it is about implementing decisions and learning in the process that the problem actually should have been stated differently.

88. U.S. industry profit data actually show that the communications and electronic equipment industries increased their profits faster than average from about 1990 to about 1994, after which they maintained good profit levels until about 1997. In the communications industry, profits went from US$35 billion in 1996 to a loss of US$11 billion in 2002.

89. For example, Gordon (2003) has argued that the post–1995 productivity growth revival was a temporary event.

90. Capacity utilization rates are also very cyclical in the computer and semiconductor industries. This has not usually been taken into account in ICT productivity studies; doing so would change the productivity estimates for the second half of the 1990s.

91. Conceptually, of course, the difference between ICT–producing and ICT–using sectors is not very clear. For example, the car industry is not usually counted as an ICT–producing sector, although luxury cars can have 80 microcontrollers and signal processors, and midsize cars often have 50. According to U.S. Census Bureau data, semiconductors account for 14 percent of the value of computer shipments. This number is, however, probably too low, and 30 percent might be closer to the truth (cf. U.S. Congressional Budget Office, 2002, p. 22). Similarly, semiconductor design requires extensive use of ICT, and could therefore also be categorized as an ICT–using sector. Van Ark, et al. (Van Ark, Inklaar, and McGuckin, 2003; 2002), for example, categorize car manufacturing as a "non–ICT manufacturing" industry. ICT intensity, of course, also depends on overall capital intensity. The oil industry, for example, depends crucially on ICT for geological data analysis, process control, and other applications, but its large capital investments mean that it becomes categorized as a "non–ICT using" industry in the above studies.

92. Outside the U.S., the results have been more mixed (cf. Van Ark, et al., 2002; Pilat and Wyckoff, 2003). Partly this is because international comparisons are difficult due to statistical differences and lack of data, but partly it is because of structural differences across countries. For example, looking at the labor productivity growth contribution estimates produced by van Ark, et al. (Van Ark, et al., 2002, Figure 1), one can easily imagine that the huge productivity increases in Ireland are partly related to movements in global value networks that locate high value–added ICT production in Ireland and to the rapid decreases in semiconductor price indexes; in Finland to the deep recession and unemployment of the early 1990s, which led to increased use of existing capital (cf. Jalava, 2002, p. 84); and in Norway to changing oil prices.

93. The importance of this factor is, however, unclear. Only a small fraction of services appear in the national accounts as final output and become counted in GDP. Most services are used as intermediate inputs in other industries, and their productivity improvements should therefore become visible in the aggregate economy. Indeed, it is quite probable that much of the measured productivity increase in the ICT–producing sector comes from productivity increases in sectors that provide services for it. From the U.S. input–output tables one can see, for example, that the biggest input for computer manufacturing is wholesale trade. It is clear that the productivity of wholesale trade has increased due to the development of global logistics, financial, and communication networks.

94. The difficulty of avoiding the switch between theoretically consistent arguments and arguments enhanced by common sense is visible in many discussions on capital deepening. It is often described causally, with the explanation that more and better capital increases the productivity of workers. The correct interpretation is structural and remains agnostic about causes and effects. In this framework, a relative increase in the amount of capital is defined as capital deepening, and capital deepening is defined as labor productivity growth. There are no empty theoretical holes left for "because of ... ." Common sense interpretations about the causes of labor productivity growth are both unnecessary and inconsistent within this framework. In the real world, capital deepening can, of course, lead to increased task efficiency, just as it can result from reductions of the workforce, outsourcing of labor–intensive tasks, or revaluation of capital assets.
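The structural bookkeeping described here is just the standard growth–accounting identity, Δln(Y/L) = α·Δln(K/L) + Δln TFP: capital deepening is the share–weighted growth of capital per worker, and TFP is whatever is left over. A sketch with hypothetical numbers (all values below are illustrative, not estimates from the studies discussed):

```python
# Growth-accounting identity: labor productivity growth decomposes into
# a capital deepening contribution and a TFP residual.

def capital_deepening_contribution(capital_share, capital_per_worker_growth):
    """Share-weighted growth of capital per worker (log-change units)."""
    return capital_share * capital_per_worker_growth

def tfp_residual(labor_productivity_growth, capital_share,
                 capital_per_worker_growth):
    """TFP growth is the unexplained remainder ("manna from heaven")."""
    return labor_productivity_growth - capital_deepening_contribution(
        capital_share, capital_per_worker_growth)

alpha = 0.3       # capital's income share (hypothetical)
dk_per_l = 0.04   # 4 percent growth of capital per worker (hypothetical)
dy_per_l = 0.02   # 2 percent labor productivity growth (hypothetical)

print(round(capital_deepening_contribution(alpha, dk_per_l), 4))  # 0.012
print(round(tfp_residual(dy_per_l, alpha, dk_per_l), 4))          # 0.008
```

Within the identity, "capital deepening contributes X points to labor productivity growth" is a definition, not a causal claim, which is exactly the point of this footnote.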

95. In 1996, there were more than 20 Barbie dolls for each Internet user. Rational decision–makers, of course, would not often invest in Barbies, and therefore Barbies do not show up in capital assets. In fact, if Barbies were counted as investments, the neoclassical theory would assign them a productive value that equals their market price. Barbies would then be productive simply because economic decision–makers buy them.

96. Shestalova (2001) reported that the exceptionally high TFP growth rates in Japan in 1985–90 were linked with a deterioration of the terms of trade. Keller and Yeaple (2003) estimated that about 14 percent of productivity growth in U.S. firms between 1987 and 1996 resulted from productivity spillovers generated by their foreign direct investments. International transfers of productivity gains have recently also been discussed by Antille and Fontela (2003). The impacts of such transfers are particularly visible in the national accounts of small countries with a high intensity of ICT production, Ireland and Finland probably being the outstanding examples in the EU. They also make comparisons between EU and U.S. productivity developments difficult, as productivity transfers probably occur to a large extent within the national boundaries in the U.S. economy. International productivity comparisons typically use the U.S. ICT price indices and adjust for domestic inflation, but they do not usually adjust for exchange rate changes (cf. Schreyer, 2002, p. 26).

97. Although ICTs play an important role in the modern globalization process, its drivers are rarely technical. See, for example, Castells, 2000.

98. According to surveys, about 60 percent of employees have been included in stock–based compensation schemes in the U.S. high–tech industry in recent years.

99. It has been estimated that the new international and U.S. rules for accounting for stock–based compensation as normal labor costs could have a major impact on profits in the ICT sector. For example, Microsoft’s profits would have been about a third lower in recent years if it had shown its stock–based compensation schemes as normal labor costs. In the EU, the new accounting rules for stock–based compensation are expected to be in use starting from the 2004 accounting year. They try to match labor income with produced labor services. For stock options, a problem is that the accrued income is realized in the future and its current value has to be estimated using, for example, option pricing models. The new accounting rules will have an impact on productivity statistics, in particular in the ICT–producing sectors. They will also lead to revisions of the estimates of past productivity developments.

100. The traditional neoclassical growth framework can, however, be extended by redefining the concept of "investment" to include human capital and intellectual capital, and by defining these in ways that fit with empirical observations. This approach has been proposed, for example, by Jorgenson (e.g., Jorgenson and Fraumeni, 1989; Jorgenson and Yip, 1999) and Corrado, et al. (2002).

101. McGuckin and van Ark (2001), for example, have argued that productivity growth has been slower in the EU than in the U.S. because of regulations and restrictions that firms face, including structural impediments in product and labor markets. This may be so. It is, however, not exactly clear whether such a conclusion could be drawn from their productivity studies. Strictly speaking, the neoclassical growth accounting framework that they use starts from the assumption that policy does not have any impact, except perhaps taxes that change the cost of capital. In general, policy recommendations derived from comparative analysis of TFP growth rates have the conceptual limitation that they cannot distinguish "policy" and "technology," for example, as both are in this framework unexplainable "manna from heaven." It is possible to add "policy variables" into econometric productivity equations. Strictly speaking, this approach, however, contradicts the neoclassical assumptions.

102. The impact of the U.S.–Japan semiconductor agreement is discussed in Langlois and Steinmueller (1999).

103. Brynjolfsson and Hitt, 2000, p. 40.

104. According to Van Ark, Inklaar and McGuckin (2002, p. 3), the U.S.–EU differential in productivity growth in the 1990s was concentrated in retail and wholesale trade and in securities. Gordon (2003, p. 49) notes that most of the productivity growth in the retail sector in the U.S. was associated with "big box" retailers, such as Wal–Mart, Home Depot, and Best Buy. Although old establishments also invested in ICT, they did not show any productivity improvements in the 1990s.

105. Griliches, 2000, p. 89.

106. Freeman (2000), for example, has argued that the emergence of new economic structures can lead to surges in economic inequality, and that strong pro–business governments tend to aggravate the growth of inequality during periods of rapid structural change.

107. I have discussed this concept of technology use in Tuomi (2002a), where I also proposed that cultural–historical activity theory and research on communities of practice could provide starting points for understanding the relevant units of analysis.

108. I have discussed the linking of social practices and innovation in Tuomi (2001), and in more theoretical detail in Tuomi (2002a).

 

References

N. Ahmad, 2003. "Measuring investment in software," STI Working Paper, 2003/6. Paris: OECD.

N. Ahmad, P. Schreyer, and A. Wölfl, 2004. "ICT investment in OECD countries and its economic impacts," In: The economic impact of ICT: Measurement, evidence and implications. Paris: OECD, pp. 61–83.

A. Aizcorbe, 2002a. "Price measures for semiconductor devices," FEDS Working Paper, number 2002–13, revised version January 2002. Washington, D.C.: FEDS.

A. Aizcorbe, 2002b. "Why are semiconductor prices falling so fast: Industry estimates and implications for productivity measurement," FEDS DP 20–2002, Federal Reserve Board.

A. Aizcorbe, K. Flamm, and A. Khurshid, 2002. "The role of semiconductor inputs in IT hardware price decline: Computers vs. communications," at http://www.federalreserve.gov/pubs/feds/2002/200237/200237pap.pdf.

A. Aizcorbe, C. Corrado, and M. Doms, 2000. "Constructing price and quantity indexes for high technology goods," paper presented at the CRIW Workshop on Price Measurement.

G. Antille and E. Fontela, 2003. "The terms of trade and the international transfers of productivity gains," Economic Systems Research, volume 15, number 1, pp. 3–19. http://dx.doi.org/10.1080/0953531032000056918

M.N. Baily, 1996. "Trends in productivity growth," In: J. Fuhrer and J.S. Little (editors). Technology and growth. Boston: Federal Reserve Bank, pp. 269–278.

R.J. Barro, 1998. "Notes on growth accounting," NBER Working Paper Series, Working Paper number 6654. Cambridge, Mass.: National Bureau of Economic Research.

E.J. Bartelsman and M. Doms, 2000. "Understanding productivity: Lessons from longitudinal microdata," Journal of Economic Literature, volume 38, number 3, pp. 569–594. http://dx.doi.org/10.1257/jel.38.3.569

W.J. Baumol, 2002. The Free–market innovation machine: Analyzing the growth miracle of capitalism. Princeton, N.J.: Princeton University Press.

E.R. Berndt, E.R. Dulberger, and N.J. Rappaport, 2000. "Price and quality of desktop and mobile personal computers: A quarter century of history," (17 July), at http://www.nber.org/~confer/2000/si2000/berndt.pdf.

T.F. Bresnahan and F. Malerba, 1999. "Industrial dynamics and the evolution of firm’s and nations’ competitive capabilities in the world computer industry," In: D.C. Mowery and R.R. Nelson (editors). Sources of industrial leadership: Studies on seven industries. Cambridge: Cambridge University Press, pp. 79–132.

J.S. Brown and P. Duguid, 2000. The social life of information. Boston: Harvard Business School Press.

E. Brynjolfsson, 1993. "The productivity paradox of information technology: Review and assessment," Communications of the ACM, volume 36, number 12, pp. 66–76, and at http://ccs.mit.edu/erik.html. http://dx.doi.org/10.1145/163298.163309

E. Brynjolfsson and L. Hitt, 2000. "Beyond computation: Information technology, organizational transformation and business performance," Journal of Economic Perspectives, volume 14, number 4, pp. 23–48.

E. Brynjolfsson and L. Hitt, 1996. "Paradox lost? Firm–level evidence on the returns to information systems spending," Management Science, volume 42, number 4, pp. 541–558. http://dx.doi.org/10.1287/mnsc.42.4.541

M. Castells, 2000. "Globalization & identity in the network society: A rejoinder to Calhoun, Lyon, and Touraine," Prometheus, volume 4, pp. 109–123.

P. Chwelos, 2003. "Approaches to performance measurement in hedonic analysis: Price indexes for laptop computers in the 1990s," Economics of Innovation and New Technology, volume 12, number 3, pp. 199–224. http://dx.doi.org/10.1080/10438590290013609

P. Chwelos, 1999. "Hedonic approaches to measuring price and quality change in personal computer systems," Vancouver, B.C.: University of British Columbia, Faculty of Commerce and Business Administration, unpublished Ph.D. dissertation.

A. Colecchia and P. Schreyer, 2002. "ICT investment and economic growth in the 1990s: Is the United States a unique case?" Review of Economic Dynamics, volume 5, pp. 408–442. http://dx.doi.org/10.1006/redy.2002.0170

C. Corrado, C.R. Hulten, and D.E. Sichel, 2002. "Measuring capital and technology: An expanded framework," Paper prepared for the CRIW/NBER conference "Measuring Capital in the New Economy," 26–27 April 2002, Washington, D.C.

H.E. Daly and J.B. Cobb, Jr., 1989. For the common good: Redirecting the economy toward community, the environment, and a sustainable future. Boston: Beacon Press.

P. David, 1999. "Digital technology and the productivity paradox: After ten years, what has been learned?" paper prepared for "Understanding the Digital Economy: Data, Tools and Research," held at the U.S. Department of Commerce, Washington, D.C. (25–26 May).

P. David, 1991. "Computer and dynamo: The modern productivity paradox in a not–too–distant mirror," In: Technology and productivity: The challenge for economic policy. Paris: OECD.

E.F. Denison, 1962. The sources of economic growth in the United States and the alternatives before us. New York: Committee for Economic Development.

W.E. Diewert, 2003. "Measuring capital," NBER Working Paper Series, Working Paper number 9526. Cambridge, Mass.: National Bureau of Economic Research.

W.E. Diewert and K. Fox, 1999. "Can measurement error explain the productivity paradox?" Canadian Journal of Economics, volume 32, number 2, pp. 251–280. http://dx.doi.org/10.2307/136423

P. Duguid, 2003. "Incentivizing practice: Report on 'Communities of practice, knowledge work, innovation, economic and organizational theory'," Institute for Prospective Technological Studies, Workshop on "ICTs and Social Capital in the Knowledge Society," Seville, 2–3 November 2003.

K. Ewusi–Mensah, 2003. Software development failures. Cambridge, Mass.: MIT Press.

R. Färe, S. Grosskopf, and C.A.K. Lovell, 1994. Production frontiers. Cambridge: Cambridge University Press.

C. Freeman, 2000. "Social inequality, technology and economic growth," In: S. Wyatt, F. Henwood, N. Miller, and P. Senker (editors). Technology and in/equality: Questioning the information society. London: Routledge, pp. 149–171.

C. Freeman, J. Clark, and L. Soete, 1982. Unemployment and technical innovation: A study of long waves and economic development. Westport, Conn.: Greenwood Press.

R.J. Gordon, 2003. "Hi–tech innovation and productivity growth: Does supply create its own demand?," NBER Working Paper Series, Working Paper number 9437. Cambridge, Mass.: National Bureau of Economic Research, at http://www.nber.org/papers/w9437.

R.J. Gordon, 2002. "Technology and economic performance in the American economy," NBER Working Paper Series, Working Paper number 8771. Cambridge, Mass.: National Bureau of Economic Research, at http://www.nber.org/papers/w8771.

R.J. Gordon, 2000. "Does the 'New Economy' measure up to the great inventions of the past?" Journal of Economic Perspectives, volume 14, number 4, pp. 49–74. http://dx.doi.org/10.1257/jep.14.4.49

C. Gottbrath, J. Bailin, C. Meakin, T. Thompson, and J.J. Charfman, 1999. "The effects of Moore’s Law and slacking on large computations," at http://arxiv.org/PS_cache/astro-ph/pdf/9912/9912202.pdf.

Z. Griliches, 2000. R&D, education, and productivity. Cambridge, Mass.: Harvard University Press.

B.T. Grimm, 1998. "Price indexes for selected semiconductors, 1974–96," Survey of Current Business (February), pp. 8–24.

B.T. Grimm, B.R. Moulton, and D.B. Wasshausen, 2002. "Information processing equipment and software in national accounts," Washington, D.C.: Bureau of Economic Analysis, at http://www.bea.gov/bea/papers.

E. Grochowski and R.D. Halem, 2003. "Technological impact of magnetic hard disk drives on storage systems," IBM Systems Journal, volume 42, number 2, pp. 338–346. http://dx.doi.org/10.1147/sj.422.0338

M. Hammer and J. Champy, 1993. Reengineering the corporation. New York: Free Press.

E. Helpman and M. Trajtenberg, 1998. "A time to sow and a time to reap: Growth based on general purpose technologies," In: E. Helpman (editor). General purpose technologies and economic growth. Cambridge, Mass.: MIT Press, pp. 55–83.

L. Hitt and E. Brynjolfsson, 1994. "Creating value and destroying profits? Three measures of information technology’s contributions," at http://ccs.mit.edu/papers/CCSWP183.html.

C.R. Hulten, 2000. "Total factor productivity: A short biography," NBER Working Paper Series, Working Paper number 7471. Cambridge, Mass.: National Bureau of Economic Research.

ITRS, 2001. International Technology Roadmap for Semiconductors. 2001 Edition, at http://public.itrs.net.

T. Jackson and S. Stymne, 1996. Sustainable economic welfare in Sweden: A pilot index 1950–1992. Stockholm Economic Institute, at http://www.sei.se/dload/1996/SEWISAPI.pdf.

J. Jalava, 2002. "Accounting for growth and productivity: Finnish multi–factor productivity 1975–99," Finnish Economic Papers, volume 15, number 2, pp. 76–86.

D.W. Jorgenson, 2000. "Information technology and the U.S. economy," Presidential address to the American Economic Association, New Orleans, Louisiana (6 January), at http://www.economics.harvard.edu/faculty/jorgenson/papers/NewAmerican.pdf.

D.W. Jorgenson, M.S. Ho, and K.J. Stiroh, 2003. "Lessons from the U.S. growth resurgence," Journal of Policy Modeling, volume 25, pp. 453–470. http://dx.doi.org/10.1016/S0161-8938(03)00040-1

D.W. Jorgenson and K.J. Stiroh, 2000. "Raising the speed limit: U.S. economic growth in the information age," OECD Economic Department Working Papers, number 261, at http://www.oecd.org/dataoecd/15/18/1885684.pdf.

D.W. Jorgenson and B.M. Fraumeni, 1989. "The accumulation of human and non–human capital," In: R.E. Lipsey and H.S. Tice (editors). The measurement of saving, investment, and wealth. Chicago: University of Chicago Press, pp. 227–282.

D.W. Jorgenson and Z. Griliches, 1967. "The explanation of productivity change," Review of Economic Studies, volume 34, number 3, pp. 249–283. http://dx.doi.org/10.2307/2296675

D.W. Jorgenson and E. Yip, 1999. "Whatever happened to productivity growth?" at http://www.economics.harvard.edu/faculty/jorgenson/papers/kuznets.pdf.

A.J. Katz and S.W. Herman, 1997. "Improved estimates of fixed reproducible tangible wealth, 1929–95," Survey of Current Business (May), pp. 69–92.

W. Keller and S.R. Yeaple, 2003. "Multinational enterprises, international trade, and productivity growth: Firm level evidence from the United States," NBER Working Paper Series, Working Paper, number 9504. Cambridge, Mass.: National Bureau of Economic Research.

M. Kenney, 2000. Understanding Silicon Valley: The anatomy of an entrepreneurial region. Stanford, Calif.: Stanford University Press.

T.S. Kuhn, 1970. The structure of scientific revolutions. Chicago: University of Chicago Press.

J.S. Landefeld and B.T. Grimm, 2000. "A note on the impact of hedonics and computers on real GDP," Survey of Current Business (December), pp. 17–22.

R.N. Langlois, 2002. "Computers and semiconductors," In: B. Steil, D.G. Victor, and R.R. Nelson (editors). Technological innovation and economic performance. Princeton, N.J.: Princeton University Press, pp. 265–284.

R.N. Langlois and W.E. Steinmueller, 1999. "The evolution of competitive advantage in the worldwide semiconductor industry, 1947–1996," In: D.C. Mowery and R.R. Nelson (editors). Sources of industrial leadership: Studies of seven industries. Cambridge: Cambridge University Press, pp. 19–78.

R. Mahadevan, 2002. New currents in productivity analysis: Where to now? Tokyo: Asian Productivity Organization.

R.H. McGuckin and B. Van Ark, 2001. "Making the most of the information age: Productivity and structural reform in the new economy," Conference Board Report, 1301–01–RR. New York: Conference Board.

D.G. McKendrick, R.F. Doner, and S. Haggard, 2000. From Silicon Valley to Singapore: Location and competitive advantage in the hard disk drive industry. Stanford, Calif.: Stanford University Press.

D. Moch, 2001. Price indices for information and communication technology industries — an application to the German PC market. Mannheim: Zentrum für Europäische Wirtschaftsforschung (ZEW), at http://www.zew.de/en/publicationen/.

R.J.T. Morris and B.J. Truskowski, 2003. "The evolution of storage systems," IBM Systems Journal, volume 42, number 2, pp. 205–217. http://dx.doi.org/10.1147/sj.422.0205

C.J. Morrison and E.R. Berndt, 1991. "Assessing the productivity of information technology equipment in U.S. manufacturing industries," NBER Working Paper Series, Working Paper number 3582. Cambridge, Mass.: National Bureau of Economic Research, at http://www.nber.org/papers/w3582.

I. Nonaka, 1988. "Speeding organizational information creation: Toward middle–up–down management," Sloan Management Review (Spring), pp. 57–73.

I. Nonaka and H. Takeuchi, 1995. The knowledge–creating company: How Japanese companies create the dynamics of innovation. Oxford: Oxford University Press.

W.D. Nordhaus, 2001. "Productivity growth and the new economy," NBER Working Paper Series, Working Paper number 8096. Cambridge, Mass.: National Bureau of Economic Research.

W.D. Nordhaus and J. Tobin, 1971. "Is growth obsolete?" Cowles Foundation Discussion Papers, number 319. New Haven, Conn.: Yale University, at http://cowles.econ.yale.edu/P/cp/p03b/p0398ab.pdf.

Nordic Council of Ministers, 2002. Nordic Information Society Statistics 2002. Helsinki: Nordic Council of Ministers, Statistics Denmark, Statistics Finland, Statistics Iceland, Statistics Norway, Statistics Sweden.

M. O’Mahony and B. Van Ark, 2003. EU productivity and competitiveness: An industry perspective. Can Europe resume the catching–up process? Luxembourg: European Communities.

M. O’Mahony and M. Vecchi, 2002. "In search of an ICT impact on TFP: Evidence from industry panel data," (October). London: National Institute of Economic and Social Research (NIESR), at http://ideas.repec.org/p/ecj/ac2003/210.html.

OECD, 2003. ICT and economic growth: Evidence from OECD countries, industries and firms. Paris: OECD.

S.D. Oliner and D.E. Sichel, 2002. "Information technology and productivity: Where are we now and where are we going?" Federal Reserve Bank of Atlanta Economic Review, volume 87, number 3, pp. 15–44.

S.D. Oliner and D.E. Sichel, 2000. "The resurgence of growth in the late 1990s: Is information technology the story?" Journal of Economic Perspectives, volume 14, number 4, pp. 3–22. http://dx.doi.org/10.1257/jep.14.4.3

A. Pakes, 2002. "A reconsideration of hedonic price indices with an application to PCs," NBER Working Paper Series, Working Paper number 8715. Cambridge, Mass.: National Bureau of Economic Research, at http://www.nber.org/papers/w8715.

R. Parker and B.T. Grimm, 2000. Recognition of business and government expenditures for software as investment: Methodology and quantitative impacts, 1959–98. Washington, D.C.: U.S. Bureau of Economic Analysis, at http://www.bea.gov/bea/about/software.pdf.

C. Perez, 2002. Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Cheltenham: Edward Elgar.

C. Perez, 1985. "Microelectronics, long waves and world structural change: New perspectives for developing countries," World Development, volume 13, number 3, pp. 441–463. http://dx.doi.org/10.1016/0305-750X(85)90140-8

D. Pilat and A. Wyckoff, 2003. "The impact of ICT on economic performance — an international comparison at three levels of analysis," Paper prepared for the conference "Transforming Enterprise," U.S. Department of Commerce (27–28 January).

R. Quevedo, 1991. "Quality, waste, and value in white–collar environments," Quality Progress (January), pp. 33–37.

N. Rosenberg and W.E. Steinmueller, 1982. "The economic implications of the VLSI revolution," In: N. Rosenberg (editor). Inside the black box: Technology and economics. Cambridge: Cambridge University Press, pp. 178–192.

A. Saxenian, 1999. Silicon Valley’s new immigrant entrepreneurs. San Francisco: Public Policy Institute of California.

A. Saxenian, 1994. Regional advantage: Culture and competition in Silicon Valley and Route 128. Cambridge, Mass.: Harvard University Press.

P. Schreyer, 2002. "Computer price indices and international growth and productivity comparisons," Review of Income and Wealth, volume 48, number 1, pp. 15–31. http://dx.doi.org/10.1111/1475-4991.00038

P. Schreyer, 2001. OECD productivity manual: A guide to the measurement of industry–level and aggregate productivity growth. Paris: OECD Statistics Directorate, National Accounts Division.

A. Sen, 2000. Development as freedom. New York: Anchor.

V. Shestalova, 2001. "General equilibrium analysis of international TFP growth rates," Economic Systems Research, volume 13, pp. 391–404. http://dx.doi.org/10.1080/09535310120089770

G. Simmel, 1990. The philosophy of money. Second enlarged edition. London: Routledge.

Steering Committee for the Review of Commonwealth/State Service Provision, 1997. Data envelopment analysis. Canberra: AGPS.

C. Steindel and K.J. Stiroh, 2001. "Productivity: What is it, and why do we care about it?" Federal Reserve Bank of New York, Staff Reports, number 122, at http://www.ny.frb.org/rmaghome/staff_rp/2001/sr122.pdf.

T. ten Raa and P. Mohnen, 2002. "Neoclassical growth accounting and frontier analysis: A synthesis," Journal of Productivity Analysis, volume 18, pp. 111–128. http://dx.doi.org/10.1023/A:1016558816247

J.E. Triplett, 1999. "The Solow productivity paradox: What do computers do to productivity?" Canadian Journal of Economics, volume 32, number 2, pp. 309–334. http://dx.doi.org/10.2307/136425

J.E. Triplett, 1996. "High–tech industry productivity and hedonic price indices," In: Industrial productivity: International comparison and measurement issues. Paris: OECD, Chapter 4, pp. 119–142.

I. Tuomi, 2004. "Social capital in the knowledge society: Theoretical concepts and the impact of ICTs," IPTS Working Paper (January). Seville: Joint Research Centre/Institute for Prospective Technological Studies.

I. Tuomi, 2003. "Kurzweil, Moore, and accelerating change," IPTS Working Paper (August). Seville: Joint Research Centre/Institute for Prospective Technological Studies, at http://www.jrc.es/~tuomiil/moreinfo.html.

I. Tuomi, 2002a. Networks of innovation: Change and meaning in the age of the Internet. Oxford: Oxford University Press.

I. Tuomi, 2002b. "The lives and death of Moore’s Law," First Monday, volume 7, number 11 (November), at http://firstmonday.org/issues/issue7_11/tuomi. http://dx.doi.org/10.5210/fm.v7i11.1000

I. Tuomi, 2001. "Internet, innovation, and open source: Actors in the network," First Monday, volume 6, number 1 (January), at http://firstmonday.org/issues/issue6_1/tuomi/. http://dx.doi.org/10.5210/fm.v6i1.824

I. Tuomi, 1999. Corporate knowledge: Theory and practice of intelligent organizations. Helsinki: Metaxis.

U.S. Congressional Budget Office, 2002. "The role of computer technology in the growth of productivity," at http://www.cbo.gov.

B. Van Ark, R. Inklaar, and R.H. McGuckin, 2003. "ICT and productivity in Europe and the United States: Where do the differences come from?" Paper for the European Economic Association (20–24 August), Stockholm. (Version: February 2003).

B. Van Ark, R. Inklaar, and R.H. McGuckin, 2002. "'Changing gear': Productivity, ICT and service industries: Europe and the United States," Groningen Growth and Development Centre, Research Memorandum, GD–60.

B. Van Ark, J. Melka, N. Mulder, M. Timmer, and G. Ypma, 2002. "ICT investment and growth accounts for the European Union, 1980–2000," Final report on "ICT and Growth Accounting" for the DG Economics and Finance of the European Commission, Brussels, at http://www.eco.rug.nl/GGDC/dseries/Data/ICT/euictgrowth.pdf.

F. Vijselaar and R. Albers, 2002. "New technologies and productivity growth in the euro area," European Central Bank Working Paper, number 122. Frankfurt am Main: European Central Bank.


Editorial history

Paper received 12 May 2004; accepted 15 June 2004.


Copyright ©2004, Ilkka Tuomi

Copyright ©2004, First Monday

Economic productivity in the Knowledge Society: A critical review of productivity theory and the impacts of ICT by Ilkka Tuomi
First Monday, Volume 9, Number 7 - 5 July 2004
http://firstmonday.org/ojs/index.php/fm/article/view/1159/1079




