Computing is one of the primary means by which we solve problems in society today. In this short paper we examine the implications of the primary techniques used in computer systems work — abstraction and indirection — and of Sevareid’s Law, an epigram that suggests that our problem-solving instinct may be leading us astray. We explore the context of this dilemma and discuss instances in which this has arisen in the recent past. We then consider a few design options and changes to the normal mode of computer science practice that might enable us to sidestep the implications of Sevareid’s Law.

Contents
1. Introduction
2. Abstraction and indirection
3. Benign computing
4. Conclusions
1. Introduction

The chief source of problems is solutions. (Sevareid, 1970)

Sevareid’s Law expresses a conundrum at the core of modern industrial society. As we collectively identify and “solve” “problems”, we inevitably find that not only are many “problems” not real problems, but that “solutions” applied to them are the causes of new woes. The history of the last century is littered with examples of this conundrum, from chemical solutions like DDT (Carson, 1962) and tetraethyl lead (Drum, 2013) to agricultural solutions like mono-cropping with synthetic fertilizers and pesticides (Pollan, 2006) to financial solutions like leverage (Galbraith, 1955) and complex derivatives (Lewis, 2011). In recent decades, computing has become central to new solutions and to technological progress.
Seeing the consequences of these “solutions”, some have argued that technological progress, as currently understood in industrialized societies, is itself suspect and undesirable, and perhaps even something against which to fight. On the other hand, proponents of technological progress argue that no matter the downsides, technological progress must be pursued (Kelly, 2010). We hope both to sidestep this intractable debate in this paper and to chart what might be a middle course in the context of computing.
Specifically, we introduce the notion of benign computing and attempt to draw up a set of principles for computing that is less likely to have unintended, harmful downsides to the global ecosystem and to the subset of the ecosystem that is human society. Today computing is seen as a source of potential solutions in nearly every major sector of society, including energy, agriculture, transportation, health, education, manufacturing, science, and governance.
Modern computing, however, is still new enough that its principles and approaches have not withstood the test of time, and so the implications of Sevareid’s Law for computing can only be definitively seen with the benefit of hindsight. As we enter a critical period in global society due to the ecological limits industrial society faces today (McKibben, 2010; Pargman and Raghavan, 2014; Raghavan and Ma, 2011; Tomlinson, et al., 2013), this is an appropriate time both to reconsider the foundations upon which computing is built and to consider a course correction if computing is to meet the needs of human society in an age of limits.
2. Abstraction and indirection
Modularity based on abstraction is the way things are done. (Liskov, 2009)

In most fields of computer systems research and engineering, abstraction and indirection are key design principles. Abstraction involves the distillation of data or concepts, creating orderliness and enabling simplification and modularity. Indirection involves interposition on the flow of data or control between two modules within some software or hardware system. Abstraction makes indirection easier, and together the two principles make tractable computer systems of a scale unimaginable a few decades ago — all large-scale computing systems today rely upon layer after layer of indirection built upon an intricate weave of abstractions.
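To make these two principles concrete, consider a minimal sketch in Python (the class and function names here are our own and purely illustrative): a Store abstraction distills “a place to put data” down to two operations, and a CachingStore interposes on the flow of data to any such store, which is indirection at work.

```python
from abc import ABC, abstractmethod

class Store(ABC):
    """Abstraction: any 'place to put data' is reduced to two operations."""
    @abstractmethod
    def get(self, key: str) -> str | None: ...

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

class DictStore(Store):
    """One concrete implementation hidden behind the abstraction."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def get(self, key: str) -> str | None:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

class CachingStore(Store):
    """Indirection: interposes on the data flow to any backing Store."""
    def __init__(self, backing: Store) -> None:
        self._backing = backing
        self._cache: dict[str, str] = {}

    def get(self, key: str) -> str | None:
        if key not in self._cache:  # fetch through to the backing store
            value = self._backing.get(key)
            if value is not None:
                self._cache[key] = value
        return self._cache.get(key)

    def put(self, key: str, value: str) -> None:
        self._cache[key] = value
        self._backing.put(key, value)

store: Store = CachingStore(DictStore())  # the layers compose transparently
store.put("greeting", "hello")
print(store.get("greeting"))  # hello; callers never see the layering
```

Because every layer presents the same abstraction, further layers (replication, logging, encryption) can be interposed without any change to callers, which is precisely how large systems come to rely upon layer after layer of indirection.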
2.1. Computing and society
Given the ubiquity of computing in wealthy nations today, the impact of computing on society is self-evident. However, the nature of that impact — its benefits and drawbacks, its consequences and character — is still far from clear and is hotly contested. We do not aim to recap the ongoing debates on this subject, but briefly note below a few points relevant to this paper.
Kevin Kelly (2010), a long-time technology journalist and pundit, attempts to find a middle ground between techno-utopian and techno-phobic arguments, and his attempt is well worth considering. He concludes that while technological solutions do almost always create new problems (per Sevareid), technological progress is in his view an unstoppable force that ultimately should be embraced. Thus Kelly concludes that society should embrace the sisyphean task of solving problems created by a previous generation of technology while creating new ones in the process. While still techno-utopian in many respects, Kelly’s view is more moderate than that of many technology commentators (and indeed many researchers and engineers) who ignore or dismiss technology’s downsides.
Hidden in Kelly’s discussion are three issues that critics have seized upon. First, technology often solves problems temporarily, if at all, when evaluated holistically. Second, technology often doesn’t even solve problems on its own terms — that is, even by the metrics used by a technology’s proponents, it often fails. Third, some technological solutions aim to address problems that are in a fundamental sense unsolvable [1]. Examples in each of these three categories are numerous, and are staples of the work of polemicists such as Morozov (2011; 2013). Many of these hidden issues are due to abstraction and indirection — consider the ways in which, for example, friendships are cheapened by the abstraction and indirection introduced by social networking, or self-employed freelance entrepreneurs are reduced to replaceable cogs in a task-based economy intermediated by various Web services.
2.2. Benefits and drawbacks
While we are concerned primarily with computing technology here, the tangle of these three issues, and of abstraction and indirection, is common in other disciplines as well. For example, the woes of industrial agriculture could be seen as a consequence of abstracting plants as machines that take water, sunlight, and N-P-K (nitrogen, phosphorus, and potassium) and turn them into food; similarly, the abstraction of home mortgages and their bundling into complex financial instruments through many layers of indirection was key to the financial crisis of 2008. It would be worthwhile to examine whether abstraction and indirection are central to solutions, and to their subsequent problems, in many fields; indeed, given the complexity of today’s computing systems and their importance to today’s society, understanding the benefits and drawbacks of these principles when applied elsewhere is crucial.
By and large, abstraction and indirection have shown clear benefits in solving problems in the design and implementation of computing systems, so much so that they are sometimes jokingly viewed as the only two ideas in systems research. When applied within a system for its own purposes, it is relatively easy to identify whether the indirection introduced yields a benefit or adds unnecessary complexity. Indeed, unnecessary complexity is one of the key ways that indirection can go awry and cause drawbacks in excess of benefits; this is true along the entire scale of systems of all types, from a small piece of software to a civilization itself (Tainter, 1990).
As Toyama (2011) notes, however, technology is only an amplifier of human intent, and thus it is important in many of these instances — both the examples hailed by Kelly and those scorned by Morozov — to note the benefits that a technology’s purveyors receive (and in many instances the drawbacks that they ignore). Even when benefits and drawbacks are heeded, a technology’s amplification can quickly get out of control if its power and subtle implications are not understood in advance.
3. Benign computing

Many small things breed a kind of stability; a few big things endanger it — better the Fortune 500,000 than the Fortune 500 (unless you want to be an eight-figure CEO). (McKibben, 2010)

There are many possible responses to the above issues, and in this section we propose one possibility: benign computing, a general design framework for building computing systems that are less likely to produce harmful impacts to the ecosystem (and thus to human society) and are less likely to become trapped by Sevareid’s Law. Here we only offer a vision of what benign computing might become, in the form of design principles.
A key aspect of benign computing is a rejection of the utopian notion of creating new technology that is strictly “beneficial” or that advances “development”. Such efforts suffer from a number of problems. First, benefit is always relative. Second, benefit, even when broad-based, is often difficult to measure. Third, the temporal profile of benefits and drawbacks can be complex for many technologies: benefits can arrive before drawbacks, or vice versa, and worse still, drawbacks (or benefits) can remain hidden even once they arrive. Instead, the aim of benign computing is computing of a scale and structure such that even if a system’s downsides dominate, its overall harm is small because those downsides are made apparent.
3.1. Inspirations
Setting aside the proposed responses of ardent boosters of any and all technological development and of critics who suggest throwing it all away, there are some healthy trends in computing research that we first consider for inspiration.
A positive trend is work in the field of information and communication technology for development (ICTD), which aims to use computing to address urgent and practical needs in countries and regions with fewer resources and less infrastructure, often employing thinking from the older field of appropriate technology. ICTD has advanced significantly in the past decade, and more importantly has established the practice of defining clearly both the societal problems being solved and the measures used to evaluate impact. In doing so, ICTD work has been more likely to yield its intended social impact, though the resulting “benefit” and “development” often remain fuzzy.
While ICTD work is often careful in its means of implementation, focusing on “appropriateness” of the interventions applied and systems built, it begins with an assumption of good intentions and of a judicious researcher employing the system for what is believed to be a worthy end. However, much of computing work, in both academia and industry, is not done with social ends in mind; instead, intellectual and financial ends dominate. Thus we must consider how the notion of benign computing, which we detail below, might co-exist with these motives.
Another promising but preliminary area of work is in biomimicry, the design of systems that are modeled after nature. The challenge for biomimicry is to integrate true ecological understanding into the mimicry being attempted, rather than decomposing natural systems to identify pieces that serve specific needs. For example, work on stigmergy (indirect coordination through traces left in a shared environment, as with ant pheromone trails) in computing systems has the potential to increase system resilience, as the sketch below suggests.
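As a toy illustration of what stigmergic coordination might look like in a computing system (this sketch is our own construction; the server names and constants are arbitrary), the clients below coordinate only through a shared pheromone table, with no central scheduler: successful requests reinforce a server’s trail, evaporation forgets stale information, and traffic drifts away from a failed server of its own accord.

```python
import random

# Shared environment: a "pheromone" score per server, as ants use trails.
pheromone = {"server-a": 1.0, "server-b": 1.0, "server-c": 1.0}
EVAPORATION = 0.9  # per-round decay: stale information is forgotten
DEPOSIT = 0.5      # reinforcement left behind by a successful request

def choose_server() -> str:
    """Pick a server with probability proportional to its pheromone."""
    servers = list(pheromone)
    weights = [pheromone[s] for s in servers]
    return random.choices(servers, weights=weights, k=1)[0]

def simulate_request(server: str) -> bool:
    """Stand-in for a real call; pretend server-c has failed."""
    return server != "server-c"

for _ in range(200):
    server = choose_server()
    if simulate_request(server):
        pheromone[server] += DEPOSIT  # success reinforces the trail
    for s in pheromone:
        # evaporation, with a small floor so failed servers are retried
        pheromone[s] = max(0.05, pheromone[s] * EVAPORATION)

print(pheromone)  # traffic has drifted away from the failed server-c
```

No client holds a picture of the whole system; resilience emerges from many small, local decisions mediated by the shared environment, which is the property that makes stigmergy attractive.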
3.2. Industry
A key challenge is that computing today has a thriving industry, one that is naturally driven by profit motives [2]. This motive is not a problem in itself, but the manner in which computing startups aim to “scale” rapidly as individual organizations is a fundamental source of trouble. Indeed, computing startups (unlike those in most other industries) are expected to demonstrate hyper-growth. A startup can in a matter of years have a direct impact on billions of people, and profit handsomely doing so (usually via the application of abstraction and indirection). The amplifying power of technology today is at such a level that it must be used wisely, yet there appears to be little understanding of the downside risks to society that such power creates.
Ironically, this structure is at odds with a principle at the core of modern distributed systems: horizontal scalability (Barroso and Hölzle, 2009; Patterson, et al., 1988). Horizontal scalability — “scale out” — is an approach in which a system is made faster and/or more resilient by adding more small units (e.g., individual computing nodes in a system) and is broadly favored today over vertical scalability — “scale up” — in which the power of a single machine is increased. However, when systems have been scaled horizontally, they are then cloaked in an abstraction that presents them as a single unified (large) system.
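A minimal sketch may help (our own illustration, using simple hash partitioning rather than a production scheme such as consistent hashing): capacity grows by adding nodes to the list, yet the Cluster facade presents one unified store to callers, cloaking the horizontal structure exactly as described above.

```python
import hashlib

class Node:
    """One small unit in a scale-out system."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.data: dict[str, str] = {}

class Cluster:
    """Scale-out: capacity grows by adding nodes, yet callers see a
    single unified store -- the abstraction that cloaks the scaling."""
    def __init__(self, names: list[str]) -> None:
        self.nodes = [Node(n) for n in names]

    def _node_for(self, key: str) -> Node:
        # Hash partitioning: each key deterministically maps to a node.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key: str, value: str) -> None:
        self._node_for(key).data[key] = value

    def get(self, key: str) -> str | None:
        return self._node_for(key).data.get(key)

cluster = Cluster(["node-1", "node-2", "node-3"])  # to scale out, add names
cluster.put("user:42", "alice")
print(cluster.get("user:42"))  # callers never learn which node answered
```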
3.3. Principles
We contend that a different paradigm of computing research and practice is possible, which we term benign computing. The core aim of this paradigm is to make a technology’s benefits and drawbacks more apparent to its designers, researchers, and implementers, enabling a proper evaluation that might otherwise prove difficult or inconvenient. In doing so, the aim is to help anticipate drawbacks of a technology and to help preserve its potential benefits, and to ensure that those benefits are more broad-based (i.e., that they are reflective of more than one perspective of “benefit”).
Here we describe several design principles that we believe should be at the center of any work on benign computing. While only the test of time will show whether this is truly possible and whether these principles are the right ones, there is some evidence, both anecdotal and in other fields, that these principles lead to good results.
Scale-out. As we noted above, horizontal scalability — scale-out — is already a common principle in distributed systems work, but it is seldom applied to the macro-scale systems that those distributed systems support. There has been work on so-called federated systems, which are of the flavor we advocate here: systems in which the scaling out is done by autonomous parties (i.e., under diverse administrative control) and the system as a whole is a federation of those parties’ systems. An advantage of scale-out in traditional settings is that the failure of a few units does not threaten the functioning of the whole, and that the whole system need not be reengineered to increase scale. Beyond those settings, scale-out has the advantage that a coalition of parties can decide to create an independent offering. Indeed, the systems built upon many of the early Internet protocols (e.g., SMTP, NNTP) had this structure, though they have lost it over the years [3].
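A hypothetical sketch of such a federation, loosely in the spirit of SMTP’s structure (all names here are illustrative): each Region is run by an autonomous party, fully controls its own users, and agrees with its peers only on a minimal shared lookup protocol.

```python
class Region:
    """An autonomous party in a federated system: it fully controls its
    own users and speaks only a minimal shared protocol to its peers."""
    def __init__(self, domain: str) -> None:
        self.domain = domain
        self.users: dict[str, str] = {}
        self.peers: dict[str, "Region"] = {}

    def register(self, user: str, profile: str) -> None:
        self.users[user] = profile

    def lookup(self, address: str) -> str | None:
        """Shared protocol: resolve 'user@domain' wherever it lives."""
        user, _, domain = address.partition("@")
        if domain == self.domain:
            return self.users.get(user)  # served locally
        peer = self.peers.get(domain)
        return peer.lookup(address) if peer else None  # federated hop

# Two independently operated regions agree only on the protocol above.
east, west = Region("east.example"), Region("west.example")
east.peers["west.example"] = west
west.peers["east.example"] = east
west.register("carol", "Carol's profile")
print(east.lookup("carol@west.example"))  # Carol's profile
```

Because each region is autonomous, any coalition of regions could split off and continue operating independently, which is exactly the property that monolithic offerings lack.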
Fails well. Natural systems are complex; the weave of interdependencies in the global ecosystem is far beyond our understanding today. Yet this complexity does not yield vulnerability of the sort exhibited by complex human-made systems. A primary reason is that complex human-made systems have only apparent complexity — they seem complex, but have far fewer stabilizing backup systems to ensure resilience, often because resilience requires sacrificing system efficiency and leads to higher short-term costs. Natural systems, on the other hand, have inherent complexity; while not efficient in the way of many human technologies, they have significant resilience to failure. Thus the evaluation of computing systems should look beyond apparent complexity; that is, nature should be mimicked in the ways it handles failure, not just in the ways it succeeds in normal operation [4].
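A small sketch of failing well (again our own illustration; the failure model is arbitrary): rather than a single optimized path, the caller below has several redundant paths to the same answer, trading efficiency for resilience and degrading gracefully as individual replicas fail.

```python
import random

def flaky_replica(name: str):
    """Stand-in for one replica of a service; fails at random."""
    def call() -> str:
        if random.random() < 0.5:
            raise ConnectionError(f"{name} unavailable")
        return f"answer from {name}"
    return call

def resilient_call(replicas) -> str:
    """Inherent complexity: several redundant paths to one result.
    Less efficient than a single tuned path, but it fails well."""
    errors = []
    for call in replicas:
        try:
            return call()
        except ConnectionError as err:
            errors.append(err)  # degrade gracefully; try the next path
    raise RuntimeError(f"all replicas failed: {errors}")

replicas = [flaky_replica(n) for n in ("r1", "r2", "r3")]
print(resilient_call(replicas))  # usually succeeds despite failures
```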
Open design. While open source software and hardware are important, open designs are far more important. Open designs enable the creation of a diversity of implementations, written by different authors with different motivations but a common goal. This common goal can be codified in the form of an RFC or similar design document. In a scale-out system it is important that each independent party can build upon a base design to create new, differentiated offerings.
Self similar/fractal. Systems should be scale-out, open in design, and fail well at every level of their structure. While many of today’s large-scale distributed systems have these properties at some level of their operation, few exhibit them at all levels. A fractal structure ensures that decoupling can occur at whatever level is most appropriate [5].
4. Conclusions

The principles we offer above are not in themselves new or deep. Our only aim is to identify those approaches that can minimize social harm should a system be recognized to be primarily harmful. Ultimately these principles aim to limit structural power — say, of the sort that large companies like Apple, Google, Amazon, Microsoft, and Facebook wield today — by creating computing systems that have a far greater underlying diversity, as in nature. Crucially, systems that have greater underlying structural diversity — and diffusion of power and control — can be more responsive to the needs of the local human and natural community. As McKibben suggests, there is a resilience — not just resilience to failure, but to unforeseen societal harms — that comes from a diversity of smaller parties providing a service instead of a small number of big players.

While adopting these principles in commercial settings will be difficult, as they are likely to conflict with profit motives, computer science researchers are not similarly constrained. Indeed, these principles appear to be better aligned with the role of research in society, as they enable a greater diversity of, and debate over, ideas and approaches. Whether they can be adopted and supplant industry-driven “normal science” as it is practiced today in computer science remains to be seen (Kuhn, 1962).
About the author
Barath Raghavan is a senior researcher in the Networking Group at the International Computer Science Institute.
E-mail: barath [at] icsi [dot] berkeley [dot] edu
Notes
1. Greer (2008) has advocated differentiating between problems that have solutions and predicaments that have responses but no solutions.
2. This is less the case in many other engineering disciplines, and far less the case in most scientific disciplines.
3. Consider the structure of Facebook vs. Craigslist: Facebook is monolithic — while it uses scale-out in its data centers, the service offering as a whole is scale-up. Craigslist, on the other hand, is scale-out in many ways, as the sites for each community could (in theory) be spun off, run independently, and diverge in their offerings. For a service like Craigslist to fully become scale-out in the manner we propose here, the service as a whole would be a federation of autonomous, regional Craigslists that are loosely tied together by APIs, protocols, and links.
4. To be more concrete, consider the Internet’s routing system. While it is designed to be resilient to failure, its resilience is only in one dimension — alternative paths on the data plane — and not in control plane systems (e.g., backup alternatives for BGP), management systems (e.g., the administrative control of large ASes), and physical systems (e.g., IXPs, long-distance fiber bundles, etc.).
5. For example, a federated alternative to Facebook might require divergence/decoupling at the level of data center structure in one region where electric power availability is intermittent, while in another it might require decoupling at the level of inter-region connections, where different privacy standards require different data retention behavior for transient data. If the system were not fractal, some of these decouplings would not be possible.
References
L.A. Barroso and U. Hölzle, 2009. “The datacenter as a computer: An introduction to the design of warehouse-scale machines,” Synthesis lectures on computer architecture, at http://www.cs.berkeley.edu/~rxin/db-papers/WarehouseScaleComputing.pdf, accessed 23 July 2015.
R. Carson, 1962. Silent spring. Boston: Houghton Mifflin.
K. Drum, 2013. “America’s real criminal element: Lead,” Mother Jones (January/February), at http://www.motherjones.com/environment/2013/01/lead-crime-link-gasoline, accessed 23 July 2015.
J.K. Galbraith, 1955. The great crash, 1929. Boston: Houghton Mifflin.
J.M. Greer, 2008. The long descent: A user’s guide to the end of the industrial age. Gabriola Island, B.C.: New Society Publishers.
K. Kelly, 2010. What technology wants. New York: Viking.
T.S. Kuhn, 1962. The structure of scientific revolutions. Chicago: University of Chicago Press.
M. Lewis, 2011. The big short: Inside the doomsday machine. New York: Norton.
B. Liskov, 2009. “The power of abstraction,” Turing Award Lecture, at http://amturing.acm.org/vp/liskov_1108679.cfm, accessed 23 July 2015.
B. McKibben, 2010. Eaarth: Making a life on a tough new planet. New York: Time Books.
E. Morozov, 2013. To save everything, click here: The folly of technological solutionism. New York: PublicAffairs.
E. Morozov, 2011. The Net delusion: The dark side of Internet freedom. New York: PublicAffairs.
D. Pargman and B. Raghavan, 2014. “Rethinking sustainability in computing: From buzzword to non-negotiable limits,” NordiCHI ’14: Proceedings of the Eighth Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, pp. 638–647.
doi: http://dx.doi.org/10.1145/2639189.2639228, accessed 23 July 2015.

D.A. Patterson, G. Gibson, and R.H. Katz, 1988. “A case for redundant arrays of inexpensive disks (RAID),” SIGMOD ’88: Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, pp. 109–116.
doi: http://dx.doi.org/10.1145/50202.50214, accessed 23 July 2015.

M. Pollan, 2006. The omnivore’s dilemma: A natural history of four meals. New York: Penguin Press.
B. Raghavan and J. Ma, 2011. “Networking in the long emergency,” GreenNets ’11: Proceedings of the Second ACM SIGCOMM Workshop on Green Networking, pp. 37–42.
doi: http://dx.doi.org/10.1145/2018536.2018545, accessed 23 July 2015.

E. Sevareid, 1970. Remarks during CBS News (29 December); quoted in T.L. Martin, 1973. Malice in blunderland. New York: McGraw-Hill, p. 4.
J.A. Tainter, 1990. The collapse of complex societies. Cambridge: Cambridge University Press.
B. Tomlinson, E. Blevis, B. Nardi, D.J. Patterson, M. Silberman, and Y. Pan, 2013. “Collapse informatics and practice: Theory, method, and design,” ACM Transactions on Computer-Human Interaction, volume 20, number 4, article number 24.
K. Toyama, 2011. “Technology as amplifier in international development,” iConference ’11: Proceedings of the 2011 iConference, pp. 75–82.
doi: http://dx.doi.org/10.1145/1940761.1940772, accessed 23 July 2015.
Editorial history
Received 15 July 2015; accepted 23 July 2015.
Copyright © 2015, First Monday.
Copyright © 2015, Barath Raghavan.

Abstraction, indirection, and Sevareid’s Law: Towards benign computing
by Barath Raghavan.
First Monday, Volume 20, Number 8 - 3 August 2015
https://firstmonday.org/ojs/index.php/fm/article/download/6120/4839
doi: http://dx.doi.org/10.5210/fm.v20i8.6120