First Monday

Finding time in a future Internet by Britt Paris



Abstract
In contemporary discourse, technological time is generally articulated as interface speed, human memory, or user attention. Philosophers of technology such as Bernard Stiegler and Paul Virilio, software studies scholars such as Wendy Chun and Alex Galloway, and sociologists such as Barbara Adam and Manuel Castells all suggest that the time of technology is bound up with the cultural, political, and economic structure of contemporary society. What these fields leave relatively undertheorized is how technology is built in relation to concepts of time. This paper supplies an answer, derived from interview and document data regarding a real-time chat and videoconferencing application named Flume, which runs on Named Data Networking (NDN), an NSF-funded Future Internet Architecture (FIA) project that is currently underway.

Contents

Introduction
Sociotechnical time
The past and future Internet
Temporal themes in NDN’s Flume application
Conclusions

 


 

Introduction

The synchronization of human life with the speed of technology has become a point of contention in contemporary technological discourse. Some worry that the human ability to think and relate to one another is being attenuated by machines (Roberts, 2015; Turkle, 2011). Others are concerned that habitual technology use correlates with psychological and developmental problems (Rock, 2013; Carr, 2011). On the other hand, for many, high speed Internet is a social good that we have come to think we cannot live without. Indeed, the Internet is the medium through which many people connect with information, one another, and important services (Eubanks, 2017; Noble, 2018; Pariser, 2011; Turkle, 2011). But there is also the question of how fast is too fast — as we can see in Kansas City’s inexpensive, world class, high speed Internet recently made possible by Google Fiber, which has confounded users; ordinary people have difficulty putting super high speed Internet to use with the tools currently available to them (Manjoo, 2013). At the same time, people in underserved areas are continually ignored by Internet Service Providers (ISPs), who are loath to build the new infrastructure necessary to bring Internet services up to speed with the rest of the country. With the Federal Communications Commission’s (FCC) late-2017 reversal of network neutrality rules, which had ensured that ISPs could not discriminate in the provision of services, those in underserved areas have little hope of receiving service that would allow them access to the increasingly connected world (Cohen, 2017). Others fear that as a result of the FCC’s reversal, the wealthy will be able to pay for Internet “fast lanes” while everyone else receives sub-optimal service. In this short meditation on its various contemporary effects, we see that Internet speed has political, social, and human implications, as well as implications for infrastructure.

If the speed of technology has such an effect on humans, it makes sense to uncover how the discourse of time is considered and implemented in Internet engineering projects. This project draws inspiration from information theorists Susan Leigh Star and Karen Ruhleder’s (1994) characterization of technological infrastructure not as a “thing” but as an ongoing, mutually-constitutive process that shapes social structures, institutions, and individuals that build and work with technical tools. This project combines assumptions from both STS and software studies to attempt to answer the following interrelated questions: What is the discourse of time and temporality in the development of technological projects? How does the concept of time manifest in technical projects?

This paper explores the aforementioned questions in the context of a real-time text chat application named Flume, which runs on Named Data Networking (NDN), an ongoing Future Internet Architecture (FIA) project that seeks to replace or provide an alternative to Internet Protocol (IP) address-based packet routing and transmission. I interviewed Flume project principals and analyzed related documents and code. The findings are categorized into three themes: the representation of time; the technical contexts that determine the way a program operates in relation to time; and how technology engineers envision the future of their constructions and the society in which these might exist.

 

++++++++++

Sociotechnical time

The compression of time and space is a persistent theme in multidisciplinary accounts of contemporary technology. Software studies theorists Wendy Chun and Alex Galloway each suggest that the human experience of time when engaged with technology is shaped by rhetoric that is inherently connected to epistemology and power relations (Chun, 2011; Galloway, 2004). In this vein, philosopher of technology Bernard Stiegler (2010) argues that contemporary society is increasingly bound to technologies designed to generate a temporal orientation that directs human consciousness towards an ungraspable present, rendering us uninterested in the past and incapable of envisioning a future, which has severe social, cultural, and political consequences. Stiegler’s declarations are supported by the work of social theorists and sociologists concentrating on the relationship between technological and social time. For example, John Urry’s (2000) argument that new ICTs generate new kinds of temporality characterized by instantaneity and unpredictable change, which support new, destabilizing sociotechnical relationships that replace the linear logic of clock time, can be understood in concert with Manuel Castells’ concept of “timeless time” (Castells, 1996). Anthony Giddens’ (1991) notion that the rhythms of life are faster and that social and cultural change is accelerating is in tune with Helga Nowotny’s (1994) declaration that as acceleration has increased, people feel more time pressure and find it increasingly challenging to make time for reflection and imagination. The interdisciplinary field of time studies has explored the notion of technologically-induced temporal disorientation extensively (Hassan and Purser, 2007; Southerton and Tomlinson, 2005; Turkle, 2011; Wajcman, 2015).

Cognitive psychology and related fields that examine technology use contribute to user experience engineering by focusing on human-computer interaction. In these fields, time is a necessary variable of attention and retention of information in technologically-mediated settings (Donders, 1969; Hick, 1952; Longstreth, et al., 1985; Luce, 1986; Martín, 2009; Norwich, 1993; Shannon, 1948). Many contemporary studies in this realm claim users prefer faster interface speeds (Brutlag, 2009; Bhatti, et al., 2000; Galletta, et al., 2004; Nah, 2003), which seems to provide justification for the ways in which many user-facing applications are designed to be responsive, or to operate almost instantaneously.

User experience research focuses on rendering phenomenological modes of the experience of time into formalized, workable units. There are many other formal modes of the “representation of time” that are useful to explore in describing the discourse of technical time — that is, how time is described as a concept, a process, and a reified thing that one can represent cleanly and efficiently to ensure that one’s projects function in a desirable way.

 

++++++++++

The past and future Internet

Before discussing in detail what these future Internet projects entail, it is useful to briefly describe how this paper regards the Internet. The Internet is a meta-network of interconnected networks that communicate with a common protocol suite — Transmission Control Protocol/Internet Protocol (TCP/IP) — designed to connect any two networks. IP determines to and from where packets flow, specified by IP addresses. TCP manages the transmission of packets between end points and verifies that packets are received when they are requested (Blaauw and Brooks, 1997). The TCP/IP layer is often called the networking layer, or Layer 3 in the International Organization for Standardization’s Open Systems Interconnection (OSI) reference model (Zimmerman, 1980). The networking layer connects the lower level hardware (flows of voltage differentials, which travel along the physical infrastructure of cables and wires) and the application layers (where user-facing applications are allowed to access network services). Once two networks are interconnected, end-to-end communication with TCP/IP is enabled, so that any node on the Internet can communicate with any other regardless of its physical location or the network it is on. Many technical and policy scholars contend that the end-to-end principle, developed by Saltzer, et al. (1984), enabled the openness and expansive growth that allowed the Internet to become the global communication system it is at present (Clark, et al., 2005; Gillespie, 2006; Lemley and Lessig, 2000) [1].

NDN looks to improve upon TCP/IP’s reliance on location-based addresses for routing packets. TCP/IP’s address-based method works well for simple information transfer, but the increasing popularity of streaming content, mobile applications, and Internet of Things (IoT) devices presents issues for TCP/IP. In NDN’s technical and outward facing documents, there is a notion that the architecture will allow data or content to be distributed between stakeholders “faster” and “more efficiently” than in IP. These documents go on to explain how new speeds and efficiencies are achieved, in layers of technical detail (Jacobson, et al., 2009; Zhang, 2010) [2]. These are the attributes most commonly picked up on in the popular press, in which the affordances of these projects get broken down into discussions of developing an architecture that is “faster”, “more efficient”, or that allows better quality content streaming than today’s Internet (Talbot, 2013; Brown, 2015; Bauman, 2017). As a different speed, referred to in varying terms, is lauded as an advantage of the new architecture, it makes sense to interrogate more rigorously what these time-laden terms mean in practice.

What is NDN?

To further contextualize the findings presented below, it is first useful to overview the NDN protocol and the organization that supports it. As the Internet moved from Web 1.0 to 2.0, the Future Internet Design (FIND) projects, funded by the NSF starting in 2006, were the first generation of future Internet projects. NDN is a Future Internet Architecture (FIA) project, funded from 2010 to 2016 by the NSF in the second and third rounds of future Internet funding (National Science Foundation, 2011). (See Figure 1 for the history of NSF-funded future Internet work.) As the funding cycle came to an end in 2016, NDN began scrambling to find its place going forward, marking a new phase in the project. New funding opportunities have arisen from NDN’s contract with the U.S. Defense Advanced Research Projects Agency’s (DARPA) Secure Handhelds on Assured Resilient networks at the tactical Edge (SHARE) program.

 

Figure 1: NDN’s relationship with other FIAs (Paris, forthcoming).

 

NDN was inspired by Van Jacobson’s notion of information-centric networking (ICN) (Jacobson, 2010). There are many types of ICN, which generally involve a Layer 3 networking protocol designed to allow data-centric, location-independent communication. NDN, a research endeavor, is the most well-known, open, and outward facing version of ICN. Indeed, there are eight primary NDN research sites in the U.S., and 21 sites total across the world working on NDN (NDN Project, 2018). Content Centric Networking (CCN), a close relative of NDN being developed at Xerox PARC, is another form of ICN. Both NDN and CCN look to route data based on its content name, rather than its location or address as under IP (L. Zhang, et al., 2010).

According to NDN researchers, in using NDN, data access becomes independent from IP address or location, enabling a more flexible, secure, and efficient communication model. According to 40 published reports, Web pages, and videos on the NDN project, the technology has the potential to address many of the key problems faced by the Internet today, including content distribution, mobility, security, and scalability [3]. The NDN community prizes the security advantage provided by NDN’s cryptographic signing of each piece of named data.

In order to initiate data transfer and routing, a data consumer (a user or application) must request a piece of data by issuing an interest packet with the name of the data. The network finds the closest copy of this data and sends it back along the path from which the interest packet was received. To be clear, NDN does not support data pushed unsolicited by the producer; the data consumer drives the process.
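
The following minimal sketch, written in C++ purely for illustration, models this consumer-driven exchange; the types and the in-memory “content store” are invented, not the NDN codebase or its client libraries. Nothing is transmitted until an interest names the data.

    // A conceptual model of NDN's consumer-driven exchange. The types and the
    // in-memory "content store" are invented for illustration; this is not the
    // NDN codebase or its client libraries.
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    struct Data {
        std::string name;     // hierarchical content name, e.g., "/flume/chat/42"
        std::string payload;  // the named content itself
    };

    // Stands in for named data cached somewhere in the network.
    std::map<std::string, Data> contentStore;

    // A producer publishes data under a name; it sits there until requested.
    void publish(const std::string& name, const std::string& payload) {
        contentStore[name] = Data{name, payload};
    }

    // A consumer drives communication by expressing an interest in a name.
    std::optional<Data> expressInterest(const std::string& name) {
        auto it = contentStore.find(name);
        if (it == contentStore.end()) return std::nullopt;  // no copy reachable
        return it->second;  // data returns along the interest's reverse path
    }

    int main() {
        publish("/flume/chat/42", "hello over NDN");
        if (auto d = expressInterest("/flume/chat/42"))
            std::cout << d->name << " -> " << d->payload << "\n";
        // Note the asymmetry: nothing moves until the consumer asks for a name;
        // there is no producer-initiated push in this model.
    }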

Flume as part of the NDN trajectory

NDN seeks to develop applications that will highlight its advantages to the wider public. Jeff Burke is the principal investigator at UCLA’s Center for Research in Engineering, Media and Performance (REMAP), which is charged with this task, while NDN researchers develop other applications on an ad hoc basis. Peter Gusev works with REMAP as the NDN project’s only paid application developer. Gusev and I started talking with regard to his work on the real-time videoconferencing (RTVC) application he developed in 2015 to run on an NDN network. In our early conversations, he began describing his work on Flume, an NDN application that improved upon code from the RTVC application. Gusev envisioned Flume as a text chat application with different possible groups or channels, like the collaborative work platform Slack. In addition, Flume would offer the capacity to perform real-time videoconferencing and to replay those streams after the fact. His idea was that the group chat and real-time audiovisual flows would be accessible through an interface that allows one to view both the replay of the live conference and the text chats.

 

++++++++++

Temporal themes in NDN’s Flume application

In a preliminary interview with Gusev, I mentioned that my project is an attempt to think through the nuances of temporality in applications built on new networking protocols. In response, Gusev noted that Flume is very time-sensitive and stated that “all I do these days is think about time” because of this project [4]. The following discussion is based on publicly available NDN documents, Flume project documents (including code), and recorded and annotated conversations with NDN participants such as Gusev and Burke. I describe data in the three main categories of technical discourse outlined in the literature review — temporal representation, technical time, and speculation on the future of the project itself.

Temporal representation

The first major conceptual theme that arose from my analysis was representation, or the ways in which time and temporality are translated into a malleable object that can be understood, agreed upon, and made operable in technical projects such as this one. Two phenomena, schematics and the interface, offered particularly powerful illustrations of how representation works in the NDN/Flume development context.

Focusing on the diagrammatic representation of time, Drucker (2009) reviewed a wide swath of theoretical work regarding the representation of temporal relations in computer interface designs. She found that despite a variety of possible configurations, time was generally considered to be linear and unidirectional in the design of these interfaces (Drucker, 2009). She determined that the primary elements of the designs she reviewed were the reference frame that structures the temporal relations and the visual vocabulary used to express those relations [5]. The reference frame can be imposed externally with regard to a specific objective time system, or may be determined internally in accordance with the logic of one or more of the time systems present. The vocabulary for describing these relations usually includes a mode of representing events (discrete, point-based moments with no duration) and intervals (events with duration) [6].

Relatedly, Fabio Schreiber’s (1994) “Is time a real time?” includes facets of representing time in computer systems — time primitives, topologies, references, bounds, structure, and metrics [7]. These categories are all based on equations and give dimension to Drucker’s notions of the reference frame and notational vocabulary. For example, Schreiber’s time primitives and time references are based in events and intervals, while his time topologies, bounds, structure, and metrics are indicative of an objective, imposed reference frame used in any given representation. One can note how Schreiber’s work serves as a precursor to the project of this paper. Schreiber’s text examines the temporal ontologies underlying the mathematical schemata that are necessary to make systems work. In my study, I am more interested in the discourse of time that concerns how time is made into a technical object. It is perhaps true that engineers generally understand time in terms of equations, but I am more interested in their descriptions of how these concepts of time work within the systems they build.
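
To make these primitives concrete, the following minimal sketch, written in C++ with hypothetical type names rather than anything drawn from Flume’s code, models the two basic notational units described above, point-based events and intervals with duration, positioned on an externally imposed reference frame measured in milliseconds.

    // Hypothetical types (not drawn from Flume's code) modelling point-based
    // events with no duration and intervals with duration, both placed on an
    // externally imposed reference frame measured in milliseconds.
    #include <chrono>
    #include <iostream>

    using Millis = std::chrono::milliseconds;

    struct Event {        // a discrete moment with no duration
        Millis at;        // position on the shared reference frame
    };

    struct Interval {     // an event with duration, e.g., a video stream
        Millis start;
        Millis end;
        Millis duration() const { return end - start; }
    };

    int main() {
        Event message{Millis{1000}};                   // a chat message at t = 1 s
        Interval stream{Millis{500}, Millis{90500}};   // a 90-second stream
        std::cout << "stream lasts " << stream.duration().count() << " ms\n";
        std::cout << "message falls inside stream: "
                  << (message.at >= stream.start && message.at <= stream.end) << "\n";
    }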

Schematics

In a meeting, Gusev shared Flume schematics that illustrate how packets are managed through the nodes of the application. (See Figure 2: Flume Application Class Diagram) The nodes shown in Figure 2 are C++ classes in the application code directory; the lines on the map show how the nodes interact.

 

Figure 2: Flume application class diagram (Paris, forthcoming).

 

The map in Figure 2 shows how the user-facing application functions as data passes through the nodes. An important juncture is the Interest Control node, which indicates the number of interest packets a data consumer can issue at once. The size of this pipeline is crucial for a real-time stream to function over NDN. NDN forwards packets based on a pull model of packet communication, meaning the consumer must issue requests for data before the producer will send them (Zhang, et al., 2014). In a real-time application, the data consumer must request data before the data is generated by the producer, which presents difficulties when building over NDN. Once the data is requested and received, it is reassembled in the Buffer node, where minimal audiovisual latency is achieved using multiple classes — LatencyControl, PlaybackQueue, and the StateMachine. When a packet arrives, the buffer passes it to the state machine, which determines the video frame to which the data belongs and assembles the audiovisual frame. There is then a pointer between the data and the frame it belongs to when it arrives. The playback queue decodes the data further into a stream of assembled frames. This process cycles through each node of the map concurrently and ensures the stream is played in order with the lowest possible latency, so that it appears to function close to real time. This technical description shows how applications are written to order data in NDN so as to have the user-facing appearance of a real-time video stream.
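
A minimal sketch of this reassembly logic, with invented class and field names rather than Flume’s actual C++ classes, shows how out-of-order packets can be held until their frame is complete and then released strictly in sequence so the stream appears continuous.

    // Invented class and field names, not Flume's actual classes: out-of-order
    // segments are keyed by frame number and released to playback strictly in
    // sequence, so the stream appears continuous to the user.
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct FrameBuffer {
        uint64_t nextToPlay = 0;                                // next frame owed to playback
        std::map<uint64_t, std::vector<std::string>> pending;   // frame number -> segments

        // Called whenever a data packet arrives, possibly out of order.
        void onSegment(uint64_t frameNo, std::string segment) {
            pending[frameNo].push_back(std::move(segment));
        }

        // Release frames in order as soon as the next expected frame is complete.
        void drain(std::size_t segmentsPerFrame) {
            while (pending.count(nextToPlay) &&
                   pending[nextToPlay].size() == segmentsPerFrame) {
                std::cout << "play frame " << nextToPlay << "\n";
                pending.erase(nextToPlay++);
            }
        }
    };

    int main() {
        FrameBuffer buffer;
        buffer.onSegment(1, "b0"); buffer.onSegment(0, "a0");   // arrives out of order
        buffer.onSegment(0, "a1"); buffer.onSegment(1, "b1");
        buffer.drain(2);                                        // plays frames 0 and 1 in order
    }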

Interface

Gusev reported that he thought a lot about user interface. Gusev said that, in the end, he decided to use a timeline (See Figures 3 and 4) for Flume’s interface because this mode of representing time in a linear way, as a simple trajectory from past to future, is found in most leading text-based chat applications. He reasoned that this mode of representing the data would be easier for users to understand and intuitively use. He explained:

The timeline is just something that people are used to, I think. They see their news feeds in Facebook or they see their text chats in Slack. It’s always, basically, a timeline, a chronological timeline, right? You can always scroll back or forward to view previously received information. [8]

 

Figure 3: Flume — conceptualization of a uniform v. non-uniform timeline [9].

 

Figure 3 presents what Gusev called a “uniform timeline”, where time’s passage between messages correlates to the space between those messages. The distance between messages five minutes apart is shorter than the distance between messages 30 minutes apart [10]. Gusev’s timeline in Figure 3 is “uniform” with relation to the standardized way of imposing time onto systems in terms of minutes, hours, and days. Interestingly, the descriptor uniform correlates directly with the notion that time passes in a uniform, even way.

However, Gusev noted in an unpublished specifications paper that using a uniform timeline is risky because it is hard to represent neatly on a screen and when events of different duration are involved, it could become confusing to users [11]. Instead, messages and video streams are usually timestamped and bundled together. In other words, they are presented in what Gusev called a “non-uniform timeline” seen in Figure 4.

 

Figure 4: Incompatibility of uniform and non-uniform timeline [12].

 

Non-uniform timelines work great for textual data because the actual time interval will be always different because of the difference in the time zone, in the user interface, say a text chat you see a message from today, a message from yesterday. They are adjacent. The gap between messages that came five minutes before or one hour before, it’s represented the same because makes more sense to see it all upfront. [13]

Here we can see an interesting instance of the decision to further impose a standardized time structure onto the different types of time that the Flume interface attempts to represent. Moreover, time itself is still considered uniform (it still passes at the same rate), but here messages that are five and 15 minutes apart are shown with the same distance between them [14].
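
The difference between the two layouts can be sketched in a few lines of C++; the function names and pixel values below are hypothetical, but the contrast is the one Gusev describes: uniform position is proportional to elapsed time, while non-uniform position depends only on message order.

    // Hypothetical layout functions and values contrasting the two designs:
    // a uniform timeline positions a message proportionally to elapsed time,
    // while a non-uniform timeline spaces adjacent messages evenly regardless
    // of the gap between them.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    double uniformY(double minutesElapsed, double pixelsPerMinute) {
        return minutesElapsed * pixelsPerMinute;   // spacing mirrors time's passage
    }

    double nonUniformY(std::size_t index, double rowHeight) {
        return index * rowHeight;                  // spacing mirrors message order only
    }

    int main() {
        // Messages arriving zero, five, ten, and forty minutes into a chat.
        std::vector<double> arrivalMinutes = {0, 5, 10, 40};
        for (std::size_t i = 0; i < arrivalMinutes.size(); ++i)
            std::cout << "message " << i
                      << "  uniform y = " << uniformY(arrivalMinutes[i], 2.0)
                      << "  non-uniform y = " << nonUniformY(i, 20.0) << "\n";
    }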

Gusev noted that in his conceptualization of the interface, he thought more about the characteristics of the data, how it was named, and forwarding strategies NDN was using than he thought about the user interface or how users experience the passage of time. He said that he specifically thought about the difference in the ways that discrete data — say an event-based message — and continuous data — real-time audio or video streams — would be represented. Gusev said:

This idea really got me, that [with an application’s data namespace] you can issue interest for data and map the data received to some dimensions, to some coordinate system, and then retrieve whatever data is there in this coordinate system, right? So on one side, there is interest, on the other side data, and both sides will map them through names to some third abstract coordinate system. In case of a Flume application, this would be the coordinate system of a timeline.

For Flume, the idea was just to map data that has some continuity, right, that has some duration. So the step was to map this information under the same timeline.

Then when we have interest that are mapped to coordinate system of this timeline, we just issue interest in this name space in these coordinates and we get whatever data was published there and has the same coordinates we’re asking for, right? In our case ... the Flume case, it will be either text messages or samples of video or audio. [15]
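
A minimal sketch, again with an invented name layout rather than Flume’s actual namespace, illustrates this mapping of names onto the coordinate system of a timeline: published items are indexed by a millisecond coordinate, and a request for a span of coordinates returns whatever was published there.

    // An invented name layout, "/flume/<channel>/<kind>/<milliseconds>", rather
    // than Flume's actual namespace: published items are indexed by a timeline
    // coordinate, and an interest in a span of coordinates returns whatever was
    // published within it.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>

    // Published content indexed by its timeline coordinate in milliseconds.
    using Timeline = std::map<uint64_t, std::string>;

    std::string makeName(const std::string& channel, const std::string& kind,
                         uint64_t ms) {
        return "/flume/" + channel + "/" + kind + "/" + std::to_string(ms);
    }

    // "Express interest" in every coordinate within [from, to] on the timeline.
    void fetchRange(const Timeline& published, uint64_t from, uint64_t to) {
        for (auto it = published.lower_bound(from);
             it != published.end() && it->first <= to; ++it)
            std::cout << makeName("room1", "chat", it->first)
                      << " -> " << it->second << "\n";
    }

    int main() {
        Timeline published = {{1000, "hi"}, {61000, "still there?"}, {62500, "yes"}};
        fetchRange(published, 60000, 120000);   // retrieve only the second minute
    }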

However, audio or video streams of continuous data cannot be easily or neatly integrated into non-uniform timelines, as seen in Figure 5. Gusev suggested that while one can indicate start and end points of the stream, it is hard to represent the continuity of what happens between those start and end points, especially, he noted, when there are multiple or simultaneous streams.

 

Figure 5: Merging continuous streams with instant events [16].

 

Figure 5 shows that Gusev’s final decision was to integrate elements of the uniform and non-uniform timelines into one viewer and to introduce a viewfinder, which shows event-based, time-stamped messages and documents and operates on the uniform timeline, in which “one can imagine the viewfinder sliding along the uniform time axis in both directions” [17].

In the combined uniform/non-uniform timeline, the upper part contains the non-uniform timeline, in which the messages, shared documents, and audiovisual data streams are shown as points. The lower part incorporates the viewfinder, which allows the user to scrub, or hover over, the timeline to access sample frames of the audiovisual streams in order. The viewfinder only pops up if the non-uniform timeline “encounters” a stream of data. For example, if no one was streaming in a section of a Flume video conference, the user will not encounter the viewfinder there. However, “with live streams the viewfinder is perennially active and represents the point of now and shows currently streamed content” [18].
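
A small sketch, under the assumption of a one-second keyframe spacing and invented structure names, shows the viewfinder behavior described here: scrubbing a position on the uniform axis either returns a sample frame from a stream that covers that moment or reports that no stream was active there.

    // Assumed structures and a one-second keyframe spacing, for illustration
    // only: scrubbing a position on the uniform time axis either returns a
    // sample frame of a stream covering that moment, or reports that no stream
    // was active there (so the viewfinder stays hidden).
    #include <cstdint>
    #include <iostream>
    #include <optional>
    #include <vector>

    struct Stream { uint64_t startMs, endMs; };

    // Returns the timestamp of the sample frame to show at scrub position t,
    // or nothing if no stream was active at t.
    std::optional<uint64_t> scrub(const std::vector<Stream>& streams, uint64_t t) {
        for (const auto& s : streams)
            if (t >= s.startMs && t <= s.endMs)
                return s.startMs + ((t - s.startMs) / 1000) * 1000;  // nearest keyframe
        return std::nullopt;
    }

    int main() {
        std::vector<Stream> streams = {{10000, 70000}};   // one 60-second stream
        for (uint64_t t : {5000ULL, 42300ULL}) {
            if (auto frame = scrub(streams, t))
                std::cout << "t = " << t << ": show frame at " << *frame << " ms\n";
            else
                std::cout << "t = " << t << ": no stream, viewfinder hidden\n";
        }
    }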

Solving technical problems

The second major theme related to time and temporality in the Flume case is how project participants approach technical problem-solving. This was demonstrated especially in their creation and uses of code, of hardware, and of protocols.

The first level of temporal organization in engineering occurs at the level of code. Compilers translate programming language code into machine-executable instructions by breaking code apart into tokens and transforming them into data structures the machine can understand, which affects the run-time of the program [19]. For example, C++ is a programming language based on objects and procedures, called functions, that compiles directly to machine code. Because C++ runs without a virtual machine or interpreter, it typically has a faster run-time than Java or C# (Stroustrup, 1994).

Within the hardware of the machine is a system clock, a programmable timer that vibrates, pulses, or ticks to measure system time at regular intervals [20]. Applications use these hardware clocks to allocate time stamps to content. System time is often set to Coordinated Universal Time (UTC), the successor to Greenwich Mean Time. CPU process time, by contrast, is a count of the total CPU time consumed by a computational process [21].

Network Time Protocol (NTP) addresses the difficulties of incorporating system time into a networked scenario. One of the oldest protocols still in use today, NTP uses a specialized algorithm that synchronizes all participating computers within a network to the same Coordinated Universal Time (UTC) in order to “maintain accuracy and robustness” in transferring data packets among networked computer systems that transmit and receive IP-timestamped data [22].
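
The core of NTP’s synchronization can be illustrated with the standard clock-offset and round-trip-delay calculation associated with the protocol (Mills, 1992); the four timestamp values below are invented for the sake of a worked example.

    // A worked example of NTP's clock-offset and round-trip-delay calculation
    // (Mills, 1992); the four timestamps are invented for illustration.
    #include <iostream>

    int main() {
        // All values in milliseconds since some shared epoch.
        double t1 = 100.0;   // client sends request (client clock)
        double t2 = 160.0;   // server receives request (server clock)
        double t3 = 162.0;   // server sends reply (server clock)
        double t4 = 120.0;   // client receives reply (client clock)

        double offset = ((t2 - t1) + (t3 - t4)) / 2.0;   // estimated client clock offset
        double delay  = (t4 - t1) - (t3 - t2);           // round-trip network delay

        std::cout << "estimated offset: " << offset << " ms\n";   // 51 ms
        std::cout << "round-trip delay: " << delay  << " ms\n";   // 18 ms
    }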

Bridging technical time and the ways humans conceive of it, articulations of latency, efficiency, and speed are most often used in public-facing descriptions of the technological affordances of many components of Internet infrastructure. In computer science and engineering projects, latency generally means the time elapsed from the source sending an instruction to the destination executing it. Efficiency characterizes an optimal ratio between the latency of a particular technical process and the computational resources, such as bandwidth, necessary to execute an action. Speed is a desirable feature of user experience that leads to more user engagement and better user retention (Butterfield, et al., 2016).

The findings below show how those outward-facing descriptions reconcile with the ways in which technologists use these terms, and how technological components such as code, hardware, and protocols contribute to the technical function of the system.

Programming language

Based on previous research and discussions with the NDN team, it was clear that the NDN protocol requires that streaming data be named in a particular way to impart the user experience of a sequence of video. One issue Gusev’s RTVC solved relatively well is the inclusion of a system to assign numbers to packet names that are agreed upon when a real-time stream between a data producer and consumer is initiated. Gusev’s RTVC also developed a forwarding strategy that ensured that packets would remain on the producer’s side until they were called for by the consumer. In addition, Gusev stated that both RTVC and Flume are built in C++ because it compiles directly to machine code and runs without a virtual machine or interpreter, requires less CPU processing power, and as such lowers latency.
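
A short sketch, with an invented name prefix and pipeline size, illustrates the kind of sequence-numbered naming and interest pipelining described here; it is not the RTVC code itself.

    // An invented prefix and pipeline size illustrating sequence-numbered names
    // and a fixed window of outstanding interests; this is not the RTVC code.
    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <iostream>
    #include <string>

    std::string sampleName(const std::string& prefix, uint64_t seq) {
        return prefix + "/" + std::to_string(seq);   // e.g., "/flume/alice/video/17"
    }

    int main() {
        const std::string prefix = "/flume/alice/video";   // agreed at stream start
        const std::size_t pipelineSize = 5;                // cf. the Interest Control node
        std::deque<std::string> outstanding;

        // Keep a window of interests "in flight" ahead of the producer.
        for (uint64_t seq = 0; seq < 8; ++seq) {
            outstanding.push_back(sampleName(prefix, seq));
            if (outstanding.size() > pipelineSize) {
                std::cout << "data received for " << outstanding.front() << "\n";
                outstanding.pop_front();
            }
        }
        std::cout << outstanding.size() << " interests still outstanding\n";
    }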

Hardware

In order to configure the machines on an NDN network so that they function optimally with real-time applications, a few things must occur. First, the data has to be timestamped. Gusev noted that each time a real-time application is initiated over NDN, it starts a clock that measures time in milliseconds.

For this project and for all the projects I work on here, my clock runs in milliseconds. I am not interested in microsecond level, it’s really overkill — not necessary. People theoretically using the application don’t notice milliseconds, and definitely not microseconds [23].

Every time a packet is received, the application queries the internal clock and stamps the packet with the time the computer’s internal clock indicates. This allows packets to be timestamped or sequence-stamped so that they can be arranged into a real-time stream.
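
A minimal sketch of this kind of millisecond-resolution stamping, using standard C++ facilities and names that are not drawn from the Flume/RTVC codebase, might look as follows.

    // Standard C++ facilities and invented names (not the Flume/RTVC codebase):
    // a clock starts when the application comes up, and each arriving packet is
    // stamped with the milliseconds elapsed since that start.
    #include <chrono>
    #include <cstdint>
    #include <iostream>

    using Clock = std::chrono::steady_clock;

    struct AppClock {
        Clock::time_point start = Clock::now();   // set when the application starts

        uint64_t nowMs() const {
            return std::chrono::duration_cast<std::chrono::milliseconds>(
                       Clock::now() - start).count();
        }
    };

    struct StampedPacket {
        uint64_t receivedAtMs;   // stamped on arrival, used to order the stream
        // payload omitted
    };

    int main() {
        AppClock clock;
        StampedPacket packet{clock.nowMs()};
        std::cout << "packet stamped at " << packet.receivedAtMs << " ms after start\n";
    }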

In testing of applications, Gusev stated, “We absolutely measure the demand on CPU process time. But it’s not hardware-accelerated like with Skype” [24], meaning that the Skype application uses hardware-accelerated encoding and decoding of the video stream; in other words, it offloads some of that work from the application software to dedicated hardware to improve performance. He added that while NDN is only using software-based encoding at the moment, using similar hardware-accelerated processes might be possible down the road. In addition, NDN data names grow quite large and require more bytes, so NDN overhead for network packets constitutes 15–30 percent of the payload — which Gusev emphasized is very inefficient network-wise. He noted that “We are just trying to run a simple video on NDN and that already has enormous demands on CPU in terms of bandwidth of the computer” [25]. Processing is generally slower than it would be with something like Skype. While process time is not as important with Flume, his quote does show the internal issues NDN engineers face in competing with IP-based computation.

Protocols

The hyperbolic routing strategy is particular to NDN (Krioukov, et al., 2010). In talking with Gusev, this concept stood out as one of the aspects of NDN that conceptualizes the passage of time as information that is then used to order the transmission of data packets in a way that promotes network efficiency and communication speed. It is interesting to the case at hand because it is at once framed as more efficient than the best-route strategies used with TCP/IP today, and points to the way that the concept of efficiency is rendered into a spatialized function of time and space that relies on geometric coordinates and time discretized into time stamps. The best-route strategy is predicated on the end-to-end principle, in which the destination IP address is needed to route and verify receipt of packets, as in most of the Internet today. Hyperbolic routing instead directs packets with regard to two inputs — time stamps and geocoordinates. It uses these to calculate the network’s resource costs of different routes; it then assigns the cheapest one. Gusev said, “This is a number that takes into account the distance among geocoordinates and speed and routes data accordingly” [26]. He mentioned hyperbolic routing with regard to a new augmented reality project he is working on that, like Flume, runs on the RTVC codebase. In the original 2015 tests of RTVC, from which Flume derived its routing strategy, the testbed operator John DeHart found that the hyperbolic routing strategy was causing problems for the application — or rather, the application caused problems for the routing strategy, which could not read the time stamps. The issue was fixed by NDN’s networking architects, and hyperbolic routing for the whole NDN project now runs more efficiently.
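
The idea behind greedy forwarding over hyperbolic coordinates can be sketched briefly; the following is a textbook rendering of the distance calculation in the spirit of Krioukov, et al. (2010), not NDN’s forwarder code, and the coordinates are invented.

    // A textbook rendering of greedy forwarding over hyperbolic coordinates, in
    // the spirit of Krioukov, et al. (2010); this is not NDN's forwarder code,
    // and the coordinates below are invented.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Coord { double r, theta; };   // polar coordinates in the hyperbolic plane

    // Hyperbolic distance between two points given in polar coordinates.
    double hypDist(const Coord& a, const Coord& b) {
        double arg = std::cosh(a.r) * std::cosh(b.r) -
                     std::sinh(a.r) * std::sinh(b.r) * std::cos(a.theta - b.theta);
        return std::acosh(std::max(1.0, arg));   // guard against rounding below 1
    }

    // Hand the packet to whichever neighbor is hyperbolically closest to the destination.
    std::size_t nextHop(const std::vector<Coord>& neighbors, const Coord& destination) {
        std::size_t best = 0;
        for (std::size_t i = 1; i < neighbors.size(); ++i)
            if (hypDist(neighbors[i], destination) < hypDist(neighbors[best], destination))
                best = i;
        return best;
    }

    int main() {
        std::vector<Coord> neighbors = {{2.0, 0.3}, {3.5, 1.2}, {1.0, 2.9}};
        Coord destination{4.0, 1.1};
        std::cout << "forward via neighbor " << nextHop(neighbors, destination) << "\n";
    }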

Flume, and other applications Gusev has built over NDN, are intended to work with intermittent or unstable connectivity. Gusev noted that when developing NDN applications, he must first ensure that his application’s algorithms work properly with the network, as is the normal agenda of a research project. Generally, he indicated that he is interested in optimizing applications, but, as the research project dictates, he is more interested in simply getting the applications to work.

“Time is such a problem for me,” John DeHart, the NDN testbed administrator, said. “Getting all the equipment synched to run so that we can even see how things are running is incredibly difficult. NTP is where we have the most problems” [27]. It is difficult to know how the network functions with new designs if the machines are not synched.

To be clear, NTP is most useful for running tests, because in the wild computers are always out of sync, and at different locations in different time zones across the globe. Gusev clarified, “With Flume and with RTC, we never used NTP to run the application because it’s not really a thing in the wild.” Gusev noted that his NDN-based projects use NTP for running tests, to get metrics. “For example, we might use it if we want to run a test to know whether the actual latency between producers and consumers, whether it’s okay or whether it needs to be optimized” [28].

Technological continuity

The third principal theme that emerged from the analysis of Flume (and NDN more broadly) was more of an attitude or stance among project participants, which can be thought of as facing the future. This surfaced in conjunction with discussions on the DARPA SHARE proposal funding and Cisco’s acquisition of CCN, NDN’s conceptual cousin developed in tandem with NDN at Xerox PARC.

Sociologist Michel Callon (1980) overviewed how social and political interactions within scientific or technical projects shape their struggle towards existence, in a study of technological innovation around the electric vehicle in France in the 1970s.

(a) considerable variety in the technological options that are available, and close links between technical choices and sociopolitical choices; (b) considerable diversity in points of view put forward by the numerous social groups involved; (c) an initial lack of determination of the market demand, which is built up at the same time as the equipment designed to meet it. These innovations lead to the emergence of new political actors who, by fighting to impose their technical choices, are inevitably led to define the needs to be satisfied, the forms of social organization to promote, and the actions to be undertaken. [29]

Star and Ruhleder argue against neat, totalizing models of the trajectories of technical projects, saying that “traditional methodologies for systems development and deployment assume that tasks to be automated are well-structured, the domain well-understood, and that system requirements can be determined by formal, a priori needs-assessment” [30]. Instead, they argue, the sociopolitical aspects of technology development are nearly always messier, with more overlapping stages and disagreement than is present in most models. They argue that technological development is always an ongoing negotiation among actors with conflicting goals — institutions, individuals and technical instruments that all function within the existing sociotechnical structure of society.

Facing the future

At the NDN Community Meeting in March 2017, Gusev’s presentation invited many questions from the audience. First, NDN PI Lixia Zhang asked Gusev, who is NDN’s only paid developer, if this is the right direction to go — is it the best thing to do to imagine a novel application and build it on NDN to show affordances of NDN? In what ways does this highlight the security features of NDN? (Paris, forthcoming).

Gusev’s philosophy is to offer the market an application with novel functionality. In the case of Flume, it is something that can be recognized as a better alternative to Slack. With no exciting advantages the public can easily grasp, it seems that at present NDN should develop an application that features something that would show that NDN is worth investing in.

Gusev worked on developing this project because it imagines a new use of NDN that is exciting and, he hopes, something that would encourage users to engage with it. In doing so, he points to the example of e-mail: when e-mail was first introduced, people did not understand or appreciate how convenient and necessary it would become. Gusev maintained, “I think it’s important to make a killer app — something people have never seen before” [31].

The correctness of Gusev’s instincts cannot be determined in a vacuum. The expected live date for Flume has come and gone, and as of now the project has been suspended until further funding can be found for it. According to Jeff Burke, who oversees Gusev’s work, funding is a huge issue for application development. As previously mentioned, the money received from the NSF for the whole of the NDN project has run out. Burke said, “We are at an inflection point between basic research and development of viable projects because the funding is low with relation to the ambitious goals of the project overall” [32]. Moreover, invoking notions of experience and expectation, he noted that Gusev’s Flume is particularly difficult to justify within NDN, a technically complex project that is, at the same time, woefully underfunded. The academic structure of the project makes it difficult to procure and dedicate the time and money needed to overcome many of the technical challenges.

Burke noted:

Flume is not working on the current Internet. Users, even users within the NDN community, have high expectations, which is unfortunate, because this is the group of people who would be ideally the most forgiving with the limitations of the applications [33].

He suggested a bright side for Gusev’s Flume: it is a solution unique to NDN that allows live and historical playback, which is difficult over IP, and it highlights NDN’s ability to operationalize historical data with a novel but useful application. Burke indicated that industry partners might assist with the final funding push for Flume and related application projects on NDN in the future. With regard to how application development will continue in the organization, Gusev stated that he expected NDN would need to build a killer app to be competitive, and that whatever that killer app might be, it would probably need to demonstrate NDN’s advantages:

The research will continue. It’s interesting work and people will continue to get funding for the research, in the same way probably that all experimental engineering research happens. But if it’s ever going to be something that scales up, application development needs some corporate funding. I don’t think the Cisco merger of CCN will really help with that in the near future, but who knows maybe I am wrong.

I think we are still about five years away from applications for the earliest adopters. What we really need is a killer app. The best way forward would be an app that has NDN bundled into it. One app that you can download that would run on NDN and that would be bundled into the application. Ideally it would be one that really highlights the features of NDN, something secure.

Edge computing is very attractive to consumers. Maybe a secure messaging app, maybe something else focused on data at the edge ... could be something that the military uses. It could be something that does something with big data sets in physics ... . [34]

Invoked by Gusev as a future NDN application opportunity, edge computing is a term used to describe performing data processing, transmission, and checking nearer to the application’s end node, where the data is generated. In 2017, the term edge computing was a prominent tech industry buzzword, much as big data and the Internet of Things have been in recent years. The research firm International Data Corporation (IDC) defines edge computing as a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet” (Quinn, 2017). Currently, the most common use of edge computing is processing data coming to and from IoT devices. These IoT edge devices, such as temperature sensors in hospital labs, agricultural buildings, and homes, package sensor data and transmit it to a data center or “cloud” for processing, for example to automate temperature regulation. At the same time, the end node or edge application locally processes some of the incoming data, thus decreasing both the traffic to the central data center or cloud service and the processing latency, and improving system efficiency. NDN’s unique naming schema and routing strategies allow applications to directly call for and store data, making it compatible with providing “data at the edge”, as Gusev terms it. Gusev’s mention of “data at the edge” as an opportunity for future NDN applications is a good example of the ways in which he, as the translator between what NDN does best and how it wants to be seen, must tap into trends in order to frame the affordances of the NDN protocol.

Looking ahead, Burke says that in addition to the DARPA SHARE funding, which takes advantage of data at the edge, NDN is pursuing other non-FIA grants from the NSF and partnerships with industrial infrastructure agents, such as Huawei and Cisco. He stated that regardless of NDN’s funding situation, REMAP will continue to push the envelope in developing new real-time audiovisual and sensing applications, and will continue to look to NDN and other novel networking solutions to accommodate these goals. He maintained that at present, NDN is theoretically the most promising networking schema for the types of real-time audiovisual applications REMAP is most interested in developing.

 

++++++++++

Conclusions

The overview of Flume and its development illustrates the complex features of the discourse of time within the project. These features include computer science and engineering conventions regarding the measurement and use of time at the hardware, code, and protocol levels, along with assumptions about how notions of time, speed, and efficiency are interpreted by users.

Perhaps most interesting to this project is how the code that actually drives functions links up with schematics, and how the engineer’s descriptions of this process signal the discourse of time in the project at hand. These elements include how Gusev described thinking of time first in terms of data and efficiency, and second in terms of the user’s experience of time and speed at the interface. The code map showed the juncture at which engineers structure the function of applications in relation to the structure of the data and the function of the network. The code map, along with the interface schematics, demonstrates that new protocols require user-facing applications to be built differently to give the illusion of a real-time stream. The findings shed light on how engineers build with time as an objectified concept that is communicated both in the function of the application and in the schematics that describe to the rest of the development team how the application is supposed to function.

Attending to Flume’s schematics that represent the way that time is supposed to be managed within the application is very much in tune with well-worn concepts of the standardized representation of time for users at the interface. This speaks to a discourse of the organization of time-sensitive functions that is employed by application developers and network architects to troubleshoot and navigate their work.

The relationship between speed, latency, and efficiency in real-time applications running on NDN is thought-provoking in this case. Lower latency is a concept that Gusev was attending to when choosing languages, using the hyperbolic routing strategy and testing the application. The latency of Flume, and indeed of NDN applications such as RTVC, appears to be a core problem that the NDN project as a whole has not fully worked out. With Flume, latency is difficult to control as a result of the network architecture of NDN — it is theoretically less complex, as Zhang noted, but at the same time there are still many problems to work out in terms of productively developing applications that would take advantage of this decreased complexity. In my discussions it became clear that to achieve its stated goals, the technology requires more work and resources to allow it to deliver on these claims. Or perhaps this tension is evidence of an inability of the architecture, configured by the present NDN community of practice, to receive the attention it needs to live up to the project’s public-facing promises.

NDN’s future is not assured. Flume, for now, has been put on hold — primarily for reasons of funding, logistics and lack of communication of the importance of the application. However, Flume is built on top of what seems to be the most developed future Internet architecture — NDN. NDN claims to focus on applications to attempt to prove the protocol’s affordances to the public, but there is still a question of the types of applications that should be developed for this goal and how to commit appropriate resources toward this end. NDN offers significant affordances in the realm of security and privacy. Demonstrating and drawing public attention to these features is crucially important to the story of NDN. Highlighting privacy and security offers avenues to link NDN to more palpable contemporary events including data breaches, information leaks, and recent FCC rulings thwarting data protection, privacy, and Internet neutrality.

 

About the author

Britt Paris is an information studies scholar whose work focuses on understanding how groups — from technologists to grassroots organizations — understand, build, and use Internet infrastructure according to their value systems. She recently finished her dissertation in the Department of Information Studies at UCLA, entitled “Time constructs: The origins of a future Internet” that employed document analysis and fieldwork to investigate the trajectory of NSF-funded Future Internet Architectures, which are technical research organizations building new global networking protocols that are intended to challenge many of the features of the longstanding Internet Protocol (IP). She has published work on Internet infrastructure projects, search applications, digital labor, and civic data analyzed through the lenses of critical, feminist, and postcolonial theory, and philosophy of technology. Paris previously earned her M.A. in media studies from the New School.
E-mail: parisb [at] ucla [dot] edu

 

Acknowledgements

Some of the findings presented in this paper also appear in my currently (August 2018) unpublished dissertation. I extend thanks to my dissertation committee for comments and notes on the draft. I would like to thank Peter Gusev and other members of the NDN team for graciously granting me their time.

 

Notes

1. Gillespie (2006) discusses the sociotechnical trajectory of the concept of end-to-end to describe how it began as a design feature but has been reified in technical discourse and practice.

2. These are just two early examples of many that came after.

3. Zhang, et al., 2010; Shilton, et al., 2014; Jacobson, et al., 2014; B. Zhang, 2015; L. Zhang, 2016, p. 201.

4. P. Gusev, personal communication, 9 March 2017; Paris, forthcoming.

5. Drucker, 2009, p. 49.

6. Drucker, 2009, pp. 49–50.

7. Schreiber, 1994, pp. 12–16.

8. P. Gusev, personal communication, 9 March 2017; Paris, forthcoming.

9. Gusev, 2017, p. 2.

10. Gusev, 2017, p. 3.

11. Gusev, 2017, p. 2.

12. Gusev, 2017, p. 3.

13. P. Gusev, personal communication, 9 March 2017; Paris, forthcoming.

14. Gusev, 2017, p. 3.

15. P. Gusev, personal communication, 26 July 2017; Paris, forthcoming.

16. Gusev, 2017, p. 7.

17. Gusev, 2017, p. 7, emphasis added; Paris, forthcoming.

18. Gusev, 2017, p. 3, emphasis added; Paris, forthcoming.

19. Blanchette, 2011, p. 1,048; Mogensen, 2011.

20. Butterfield, et al., 2016, pp. 87–88.

21. Butterfield, et al., 2016, p. 126.

22. Mills, 1992, p. 1.

23. P. Gusev, personal communication, 20 July 2017; Paris, forthcoming.

24. Ibid.

25. P. Gusev, personal communication, 20 July 2017; Paris, forthcoming.

26. P. Gusev, personal communication, 26 July 2017; Paris, forthcoming.

27. J. DeHart, personal communication, 14 November 2017; Paris, forthcoming.

28. P. Gusev, personal communication, 26 July 2017; Paris, forthcoming.

29. Callon, 1980, p. 358.

30. Star and Ruhleder, 1994, p. 253.

31. P. Gusev, personal communication, 20 July 2017; Paris, forthcoming.

32. J. Burke, personal communication, 8 June 2017; Paris, forthcoming.

33. Ibid.

34. P. Gusev, personal communication, 20 July 2017; Paris, forthcoming.

 

References

M. Bauman, 2017. “NMSU professor working on new wireless networks,” Las Cruces (N.M.) Sun News (30 September), at http://www.lcsun-news.com/story/news/education/nmsu/2017/09/30/nmsu-professor-working-new-wireless-networks/718803001/, accessed 6 February 2018.

N. Bhatti, A. Bouch, and A. Kuchinsky, 2000. “Integrating user-perceived quality into Web server design,” Proceedings of the Ninth International World Wide Web Conference, Computer Networks, volume 33, pp. 1–16.
doi: https://doi.org/10.1016/S1389-1286(00)00087-6, accessed 27 August 2018.

G. A. Blaauw and F. P. Brooks, 1997. Computer architecture: Concepts and evolution. Reading, Mass.: Addison-Wesley.

J.-F. Blanchette, 2011. “A material history of bits,” Journal of the American Society for Information Science and Technology, volume 62, number 6, pp.1,042–1,057.
doi: https://doi.org/10.1002/asi.21542, accessed 27 August 2018.

B. Brown, 2015. “IP was middle school, named data networking is college,” Network World (8 October), at http://www.networkworld.com/article/2990834/network-management/ip-was-middle-school-named-data-networking-is-college.html, accessed 20 June 2016.

J. Brutlag, 2009. “Speed matters,” Google AI blog (23 June), at http://googleresearch.blogspot.com/2009/06/speed-matters.html, accessed 8 December 2015.

A. Butterfield, G. E. Ngondi, and A. Kerr (editors), 2016. A dictionary of computer science. New York: Oxford University Press.

M. Callon, 1980. “The state and technical innovation: A case study of the electric vehicle in France,” Research Policy, volume 9, number 4, pp. 358–376.
doi: https://doi.org/10.1016/0048-7333(80)90032-3, accessed 17 July 2018.

N. Carr, 2011. The shallows: What the Internet is doing to our brains. New York: W. W. Norton.

M. Castells, 1996. The rise of the network society. Malden, Mass.: Blackwell.

W. H. K. Chun, 2011. Programmed visions: Software and memory. Cambridge, Mass.: MIT Press.

D. D. Clark, J. Wroclawski, K. R. Sollins, and R. Braden, 2005. “Tussle in cyberspace: Defining tomorrow’s Internet,” IEEE/ACM Transactions on Networking, volume 13, number 3, pp. 462–475.
doi: https://doi.org/10.1109/TNET.2005.850224, accessed 27 August 2018.

M. J. Cohen, 2017. “What will happen now that net neutrality is gone? We asked the experts,” Quartz (21 December), at https://qz.com/1158328/what-will-happen-now-that-net-neutrality-is-gone-we-asked-the-experts/, accessed 27 August 2018.

F. C. Donders, 1969. “On the speed of mental processes,” Acta Psychologica, volume 30, pp. 412–431.
doi: https://doi.org/10.1016/0001-6918(69)90065-1, accessed 27 August 2018.

J. Drucker, 2009. “Temporal modelling,” In: J. Drucker. SpecLab: Digital aesthetics and projects in speculative computing. Chicago: University of Chicago Press, pp. 37–64.

V. Eubanks, 2017. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

D. F. Galletta, R. Henry, S. McCoy, and P. Polak, 2004. “Web site delays: How tolerant are users?” Journal of the Association for Information Systems, volume 5, number 1, at http://aisel.aisnet.org/jais/vol5/iss1/1, accessed 27 August 2018.

A. R. Galloway, 2004. Protocol: How control exists after decentralization. Cambridge, Mass.: MIT Press.

A. Giddens, 1991. The consequences of modernity. Stanford, Calif.: Stanford University Press.

T. Gillespie, 2006. “Engineering a principle: ‘End-to-end’ in the design of the Internet,” Social Studies of Science, volume 36, number 3, pp. 427–457.
doi: https://doi.org/10.1177/0306312706056047, accessed 27 August 2018.

P. Gusev, 2017. “Flume specs” (February), not published.

R. Hassan and R. E. Purser (editors), 2007. 24/7: Time and temporality in the network society. Stanford, Calif.: Stanford Business Books.

W. E. Hick, 1952. “On the rate of gain of information,” Quarterly Journal of Experimental Psychology, volume 4, number 1, pp. 11–26.
doi: https://doi.org/10.1080/17470215208416600, accessed 27 August 2018.

V. Jacobson, 2010. “Content centric networking” (24 March), paper presented at the IETF77 ISOC Internet researchers meeting, at http://named-data.net/publications/1-vj-isoc-mar10-2/, accessed 19 July 2018.

V. Jacobson, J. Burke, D. Estrin, L. Zhang, B. Zhang, G. Tsudik, K. Claffy, D. Krioukov, D. Massey, C. Papadopoulos, P. Ohm, T. Abdelzaher, K. Shilton, L. Wang, E. Yeh, E. Uzun, G. Edens, and P. Crowley, 2014. “Named Data Networking (NDN) annual 2012–13 report,” at http://www.caida.org/publications/papers/2013/named_data_networking_2012-2013/named_data_networking_2012-2013.pdf, accessed 19 July 2018.

V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, 2009. “Networking named content,” CoNEXT ’09: Proceedings of the Fifth International Conference on Emerging Networking Experiments and Technologies, pp. 1–12.
doi: https://doi.org/10.1145/1658939.1658941, accessed 27 August 2018.

D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguñá, 2010. “Hyperbolic geometry of complex networks,” Physical Review, volume 82, number 3, 036106.
doi: https://doi.org/10.1103/PhysRevE.82.036106, accessed 27 August 2018.

M. A. Lemley and L. Lessig, 2000. “The end of end-to-end: Preserving the architecture of the Internet in the broadband era,” Stanford Law School, John M. Olin Program in Law and Economics, Working Paper, number 207, at https://escholarship.org/uc/item/4t02053b, accessed 27 August 2018.

L. E. Longstreth, N. el-Zahhar, and M. B. Alcorn, 1985. “Exceptions to Hick’s law: Explorations with a response duration measure,” Journal of Experimental Psychology: General, volume 114, number 4, pp. 417–434.
doi: http://dx.doi.org/10.1037/0096-3445.114.4.417, accessed 27 August 2018.

R. D. Luce, 1986. Response times: Their role in inferring elementary mental organization. New York: Oxford University Press.

F. Manjoo, 2013. “What do you do with the world’s fastest Internet service?” Slate (12 March), at http://www.slate.com/articles/technology/technology/2013/03/google_fiber_review_nobody_knows_what_to_do_with_the_world_s_fastest_internet.html, accessed 27 August 2018.

F. M. del P. Martín, 2009. “The thermodynamics of human reaction times,” arXiv (21 August), at http://arxiv.org/abs/0908.3170, accessed 27 August 2018.

D. L. Mills, 1992. “Network time protocol (version 3) Specification, implementation and analysis,” Network Working Group, Request for Comments (RFC), 1305, at https://tools.ietf.org/html/rfc1305, accessed 27 August 2018.

F. Nah, 2003. “A study on tolerable waiting time: How long are Web users willing to wait?” AMCIS 2003 Proceedings, at https://aisel.aisnet.org/amcis2003/285, accessed 27 August 2018.

National Science Foundation, 2011. “NSF Future Internet Architecture Project,” at http://www.nets-fia.net/, accessed 20 March 2018.

NDN Project, 2018. “Named Data Networking: Next-phase participants,” at https://named-data.net/project/participants/, accessed 20 March 2018.

S. U. Noble, 2018. Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.

K. H. Norwich, 1993. Information, sensation, and perception. San Diego, Calif.: Academic Press.

H. Nowotny, 1994. Time: The modern and postmodern experience. Translated by N. Plaice. Cambridge: Polity.

B. S. Paris, forthcoming. “Time constructs: The origins of a future Internet,” Ph.D. dissertation, University of California, Los Angeles.

E. Pariser, 2011. The filter bubble: what the Internet is hiding from you. New York: Penguin Press.

K. Quinn, 2017. “The top 3 features of edge computing that are driving enterprise edge strategy” (4 May), at http://www.idc.com/getdoc.jsp?containerId=US42393717, accessed 12 February 2018.

G. Roberts, 2015. “Google effect: is technology making us stupid?” Independent (15 July) at http://www.independent.co.uk/life-style/gadgets-and-tech/features/google-effect-is-technology-making-us-stupid-10391564.html, accessed 27 August 2018.

M. Rock, 2013. “A nation of kids with gadgets and ADHD,” Time (8 July), at http://techland.time.com/2013/07/08/a-nation-of-kids-with-gadgets-and-adhd/, accessed 19 July 2018.

J. H. Saltzer, D. P. Reed, and D. D. Clark, 1984. “End-to-end arguments in system design,” ACM Transactions on Computing Systems, volume 2, number 4, pp. 277–288.
doi: https://doi.org/10.1145/357401.357402, accessed 27 August 2018.

C. E. Shannon, 1948. “A mathematical theory of communication,” Bell System Technical Journal, volume 27, number 3, pp. 379–423.
doi: https://doi.org/10.1002/j.1538-7305.1948.tb01338.x, accessed 27 August 2018.

K. Shilton, J. Burke, kc claffy, C. Duan, and L. Zhang, 2014. “A world on NDN: Affordances & implications of the Named Data Networking future Internet architecture,” NDN Technical Report, NDN-0018, revision 1 (11 April), at https://named-data.net/publications/techreports/world-on-ndn-11apr2014/, accessed 17 July 2018.

D. Southerton and M. Tomlinson, 2005. “‘Pressed for time’ — The differential impacts of a ‘time squeeze’,” Sociological Review, volume 53, number 2, pp. 215–239.
doi: https://doi.org/10.1111/j.1467-954X.2005.00511.x, accessed 27 August 2018.

S. L. Star and K. Ruhleder, 1994. “Steps towards an ecology of infrastructure: complex problems in design and access for large-scale collaborative systems,” CSCW ’94: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, pp. 253–264.
doi: https://doi.org/10.1145/192844.193021, accessed 17 July 2018.

B. Stiegler, 2010. Technics and time. 3. Cinematic time and the question of malaise. Translated by S. Barker. Stanford, Calif.: Stanford University Press.

B. Stroustrup, 1994. The design and evolution of C++. Reading, Mass.: Addison-Wesley.

D. Talbot, 2013. “Your smartphone and tablet are breaking the Internet,” MIT Technology Review (9 January), at https://www.technologyreview.com/s/509721/your-gadgets-are-slowly-breaking-the-internet/, accessed 6 February 2018.

S. Turkle, 2011. Alone together: Why we expect more from technology and less from each other. New York: Basic Books.

J. Urry, 2000. Sociology beyond societies: Mobilities for the twenty-first century. New York: Routledge.

J. Wajcman, 2015. Pressed for time: The acceleration of life in digital capitalism. Chicago: University of Chicago Press.

B. Zhang, 2015. “Named Data Networking: Lessons learned and open issues,” paper presented at the NSF FIA PI meeting, at http://named-data.net/wp-content/uploads/2015/12/fiapi-2015-lessons.pdf, accessed 17 July 2018.

L. Zhang, 2016. “Looking back, looking forward: Why we need a new Internet architecture,” paper presented at the 11th International Conference on Future Internet Technologies, at https://named-data.net/wp-content/uploads/2016/07/looking_back_looking_forward_cfi.pdf, accessed 17 July 2018.

L. Zhang, kc claffy, P. Crowley, C. Papadopoulos, L. Wang, and B. Zhang, 2014. “Named data networking,” ACM SIGCOMM Computer Communication Review, volume 44, number 3, pp. 66–73.
doi: https://doi.org/10.1145/2656877.2656887, accessed 27 August 2018.

L. Zhang, D. Estrin, J. Burke, V. Jacobson, J. D. Thornton, D. K. Smetters, B. Zhang, G. Tsudik, kc claffy, D. Krioukov, D. Massey, C. Papadopoulos, T. Abdelzaher, L. Wang, P. Crowley, and E. Yeh, 2010. “Named Data Networking (NDN) Project” (31 October), at https://named-data.net/publications/techreports/tr001ndn-proj/, accessed 17 July 2018.

H. Zimmerman, 1980. “OSI reference model — The ISO model of architecture for open systems interconnection,” IEEE Transactions on Communications, volume 28, number 4, pp. 425–432, and at https://ieeexplore.ieee.org/document/1094702/, accessed 27 August 2018.

 


Editorial history

Received 22 March 2018; revised 29 May 2018; revised 23 August 2018; accepted 27 August 2018.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Finding time in a future Internet
by Britt Paris.
First Monday, Volume 23, Number 8 - 6 August 2018
https://firstmonday.org/ojs/index.php/fm/article/download/9407/7573
doi: http://dx.doi.org/10.5210/fm.v23i8.9407