Communications of the ACM

A Computational Model of Everything


We can divide the world of computing into two separate, orthogonal components. Informally, the computer is one component, and the asynchronous ensemble in which computers are embedded is the other. By computer we mean any activity locus, any active synchronous device—any device that does (in effect) one thing at a time. An asynchronous ensemble is a communicating collection of activities.

A multiprocessor is an asynchronous ensemble; so is a desktop computer system. The basic computer (processor and memory) and the devices are separate activities. For our purposes, a human user is just another activity locus, and a user-plus-computer equals an asynchronous ensemble.

We have argued that these two basic components—coordination or the ensemble, and computation or the activity locus—are orthogonal.1 By orthogonal, we mean we can develop independent models for each component. As long as both models are fully general in their own domains, any coordination model plus any computation model equals a complete model—a model of everything in computing.

Over the last two decades, we have developed a coordination model (Linda, or tuple spaces), a computation model (symmetric languages), and another coordination model (lifestreams). Together, these three are a model of more than everything because the two coordination models are complementary in one sense, competitive in another. But no model of this sort can claim to be the definitive one. Designers seek a good combination of simplicity and power; in other words, elegance. Only the field as a whole can determine whether they have succeeded.

We will briefly discuss each of these three systems and the ways in which they (purportedly) add up to a computational model of everything.


The Tuple Space Coordination Model (and the coordination-based view of computing)

Our original problem was this: Given a collection of concurrently active processes or programs, supply a model that can turn the collection into an ensemble—a distributed system or a parallel application or some other kind of ensemble. (A distributed system ordinarily copes with networked resources, for example, moves mail or files; a parallel application solves a compute-intensive problem fast by focusing many processors on it simultaneously. There is no logical difference between the two species. They are each an asynchronous ensemble.)

Our goal was to provide communication through space and time. That is, we wanted to allow two processes to exchange information if they were running in different places (different processors or computers) or at different times—one on Tuesday, one on Thursday. Communication through time can only be one-way: Tuesday can pass information to the following Thursday, but not vice versa. It is communication nonetheless.

And we wanted a dynamic solution. It shouldn't be necessary to link our activities together before runtime. It should be possible for new activities to join the ensemble at any time. Programs should not have to know anything about one another in order to communicate. Mutually unknown activities should be able to communicate. Activities with disjoint lifetimes should be able to communicate: one that died in 1996, say, with a second born in 2001.

The result was the tuple space model developed by Carriero and Gelernter.2 (The tuple space model is the basis of the Linda coordination language.) Each activity in an ensemble floats in a shared virtual space called "tuple space" in something like the way helium balloons float in the atmosphere. Programs communicate by generating, reading, and consuming tuples, which they set adrift in tuple space like balloons launching other balloons. (Activities are also tuples. A tuple can be active—an executing program—or passive—a list of typed values.) The model is structurally recursive, in the sense that a tuple space can be one field of a tuple. Tuples occupy tuple spaces, and tuple spaces can also occupy tuples.




A program with information to communicate puts the information in a tuple and releases the tuple into tuple space. A program that needs information can read or consume any tuple by describing the one it wants: for example, a 3-element tuple whose first element is the string "task assignment" and whose other two fields are integers. Once it has found a matching tuple, an activity can read the tuple or remove it.
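For concreteness, here is a minimal single-process sketch of these operations in Python. It is our own illustration, not Linda's C-embedded syntax (where the operations are called out, rd, and in); matching here is by literal values and types.

```python
import threading

class TupleSpace:
    """A minimal in-process tuple space sketch: put (Linda's out),
    read (rd), and take (in), with matching by literals and types."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, *tup):
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()    # wake any blocked readers

    def _match(self, tup, pattern):
        # A pattern field is a literal value, or a type acting as a
        # "formal" that matches any value of that type.
        return len(tup) == len(pattern) and all(
            (isinstance(p, type) and isinstance(v, p)) or v == p
            for v, p in zip(tup, pattern))

    def _find(self, pattern, remove):
        with self._cond:
            while True:                # block until a match appears
                for t in self._tuples:
                    if self._match(t, pattern):
                        if remove:
                            self._tuples.remove(t)
                        return t
                self._cond.wait()

    def read(self, *pattern):          # rd: copy a matching tuple
        return self._find(pattern, remove=False)

    def take(self, *pattern):          # in: consume a matching tuple
        return self._find(pattern, remove=True)

ts = TupleSpace()
ts.put("task assignment", 3, 7)
print(ts.take("task assignment", int, int))  # ('task assignment', 3, 7)
```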

We call this the "generative communication" model. It allows mutually unknown activities to communicate. It allows activities with non-overlapping lifetimes to communicate.

For example: Suppose we want to build something like Sun's Jini system. We want it to be possible for devices to join a network, announce their identities, and accept requests from (or send requests to) other network devices. In the tuple space model there are many simple solutions.

If a new printer joins the network, it might first assign itself a unique id, so it can solicit business. Floating in tuple space is a 2-tuple of the form (printer-id, j); j is some integer. The new printer grabs this tuple (temporarily), assigns itself the number j, then increments j and puts the tuple back. Next it announces itself, by removing (temporarily) a 2-tuple of the form (printers, list-of-printers), adding a description of itself to the end of the list, and replacing the tuple.
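In terms of the TupleSpace sketch above, the id-allocation and registration idioms might look like this (the tuple names are the ones used in the text):

```python
# Claim a unique id: take the counter tuple (other claimants block
# until it reappears), increment it, and put it back.
def claim_printer_id(ts):
    _, j = ts.take("printer-id", int)
    ts.put("printer-id", j + 1)
    return j

# Announce the printer: take the registry tuple, extend the list,
# and replace it.
def register_printer(ts, description):
    _, printers = ts.take("printers", list)
    ts.put("printers", printers + [description])

ts.put("printer-id", 0)          # done once, when the space is set up
ts.put("printers", [])
my_id = claim_printer_id(ts)     # first caller gets 0, the next gets 1
register_printer(ts, "printer-%d: 600dpi duplex" % my_id)
```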

When some other device needs a printer, it checks what printers are available and what each can do, by reading the (printers, list-of-printers) 2-tuple. (The value list-of-printers is assigned automatically to a local variable.) Then it examines the list and chooses a printer—for example, printer-15. Having made its choice, it bundles off its print job by generating and releasing into tuple space a 3-tuple of the form (printer-15, job, my-identity-tag). The printer executes a loop in which it solicits requests addressed to printer-15, performs them, and releases an acknowledgment tuple that includes the requestor's identity tag.
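Continuing the sketch, the job-dispatch loop and a client request might look like this (printer and client names hypothetical):

```python
import threading

def printer_loop(ts, name):
    # Serve jobs addressed to this printer, acknowledging each with
    # a tuple that carries the requestor's identity tag.
    while True:
        _, job, requestor = ts.take(name, str, str)
        print("%s: printing %r" % (name, job))
        ts.put("done", name, requestor)

threading.Thread(target=printer_loop, args=(ts, "printer-15"),
                 daemon=True).start()

# A client reads the registry, picks a printer, releases the job
# tuple, and blocks until the acknowledgment tuple appears.
_, printers = ts.read("printers", list)
ts.put("printer-15", "report.ps", "client-7")
ts.take("done", "printer-15", "client-7")
```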

The tuple space model has proven useful and efficient in practice. Many production applications use Scientific Computing Associates' Linda system. Sun's JavaSpaces version seems to have a growing user base, too. IBM's TSpaces and many other systems worldwide—commercial and academic—are based on the model.

Tuple spaces are models of (merely) asynchronous ensembles, not of everything. But it is natural to imagine a system in which the separate synchronous pieces become increasingly small-scale. We can imagine each one as a small routine (say), or even a single statement. As each synchronous activity shrinks, the communication medium carries more and more of the computing burden.

It also becomes easy to imagine a programming model in which the separate units are given and the programmer's job is to put them together into an ensemble. Programming then becomes the same as asynchronous ensemble building.


Symmetric Programming Languages

What kind of object is an activity, an executing program?

Consider a space-time grid. Each column in the grid represents the history of a region in space, and each horizontal line is a moment in time. If the program uses a global variable called "count," the space-time grid has a column labeled "count." If count is initially 0, is stepped to 100, and then the program terminates, the count column holds a 0 to start with at the top; moving down the column, a 1 appears at the moment count's value goes to 1; further down a 2 appears, and so forth.

The width of the column is a schematic representation of the size of the variable. The space-time grid also captures the sequence in which statements are executed. One column represents the execution locus; it is occupied by each executed statement in turn. (Of course, we don't know until runtime what the actual statement sequence will be.) The simplest way to represent each statement is to reproduce the appropriate phrase in the source programming language. (Language-level statements might be broken down, interleaved, and overlapped at runtime, but we can always use a sequence of language-level statements to capture the meaning of the runtime action.)

If the local variable "check" is created by a procedure instance midway through the execution history, then the statement column widens at this point to encompass a new column representing the new local variable; when a local variable disappears, its (temporary) column disappears.

The details of the picture don't matter. The point is this: Every variable created and every statement executed has a corresponding region in the grid. The width of the region is a stylized representation of the actual space required inside the computer to store the variable or execute the statement. The height of the region is a stylized representation of the actual time required to execute the statement, or of the variable's lifetime.

We now have (assuming we fill in the details) a precise answer to such questions as: What is an executing program? What is a process? A process is a space-time grid. The space-time grid captures exactly what happens (blow-by-blow) when the program runs. We can give the space-time grid a syntax and semantics: it is simply a data structure, albeit a peculiar one—a dynamic array where columns have different widths, and the number of columns changes over the vertical extent of the array. In other words, we have turned the idea of an executing program into a basic programming language element. Just as any programming language includes a model for integer-type objects and operations for manipulating them, any language could include a model for process-type objects and rules for manipulating them. These process models would not be models of some implementation of a process (of the stack and heap and processor state); they would be models of the process itself, independent of any implementation.
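One way to make this peculiar data structure concrete (our own rendering in Python; the authors give no concrete syntax here) is rows as moments and keys as regions of space, with the column set changing from row to row:

```python
# A toy space-time grid: rows are moments in time, keys are regions
# of space (variables plus the execution locus). Note that the set
# of columns changes over the vertical extent of the grid as the
# local variable "check" is born and dies.
grid = [
    {"locus": "count = 0",         "count": 0},
    {"locus": "count = count + 1", "count": 1},
    {"locus": "f(): check = 0",    "count": 1, "check": 0},
    {"locus": "count = count + 1", "count": 2},   # "check" is gone
]

for moment in grid:
    print(moment)
```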

The space-time grid model suggests a programming language model in turn. We could generate such a grid from a program in any language, but there is a natural space-time grid language inherent in the model. We call such languages "symmetric" because they treat space and time symmetrically. To write a program using such a language, we simply lay out the top edge (or top surface) of the grid. Such a language is geometric instead of (like all conventional languages) algebraic: The programmer lays out a map or blueprint for his program.

Concurrency is inherent in all languages, but it becomes explicit in symmetric languages. Every language allows a programmer to create a set of variables with a certain extent. For concreteness, consider the local variables of some procedure in the language C. Suppose a routine f() uses local variables x, y, and z. Clearly, C must be a concurrent language: x, y, and z are created at (in effect) the same moment and destroyed at the same moment, so their lifetimes are concurrent. Indeed, their lifetimes are also concurrent with the lifetime of some instance of f(). All four events go on simultaneously: x lives, y lives, z lives, f() executes. Hence C is a concurrent language, QED.

Suppose we were to replace the variables x, y, and z with three separate executable statements. This is merely a thought experiment: C doesn't allow such substitutions. But suppose we made them anyway, and tried to predict the meaning of the resulting program by applying C semantics consistently. Clearly our three new statements would execute concurrently, along with f(). Instead of the (passive) lifetime of x existing concurrently with an executing instance of f(), the (active) lifetime of our executable statement would exist concurrently with an executing instance of f().

Symmetric languages make this concurrency explicit by giving programmers tools for laying out a map or blueprint of a computation. The goal, though, isn't concurrency but logical consistency. We seek a logically consistent view of the whole program, encompassing variables and executable statements.

If we put this symmetric language model3 of computation together with the tuple space model of coordination, we have (by our lights) covered the waterfront.


Lifestreams

Tuple space is a model for coordination in general, and in that sense a model for operating systems. But it is a bottom-up model that assumes nothing about the type of computing services we ultimately supply to the user. We could approach the same problem from the other direction, by specifying what the user needs and allowing those needs to imply an operating system. Instead of starting with coordination and ending with operating systems, we could travel the same road in the other direction.




The lifestreams system was designed originally as an information management model (IMM): in effect, as the central component of an operating system. It was a response to two factors, a negative and a positive. By the early 1990s, traditional file and email systems were (for many of us) breaking down; we wanted an IMM that abolished named files and directories. When creating or receiving a document, we wanted to give it a name only if it deserved one; a new document had to be allowed to be anonymous or (equally) to share one name with many other documents. When creating or receiving a document, we never wanted to bother assigning it to a directory. We wanted to be able to classify or reclassify all documents instantly—by content, type, or other descriptive information. (For example: "show me everything that mentions project X," or "last spring's email about Zeppelins.") The time period (early 1990s) is part of the story: many of us had been desktop computer users for roughly a decade. We had 10 years' worth of information online, and we had invented all the file and directory names we ever wanted to invent.

The positive part of our motivation was this: It seemed desirable to allow all the electronic documents in a person's life to accumulate in a single diary, journal, or timeline. All the email, documents, memos, URLs, electronic images, and so on we created or received would form a scrapbook history of our lives if we pieced them together into a timeline. The timeline would have a past, present, and future. The past would hold a complete time-organized archive; documents in the past were immutable. In the present would be documents that had just arrived or just been copied or created—active documents we were working on. In the future would be appointments, reminders, and other calendar items. The entire stream would be time-organized; future would flow into present into past. A 9 A.M. Tuesday appointment in the future would materialize in the present when Tuesday at 9 A.M. rolled around, and would then flow into the past as a permanent record.

It turned out that this timeline structure solved the name-and-directory problem. Suppose every document you create or receive is attached indiscriminately to the end of a growing stream of documents: you have a timeline or journal. If you specify the right operations for your stream, you have also eliminated conventional names and directories. Documents are attached to the stream without names (unless a name seems useful); they are not put in directories. To find documents, we use a combination of search, time-order, and easy browse. All documents are fully indexed, and can be searched by content or descriptive metadata. Often, time-order is a natural focus mechanism. You might be seeking a recent document (it will be toward the front of the stream), or something from yesterday or last week or last year. Easy browse means that a synopsis of any document is available without opening the document—for example, by over-sweeping the document with the cursor. If you are seeking one particular document and a search returns, say, a couple dozen, that's close enough; the receding-stream display allows them all to fit onscreen simultaneously, and oversweeping them with the cursor makes it easy to locate the right one.

Search is recursive in a structural sense. Searching a stream yields a stream. Substreams returned by search play the role of directories. Instead of putting some document in a project X directory, we search for project X and get a substream of all project X documents. The substream, however, is created automatically when we need it, instead of manually on a document-by-document basis. It persists until it is removed, and continues to capture project X documents.
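A minimal sketch of these stream semantics in Python (our own illustration, with hypothetical names; the real systems add full-text indexing, permissions, and persistence):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Doc:
    body: str
    stamp: float = field(default_factory=time.time)

class Stream:
    """A time-ordered, append-only document stream. Searching a
    stream yields a stream: a substream is a live, filtered view."""
    def __init__(self, docs=None, filt=None):
        self._docs = [] if docs is None else docs
        self._filt = filt

    def append(self, body):
        self._docs.append(Doc(body))

    def search(self, term):
        # Compose with any existing filter, so searching a substream
        # yields a narrower substream (search is recursive).
        prev = self._filt
        def filt(d):
            return term in d.body and (prev is None or prev(d))
        return Stream(self._docs, filt)

    def __iter__(self):
        docs = (d for d in self._docs
                if self._filt is None or self._filt(d))
        return iter(sorted(docs, key=lambda d: d.stamp))

s = Stream()
s.append("email: Zeppelin budget, project X")
s.append("memo: lunch menu")
px = s.search("project X")            # plays the role of a directory
s.append("notes: project X kickoff")  # appended after the search...
print([d.body for d in px])           # ...yet appears in the substream
```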

The system was designed originally as a personal information management system, but it soon became clear that a lifestream could also be a powerful communication and collaboration system. In practice, current lifestream systems are usually maintained by some institution for its staff—for example, Mirror Worlds Technologies (the developer of the commercial system) maintains a stream for its employees. The stream is hosted on a server; any employee can tune it in using any browser.

Each user has his or her own private view of the stream; a private view includes personal documents (which are visible to no one else) interspersed with public documents and group documents for any group to which this user belongs. Company announcements and group discussions all take place on the stream. Company plans, meeting dates, goals, and deadlines are posted in the future. All current business flows automatically into the past, where it becomes part of the company's permanent records. A comment or question for, say, the Development Group is posted on the stream, with read-permission assigned to "Development group." Such comments are interspersed with all other documents on the stream. No one has to turn to a bulletin board or a mailer to see them; they show up automatically at the head of every intended recipient's stream—and then flow into the past, where they remain accessible forever (at least in principle). A staff member can respond or refer to such a comment seconds, weeks, or years after it first appears. The stream naturally supports multiway discussions. Each comment is a separate posting; all comments are indexed automatically, and are accessible to search and browse along with all other documents.

The lifestream system was based directly on Linda. Linda introduced the kind of structure we called a "cyberbody" or "cyberstructure" for lack of a (badly needed) better word. A cyberbody is a self-contained information structure that isn't stored on any particular computer; it's available to any net-connected computer. Tuple space was a medium for building cyberbodies. Data structures built in tuple space out of tuples are accessible to any computer with access to tuple space. Our implementations of tuple space have always been distributed (or, in modern terms, peer-to-peer). A tuple space wasn't centralized on some server; it was distributed over many computers. A lifestream4 is a data structure (or cyberbody) that is easy and natural to express using Linda.

On the other hand, a lifestream can also be an alternative to Linda as a coordination model. The coordination characteristics of the stream emerge naturally from the Linda implementation: many processes can read a stream simultaneously, and many processes may safely append to the stream simultaneously, succeeding in arbitrary order. Having used Linda to bootstrap the concurrency semantics of lifestreams, we can use lifestreams instead of tuple spaces as a coordination model. Consider a lifestream solution to the Jini problem: A new printer appends its own spec sheet (or a self-advertisement) to the stream. When some device needs a printer, it searches the stream. (The stream is a cyberbody, directly accessible to all processes and devices on the network.) The requesting device might copy down the printer's address and get in touch directly. But there's another, cleaner solution: When a device needs a file printed, it appends the file to the stream with a note saying "print." A printer that needs work scans the stream for a print request.
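Reusing the Stream sketch above, both variants of the printer interaction fit in a few lines (names hypothetical):

```python
net = Stream()                                   # a network-wide cyberbody
net.append("spec: printer-15, 600dpi, duplex")   # printer advertises itself

# First variant: a device finds a printer by searching the stream,
# then contacts it directly.
for ad in net.search("spec:"):
    print("found:", ad.body)

# Second, cleaner variant: the device appends the job itself, and a
# printer that needs work scans the stream for print requests.
net.append("print: report.ps from client-7")
for job in net.search("print:"):
    print("printing:", job.body)
```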


Conclusion

The projects described here represent a research thread under way for some 20 years. Our other main research thread involves a model of human thought; it balances the work described here. We have discussed two coordination models and one computation model, but a model of human thought is another computation model, which evens the score at two apiece.

It would be nice to take stock of this work. Nice, but impossible. The jury is still out, and will probably stay out for a good long time. Perhaps another 20 years?


Authors

Nicholas Carriero ([email protected]) is a research scientist in the computer science department of Yale University, New Haven, CT, where he works on systems issues in the development of software tools for parallelism.

David Gelernter ([email protected]) is a professor of computer science at Yale University, New Haven, CT. He is the author of Mirror Worlds, The Muse in the Machine, and other books.


Footnotes

1 A claim made originally by the authors in 1992.

2 The tuple space model was first described by the authors in 1982 and first implemented in 1985.

3 The symmetric languages model was developed by Gelernter and Jagannathan (with major contributions from Carriero) and was the basis for the survey of programming language history and design in their 1990 book Programming Linguistics.

4 The lifestream system was a research project at Yale (Eric Freeman implemented the first version as his Yale thesis project). It was later developed by Mirror Worlds Technologies; the commercial version was launched earlier this year.


©2001 ACM  0002-0782/01/1100  $5.00
