In computer science, when we say "time is money," we typically refer to two types of time that determine the costs and benefits of a given computer program: the time it takes the program to run, and the time it takes to write and maintain it. There is a delicate trade-off between these two types of time: the faster we want a program to run, the more time we need to spend when writing and maintaining it, and vice versa.
Until very recently, it seemed this trade-off could be mostly ignored. The job of making programs run faster fell into the lap of the hardware architects, who continued to deliver advances in single CPU clock speed at a reliable pace. These reliable speed increases allowed software engineers and programming language designers to focus on adding software constructs that offered substantial reductions in the time it takes to write and maintain code. How was this done? The terms that come to mind are abstraction, modularity, and compositionality.
Unfortunately, as we have all heard, things are about to change dramatically. Moore's Law has not been repealed: each year more and more transistors are being fit into the same space. But CPU clock speeds can no longer be effectively increased. Instead, hardware designers have turned to multicore architectures, in which multiple computing cores are included on each processor chip. The switch to multicore architectures promises increased parallelism, but not increased single-thread performance. Even if this increased parallelism is delivered at a reliable pace, the hardware designers cannot by themselves deliver reliable increases in the speed at which programs run. This job will fall into the laps of software engineers.
The main tool for handling concurrency in today's programming languages is the lock: a software construct that allows sequences of loads and stores to access data in a mutually exclusive manner. Indeed, lock-based programs have been known to deliver amazing performance on multicore architectures. However, it is becoming clear that, while locks will allow us to continue reducing the time it takes programs to run, they will cause the time it takes us to write and maintain our programs to shoot back up.
The heart of the problem is, perhaps, that no one really knows how to organize and maintain large systems that rely on locking. Locks are not modular and do not compose, and the association between locks and data is established mostly by convention. Ultimately, the association exists only in the mind of the programmer, and may be documented only in comments.
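To make the non-composability concrete, here is a hypothetical sketch in Haskell (the language of the paper discussed below), using an MVar per account as a simple lock; the Account type and the function names are illustrative only, not code from the paper.

```haskell
import Control.Concurrent.MVar

-- One lock-protected balance per account; which data a lock guards is
-- established only by convention.
type Account = MVar Int

withdraw, deposit :: Account -> Int -> IO ()
withdraw acc amount = modifyMVar_ acc (\bal -> return (bal - amount))
deposit  acc amount = modifyMVar_ acc (\bal -> return (bal + amount))

-- Each call above is atomic on its own, but their composition is not:
-- between the two calls another thread can observe money that has left
-- 'from' without yet arriving at 'to'. Making the pair atomic would
-- require acquiring both locks, in a globally agreed order, which breaks
-- the abstraction of the individual operations.
transfer :: Account -> Account -> Int -> IO ()
transfer from to amount = do
  withdraw from amount
  deposit  to   amount
```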
A promising solution to this problem is the introduction of atomic memory transactions as a multicore programming abstraction. While transactions have been used for years in the database community, they are now being proposed as an equal partner to, and perhaps even a replacement for, the loads and stores we typically use in our programs. The idea is simple: encapsulate sequences of loads and stores within a transaction, with the guarantee that if any operation takes place, they all do, and that if they do, they appear to other threads to do so atomically, as one indivisible operation.
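As a minimal sketch of the idea, assuming GHC's Control.Concurrent.STM library (the setting of the work discussed below), a shared counter can be updated by wrapping its load and store in a transaction:

```haskell
import Control.Concurrent.STM

-- A shared counter held in a transactional variable (TVar). The read and
-- the write inside 'atomically' appear to other threads as one
-- indivisible operation, so concurrent increments are never lost.
increment :: TVar Int -> IO ()
increment counter = atomically $ do
  n <- readTVar counter
  writeTVar counter (n + 1)
```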
Work on the design of efficient transactional memory implementations has been proceeding in earnest. However, there is a missing element: a framework of transactional memory-based concurrency that would provide the modularity and compositionality necessary when designing and maintaining large-scale concurrent systems.
This is where the breakthrough work on composable memory transactions by Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy takes center stage. They have provided, for the first time, a concurrent language model and a set of constructs that allow true simplicity in transactional programming.
One source of difficulty they had to overcome was that transactions are optimistic: they are tried but may fail and have to be rolled back, so one cannot allow operations with irrevocable side effects, such as I/O, to take place within a transaction.
The authors' solution was to embed transactions in a purely declarative language, which turns out to be a perfect match, since the type system explicitly separates computations that may have side effects from effect-free ones.
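A small sketch of how this separation plays out in Haskell's types (again assuming the Control.Concurrent.STM library):

```haskell
import Control.Concurrent.STM

-- Transactional code lives in the STM monad, whose only effects are reads
-- and writes of transactional variables:
--
--   readTVar   :: TVar a -> STM a
--   writeTVar  :: TVar a -> a -> STM ()
--
-- An irrevocable action such as printing has type IO and cannot be
-- sequenced inside an STM block; the type checker rejects it. The only
-- bridge between the two worlds is 'atomically :: STM a -> IO a', so any
-- I/O must happen before or after the transaction commits.
report :: TVar Int -> IO ()
report counter = do
  n <- atomically (readTVar counter)   -- side-effect-free transaction
  putStrLn ("count = " ++ show n)      -- I/O outside the transaction
```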
Another big problem was that concurrent programming requires threads to wait for events caused by other threads. Waiting on a condition outside a transaction greatly limits what the transaction can do, while waiting inside a transaction can leave it stuck.
The authors solved the problem by introducing new transactional constructs, among them an elegant retry command that allows a transaction to effectively abort, then restart only after a potentially good event has increased the likelihood of the condition being met. Quite surprisingly, retry is a key factor in allowing sequences of transactional actions to be composed so they all take effect together.
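A sketch in the spirit of the paper's bank-account examples (the Account alias and the name limitedWithdraw are illustrative):

```haskell
import Control.Concurrent.STM

type Account = TVar Int

-- Withdraw 'amount', blocking until the balance is sufficient. 'retry'
-- abandons the current attempt, and the runtime re-executes the
-- transaction only after another transaction has written to a variable
-- this one read (here, the balance), so the wait is not a spin loop.
limitedWithdraw :: Account -> Int -> STM ()
limitedWithdraw acc amount = do
  bal <- readTVar acc
  if bal < amount
    then retry
    else writeTVar acc (bal - amount)
```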
The language structure, together with the added transactional composition constructs, provides a clean framework that allows transactions to compose without giving up their natural advantages over locks: concurrent programmers can program using atomic operations that span multiple objects in memory, without having to break abstraction barriers.
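For instance, under the same assumptions as the sketches above (illustrative names, GHC's Control.Concurrent.STM), smaller transactions compose both in sequence and as alternatives:

```haskell
import Control.Concurrent.STM

type Account = TVar Int

deposit :: Account -> Int -> STM ()
deposit acc amount = do
  bal <- readTVar acc
  writeTVar acc (bal + amount)

-- Blocks (via 'retry') until the balance covers the withdrawal.
withdraw :: Account -> Int -> STM ()
withdraw acc amount = do
  bal <- readTVar acc
  if bal < amount then retry else writeTVar acc (bal - amount)

-- Sequential composition: two smaller transactions become one larger
-- transaction spanning both accounts, and the caller never learns which
-- variables either piece touches.
transfer :: Account -> Account -> Int -> IO ()
transfer from to amount = atomically $ do
  withdraw from amount
  deposit  to   amount

-- Composition as alternatives: 'orElse' runs the first transaction and,
-- if it calls 'retry', runs the second instead.
withdrawFromEither :: Account -> Account -> Int -> STM ()
withdrawFromEither a b amount =
  withdraw a amount `orElse` withdraw b amount
```

Contrast this transfer with the lock-based version sketched earlier: here the atomicity of the composition comes for free, with no lock-ordering protocol and no exposure of the accounts' internals.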
If multicore architectures are the way to continue improving the time it takes programs to run, then perhaps composable memory transactions are the way to improve the time it takes us to write and maintain them.