
Communications of the ACM

Forum

Peter J. Denning's "The Profession of IT" column ("The Locality Principle," July 2005) invoked an anthropomorphic explanation for the prevalence of the locality principle in computational systems, observing that humans gather the most useful objects close around them to minimize the time and work required for their use, and that we have transferred these behaviors into the systems we design.

A more intellectually satisfying explanation might be that we are dealing with two parallel and independent evolutionary design paths. Trading some expensive high-quality space (fast memory) in order to gain time performance is a sound engineering decision. It is therefore likely that evolution first adapted the human brain by endowing it with limited but versatile short-term memory and large long-term memory—a structure that exhibits behavior similar to caching.

Millennia later, we make similar design decisions when building computing systems.

Diomidis Spinellis
Athens, Greece

Author Responds

I agree completely. The human brain's structure makes locality a natural feature of human behavior. Spinellis offers an explanation grounded in evolution. There are undoubtedly subtle points. For example, when short-term memory is full, the decision about which item is to be deleted next probably depends on context and not just on which item hasn't been used in a while.

Peter J. Denning
Monterey, CA

I take issue with some of Peter J. Denning's conclusions (July 2005). The "locality" principle he discussed is really Zipf's law in disguise, and many of the milestones he cited (such as Akamai's Web caches) are merely examples of Zipf's law in action. Further, Zipf provided a mathematical formula that could be tested, while locality is a qualitative observation.

Most operating systems are not particularly good at paging, but cheaper and cheaper main memory has minimized the pain of paging. Indeed, one can now routinely put more main memory on a PC than its processor is able to address.

Finally, Denning made quantitative claims about the ability of virtual memory to improve "programmer productivity." While I am a fan of the programming model of virtual memory and the simplifications it allows the programmer, I don't recall any studies of "programmer productivity" that would support this claim. Indeed, the optimizations required to deal with limited main memory don't go away with virtual memory but take on a slightly different guise. The programmer's productivity for data-intensive main-memory-limited problems (such as sorting and database operations) is only mildly improved through virtual memory. (Seymour Cray was right.)

Henry Baker
Encino, CA

Author Responds

Baker is apparently unhappy because he thinks computer scientists are trying to take credit for a concept already understood from Zipf's law and because virtual memory has not worked well. However, locality is not a re-expression of Zipf's law, which says that the frequency of the n-th most probable item is proportional to 1/n.
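As a minimal illustration of the formula at issue (not part of the original exchange), the following Python sketch computes normalized Zipf frequencies for a handful of ranks; the number of items is arbitrary and purely illustrative.

```python
# Illustrative sketch of Zipf's law as stated above:
# the n-th most probable item has frequency proportional to 1/n.

def zipf_frequencies(n_items):
    """Return normalized Zipf frequencies f(n) ~ 1/n for ranks 1..n_items."""
    weights = [1.0 / rank for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    for rank, freq in enumerate(zipf_frequencies(5), start=1):
        print(f"rank {rank}: frequency {freq:.3f}")
```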

Early in our studies of paging performance we did consider the possibility that Zipf's law would explain the data and guide the choice of replacement policies. Unfortunately, none of our hypotheses held up under experiment.

When applied to virtual memory, this concept focused on the page-use frequencies of programs. But page-frequency data does not fit Zipf's law. Worse, these frequencies are not a good basis for paging algorithms. Least frequently used, or LFU, the paging algorithm that uses page-use frequencies, is a poor performer; least recently used, or LRU, is much better. We tried generating address traces with mathematical models derived from page-use frequency data, but the working sets of the generated traces were much larger than those of the real programs.
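To illustrate the point (this is not a reconstruction of the original experiments), the following Python sketch simulates LRU and LFU replacement on a small, made-up reference string with distinct phases; on such traces LFU typically incurs noticeably more faults than LRU, because stale frequency counts keep the old phase's pages resident.

```python
from collections import OrderedDict, defaultdict

def lru_faults(trace, frames):
    """Count page faults under least-recently-used replacement."""
    cache = OrderedDict()                  # ordered from least to most recently used
    faults = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)        # page becomes most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults

def lfu_faults(trace, frames):
    """Count page faults under a simple least-frequently-used replacement
    that ranks pages by how often they have been referenced so far."""
    cache, counts, faults = set(), defaultdict(int), 0
    for page in trace:
        counts[page] += 1
        if page not in cache:
            faults += 1
            if len(cache) == frames:
                victim = min(cache, key=lambda p: counts[p])  # least used so far
                cache.remove(victim)
            cache.add(page)
    return faults

# A hypothetical phased reference string: the program works in one locality,
# moves to another, then briefly returns to the first.
trace = [1, 2, 3] * 3 + [4, 5, 6] * 3 + [1, 2, 3]
print("LRU faults:", lru_faults(trace, frames=3))
print("LFU faults:", lfu_faults(trace, frames=3))
```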

The essential feature of locality is that computing processes rank pages differently during different time phases. These dynamics are important; only when we had the notion of phases were we able to get locality models to generate the same memory demands as the real programs they were supposed to imitate. The working set model converted the qualitative notion of locality into a measurable, quantitative concept. Computer scientists added something substantive and had not merely rediscovered an old principle.
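In its simplest form, the working-set measure is just the set of distinct pages referenced in a sliding window of the last tau references. The Python sketch below is illustrative only (the trace and window size are made up); it shows the working set staying small within a phase and changing as the program moves between phases.

```python
def working_set(trace, t, tau):
    """W(t, tau): the distinct pages referenced in the window of the
    last tau references ending at (and including) time t."""
    start = max(0, t - tau + 1)
    return set(trace[start:t + 1])

# Hypothetical phased reference string.
trace = [1, 2, 1, 2, 1, 2, 7, 8, 9, 7, 8, 9]
tau = 4
for t in range(len(trace)):
    print(t, sorted(working_set(trace, t, tau)))
```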

Virtual memory technology, which was mature by 1970, significantly lightened the burden of optimizing paging. The residual burden—locality-aware programming—was much lighter than planning overlays. Subsequent studies of virtual memory demonstrated that allocating the right amount of memory (the working set) exerted a stronger influence on performance than did any reasonable paging algorithm. The same was true for caches.

The economics of memory changed during the 1980s, as it was often cheaper to buy more memory than to tolerate any slowing of execution caused by paging. However, virtual memory continued to be important because it provided sharing, access control, relocation, and memory partitioning. It also enabled virtual machine operating systems (such as the IBM VM370 series). The PC operating systems added virtual memory in the late 1980s; this would not have happened unless users and designers perceived a benefit. Intel and other chip makers have included address-translation hardware in their processors since the early 1980s.

The real world relies so fundamentally on virtual memory that operating systems would not work without it.

Programmer productivity has indeed been measured. David Sayre reported in Communications (Dec. 1969) that definitive studies at IBM Yorktown in the late 1960s showed that programmers were twice as productive when they did not have to plan page transfers (overlays) and simply program in a "locality-aware" style.

Baker mentioned two main types of data-intensive applications: those processing data in streams that should not be cached at all (disk caches in database systems do this) and those filling memory with large randomly accessed data structures with working sets that equal their address spaces (the virtual memory should not page them at all). In either case, locality models advise correctly on the best course of action, and programmers have no need to try to optimize.

Virtual memory is not the only technology to rely on locality. Scientific computing is a prominent example that exploits spatial locality. Many scientific algorithms group data into spatial clusters, assign a processor to each one, and limit communication to immediate neighbors. This works because spatial locality is inherent in physical processes, where the behaviors of particles depend most strongly on immediately adjacent particles. We now have highly efficient parallel algorithms and architectures (such as hypercubes).
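The following Python sketch (illustrative, not drawn from any particular scientific code) shows the pattern in one dimension: the domain is split into per-processor blocks, and each block's update reads only a single boundary value from each immediate neighbor, so communication stays local.

```python
# A 1-D nearest-neighbor stencil over a domain split into per-processor blocks.
# Each block needs only one boundary ("halo") value from each adjacent block.

def split_into_blocks(values, n_blocks):
    """Partition the 1-D domain into contiguous blocks, one per (simulated) processor."""
    size = len(values) // n_blocks
    return [values[i * size:(i + 1) * size] for i in range(n_blocks)]

def step_with_halos(blocks):
    """One update step: each cell becomes the average of itself and its two neighbors."""
    new_blocks = []
    for b, block in enumerate(blocks):
        left_halo = blocks[b - 1][-1] if b > 0 else block[0]                  # from left neighbor
        right_halo = blocks[b + 1][0] if b + 1 < len(blocks) else block[-1]   # from right neighbor
        padded = [left_halo] + block + [right_halo]
        new_blocks.append([(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
                           for i in range(1, len(padded) - 1)])
    return new_blocks

blocks = split_into_blocks([0.0] * 8 + [100.0] * 8, n_blocks=4)
for _ in range(3):
    blocks = step_with_halos(blocks)
print(blocks)
```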

Locality is a much broader concept than virtual memory. I am not trying to defend its use or misuse in virtual memory but to show the pervasiveness of the locality principle and its broad influence on the design and performance of computing systems.

Peter J. Denning
Monterey, CA


The Government's Response to Tech and CS Downturns

David A. Patterson's "President's Letter" ("Reflections on a Programming Olympiad," July 2005) was right on the money. I am amazed that the federal governments of the U.S. and Canada have failed to respond to the downturns, both current and projected, in information technology and in computer science enrollment. Given that intellectual work is the basis of the future economy and that information technology is central to that economy, I suggest ACM consider establishing a task force to recommend how these countries can ensure their continued leading roles in software and information technology.

Alex Simonelis
Montreal

Author Responds

It's nice to know that people read and even agree with the "President's Letter" columns. Meanwhile, ACM and several other computing societies are working with the Computing Research Association to form two task forces: one on computing research funding, led by Ed Lazowska of the University of Washington, and another on improving the image of computing, led by Rick Rashid of Microsoft. Each will make recommendations on IT leadership to governments in North America, as well as to those in the rest of the world.

David Patterson
President of the ACM
Berkeley, CA


Disband the Hacker Posse

The "News Track" item about a U.S. "Hacker Posse" (July 2005) filled me with concern. Hiring computer security experts, including former hackers, to defend U.S. systems is fine, but hiring international hackers to carry out cyberwarfare is a horrible idea in many ways:

The U.S. military touts its hackers as "the best in the world." But the world is populated by innumerable hackers, and it is wrong to believe the U.S. military has command of the most capable ones or, more important, the only capable ones. Most of the viruses and worms that cause serious damage around the world are created by young hackers in countries other than the U.S.

Why would the military assume that hackers recruited from around the world would attack only systems it wants them to attack? Many capable hackers around the world would be quite happy to gain access to skills and tools (and funds) that would enhance their hacking abilities. Once they had them, they'd be just as likely to use them against U.S. systems as against the targets they've been assigned by the U.S. military. They might remain "on our side" for a while, then later—after, say, the U.S. drops them from its payroll—turn against us.

Even if the so-called "elite hackers" recruited into the brigade are all U.S. citizens, why would the U.S. military think it can control them? Hackers are well known for their strong anti-authoritarian beliefs and contrarian behavior. They are "rogue agents" almost by definition. Just like international hackers, American hackers would probably use any tools provided to them against U.S. systems, as well as against their assigned targets.

Creating such a force would cause other nations to do the same, possibly producing a hacker arms race. But training and arming hackers is not like caching nuclear or conventional weapons, which remain in their caches unless military commanders order them used. Training hackers is more like arming and training militias in Central America; more like giving AK-47s to angry young men in Kosovo or to child soldiers in the Congo. Those who have been armed and trained run wild, pillaging and looting, going after anyone they feel like. The victims pile up, including more innocents than intended targets. With hackers, the damage won't be just local but global.

Moreover, attacks by the "posse" would likely violate international treaties, trade agreements, and laws.

Jeff Johnson
Computer Professionals for Social Responsibility, San Francisco, CA



Please address all Forum correspondence to the Editor, Communications, 1515 Broadway, New York, NY 10036; email: [email protected].


©2005 ACM  0001-0782/05/1000  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
