
Communications of the ACM

The profession of IT

Locality and Professional Life


[Illustration: a puzzle in human form. Credit: Getty Images]

One Sunday morning nearly three decades ago, my wife Dorothy and I were walking along the Potomac River in Washington, D.C. I was considering a job change and was concerned about whether my new responsibilities would divert me from my aspiration that my work "make a mark." She asked what I meant by making a mark. That meant, I confided, that people would long remember my contribution by name. She said that if that is my philosophy of life, I am likely to be disappointed. She explained her philosophy, in which she does not have that concern. She sees herself as a cell in the large body of humanity past, present, and future. Her life purpose is to be a good cell. She embraces every project with care and excellence—to do the best possible job. In this way she will contribute to the health of the whole and have impact on the whole. It is not her purpose that her name be attached to anything she has contributed. When she is gone, her job is done and other cells will continue to serve the well-being of the whole. I asked her about the awards and recognitions she received for her work. She said she appreciated the honors, but it was never her objective or interest to win awards or be recognized. This conversation forever altered my thinking about the contributions I could make.

Think about this. If each of us is doing our job, being a good cell in the large body of all humanity, we keep our neighboring cells healthy and thereby contribute to the health of the whole. Our contribution flows through to the whole like a ripple in the river of humanity. Over a period of time, the ripple remains but any memory of me as the author is likely to disappear.

I could see her insight firsthand in my walks through Arlington National Cemetery. In every direction, fading into the distance, are lines of headstones, each labeled simply with the name and dates of a soldier. With only a few exceptions, that is all we know about them anymore. There would never be an answer to my question, "Who was that soldier?" Yet we know that collectively they made our country safe. Their combined ripples were a torrent etching an indelible mark on history.


Locality and Human Identities

The ideas of strong interactions with neighbors and loss of identity over time sound like locality in computing. Locality is the idea that, as a program executes, it confines its accesses to a small subset of its data for an extended period. In our professional work we do the same. For extended periods, we confine our interactions mainly to people in our immediate teams or communities, a small subset of all the people we could possibly interact with. This restriction to local neighborhoods is the spatial aspect of locality.

Our contributions begin in our local networks (neighborhoods) and spread out like ripples across many local networks in the conversations people have with people in other networks. Over time, our identity as the origin of ripples disappears. Only in rare cases does a name survive. This dying out of identity is the temporal aspect of locality.

In our profession, we are brought up with stories of great scientists, engineers, and leaders who are held up as role models. We develop desires to leave our own mark, meaning that our contribution, like theirs, is remembered with our name attached. The urge for fame seems to have grown stronger in recent times as the Internet reveals how tiny each of us is compared to all of humanity. More people recoil from a sense of insignificance by tracking followers on Twitter, likes of their social media posts, or citations of their published papers. Achieving some sort of fame seems to be a way of demonstrating that their lives have had meaning and impact. Yet locality tells us that any memory with our name on it is likely to be ephemeral. Most of us, as professionals, want our work to have an impact on the world. The locality principle offers some insights into how to do this.

To drive home these parallels between how our identities develop and how they are preserved, I would like to review the idea of locality as it evolved in computer science.


Locality and Computer Memories

Locality grew up in computer science beginning in the mid-1960s. It is the idea that as a program computes, it confines its memory accesses to relatively small localities—subsets of its data objects—for extended periods of time. We design operating systems to arrange each process's workspace so that its current locality is accessible in nearby, fast memory; other items can be farther away. When this is done well, the operating system and all its processes operate at peak efficiency.

Locality has two aspects, spatial and temporal. The spatial aspect is the localities themselves and possible connections between them. The temporal aspect is that the actions within a locality are good predictors of actions immediately in the future and poor predictors of actions far in the future.


Locality teaches us that being human is to serve our communities and look for reward from our contribution rather than long-term recognition.


The first glimmers of locality were seen by OS engineers trying to figure out how to control virtual memory, which was introduced in 1962 by Tom Kilburn at the University of Manchester. Virtual memory was seen as a tremendous breakthrough because, by automating data transfers between main and secondary memory, it doubled or tripled programmer productivity and significantly reduced the errors that resulted from manually planning page moves. The paging algorithm was the virtual memory component that determined which pages to evict from main memory. Early virtual memory was hampered by poor performance, traceable to poor paging algorithms. The first careful scientific study of paging algorithms was published by Les Belady of IBM in 1966. It revealed that paging algorithms that employed "use bits" to protect recently used pages from eviction performed better than the other algorithms. The LRU (least recently used) algorithm emerged as the most robust and the most likely to give good performance. Belady speculated that LRU worked because of a "locality principle": programs were likely to reuse pages they had used in the recent past.
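Belady's finding can be reproduced in a few lines. The sketch below is illustrative, not Belady's original experiment; the reference trace and frame count are invented for the demonstration. It simulates LRU and FIFO page replacement on a trace with strong locality and counts page faults:

```python
from collections import OrderedDict

def count_faults(trace, frames, policy):
    """Count page faults for 'lru' or 'fifo' replacement in a fixed-size memory."""
    mem = OrderedDict()  # keys ordered by recency (lru) or arrival (fifo)
    faults = 0
    for page in trace:
        if page in mem:
            if policy == "lru":
                mem.move_to_end(page)    # a hit refreshes recency: the "use bit" idea
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)  # evict the oldest entry
            mem[page] = True
    return faults

# A trace with strong locality: long runs over small page sets, then a phase change.
trace = [0, 1, 2, 0, 1, 2, 0, 1, 3, 0, 1, 3, 4, 5, 4, 5, 4, 5]
print(count_faults(trace, 3, "lru"), count_faults(trace, 3, "fifo"))  # 6 8
```

Because the trace reuses recently touched pages, protecting them from eviction (LRU) yields fewer faults than evicting by arrival order (FIFO).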

Contemporaneously with Belady, I was studying how an operating system running virtual memory ought to determine which pages to load: a process's working set. I had the insight that a program's working set could be measured by observing which pages it used in the immediate past. I defined the working set policy, which loaded processes' working sets and protected them from swapping. Unlike the previous paging policies, which were restricted to deciding the contents of the fixed memory space allocated to each process, the working set policy defined how to establish a dynamically varying multiprogramming partition and prevent it from thrashing.
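The working-set measure is simple enough to state in code. This minimal sketch is my own illustration (the trace and window size are invented): it computes W(t, τ), the set of distinct pages a program referenced in the backward window of length τ ending at time t:

```python
def working_set(trace, t, tau):
    """W(t, tau): the distinct pages referenced in the window (t - tau, t]."""
    start = max(0, t - tau)
    return set(trace[start:t])

# Two program phases: pages {0, 1, 2} early, pages {5, 6} later.
trace = [0, 1, 0, 1, 2, 2, 2, 5, 6, 5, 6, 5]
print(working_set(trace, 7, 4))   # {1, 2}  (mid first phase)
print(working_set(trace, 12, 4))  # {5, 6}  (after the phase change)
```

Loading exactly this set for each process lets the memory allocation track the program's phases, which is what allows the multiprogramming partition to vary dynamically rather than stay fixed.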

By 1970, locality was accepted as a key determinant of good virtual memory performance. It was also—mistakenly—seen as an artifact of how compilers arranged code and data on pages.

Over the next 40 years, designs to take advantage of locality spread well beyond virtual memory. These included caches in CPUs, devices, and the Internet; buffers between memory and I/O devices and networks; video cards; information encapsulation in program modules; accounting and event logs; most-recently-used lists of applications; Web browsers; search engines; video streaming; and edge caches in the Internet. Designs to harness locality are everywhere.1
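A familiar present-day instance is memoization, which harnesses the temporal locality of repeated subproblems. Python's standard library exposes this directly; the example is mine:

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # keep the most recently used results
def fib(n):
    """Naive double recursion made fast: repeated subproblems hit the cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed instantly; uncached, this recursion crawls
```

The decorator is itself an LRU policy: when the cache fills, the least recently used result is evicted, on the bet that recent subproblems are the ones most likely to recur.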


Locality and Human Thinking

The success of locality prompted investigations into what might be causing it to appear in program behavior. In 1976, Wayne Madison and Alan Batson demonstrated that locality was measurable in the source code of programs. They and many others attributed this to the way we humans think about problem solving. Techniques such as iterative loops, divide and conquer, and modularity all generate locality behavior. Locality is a reflection of human thought.

The algorithm is another reflection of human thought, intended to capture a procedure so that it can be followed by any other person or, in modern cases, by a computing machine. In 2010 Yuri Gurevich published a report seeking to answer the question, "What is an algorithm?"3 He was trying to find the smallest set of essential assumptions that make a procedure an algorithm. One of his findings was a bounded domain principle: an operation can only alter a finite, bounded region of the data structure. In other words, to qualify as an algorithm, a computational method must necessarily obey a locality principle.

Because machines are also the product of human thought, we might ask whether there is a locality principle embedded in the design of computers. There is. Each component of a machine interacts locally with a few other components by receiving inputs and generating outputs. This enables components to be very fast because they can fire without waiting for signals from distant components to propagate through the circuit. This unleashes the amazing speeds that enable computers to do many tasks humans cannot. The locality principle enables computing machines to be fast. It also deprives machines of the human capability to sense context. The brain, with its many intricate folds, can bring into physical proximity neurons that are otherwise distant from each other along its neural pathways. This may be a structural reason why the brain can recognize context but our computing machines cannot.

Not only does locality reflect human thought, it shapes human thought. Consider the brain's working memory, which holds memories of recent events. To be recalled later, short-term memories must be moved to the longer-term memory areas of the brain. Working memory is like a computer cache. This structure explains why multitasking is difficult for many people and inhibits their productivity. To switch to a new task, you must purge the working memory of the old task and load it for the new one.2,4 This context switch takes time and introduces errors when fragments of short-term memories are lost in the process. As in a computer, too much context switching causes performance loss because the cache must be purged and reloaded. If the demand for context switches is too high compared to your brain's capacity, your brain can thrash like a computer. You experience this as a state of overwhelm, in which your capacity can be greatly diminished.
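The analogy can even be simulated. The sketch below is an invented toy model, not a cognitive experiment: it interleaves round-robin "tasks," each cycling over its own pages, through one LRU-managed memory. When the combined working sets fit, the fault rate is low; when they do not, every task switch purges another task's pages and the fault rate jumps. That jump is thrashing:

```python
from collections import OrderedDict

def fault_rate(trace, frames):
    """Fraction of references that miss in an LRU-managed memory of given size."""
    mem, faults = OrderedDict(), 0
    for p in trace:
        if p in mem:
            mem.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults / len(trace)

def interleaved(tasks, pages_per_task, switches):
    """Round-robin task switching; each task sweeps its own pages twice per turn."""
    trace = []
    for s in range(switches):
        task = s % tasks
        trace += [(task, p) for p in range(pages_per_task)] * 2
    return trace

cache = 8  # holds two tasks' pages (4 each), but not four tasks' worth
print(fault_rate(interleaved(2, 4, 40), cache))  # low: working sets fit
print(fault_rate(interleaved(4, 4, 40), cache))  # high: constant reloading
```

With two tasks, everything fits and nearly every reference is a hit after warmup; with four, each returning task finds its pages evicted and must reload them, so half of all references fault no matter how long the run continues.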

Thus it seems that locality is bedrock for everything we know and think about computation. We cannot have algorithms without it. We cannot manage memory well without it. We do not multitask well because of it.


Conclusion

Given the limitations on our brains and on our interactions with other people, how can we make a difference? Start with our neighbors in our communities. We do not accomplish anything alone. A contribution spreads in the conversations, stories, and practices we share with each other. A contribution becomes greater when we mobilize our communities around it—they adopt its practice and become voices advocating it in new networks. This helps explain why individual names often disappear from contributions: the contributions were actually made by communities.

Locality teaches us that being human is to serve our communities and look for reward from our contribution rather than from long-term recognition. Instead of thinking we are the serfs of technologies that drive our conditions, consider that technologies can enable us to take care of our community's well being and to learn more about the human condition. The idea that technology reveals and enables the human condition is not the current common sense, although it was a commonly held view in previous eras. Perhaps the fears that computers will take away our humanity are overblown. Perhaps our pursuit of more effective human-centered computing—for example with data science, machine learning, and AI—will help us to become more fully human.

We do our best when we focus on what we can do together in our neighborhoods. If we are concerned about the credit and recognition for our work propagating through the human network, we are chasing a chimera. Our human identities are mostly local. We have no control over what happens at large distances (in space and time) from where we are in the human network. Locality empowers our neighborhoods. The ripples we create with our neighbors will likely travel far, but our names will not. And that is how we fulfill our purpose.


References

1. Denning, P. The locality principle. Commun. ACM 48, 7 (July 2005), 19–24.

2. Denning, P. Multitasking without thrashing. Commun. ACM 60, 9 (Sept. 2017), 32–34.

3. Gurevich, Y. What is an algorithm? In Proceedings of the 38th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM'12). M. Bieliková et al., Eds. Springer-Verlag, Berlin, Heidelberg, (2012), 31–42.

4. Van der Stigchel, S. Dangers of divided attention. American Scientist 109 (Jan.–Feb. 2021), 46–53.


Author

Peter J. Denning ([email protected]) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, CA, is Editor of ACM Ubiquity, and is a past president of ACM. The author's views expressed here are not necessarily those of his employer or the U.S. federal government.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.

