"Lock-free" parallel algorithms, which developers use to write programs for multicore chips, guarantee only that some core will make progress at any given time; to overcome this theoretical limitation, computer scientists have devised "wait-free" algorithms, which guarantee that every core will make progress within a bounded number of steps. However, wait-free algorithms are very complicated to design.
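To make the distinction concrete, here is a minimal, hypothetical sketch (not from the paper) of a lock-free counter in Java. The compare-and-set retry loop ensures that whenever threads collide, at least one succeeds, so the system as a whole always advances; but any single thread could, in principle, keep losing the race, which is precisely the gap that wait-freedom closes.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative lock-free counter (assumed example, not code from the paper).
class LockFreeCounter {
    private final AtomicLong value = new AtomicLong(0);

    long increment() {
        while (true) {
            long current = value.get();
            long next = current + 1;
            // If the CAS fails, another thread won the race and updated the
            // value first; that thread's success is the system-wide progress
            // that makes the algorithm lock-free.
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // Under an adversarial scheduler this loop could repeat forever for
            // one unlucky thread; under the stochastic schedulers the paper
            // analyzes, the expected number of retries stays bounded.
        }
    }

    long get() {
        return value.get();
    }
}
```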
Now, in a paper to be presented at ACM's Symposium on Theory of Computing in May, researchers at the Massachusetts Institute of Technology (MIT), the Technion–Israel Institute of Technology, and Microsoft Research will demonstrate an analytic technique suggesting that, in a wide range of real-world scenarios, lock-free algorithms can give wait-free performance.
"In practice, programmers program as if everything is wait-free," says MIT professor Nir Shavit. "What we are exposing in the paper is this little-talked-about intuition that programmers have about how [chip] schedulers work, that they are actually benevolent," Shavit says.
The researchers say the chip's performance as a whole can be characterized more simply than that of its individual cores, because the scheduler's allocation of the different chunks of code executed in parallel is effectively symmetric across cores.
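The following toy Java simulation (an assumption for illustration, not the paper's model) shows what that symmetry looks like: if execution steps are handed out uniformly at random, every thread receives roughly the same share, so no thread starves in practice.

```java
import java.util.Random;

// Toy sketch of a symmetric, stochastic scheduler handing steps to threads.
public class SchedulerSketch {
    public static void main(String[] args) {
        int threads = 8;
        int totalSteps = 1_000_000;
        long[] stepsPerThread = new long[threads];
        Random rng = new Random(42);

        // Uniform-random allocation of execution steps across threads,
        // standing in for the benevolent schedulers the article describes.
        for (int i = 0; i < totalSteps; i++) {
            stepsPerThread[rng.nextInt(threads)]++;
        }

        for (int t = 0; t < threads; t++) {
            System.out.printf("thread %d: %d steps (~%.1f%%)%n",
                    t, stepsPerThread[t], 100.0 * stepsPerThread[t] / totalSteps);
        }
    }
}
```

Each thread ends up with close to one-eighth of the steps, which is the kind of aggregate regularity that lets the whole chip be analyzed more simply than any single core.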
From MIT News
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA