The software-design community is split over the best approach to distributing programs across multicore architectures, and most programming languages were designed on the assumption that a single processor would work through the code sequentially. "The challenge is that we have not, in general, designed our applications to express parallelism," says Intel's James Reinders. He notes that parallel programming demands attention to two concerns: decomposing the problem so that it can run in multiple parallel chunks, and achieving scalability.
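To make those two concerns concrete, here is a minimal C/pthreads sketch (the array, its size, and the thread count are invented for illustration, not taken from the article) that decomposes a summation into per-thread chunks; scalability is then a question of how well the speedup tracks NTHREADS as cores are added.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4            /* raise this on wider machines */

    static double data[N];
    static double partial[NTHREADS];

    /* Each worker sums one contiguous chunk of the array. */
    static void *sum_chunk(void *arg) {
        long id = (long)arg;
        long lo = id * (N / NTHREADS);
        long hi = (id == NTHREADS - 1) ? N : lo + N / NTHREADS;
        double s = 0.0;
        for (long i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;          /* private slot, so no locking needed */
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t tid[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];  /* combine the partial results */
        }
        printf("sum = %f\n", total);
        return 0;
    }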
The Defense Advanced Research Projects Agency (DARPA) is funding the development of new programming languages through its High Productivity Computing Systems program. The languages use the Partitioned Global Address Space (PGAS) model, which lets multiple processors share a global pool of memory while allowing the programmer to keep individual threads in specified logical partitions, so each thread stays as close to its data as possible and can exploit the performance benefits of locality.
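The abstract does not quote any of the languages themselves, so the following is a hypothetical plain-C/pthreads approximation of the PGAS idea: one globally addressable array, logically partitioned one partition per thread, where the bulk of the work stays in the local partition and a cross-partition access remains an ordinary array index, just one that a PGAS programmer tries to minimize.

    #include <pthread.h>
    #include <stdio.h>

    #define NPART 4                   /* logical partitions, one per thread */
    #define PARTSZ 256
    #define N (NPART * PARTSZ)

    static double global[N];          /* one globally addressable array */
    static pthread_barrier_t phase;   /* separates local writes from remote reads */

    static void *worker(void *arg) {
        long me = (long)arg;
        long lo = me * PARTSZ, hi = lo + PARTSZ;

        /* Phase 1: work only on the local partition (cheap in a real
           PGAS runtime because this data sits near the thread). */
        for (long i = lo; i < hi; i++)
            global[i] = (double)me;

        pthread_barrier_wait(&phase);

        /* Phase 2: a "remote" read is still just an ordinary index into
           the same space. It is legal but costlier, so PGAS code keeps
           such accesses rare. */
        double edge = global[hi % N];
        printf("thread %ld sees neighbor value %.0f\n", me, edge);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NPART];
        pthread_barrier_init(&phase, NULL, NPART);
        for (long t = 0; t < NPART; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        for (long t = 0; t < NPART; t++)
            pthread_join(tid[t], NULL);
        pthread_barrier_destroy(&phase);
        return 0;
    }

Real PGAS languages such as Unified Parallel C, and the HPCS languages Chapel and X10, make the local/remote distinction part of the language and runtime rather than a convention, as it is here.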
Reinders contends that programmers would be better served by extending commonly used languages than by building new parallel-specific languages. He says the DARPA-funded languages would be too complicated for programmers to learn, and stresses that "people with legacy code need tools that have strong attention to the languages they've written and give them an incremental approach to add parallelism."
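OpenMP, which the abstract does not name, is one widely used example of this incremental style in C: a single directive parallelizes an existing serial loop, and the legacy code is otherwise untouched. A rough sketch:

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        for (long i = 0; i < N; i++)
            b[i] = (double)i;

        /* The loop below is unchanged legacy code; the pragma is the
           entire "incremental" change. Built without OpenMP support
           (e.g., without gcc's -fopenmp flag), the directive is
           ignored and the loop simply runs serially. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = 2.0 * b[i];

        printf("a[N-1] = %f\n", a[N - 1]);
        return 0;
    }

Intel's Threading Building Blocks library takes a similar extend-existing-C++ approach.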
From Government Computer News
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA