The information technology industry is in the vanguard of "going green." Projects such as a $100 million hydro-powered high-performance data center planned for Holyoke, MA, and green corporate entities such as Google Energy, the search giant's new electrical power subsidiary, are high-profile examples of IT's big moves toward reducing the greenhouse gas emissions attributable to computing.
However, the direct benefits of such projects are likely to be limited; most users in areas supplied by coal-, oil-, or natural gas-fired power plants would find it difficult to switch to a fully sustainable supply source.
These market dynamics have not been lost on government research directors. Agencies such as the U.S. National Science Foundation (NSF) have begun encouraging just the sort of research into component-level power management that might bring significant energy savings and reduced climatic impact to end users everywhere without sacrificing computational performance.
In fact, the NSF has held two workshops on the newly emphasized science of power management, one in 2009 and one in 2010. Krishna Kant, a program director in the Computer Systems Research (CSR) cluster at the NSF, says the power management project is part of the NSF's larger Science, Engineering, and Education for Sustainability (SEES) investment area.
"There are some fundamental questions that haven't been answered, and NSF funding might help answer them," Kant says. "These have been lingering for quite some time. For instance, when you look at the question of how much energy or power you really need to get some computation done, there has been some research, but it tends to be at a very, very abstract level to the extent it's not very useful."
However abstract the state of some of the research into power management might be, basic computer science has given the IT industry a head start over other industries in addressing power issues. Whereas an auto manufacturer could continue to make gas-guzzling vehicles as long as a market supported such a strategy, two factors in particular have focused microprocessor designers' efforts on the imperatives of power efficiency.
One factor is thermal limitation: as each succeeding generation of microprocessors packed roughly twice as much computing power into the same area, heat became a hard constraint on design. The other is the proliferation of laptops and mobile computing devices, which demand advanced power management features to extend battery life. Kirk Cameron, associate professor of computer science at Virginia Polytechnic Institute, says this shift in product emphasis has given engineers working on power management theories more tools with which to work on the central processing unit (CPU); because chip manufacturers design one family for numerous platforms based on overall market demand, these chips are also installed in desktop machines and servers. Examples of these tools include application programming interfaces such as Intel's SpeedStep and AMD's PowerNow!, which allow third-party software to dynamically raise or lower the processor's clock frequency and operating voltage depending on the computational load at any given time.
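On Linux, this hardware capability is typically exposed to user space through the kernel's cpufreq subsystem. Below is a minimal sketch, assuming a machine that exposes the standard cpufreq sysfs files for cpu0; exact paths and governor names vary by kernel and hardware, and writing a governor requires root privileges.

```python
# A minimal sketch of querying and steering DVFS through Linux's cpufreq
# sysfs interface; assumes a kernel and CPU that expose these files.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    return (CPUFREQ / name).read_text().strip()

def set_governor(governor: str) -> None:
    # e.g., "performance" or "powersave"; requires root privileges
    (CPUFREQ / "scaling_governor").write_text(governor + "\n")

if __name__ == "__main__":
    print("hardware range (kHz):", read("cpuinfo_min_freq"), "-", read("cpuinfo_max_freq"))
    print("current governor:", read("scaling_governor"))
```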
However, the default power management schemes supported by current operating systems, which allow users to specify either a high-performance or a battery-maximizing mode on laptops, for instance, have numerous handicaps, chief among them their static nature; the fact that they must be manually configured limits their adoption.
Some power-management products, incubated by university researchers, are already available to dynamically manage power within a computer's CPU. Cameron is also the CEO of Miserware, a startup funded in part by an NSF Small Business Innovation Research grant. Miserware produces intelligent power-management applications (Granola for consumer PCs and Miserware ES for servers) that use predictive algorithms to dynamically manage frequency and voltage scaling. Company benchmarks claim that users can reduce power usage by 2% to 18%, depending on the application in use; the best savings come from scaling down power during low-intensity activities.
Granola was launched on Earth Day last year and has been downloaded 100,000 times. Cameron says the dynamic voltage and frequency scaling (DVFS) technology is very stable, available on most systems, and "kind of the low-hanging fruit" in power management.
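Miserware's algorithms are proprietary, but the general shape of a predictive DVFS governor can be sketched in a few lines: forecast the next interval's utilization from recent history, then choose the lowest frequency that still leaves headroom. The frequency steps, smoothing factor, and threshold below are hypothetical.

```python
# A minimal sketch of a predictive DVFS policy (an illustration of the
# general technique, not Miserware's actual algorithm).
FREQS_MHZ = [800, 1600, 2400]   # hypothetical available frequency steps
TARGET_UTIL = 0.8               # keep predicted utilization below this

def next_freq(util_history, alpha=0.5):
    """Choose a frequency from an exponentially weighted utilization forecast."""
    predicted = util_history[0]
    for u in util_history[1:]:
        predicted = alpha * u + (1 - alpha) * predicted
    demand_mhz = predicted * FREQS_MHZ[-1]  # demand measured at top speed
    for f in FREQS_MHZ:
        if demand_mhz <= TARGET_UTIL * f:
            return f
    return FREQS_MHZ[-1]

print(next_freq([0.20, 0.25, 0.30]))  # light load -> 800
print(next_freq([0.70, 0.80, 0.90]))  # heavy load -> 2400
```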
Susanne Albers, professor of computer science at Humboldt University of Berlin, believes speed scaling will be a standard approach to power management for some time. "I am confident that dynamic speed scaling is an approach with a long-term perspective," she says. "In standard office environments the technique is maybe not so important. However, data and computing centers, having high energy consumption, can greatly benefit from it."
Ironically, although the DVFS technology is currently the most ubiquitous power management solution for processors, Cameron and other researchers say new fundamentals of computing architecture will mandate wholly different solutions sooner rather than later.
The onset of mass production of multicore processors, for example, is forcing researchers to begin practically anew in exploring speed scaling approaches.
"Generally speaking, there exists a good understanding of speed scaling in single processor systems, but there are still many challenging open questions in the area of multicore architectures," Albers notes.
"The new technologies bring new algorithmic issues," says Kirk Pruhs, professor of computer science at the University of Pittsburgh, and an organizer of both NSF workshops. For instance, if a heterogeneous-cored processor is programmed correctly, the utility of using frequency and voltage scaling at all might be mootapplications needing lower power can be sent to a slower core.
However, Pruhs says programming these will be "much more algorithmically difficult for the operating system to manage, and the same thing happens in memories. The fact everything is changing means you have to go back and reexamine all the algorithmic issues that arise."
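One hedged sketch of the placement decision Pruhs describes, with hypothetical core parameters: assign each task to the lowest-power core that can still meet its deadline, falling back to the fastest core when nothing qualifies.

```python
# A minimal sketch of heterogeneous-core task placement: route light work to
# a slow, efficient core and heavy work to a fast one. Core speeds and power
# draws are hypothetical.
CORES = [
    {"name": "little", "speed": 1.0, "watts": 0.5},  # slow, efficient core
    {"name": "big",    "speed": 3.0, "watts": 4.0},  # fast, power-hungry core
]

def place(task_work, deadline):
    """Choose the lowest-power core that still meets the task's deadline."""
    feasible = [c for c in CORES if task_work / c["speed"] <= deadline]
    if not feasible:
        return max(CORES, key=lambda c: c["speed"])  # best effort
    return min(feasible, key=lambda c: c["watts"])

print(place(task_work=1.0, deadline=2.0)["name"])  # light task -> "little"
print(place(task_work=5.0, deadline=2.0)["name"])  # heavy task -> "big"
```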
In the case of power management in a parallel environment, Cameron says his research has shown that one cannot take the principles of Amdahl's Law for parallelization (which states that a parallelized program's speedup is limited by the fraction of its work that must run serially) and arrive at a correct prediction of power savings simply by accounting for the number of processors running a given application.
"In Amdahl's Law, you have one thing that changes, the number of processors," Cameron says. "In our generalization, we ask what if you have two observable changes? You might think you could apply Amdahl's Law in two dimensions, but there are interactive effects between the two. In isolation, you could measure both of those using Amdahl's Law, but it turns out there is a third term, of the combined effects working in conjunction, and that gets missed if you apply them one at a time."
In the long term, power management may borrow from sensor networks and embedded systems, which have dealt extensively with power constraints. Both David Culler, professor of computer science at the University of California, Berkeley, and Bernard Meyerson, vice president of innovation at IBM, cite the disproportionately large power demands of processors doing little or no work as an area where great savings may be realized.
Culler says processor design might take a lesson from network sensor design in principle. Measuring performance during active processing "talk" time is misplaced, he says. Instead, efficiency must be introduced while awaiting instruction: "talk is cheap, listening is hard."
Culler says theories behind effectively shutting down idle processors ("doing nothing well") essentially fall into two basic camps that "hearken back to dark ages": the principles following Token Ring and other time-division multiplexing technologies, or a Carrier Sense Multiple Access approach akin to Ethernet, in which nodes about to transmit first "sense" whether or not the network is idle before proceeding.
He says this principle can apply to any scenario, be it a Wi-Fi network or a bus protocol on a motherboard. "Doing nothing well and being able to respond to asynchronous events anyway is the key to power proportionality, and can apply across the board," says Culler.
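The software half of "doing nothing well" is to block in the operating system until an asynchronous event arrives, rather than spinning in a polling loop; blocking lets the processor drop into a low-power idle state. A minimal sketch, using a hypothetical local UDP endpoint:

```python
# A minimal sketch of event-driven idling: sleep in select() until a
# datagram arrives or a timeout passes, with no busy-waiting. The socket
# address is hypothetical.
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))  # hypothetical local endpoint

while True:
    # Block until an event arrives or 5 s pass; the CPU can idle meanwhile.
    readable, _, _ = select.select([sock], [], [], 5.0)
    if readable:
        data, addr = sock.recvfrom(4096)
        print(f"event from {addr}: {len(data)} bytes")
    # else: periodic low-cost housekeeping could go here
```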
Market demand for dynamically provisioned processors is still an unknown. Albers says processor-level power management is not particularly viewed as a critical issue among European users.
"Energy and environmental issues have always received considerable attention in Europe. However, the typical person is probably more concerned about energy consumption in his household and private car than about the consumption of his PC or laptop," Albers observes.
IBM has placed a bet on combining chip-level energy allotment with the network architectures of homes and offices. The company has introduced fabricating technology for dedicated power management chips that control power usage while they communicate wirelessly in real time with systems used to monitor smart buildings, energy grids, and transportation systems. The main function of power-management chips is to optimize power usage and serve as bridges so electricity can flow uninterrupted among systems and electronics that require varying levels of current.
Meyerson says that, while reducing battery usage on end-user devices may be sexy, "that's not the win for society. The win for society is when there's an area of a building and the sensors over a period of time crawl through all the data of the occupancy of all the offices, and they autonomically adjust for the fact this is Paris in August, and in Paris in August people just aren't showing up."
IBM estimates the new technology can cut manufacturing costs by about 20% while allowing for the integration of numerous functions, resulting in one chip where previously three or four were needed. Meyerson says the technology can work for any appropriate algorithm researchers can come up with.
"Discovery algorithms that can look ahead and be predictive instead of reactive can be incredibly important," he says. "What we are doing is ensuring that if they come up with a solution, there's a way to execute it in a single chip, in a very efficient, synergistic way. It is a real footrace to stay ahead of the energy demands of society and IT."
Further Reading
Albers, S.
Energy-efficient algorithms, Communications of the ACM 53, 5, May 2010.
Bansal, N., Kimbrel, T., and Pruhs, K.
Speed scaling to manage energy and temperature, Journal of the ACM 54, 1, March 2007.
Ge, R. and Cameron, K.W.
Power-aware speedup, Proceedings of the IEEE International Parallel and Distributed Processing Symposium, Long Beach, CA, March 26–30, 2007.
Gupta, R., Irani, S., and Shukla, S.
Formal methods for dynamic power management, Proceedings of the International Conference on Computer-Aided Design, San Jose, CA, Nov. 11–13, 2003.
Yao, F., Demers, A., and Shenker, S.
A scheduling model for reduced CPU energy, Proceedings of the 36th IEEE Symposium on Foundations of Computer Science, Milwaukee, WI, Oct. 23–25, 1995.
Figure. An intelligent power-management application, Granola uses predictive algorithms to dynamically manage frequency and voltage scaling in the chips of consumer PCs.