By Jamie Beckett, Jan. 2006
Soaring demand for information technology, increasingly
powerful microprocessors and the growing popularity of
compact blade servers mean that data centers are more
densely packed than ever – and that means a tremendous
demand for cooling.
There are two primary ways to reduce the costs of cooling
data centers: Design more energy-efficient data centers
and design machines that generate less heat. HP Labs is
developing both.
"We believe in looking at energy-aware computing holistically – from
the chips to the servers to the data centers to the services
that run in the data centers," says John Sontag, director of the lab's infrastructure virtualization and data center architecture research.
This month, HP is announcing several new products and services,
including Data Center Thermal Assessment Services, that
grew in part from these research efforts.
Researchers are now pursuing new frontiers -- designing
hardware that consumes less energy, deploying computing
workloads in a more energy-efficient manner, developing
systems for dynamically allocating cooling resources and
creating sophisticated sensing systems to control cooling.
The goal: to provide better energy efficiency, increase
data center uptime and allow data centers to operate
at higher power densities so users can get the most out
of their IT investments.
"The key technical challenge of an always-on compute utility
is the management of energy as a resource," says Chandrakant
Patel, who leads the HP Labs component of the "Cool
Team," a network of technologists across HP who are
developing energy-management technologies.
Partha Ranganathan is focused on the subtraction side
of the energy equation. He's investigating technologies
that will squeeze inefficiencies out of current systems
to reduce the amount of heat they generate.
One approach involves introducing heterogeneity. Rather
than using identical processors on multi-core chips, for
example, Ranganathan says his research has shown that mixing
different processors on a chip can reduce energy consumption
by 40 percent with almost no effect on performance. (See
related technical paper, Power-aware computing:
Heterogeneous Chip Multiprocessors.)
How so? By assigning each task to whichever processor can
perform it most efficiently.
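To picture the idea, here is a rough sketch of energy-aware task placement on a mixed chip. The core profiles, power figures and task sizes are invented for illustration, not HP's actual designs:

```python
# Illustrative sketch of energy-aware task placement on a heterogeneous
# multi-core chip. Core profiles and task demands are made-up numbers.

CORES = {
    "big":    {"perf": 4.0, "watts": 10.0},  # fast, power-hungry core
    "little": {"perf": 1.0, "watts": 1.5},   # slower, efficient core
}

def energy_for(task_work, core):
    """Energy (joules) to finish a task on a core: time * power."""
    time_s = task_work / core["perf"]
    return time_s * core["watts"]

def pick_core(task_work, deadline_s):
    """Choose the core that meets the deadline with the least energy."""
    feasible = [
        (name, energy_for(task_work, c))
        for name, c in CORES.items()
        if task_work / c["perf"] <= deadline_s
    ]
    # Fall back to the fastest core if nothing meets the deadline.
    if not feasible:
        return max(CORES, key=lambda n: CORES[n]["perf"])
    return min(feasible, key=lambda f: f[1])[0]

# A light, latency-tolerant task lands on the efficient core;
# a heavy, urgent one lands on the fast core.
print(pick_core(task_work=2.0, deadline_s=5.0))   # -> little
print(pick_core(task_work=20.0, deadline_s=6.0))  # -> big
```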
Similar principles apply to individual systems in a blade
server, he says.
Ranganathan and his team have also developed an algorithm
for workload placement and resource provisioning that,
in essence, directs the most heat-intensive workloads to
the coolest location in the data center.
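A minimal sketch of that placement policy, assuming each rack reports an inlet temperature and each job carries an estimated heat output (the rack names, temperatures and wattages below are hypothetical):

```python
# Hypothetical sketch of thermally aware workload placement:
# the most heat-intensive jobs go to the coolest racks.

racks = {"rack-A": 22.5, "rack-B": 27.0, "rack-C": 24.0}  # inlet temp, deg C
jobs = {"render": 800, "backup": 150, "analytics": 500}   # est. heat, watts

def place(jobs, racks):
    """Greedy pairing: largest heat load -> coolest available rack."""
    cool_first = sorted(racks, key=racks.get)              # coolest rack first
    hot_first = sorted(jobs, key=jobs.get, reverse=True)   # hottest job first
    return dict(zip(hot_first, cool_first))

print(place(jobs, racks))
# {'render': 'rack-A', 'analytics': 'rack-C', 'backup': 'rack-B'}
```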
In addition, researchers have prototyped a system that
can operate with 20 to 30 percent less cooling by managing
a heat "budget" across an entire collection of
systems -- allocating resources according to the urgency of
a job, its importance to the organization, service-level
agreements or other factors.
The prototype works in much the way any other budget does,
Ranganathan says. "My manager knows that potentially,
everyone on his team could order a new PC in a quarter,
but he does not plan his purchase budget for that worst
case. He just plans it for a likely scenario based on past
experience. You should be able to plan for heat in the
same way."
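Read as code, the budget analogy might look something like this sketch, where the budget is sized for the likely case rather than the worst case and higher-priority jobs are served first when requests exceed it (all wattages and priorities are invented):

```python
# Illustrative heat-budget sketch: the facility budget covers the likely
# aggregate load, not every job at peak, and higher-priority jobs are
# funded first when requests exceed it. Numbers are hypothetical.

HEAT_BUDGET_W = 6000  # planned for the typical case, not the worst case

requests = [
    # (job, requested watts, priority: higher = more important/urgent)
    ("billing",    2500, 3),
    ("analytics",  3000, 2),
    ("test-batch", 2000, 1),
]

def allocate(requests, budget):
    """Grant full requests in priority order; throttle whatever is left."""
    grants = {}
    remaining = budget
    for job, watts, _prio in sorted(requests, key=lambda r: -r[2]):
        grant = min(watts, remaining)
        grants[job] = grant
        remaining -= grant
    return grants

print(allocate(requests, HEAT_BUDGET_W))
# {'billing': 2500, 'analytics': 3000, 'test-batch': 500}
```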
Chandrakant Patel and his team are focused on better managing
what heat a data center generates. They've designed a dynamic
thermal management system aimed at more closely monitoring
and controlling temperature in a data center.
Traditional systems measure data center temperature at
the hot-air return of air conditioning units instead of at
the heat source – the racks. Because controls are
imprecise, these data centers are often operated at less
than maximum capacity to avoid overheating. That's an
expensive proposition.
The researchers' solution uses a distributed sensor network
attached to standard racks, providing a direct measurement
of the environment where it is most useful. Experiments
to compare the HP Labs solution with a conventional system
show potential savings of more than 50 percent in cooling
costs.
"We're putting the control point where it should be," says
Cullen Bash, a "Cool Team" researcher in HP Labs. "And
instead of relying on a single sensor, we're controlling
cooling based on many sensors."
HP Labs' dynamic smart cooling technology processes sensor
data to determine how best to allocate cooling resources
to maintain specified rack temperatures.
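In rough terms, the control loop could be sketched as follows, with many rack-level sensors driving the cooling output instead of a single return-air reading (the sensor values, setpoint and gain are assumptions, not HP's implementation):

```python
# Rough sketch of a sensor-driven cooling control loop: many rack inlet
# sensors, with control based on the hottest reading near the heat source.
# Sensor names, setpoint and gain are illustrative assumptions.

SETPOINT_C = 25.0   # target rack inlet temperature
GAIN = 0.05         # proportional gain (cooling fraction per deg C of error)

def read_rack_sensors():
    """Stand-in for a distributed sensor network attached to the racks."""
    return {"rack-01": 24.2, "rack-02": 26.8, "rack-03": 25.1}

def control_step(current_cooling):
    """Adjust cooling output based on the hottest rack inlet reading."""
    readings = read_rack_sensors()
    hottest = max(readings.values())
    error = hottest - SETPOINT_C            # positive when racks run hot
    new_cooling = current_cooling + GAIN * error
    return max(0.0, min(1.0, new_cooling))  # clamp to 0..100% output

cooling = 0.6
cooling = control_step(cooling)
print(f"cooling output: {cooling:.2f}")  # nudges up because rack-02 is warm
```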
"It's not just that we can save energy. By monitoring temperature
close to the heat source, we can react a lot faster," Bash
says.
That adds up to increased availability for computing systems.
Not only do system uptimes increase, but IT managers can
run equipment at higher power densities, which allows them
to get more use out of existing machines. Bottom line:
Businesses can get the most out of their IT investments.
Energy-aware computing is not a new problem for HP Labs,
which has been working in this area since the early 1990s.
In the process, its researchers have found some novel ways to use old
technologies -- including HP's classic inkjet technology,
which they put to use for targeted spray cooling of microprocessors.
Working with engineers in HP's printing and imaging group
and elsewhere in the company, the researchers reconfigured
the inkjet head to spray tiny droplets of dielectric liquid
coolant instead of ink.
Later, researchers developed technology that became
the basis for HP's recently announced Data Center Thermal
Assessment Services. This service, already in use by several
customers, assesses the unique thermal conditions in a
data center and develops recommendations for better cooling.
A follow-on technology called dynamic smart cooling is
used in HP Labs' Palo Alto, CA, data center. In the lab's
100-rack data center, dynamic smart cooling has generated
savings of 60 percent.
The next step for researchers is to merge all their work into
a single integrated system.
"What we're doing is using the most efficient ways of distributing
computing resources," says Patel, "then threading
those together in a stack in such a way that we're minimizing
energy consumption."
Jamie Beckett is managing editor of this Web site and a former reporter and editor at the San Francisco Chronicle.