Smart Cooling


Research in Thermo-Mechanical Architecture



Welcome to the external web page of the Thermo-mechanical Architecture Research group.

We are a small team of researchers with expertise in heat transfer, fluid mechanics, thermo-mechanical physical design and system product design. We have extensive thermal modeling and metrology capabilities at the Hewlett-Packard Laboratories Palo Alto site.

objective - Our group is engaged in the following salient areas:
  • creation of a portfolio of cooling solutions for future computer systems
  • development of novel thermo-mechanical system designs for future computers and data centers
  • development of thermo-mechanical system architectures that will enable pervasive computing in the future

answering the cooling challenge - In the last decade, microprocessor power dissipation has increased by a factor of ten. The operating frequency of CMOS devices has increased tenfold. While the input voltage and capacitance of devices have decreased, the number of devices on a typical microprocessor die has increased by an order of magnitude. Moreover, device miniaturization has moved cache that was once spread across multiple chips onto the microprocessor die itself. This has resulted in high CPU core power density - e.g. 50% of a 20 mm by 20 mm microprocessor die may contain the CPU core, with the rest being cache. The total power dissipation from such a microprocessor has reached 100 W, and the power density is estimated to be 40 W/cm2. Extrapolating the changes in microprocessor organization and device miniaturization, one can project future power densities of 200 W/cm2! This increase in total power, and to a greater extent in power density, has created the need for very low thermal resistance cooling solutions at the device and system levels.
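As a rough check on those numbers, here is a minimal sketch in Python; the share of the 100 W dissipated in the CPU core (as opposed to the cache) is an assumption chosen to reproduce the 40 W/cm2 estimate:

  # Rough reproduction of the CPU core power density estimate cited above.
  # The split of the 100 W between core and cache is an assumption.
  die_side_cm = 2.0            # 20 mm x 20 mm die
  core_area_fraction = 0.5     # half the die area is CPU core (from the text)
  total_power_w = 100.0        # total microprocessor power (from the text)
  core_power_fraction = 0.8    # assumed share of power dissipated in the core

  die_area_cm2 = die_side_cm ** 2                       # 4 cm^2
  core_area_cm2 = core_area_fraction * die_area_cm2     # 2 cm^2
  core_power_density = core_power_fraction * total_power_w / core_area_cm2
  print(f"Core power density: {core_power_density:.0f} W/cm^2")   # ~40 W/cm^2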

System-level power, especially in high-end and mid-range computer servers, has undergone a similar increase. In high compute density data centers populated with rack-mounted slim servers, one can pack 15 kW of power into a standard rack. In terms of heat load, the data center is now akin to a symphony hall with 150 people per seat. At such power densities, an energy balance alone does not suffice for sizing the air conditioning capacity of a data center. One has to model the airflow and temperature distribution to assure acceptable inlet air temperatures to the systems. The data center is the computer.
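The energy balance referred to above is simply the first law applied to a rack: the heat load Q must be carried away by the air stream, Q = m_dot * cp * dT. A minimal sketch for one 15 kW rack, where the inlet-to-exhaust temperature rise and the air properties are assumed values:

  # Energy balance for a single 15 kW rack: Q = m_dot * cp * dT.
  # This sizes the total airflow, but says nothing about how the air actually
  # distributes through the room - hence the need for flow and temperature modeling.
  rack_power_w = 15_000.0      # heat load of one rack (from the text)
  delta_t_k = 15.0             # assumed inlet-to-exhaust air temperature rise, K
  cp_air = 1005.0              # specific heat of air, J/(kg*K)
  rho_air = 1.2                # air density, kg/m^3

  m_dot = rack_power_w / (cp_air * delta_t_k)   # ~1.0 kg/s of air
  volume_flow = m_dot / rho_air                 # ~0.8 m^3/s
  print(f"Required airflow: {m_dot:.2f} kg/s ({volume_flow * 2118.88:.0f} CFM)")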

answering the energy consumption challenge - In the not-too-distant future, billions of people, places and things could all be connected to each other and to useful services through the Internet. Processing and storage will be accessible via a utility where customers pay for what they need, when they need it. This processing and storage utility will become as ubiquitous as electrical and water utilities are today. Computers, storage and networking devices, deployed in very high density configurations in data centers, will make up this information processing and storage utility. These data centers, often referred to as the "steel mills of tomorrow", will require significant energy for powering and cooling the compute, networking and storage devices - 15 to 50 MW of power per data center in a given metropolis. The cooling infrastructure alone will consume 5 MW of power in a 15 MW data center.
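One way to see how cooling can consume a third of the facility's power, as a minimal sketch: if roughly 10 MW of the 15 MW feeds the compute, networking and storage hardware, a cooling plant removing that heat at an assumed effective coefficient of performance (COP) of about 2 draws about 5 MW. Both the 10 MW IT load split and the COP value are assumptions for illustration:

  # Relating cooling power draw to the IT load through an assumed effective
  # coefficient of performance (COP) of the cooling infrastructure.
  it_load_mw = 10.0        # assumed compute/networking/storage load
  cooling_cop = 2.0        # assumed effective COP: heat removed per unit of work input

  cooling_power_mw = it_load_mw / cooling_cop        # ~5 MW
  total_facility_mw = it_load_mw + cooling_power_mw  # ~15 MW
  print(f"Cooling draws {cooling_power_mw:.0f} MW of a {total_facility_mw:.0f} MW facility")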

Thus, building this computing and storage utility will require:
  • high-density deployment of computers
  • high energy utilization - 15 to 50 MW per data center

The high energy consumption presents the following opportunities:

  • energy savings through judicious management of data center cooling
  • utilization of low-grade waste heat energy (exhaust air) from computers
  • utilization of distributed and alternative energy sources, such as fuel cells, to power computers instead of relying solely on the energy utilities in a given locality

At Hewlett-Packard Laboratories, we are working on the opportunities presented by this scenario of the future. We are devising an energy-efficient, intelligent cooling approach for data centers that will reduce the energy consumption of the cooling portion of a data center by 25%. This translates to roughly $1 million in savings per annum for a 15 MW data center at $100 per MWh. We call this proposition Smart Cooling of Data Centers. Smart cooling is realized through computer modeling, metrology and intelligent control of the air conditioning resources. While we are working on Smart Cooling, we are also seeking to collaborate on the utilization of low-grade energy and on the use of distributed energy sources to power a data center. We thank you for visiting.
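A back-of-the-envelope check of that savings figure, assuming the 5 MW cooling load cited above runs year-round and the 25% reduction applies uniformly:

  # Annual savings from a 25% reduction in cooling energy for a 15 MW data center.
  # Year-round operation and a uniform reduction are assumptions.
  cooling_power_mw = 5.0     # cooling load of a 15 MW data center (from the text)
  reduction = 0.25           # targeted reduction in cooling energy
  hours_per_year = 8760      # continuous operation (assumption)
  price_per_mwh = 100.0      # electricity price, $/MWh (from the text)

  energy_saved_mwh = cooling_power_mw * reduction * hours_per_year   # ~10,950 MWh/yr
  annual_savings = energy_saved_mwh * price_per_mwh                  # ~$1.1 million
  print(f"Annual savings: ${annual_savings:,.0f}")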

Hewlett-Packard Laboratories Cool Team

To explore our openings and our research, send an email to chandrakant_patel@hp.com.

Researchers: Chandrakant Patel and Cullen Bash