Optimizing data centers for high-density computing

technology brief, 2nd edition

Contents

Abstract
Introduction
Power consumption and heat load
    Power consumption
    Heat load
The Need for Planning
Optimizing the effectiveness of cooling resources
    Raised floors
        Perforated tiles
        Air supply plenum
    Racks
        Cooling footprint
        Internal airflow
        Hot and cold aisles
        Rack geometry
    Computer room air conditioners
        Capacity of CRAC units
        Placement of CRAC units
        Discharge velocity
Airflow distribution for high-density data centers
    Ceiling return air plenum
    Dual supply air plenums
    Ceiling-mounted heat exchangers
Advanced thermal management techniques
    Static Smart Cooling
    Dynamic Smart Cooling
Conclusion
For more information

Abstract
This paper describes factors causing the increase in power consumption and heat generation of computing hardware. It identifies methods to optimize the effectiveness of cooling resources in data centers that are deploying high-density equipment or that are already fully populated with high-density equipment. The intended audience for this paper includes IT managers, IT administrators, facility planners, and operations staff.

Introduction
From generation to generation, the power consumption and heat loads of computing, storage, and networking hardware in the data center have drastically increased. The ability of data centers to meet increasing power and cooling demands is constrained by their designs. Most data centers were designed using average (per unit area) or "rule of thumb" criteria, which assume that power and cooling requirements are uniform across the facility. In actuality, power and heat load within data centers are asymmetric due to the heterogeneous mix of hardware and the varying workload on computing, storage, and networking hardware. These factors can create "hot spots" that cause problems related to overheating (equipment failures, reduced performance, and shortened equipment life) and drastically increase operating costs. The dynamic nature of data center infrastructures creates air distribution problems that cannot always be solved by installing more cooling capacity or by using localized cooling technologies; a more sophisticated, scientific method is needed to find the most effective solutions. Research at HP Laboratories has found that proper data center layout and improved Computer Room Air Conditioner (CRAC) utilization can prevent hot spots and yield substantial energy savings. (CRAC units are sometimes referred to as air handlers.) This paper is intended to raise awareness of present and future challenges facing data centers beginning to deploy or already fully populated with high-density equipment. It describes power consumption and heat load, recommends methods to optimize the effectiveness of data center cooling resources, and introduces thermal management methods for high-density data centers.

Power consumption and heat load
In the past, when data centers mainly housed large mainframe computers, power and cooling design criteria were designated in average wattage per unit area (W/ft² or W/m²) and British Thermal Units per hour (BTU/hr), respectively. These design criteria were based on the assumption that power and cooling requirements were uniform across the entire data center. Today, IT managers are populating data centers with a heterogeneous mix of high-density hardware as they try to extend the life of their existing space. This high-density hardware requires enormous amounts of electricity and produces previously unimaginable amounts of heat. For example, IT infrastructures are now using 1U dual-processor and 4U quad-processor ProLiant blade servers that can be installed together in a rack-mounted enclosure, interconnected, and easily managed. This high-density server technology lowers the operating cost per CPU by reducing management expenses and the requirements for floor space. Despite speculation that high-density server technology has the opposite effect, driving power consumption and heat load up, a closer server-to-server comparison reveals that HP p-Class blades consume less power and generate less heat.

Power consumption
HP provides online power calculators to estimate power consumption for each ProLiant server. The power calculators provide information based on actual system measurements, which are more accurate than using nameplate ratings. Figure 1 displays an HP power calculator. Power calculators for all current HP ProLiant servers can be found in the Power Calculator Catalog at http://www.hp.com/configurator/calc/Power Calculator Catalog.xls, a macro-driven Microsoft Excel spreadsheet.

Figure 1. Screen shot of ProLiant DL380 G4 server power calculator

From generation to generation, the power consumption of high-density servers is increasing due to the extra power needed for faster and higher-capacity internal components. For example, the power required by a ProLiant DL360 G3 featuring a 3.0-GHz Intel® Xeon™ processor is 58 percent higher than its predecessor with a Pentium III 1.4-GHz processor (see Table 1).

Table 1. Increase in power consumption from generation to generation of ProLiant DL and BL servers

Rack Unit (CPUs/Memory/Drives/Adapters)   Server Generation   Power Supply Rating   Current        Heat Load
1U (2P, 4 GB, 2 HDD, 1 PCI)               DL360 G2            246 W                 1.2 A @ 208V   840 BTU/hr
                                          DL360 G3            389 W                 1.6 A @ 208V   1328 BTU/hr
                                          DL360 G4            460 W                 1.9 A @ 208V   1570 BTU/hr
2U (2P, 4 GB, 6 HDD, 2 PCI)               DL380 G2            362 W                 1.8 A @ 208V   1233 BTU/hr
                                          DL380 G3            452 W                 2.2 A @ 208V   1540 BTU/hr
                                          DL380 G4            605 W                 3.0 A @ 208V   2065 BTU/hr
4U (4P, 8 GB, 4 HDD, 3 PCI)               DL580 G1            456 W                 2.3 A @ 208V   1556 BTU/hr
                                          DL580 G2            754 W                 2.7 A @ 208V   2573 BTU/hr
                                          DL580 G3            910 W                 4.0 A @ 208V   3106 BTU/hr

The table compares the power consumption of individual servers; however, standard racks can house several of these servers. Estimating the power consumption of a rack of servers is more difficult because several variables (number of servers per rack, type and number of components in each server, etc.) contribute to the amount of power consumed. For racks, a very useful metric is power density, or power consumption per rack unit (W/U), because it captures all of the key variables that contribute to rack densification. Figure 2 illustrates the overall power density trend from generation to generation of HP ProLiant BL and DL servers.

Figure 2. Power density of HP ProLiant BL and DL servers (Watts/U)
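
The power-density arithmetic is simple enough to sketch. The following snippet, using the Table 1 values, illustrates the W/U metric and the generation-over-generation increase cited above (the dictionary and helper names are ours, for illustration only):

    # Power density (W/U) from the Table 1 values; an illustrative sketch.
    servers = {
        # name: (power supply rating in watts, rack units)
        "DL360 G2": (246, 1),
        "DL360 G3": (389, 1),
        "DL360 G4": (460, 1),
        "DL380 G4": (605, 2),
        "DL580 G3": (910, 4),
    }

    for name, (watts, units) in servers.items():
        print(f"{name}: {watts / units:.0f} W/U")

    # The 58 percent DL360 G2 -> G3 increase mentioned in the text:
    print(f"{(389 - 246) / 246:.0%}")  # -> 58%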

www3. which is more than that of a typical one-story house. once the power consumption of a computer or a rack of computers is known. Therefore.75 kW 50. the heat load for a DL360 G4 server is 460 W × 3.413 BTU/hr/W =1. its heat load can be calculated as follows: Heat Load = Power [W] × 3. Table 2.0 kW 21 21 × 575W = 12.939 BTU/hr. where 1 W equals 3. The heat generated by the computer is typically expressed in BTU/hr. Power and heat loads of fully-configured. The heat load of a 42U rack of DL 360 G4 servers is almost 65.374 BTU/hr Heat load 30.com/configurator/calc/Site%20Preparation%20Utility. density-optimized ProLiant servers* DL 580 G3 ProLiant Server DL380 G4 DL360 G4 BL20p G2 Servers per Rack Power 10 10 × 900W = 9.08 kW 41.Heat load Virtually all power consumed by a computer is converted to heat. HP provides a Rack/Site Installation Preparation Utility to assist customers in approximating the power and heat load per rack for facilities planning (Figure 3).212 BTU/hr 42 42 × 460W = 19.570 mBTU/hr. IT equipment manufacturers typically provide power and heat load information in their product specifications.hp.413 BTU/hr per watt For example. This utility can be downloaded from http://h30099. The table shows the trend toward higher power and heat load with rack densification. 5 .32 kW 65.939 BTU/hr 48 6 × 2458W = 14.413 BTU/hr.xls.717 BTU/hr * These calculations are based on the product nameplate values for fully configured racks and therefore may be higher than the actual power consumption and heat load. Table 2 lists the power requirements and heat loads of racks of densityoptimized ProLiant DL and BL class servers. The Site Installation Preparation Utility uses the power calculators for individual platforms so that customers can calculate the full environmental impact of racks with varying configurations and loads.

HP provides a Rack/Site Installation Preparation Utility to assist customers in approximating the power and heat load per rack for facilities planning (Figure 3). The Site Installation Preparation Utility uses the power calculators for the individual platforms so that customers can calculate the full environmental impact of racks with varying configurations and loads. This utility can be downloaded from http://h30099.www3.hp.com/configurator/calc/Site%20Preparation%20Utility.xls.

Figure 3. Rack/Site Installation Preparation Utility available on the HP website

The Need for Planning

The server densification trend is being driven by customers' need to maximize the use of valuable data center floor space. Because concentrated heat generation is an inevitable byproduct of concentrated computing power, data centers must ensure adequate localized cooling capacity to match non-uniformly distributed heat loads.

The goal of all data centers is to optimize the effectiveness of existing cooling resources. Most data centers have sufficient cooling resources already in place; their main challenge is directing cooling to racks that generate high heat loads. The following section describes proven methods to achieve better airflow distribution. For data centers that cannot afford to upgrade cooling capacity to handle concentrated heat loads, the section titled "Advanced thermal management techniques" outlines breakthrough research by HP Laboratories to create intelligent data centers that dynamically redistribute compute workloads and provision cooling resources for optimum operating efficiency. Given that power consumption and cooling demands will continue to increase, future data center designs will have to take a more holistic approach that examines cooling from the chip level to the facility level.

HP's small form factor servers offer the flexibility to limit rack power density (kW/rack) based on the capacity of nearby cooling resources. Data centers that have sufficient capacity, or that can afford to add cooling capacity, can instead use HP small form factor servers to maximize rack computing power so that different facilities can be consolidated to reduce overall operating costs. Some data center personnel believe that limiting rack power density neutralizes the space-saving benefits of densification because the racks are not full. Limiting rack power density based on power consumption does result in lower rack utilization; however, from generation to generation the compute power per server is increasing faster than the corresponding increase in power consumption, resulting in higher efficiency. The ability to limit rack power density while increasing computing power can prolong the lifecycle of space-constrained infrastructures. Figure 4 shows several rack configurations using six generations of ProLiant DL360 servers, each limited to 10 kW by controlling the number of servers per rack.

Figure 4. Rack configurations using six generations of ProLiant DL360 servers, each limited to 10 kW
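
As a rough illustration of this practice, the sketch below (assumed values and helper names, not an HP tool) computes how many servers fit under a fixed per-rack power budget:

    def servers_per_rack(budget_kw: float, server_watts: float, slots: int = 42) -> int:
        """Servers allowed under a per-rack power budget, capped by rack slots."""
        return min(int(budget_kw * 1000 // server_watts), slots)

    # A 10 kW budget with 460 W, 1U servers (DL360 G4-class):
    print(servers_per_rack(10.0, 460))  # 21 -> the rack is only half populated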

Optimizing the effectiveness of cooling resources

This section recommends methods to optimize the effectiveness of cooling resources in raised floor infrastructures.

Raised floors

Most data centers use a down draft airflow pattern in which air currents are cooled and heated in a continuous convection cycle. The down draft airflow pattern requires a raised floor configuration, a common configuration in today's data centers, that forms an air supply plenum beneath the raised floor (Figure 5). Raised floors typically measure 18 inches (46 cm) to 36 inches (91 cm) from the building floor to the top of the floor tiles. The CRAC unit draws in warm air from the top, cools the air, and discharges it into the supply plenum beneath the floor. The static pressure in the supply plenum pushes the air up through perforated floor tiles to cool the racks. Most equipment draws in the cold supply air and exhausts warm air out the rear of the racks. Ideally, the warm exhaust air rises to the ceiling and returns along the ceiling back to the top of the CRAC units to repeat the cycle.

Many traditional data centers arrange rows of racks in the front-to-back layout shown in Figure 5. While this layout can work with lower power densities and heat loads, the mixing of cold and hot air in the aisles is very inefficient and wastes valuable cooling resources and energy. As the power density and heat load increase, the equipment inlet temperatures will begin to rise (shown in the figure) and overheat critical resources.

Figure 5. Traditional raised floor configuration with high-density racks arranged front to back

Perforated tiles

Floor tiles range from 18 inches (46 cm) to 24 inches (61 cm) square and are supported by a grounded grid structure. Perforated tiles are classified by their open area, which may vary from 25 percent (the most common) to 56 percent (for high airflow). A 25 percent perforated tile provides approximately 500 cubic feet per minute (cfm) at a 5 percent static pressure drop, while a 56 percent perforated tile provides approximately 2000 cfm. The percentage and placement of perforated floor tiles are major factors in maintaining static pressure. Perforated tiles should be placed in front of at least every other rack; in higher density environments, perforated tiles may be necessary in front of each rack.
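
Assuming the approximate per-tile flow rates quoted above (actual delivery varies with plenum static pressure), a first-pass tile count for a given rack airflow requirement might be sketched as follows:

    import math

    TILE_CFM = {25: 500, 56: 2000}  # open area (%) -> approximate cfm per tile

    def tiles_needed(required_cfm: float, open_area_pct: int) -> int:
        """Perforated tiles needed to supply a rack's airflow requirement."""
        return math.ceil(required_cfm / TILE_CFM[open_area_pct])

    # A rack that needs roughly 1,500 cfm of supply air:
    print(tiles_needed(1500, 25))  # 3 standard tiles
    print(tiles_needed(1500, 56))  # 1 high-airflow tile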

Air supply plenum

The air supply plenum must be a totally enclosed space to achieve pressurization for efficient air distribution. The integrity of the subfloor perimeter (walls) is critical to prevent moisture retention and to maintain supply plenum pressure. This means that openings in the plenum perimeter and raised floor must be filled or sealed, and subfloor plenum dividers should be constructed in areas with large openings or with no subfloor perimeter walls.

The plenum is also used to route piping, conduit, and the cables that bring power and network connections to the racks. Electrical and network cables from devices in the racks pass through cutouts in the tile floor to wireways and cable trays beneath the floor. In some data centers, cables are simply laid on the floor in the plenum, where they can become badly tangled (Figure 6). This can result in cable dams that block airflow or cause turbulence that minimizes airflow and creates hot spots above the floor. U-shaped "basket" cable trays or cable hangers can be used to manage cable paths, prevent blockage of airflow, and provide a path for future cable additions. Another option is to use overhead cable trays to route network and data cables so that only power cables remain in the floor plenum.

Figure 6. Unorganized cables (left) and organized cables (right) beneath a raised floor

Oversized or unsealed cable cutouts allow supply air to escape from the plenum, thereby reducing the static pressure. Self-sealing cable cutouts are required to maintain the static pressure in the plenum (Figure 7).

Figure 7. Self-sealing cable cutout in raised floor

Racks

Racks (cabinets) are a critical part of the overall cooling infrastructure. HP enterprise-class cabinets provide 65 percent open ventilation using perforated front and rear door assemblies (Figure 8). To support the newer high-performance equipment, glass doors must be removed from older HP racks and from any third-party racks.

Figure 8. HP enterprise-class cabinets

Cooling footprint

The floor area that each rack requires must include an unobstructed area to draw in and discharge air. The cooling footprint (Figure 9) includes the width and depth of the rack plus the area in front for drawing in cool air and the area in back for exhausting hot air. Almost all HP equipment cools from front to rear so that it can be placed in racks positioned side-by-side. Typically, a width of two floor tiles is needed in front of the rack, and a width of at least one unobstructed floor tile is needed behind the rack to facilitate cable routing. Equipment that draws in air from the bottom or side, or that exhausts air from the side or top, will have a different cooling footprint. The total physical space required for the data center includes the cooling footprint of all the racks plus free space for aisles, ramps, and air distribution.

Figure 9. Cooling footprint

Internal airflow

Front and rear cabinet doors that are 65 percent open to incoming airflow also present a 35 percent restriction to air discharged by the equipment in the rack. Some configurations, such as those with extreme cable or server density, may create a backpressure situation that forces heated exhaust air around the side of a server and back into its inlet. Servers will intake air from the path of least resistance: they will access the higher-pressure discharge air flowing inside the cabinet more easily than the cooling air coming through the front of the cabinet. In addition, air from the cold aisle or hot aisle can flow straight through a rack with open "U" spaces. Gaskets or blanking panels must be installed in any open spaces in the front of the rack to support the front-to-back airflow design and prevent these negative effects (Figure 10).

Figure 10. Airflow in rack without blanking panels (top) and with blanking panels (bottom)

Hot and cold aisles

The front-to-rear airflow through HP equipment allows racks to be arranged in rows front-to-front and back-to-back to form alternating hot and cold aisles. The equipment draws in the cold supply air and exhausts warm air out the rear of the rack into the hot aisles (Figure 11).

Figure 11. Airflow pattern for raised floor configuration with hot aisles and cold aisles

Rack geometry

Designing the data center layout to form hot and cold aisles is one step in the cooling optimization process. Also critical is the geometry of the rack layout. Research by HP Laboratories has revealed that minor changes in rack placement can change the fluid mechanics inside a data center and lead to inefficient utilization of CRAC units. See the "Static Smart Cooling" section for more information. The amount of space between rows of racks is determined as follows:

• Cold aisle spacing should be 48 inches (two full tiles), with 24 inches (one full tile) as the minimum, and hot aisle spacing should be at least one full tile. This spacing is required for equipment installation and removal and for access beneath the floor.
• Cold aisles should be a minimum of 14 feet apart center-to-center, or seven full tiles.

Computer room air conditioners

A common question with respect to cooling resources is how many kilowatts a particular CRAC unit can cool. The answer depends largely on the capacity of the CRAC unit, its placement in the facility, and its discharge velocity.

Capacity of CRAC units

The heat load of equipment is normally specified in BTU/hr, while CRAC unit capacity is often expressed in "tons" of refrigeration, where one ton corresponds to a heat absorption rate of 12,000 BTU/hr. However, the tons rating is very subjective because it is based on total cooling, which is comprised of "sensible cooling" and "latent cooling." Computer equipment produces sensible heat only; therefore, the sensible cooling capacity of a CRAC unit is the most useful value. Typically, in the U.S., the recommended operating conditions for CRAC units are 70˚ to 72˚F and 50 percent relative humidity (RH). The "tons" capacity rating is measured at 80˚F; at 72˚F, the CRAC unit output capacity is considerably reduced. CRAC unit manufacturers typically provide cooling capacities as "total BTU/hr" and "sensible BTU/hr" at various temperatures and RH values. Customers should review the manufacturer's specifications and then divide the sensible cooling capacity (at the desired operating temperature and humidity) by 12,000 BTU/hr per ton to calculate the useable capacity of a given CRAC unit.

Cooling capacity is also expressed in volume as cubic feet per minute (cfm). The volume of air required is related to the moisture content of the air and the temperature difference between the supply air and the return air (ΔT):

Cubic feet per minute = BTU/hr ÷ (1.08 × ΔT)

The cooling capacity calculations presented here are theoretical, so other factors must be considered to determine the effective range of a particular CRAC unit.
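
These two calculations, usable sensible capacity in tons and required airflow, can be sketched as follows (the 204,000 BTU/hr sensible rating is a hypothetical datasheet value used only for illustration):

    TON_BTU_HR = 12_000  # one ton of refrigeration = 12,000 BTU/hr

    def usable_tons(sensible_btu_hr: float) -> float:
        """Usable CRAC capacity: sensible capacity only, expressed in tons."""
        return sensible_btu_hr / TON_BTU_HR

    def required_cfm(heat_load_btu_hr: float, delta_t_f: float) -> float:
        """Airflow required: cfm = BTU/hr / (1.08 x deltaT in degrees F)."""
        return heat_load_btu_hr / (1.08 * delta_t_f)

    # A nominally 20-ton unit delivering 204,000 BTU/hr sensible at 72 F / 50% RH:
    print(usable_tons(204_000))             # 17.0 usable tons

    # Airflow to carry away the 65,939 BTU/hr rack of Table 2 at a 20 F deltaT:
    print(round(required_cfm(65_939, 20)))  # ~3,053 cfm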

The effective cooling range is determined by the capacity of the CRAC unit and the heat load of the equipment in its airflow pattern. Assuming a fixed heat load from the equipment in its airflow pattern, the most effective cooling begins about 8 feet (2.4 m) from the CRAC unit. Although units with capacities greater than 20 tons are available, the increased heat density of today's servers limits the cooling range to approximately 30 feet (9.1 m), as shown in Figure 12.

Figure 12. Cooling ranges of CRAC units

Placement of CRAC units

The geometry of the room and the heat load distribution of the equipment determine the best placement of the CRAC units. CRAC units can be placed inside or outside the data center walls; customers should consider placing liquid-cooled units outside the data center to avoid damage to electrical equipment that could be caused by coolant leaks.

CRAC units should be placed perpendicular to the rows of equipment and aligned with the hot aisles, discharging air into the supply plenum in the same direction (Figure 13). This configuration provides the shortest possible distance for the hot air to return to the CRAC units. Discharging in the same direction also eliminates the dead zones that can occur beneath the floor when blowers oppose each other. Rooms that are long and narrow may be cooled effectively by placing CRAC units around the perimeter. Large, square rooms may require CRAC units to be placed around the perimeter and through the center of the room.

Figure 13. CRAC units placed perpendicular to the hot aisles, discharging cool air beneath the floor in the same direction

Discharge velocity

To force air from beneath the raised floor through the perforated tiles, the static pressure in the supply air plenum must be greater than the pressure above the raised floor. The velocity of the cooled air is highest near the CRAC unit because the entire flow is delivered through this area. The air velocity decreases as air flows through the perforated tiles away from the CRAC unit, and the decrease in velocity is accompanied by an increase in static pressure with distance from the unit. Excessive discharge velocity from the CRAC unit reduces the static pressure through the perforated tiles nearest the unit, causing inadequate airflow (Figure 14). To counter this situation, airfoils can be used under the raised floor to divert air through the perforated tiles, thereby increasing the airflow through them (see "Changing Cooling Requirements Leave Many Data Centers at Risk," W. Pitt Turner IV, P.E., and Edward C. Koplin, P.E., ComputerSite Engineering, Inc.). Another option is to use fan-assisted perforated tiles to increase the supply air circulation to a particular rack or hot spot. Fan-assisted tiles can provide 200 to 1500 cfm of supply air.

Figure 14. Plenum static pressure greater than pressure above the floor (left); high-velocity discharge reducing static pressure closest to the unit (right)

Airflow distribution for high-density data centers

To achieve an optimum down draft airflow pattern, warm exhaust air must be returned to the CRAC unit with minimal obstruction or redirection. Ideally, the warm exhaust air will rise to the ceiling and return to the CRAC unit intake. In reality, only the warm air closest to the intake may be captured; the rest may mix with the supply air. Mixing occurs if exhaust air goes into the cold aisles, if cold air goes into the hot aisles, or if there is insufficient ceiling height to allow for separation of the cold and warm air zones (Figure 15). When warm exhaust air mixes with supply air, two things can happen:

• The temperature of the exhaust air decreases, thereby lowering the useable capacity of the CRAC unit.
• The temperature of the supply air increases, which causes warmer air to be recirculated through computer equipment.

Figure 15. Mixing of supply air and exhaust air

Ceiling return air plenum

In recent years, raised floor computer rooms with very high heat density loads have begun to use a ceiling return air plenum to direct exhaust air back to the CRAC intake. Once the heated air is in the return air plenum, it can travel to the nearest CRAC unit intake. In this configuration, the ceiling return air plenum removes heat while abating the mixing of cold air and exhaust air. The return air grilles in the ceiling can be relocated if the layout of computer equipment changes. As shown on the right of Figure 16, additional supply air is forced downward in the cold aisle.

Figure 16. Ceiling return air plenum

Dual supply air plenums

As power and heat densities climb, a single supply air plenum under the raised floor may be insufficient to remove the heat that will be generated. High-density solutions may require dual supply air plenums, one above and one below (see Figure 17).

Figure 17. Dual air supply plenum configuration for high-density solutions

Ceiling-mounted heat exchangers

The drive to maximize the compute density of data centers has prompted research and development of alternative cooling methods that do not require floor space. Figure 18 shows a representation of an alternate cooling method using modular air-to-liquid heat exchangers in the ceiling. The heat exchanger units collect the hot exhaust air from the racks and cool it using circulated chilled water. The units eject the cool air downward by using fan trays located over the intake side of each rack. Unique mechanical design ideas, such as the ability to move the intake and exhaust sections, have been implemented in the heat exchangers to direct airflow to and from the racks. (For more information, please read "Thermal Considerations in Cooling Large Scale High Compute Density Data Centers" at http://www.hpl.hp.com/research/papers/2002/thermal_may02.pdf.)

The advantage of this approach is the proximity of the heat exchangers to the racks. With this scheme, rack cooling can be localized. Additionally, modular heat exchangers offer the flexibility to scale the cooling as needed. More importantly, ceiling-mounted heat exchangers save revenue-generating floor space and allow the raised floor to be used mainly for cable distribution.

Figure 18. Ceiling-mounted air-to-liquid heat exchangers

Advanced thermal management techniques

Heat loads vary throughout a data center due to the heterogeneous mix of hardware types and models, changing compute workloads, and the addition or removal of racks over time. The variation in heat load is too complex to predict intuitively or to address by adding cooling capacity. HP Laboratories has devised two thermal analysis approaches, Static Smart Cooling and Dynamic Smart Cooling, that model heat distribution throughout a data center using computational fluid dynamics (CFD). (For more information, please read "Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications" at http://www.hpl.hp.com/research/papers/power.pdf.)

Static Smart Cooling uses CFD modeling to aid planners in designing the physical layout of the data center for optimum distribution of cooling resources and heat loads. It can also predict the changes in heat extraction of each CRAC unit when the rack layout and equipment heat load are varied.

Dynamic Smart Cooling offers a higher level of automated facility management. It enables intelligent data centers that dynamically provision cooling resources to match the changing heat dissipation of computing, networking, and storage equipment. It also redistributes compute workloads based on the most efficient use of cooling resources within a data center or a global network of data centers.

Static Smart Cooling

Static Smart Cooling uses CFD modeling to determine the best layout and provisioning of cooling resources based on fixed heat loads from data center equipment. The heat extraction of each CRAC unit is compared to its rated capacity to determine how efficiently (or inefficiently) the CRAC unit is being used, or "provisioned." The provisioning of each unit in the data center is presented as a positive or negative percentage as follows:

• An under-provisioned CRAC unit (positive percentage) indicates that the cooling load is higher than the capacity of the unit.
• An over-provisioned CRAC unit (large negative percentage) operates significantly below the capacity of the unit. This results in wasted energy if operation of the unit cannot be adjusted to match the lower cooling load.
• A closely provisioned CRAC unit (small negative percentage) signifies that the cooling load is less than, but reasonably close to, the capacity of the unit, leading to efficient use of energy resources.
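
The brief does not give the exact formula behind these percentages, but a natural reading is (heat load minus rated capacity) divided by rated capacity. The sketch below uses that assumption, with hypothetical unit ratings:

    def provisioning_pct(load_kw: float, capacity_kw: float) -> float:
        """Assumed metric: positive = under-provisioned, negative = over-provisioned."""
        return (load_kw - capacity_kw) / capacity_kw * 100

    # Hypothetical CRAC units, each rated for 100 kW of sensible cooling:
    for load_kw in (130, 45, 90):
        print(f"{load_kw} kW load -> {provisioning_pct(load_kw, 100):+.0f}%")
    # +30%: under-provisioned; -55%: over-provisioned; -10%: closely provisioned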

For example, Figure 19 shows the row-wise distribution of heat loads (41 kW to 182 kW) for a combination of compute, storage, and networking equipment in a typical raised floor data center with four CRAC units. The CFD model shows that the provisioning of the CRAC units is completely out of balance.

Figure 19. Poorly provisioned CRAC units

In Figure 20, the 102-kW row and the 182-kW row have been repositioned to better distribute the heat load. This CFD model shows that the CRAC units are now provisioned within 15 percent of their capacity.

Figure 20. Statically provisioned CRAC units

Dynamic Smart Cooling

A "smart" data center requires a distributed monitoring system and a feedback control system that continually provisions the cooling resources based on the workload distribution (see Patel, C.D., Bash, C.E., Sharma, R., Beitelmal, A., and Friedrich, R., "Smart Cooling of Data Centers," Proceedings of IPACK03, International Electronics Packaging Technical Conference and Exhibition, Maui, Hawaii, July 2003, IPACK2003-35059). Computing resources must be pooled and virtualized, rather than dedicated to a particular user or application, so that workloads can be allocated dynamically. The control system schedules compute workloads across racks of servers in a way that minimizes energy use and maximizes cooling efficiency; the computing resources not in use are put on standby to improve operating efficiency.

Due to the high airflow rates in data centers, thermal management is achievable only if hot and cold air mixing is minimal. The hot air must return to the CRAC units with minimal infiltration into the cold zones because such mixing increases the inlet temperatures at the racks. Researchers have developed dimensionless parameters, known as the Supply Heat Index (SHI) and the Return Heat Index (RHI), that can be used as control points to allocate compute workloads and cooling to minimize energy use (see Sharma, R., Bash, C.E., and Patel, C.D., "Dimensionless Parameters for Evaluation of Thermal Design and Performance of Large Scale Data Centers," AIAA-2002-3091, American Institute of Aeronautics and Astronautics Conference, St. Louis, MO, June 2002). SHI is a measure of heat infiltration into the cold aisles; RHI denotes the degree of mixing of the cold air with the hot return air to the CRAC units. SHI is the primary control parameter used to minimize the energy required to meet inlet air specifications.
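
The cited AIAA paper defines the indices in terms of supply, rack-inlet, and rack-exhaust temperatures; to our understanding, SHI = Σ(Tin − Tsup) ÷ Σ(Tout − Tsup) and RHI = 1 − SHI, but treat this formulation and the temperatures below as assumptions and consult the paper for the authoritative definitions:

    def shi(t_in: list[float], t_out: list[float], t_sup: float) -> float:
        """Supply Heat Index: fraction of heat picked up before the rack inlets."""
        return sum(t - t_sup for t in t_in) / sum(t - t_sup for t in t_out)

    # Three racks, 15 C CRAC supply; inlets slightly pre-heated by recirculation:
    inlets = [18.0, 20.0, 19.0]   # rack inlet temperatures (C)
    outlets = [33.0, 36.0, 34.0]  # rack exhaust temperatures (C)
    s = shi(inlets, outlets, 15.0)
    print(f"SHI = {s:.2f}, RHI = {1 - s:.2f}")  # lower SHI = less infiltration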

Dynamic Smart Cooling is possible for a data center with the following features:

• Distributed sensors, such as:
  - temperature sensors on the racks measuring the supply and exhaust air temperatures of the systems
  - temperature sensors in the aisles measuring the three-dimensional temperature distribution in the data center
  - temperature sensors at the CRAC return and supply
  - pressure sensors in the air distribution plenum
  - sensors that measure the power drawn by machines
• Variable air flow devices to modulate flow work (such as supply plenum static pressure) and variable cooling coil temperatures in the CRACs to modulate thermodynamic work
• A data aggregation system that:
  - collects sensed data from all locations
  - visually presents the real-time power draw
  - calculates the data center control parameters RHI and SHI
• A control system that modulates the variable air conditioning resources through a control algorithm for a given distribution of workloads (heat loads)
• A data center manager (a computerized system) that uses thermal policies to distribute workloads in the data center

At the time of this writing, HP Laboratories is developing the control system and the data center manager, and plans to report its progress in future papers.

Conclusion

The growing power consumption of computing components requires modular data center designs with sufficient headroom to handle increasing power and cooling requirements. To determine actual requirements, facilities planners must consider several factors, including room geometry and the capacity and placement of the CRAC units. Planners must also give special attention to factors that affect airflow distribution, such as airflow blockages beneath raised floors and configurations that result in airflow mixing in the data center.

HP is a leader in the thermal modeling of data centers. HP Professional Services can work directly with customers to optimize existing data centers for more efficient cooling and energy consumption. As long as the data center has the power and cooling resources to support the expected loads, Static Smart Cooling can rectify cooling problems as well as enhance the overall efficiency of air conditioning resources. In most cases, the energy savings alone may pay for the cost of the service in a relatively short period. The modeling services can also be used to confirm new data center designs or to predict what will happen in a room when certain equipment fails.

For more information

For additional information, refer to the resources detailed below.

Resource description                                                Web address
Thermal Considerations in Cooling Large Scale High Compute          http://www.hpl.hp.com/research/papers/2002/thermal_may02.pdf
Density Data Centers white paper
HP Rack/Site Installation Preparation Utility                       http://h30099.www3.hp.com/configurator/calc/Site%20Preparation%20Utility.xls
Power calculators                                                   http://h30099.www3.hp.com/configurator/calc/Power%20Calculator%20Catalog.xls

© Copyright 2004, 2005 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

TC050901TB, 9/2005
