Julius Neudorfer -- 04.21.2008 -- IT infrastructure news
It's 2008, the virtual environment is the new computing paradigm, and software and hardware seem to work together as advertised. The virtual machine appears to have many benefits, including better resource utilization and management while saving energy. We have come full circle and are now concentrating more computing power into a single rack. By using high-density 1U servers and blade servers, a single rack can have more processing power than an entire midsize mainframe of 10 years ago. However, "virtualization" has not repealed the laws of physics: the hardware is very real, and it requires a lot of energy and cooling resources. The plain truth about all computers is that they turn every watt (W) of power directly into heat (and, yes, I know that they also do useful computing).

With the advent of widespread virtualization based on high-performance, multi-core, multiprocessor servers, power density has risen from 25 W to 50 W per square foot up to 250 W to 500 W per square foot, and it continues to rise. In terms of watts per rack, in the mid-to-late 1990s (in the last century) the load ranged from 500 W to 1,000 W, and occasionally 1 kW to 2 kW. Once we got past the dreaded Y2K frenzy and started moving forward instead of focusing on remediation, servers got smaller and faster and started drawing more power. Today a typical 1U server draws 250 W to 500 W, and when 40 of them are stacked in a standard 42U rack they can draw 10 kW to 20 kW and produce 35,000 to 70,000 British thermal units (BTUs) of heat. This requires 3 to 6 tons of cooling per rack. For comparison, that is the amount of cooling that was typically specified for an entire 200-square-foot to 400-square-foot room with 10 to 15 racks only five years ago. See 1U server cooling requirements table.
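The watts-to-BTU-to-tons arithmetic behind those rack figures can be checked in a few lines. This is a sketch: the 3.412 BTU/hr-per-watt and 12,000 BTU/hr-per-ton conversion factors are standard HVAC constants, but the server wattages are this article's example figures, not measurements.

```python
BTU_PER_WATT = 3.412   # 1 W dissipated = 3.412 BTU/hr of heat
BTU_PER_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr

def rack_heat(units: int, watts_per_unit: float):
    """Return (total watts, BTU/hr, tons of cooling) for a rack of servers."""
    watts = units * watts_per_unit
    btu_hr = watts * BTU_PER_WATT
    tons = btu_hr / BTU_PER_TON
    return watts, btu_hr, tons

# A 42U rack with 40 1U servers, at 250 W and at 500 W per server:
for w in (250, 500):
    watts, btu, tons = rack_heat(40, w)
    print(f"{w} W servers: {watts / 1000:.0f} kW, {btu:,.0f} BTU/hr, {tons:.1f} tons")
```

Running this reproduces the article's range: roughly 34,000 to 68,000 BTU/hr per rack, or about 3 to 6 tons of cooling.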
Blade servers provide even more space-saving benefits, but demand higher power and cooling. A blade chassis can support dozens of multi-core processors in only 8U to 10U of rack space, but requires 6 kW to 8 kW each. With 4 to 5 blade chassis per rack, you use 24 kW to 32 kW per rack! See Blade server cooling requirements table.

Cooling and virtualization
This concept has taken hold and is rapidly becoming the latest de facto computing trend. It has proven to work effectively and has many benefits. One of the many claims is improved energy efficiency, because it can reduce the number of "real" servers. Of course, what is the downside?

Power requirements: Virtualizing the environment, when executed properly, will use less server power because it uses fewer servers. However, along with the "upgrade" to a virtualized environment comes the addition of new high-performance, high-density servers, and many existing power distribution systems cannot handle providing 20 kW to 30 kW per rack.

Cooling requirements: If virtualization is properly implemented, it can use less space and power by using fewer, denser servers. The server hardware takes less energy overall because there are fewer servers. So if virtualization uses less space and the servers use less energy overall, it should follow that they need less cooling. Thus, virtualization should be more energy efficient and your data center more "green" (ah, the magic "G" word). However, the concentration of high-density servers in a much smaller space creates deployment problems. See Rack cooling requirements with Weber barbeque heat comparison.
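The consolidation trade-off described above can be made concrete with some back-of-envelope math. All the figures here are assumptions chosen for illustration (a hypothetical legacy farm and hypothetical virtualization hosts), not numbers from the article: the point is that total power falls while per-rack density rises.

```python
# Assumed legacy farm: 100 standalone servers at 400 W each, spread over 10 racks.
legacy_count, legacy_watts, legacy_racks = 100, 400, 10
# Assumed replacement: 10 high-density virtualization hosts at 500 W, in 1 rack.
host_count, host_watts, virt_racks = 10, 500, 1

legacy_total_kw = legacy_count * legacy_watts / 1000
virt_total_kw = host_count * host_watts / 1000

legacy_kw_per_rack = legacy_total_kw / legacy_racks
virt_kw_per_rack = virt_total_kw / virt_racks

print(f"total power:  {legacy_total_kw} kW -> {virt_total_kw} kW")
print(f"rack density: {legacy_kw_per_rack} kW/rack -> {virt_kw_per_rack} kW/rack")
```

Under these assumptions total power drops eightfold, yet the surviving rack is hotter than any rack in the old farm; with blade chassis the per-rack figure climbs far higher, which is exactly the deployment problem the next sections address.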
Comparing cooling options
The "classic" data center harkens back to the days of the mainframe. It had a raised floor which served several purposes: it was capable of easily distributing cold air from the computer room air conditioner (CRAC), and it contained the power and communications cabling. The floor generally had no perforated tiles. In many cases the cold air entered the bottom of the equipment cabinets and the hot air exited the top of the cabinet. This method of cooling was relatively efficient, as the cold air went directly into the equipment cabinets and did not mix with the warm air. While mainframes were very large, they only averaged 25 W to 50 W per square foot.

With the introduction of rack-mounted servers, average power levels began to rise to 35 W to 75 W per square foot. Originally, the room was laid out with rows oriented facing the same direction to make it look neat and organized. That cabinet orientation became a problem, because the hot air now exited out the back of one row of racks into the front of the next row. Thus the "hot aisle-cold aisle" arrangement came into being in the 1990s: CRAC units were still located mainly at the perimeter of the data center, but the floor tiles now had vents (or were perforated) in the cold aisles. This worked better, and the cooling systems were upgraded to take care of the rising heat load by adding more and larger CRAC units with higher-power blowers and larger floor tile vent openings. Raised floors became deeper (2 feet to 4 feet are now more common) to allow cold air to be distributed using this "time-tested and proven" methodology. Perforated floor tiles have even been replaced by floor grates in order to try to supply enough cold air to a rack full of high-density servers. This cooling method is still predominantly used in most data centers that have been built in the last 10 years, and in many that are in the design stage.

However, this is a cost-effective method only up to a certain power level. Once past that level, the method has multiple drawbacks. As such, perimeter cooling systems are not capable of efficiently removing heat from a compact area. It takes much more energy for the blower motors in the perimeter CRACs to push more air at higher velocities and pressures to deliver enough cold air into a single 2-foot by 2-foot perforated tile to support a 30 kW rack. See raised floor airflow diagram showing how rack temperatures increase with distance from CRAC.

This is where the virtualization-efficiency conundrum manifests. As mentioned earlier, data centers that were built only five years ago were not designed for 10, 20, or even 30 kW per rack. Even some recently built Tier IV data centers are still limited to 100 W to 150 W per square foot average. If all the racks were configured at 20 kW per rack, the average power/cooling could exceed 500 W per square foot. Unfortunately, 3.5 kW per rack has been exceeded many times over with the advent of the 1U and blade server. (Note: Each 3.5 kW produces 12,000 BTUs of heat, requiring 1 ton of cooling.) As a result, many high-density projects have had to spread the servers across half-empty racks in order to not overheat, lowering the overall average power per square foot. Now, instead of specifying how many tons of cooling for an entire data center, we may need 5 to 10 tons of cooling per rack!
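The perforated-tile bottleneck above can be quantified with the standard sensible-heat relation, CFM = BTU/hr ÷ (1.08 × ΔT), where ΔT is the server inlet-to-outlet temperature rise in °F. The formula and the 3.412 W-to-BTU/hr factor are standard; the 20 °F rise and the tile flow figures in the comments are typical assumed values, not vendor specifications.

```python
def required_cfm(watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to absorb `watts` of heat
    at an assumed inlet-to-outlet temperature rise of `delta_t_f` deg F."""
    btu_hr = watts * 3.412
    return btu_hr / (1.08 * delta_t_f)

# A 30 kW rack at an assumed 20 deg F rise:
print(f"{required_cfm(30_000):,.0f} CFM")
# A typical 25%-open perforated tile delivers only a few hundred CFM, and
# even a floor grate perhaps 1,500+ CFM -- one tile per rack falls far short.
```

At roughly 4,700 CFM for a single 30 kW rack, a lone 2-foot tile cannot come close, which is why perimeter CRACs must run their blowers so hard, and why density gets spread across half-empty racks instead.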
As a result, the amount of power used to cool high-density server "farms" has exceeded the power used by the servers themselves. In some cases, for every $1 spent to power the servers, $2 or more is spent for cooling. This is primarily due to the path efficiency problem: with the required supporting infrastructure, the traditional raised-floor perimeter cooling system causes an overall increase in energy use for high-density applications while inadequately cooling a full rack of high-density servers. Ideally, cooling should use less than half the energy the servers do, not twice as much.

Close-coupled cooling advancements: Various cooling manufacturers have developed systems that shorten the distance air has to travel from the racks to the cooling unit, minimizing the mixing of the hot and cold air. This increases the effectiveness and efficiency of cooling high-power racks. Some systems are "inrow" with the racks, and others are "overhead." They offer a significant increase in cooling capacity, at up to 20 kW per rack. Hot aisle containment, which requires that the "hot aisle" be sealed and combined with inrow-style cooling units, improves cooling performance and efficiency even without a raised floor. These systems can be used as a complete solution or as an adjunct to an overtaxed cooling system. See close coupled cooling technology versus traditional cooling technology diagram.

Non-raised floor cooling: Some newer cooling systems are placed in close proximity to the racks. In some applications, these systems provide significantly reduced cooling costs by using less power to move air.

Fully-enclosed racks: Some systems from major server manufacturers offer their own "fully enclosed" racks with built-in cooling coils that contain the airflow within the cabinet. By having cooling coils within a fully-sealed rack, you can cool up to 30 kW in a single rack. This ensures that all the heat is efficiently extracted directly into the cooling system over a short distance. Fully-enclosed racks offer the highest cooling density; this represents one of the most effective high-performance, high-density cooling solutions to support standard air-cooled servers at up to 30 kW per rack, and it potentially offers the highest level of energy efficiency. See completely contained rack cooling systems diagram.
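The "$1 of server power, $2 of cooling" imbalance above translates directly into annual dollars. The 2.0 and 0.5 cooling-to-IT ratios are the article's figures; the electricity price and the 100 kW load are assumptions for illustration only.

```python
KWH_PRICE = 0.10        # assumed electricity price, $/kWh
HOURS_PER_YEAR = 8760

def annual_cost(it_load_kw: float, cooling_ratio: float):
    """Annual cost of IT power plus cooling, where `cooling_ratio` is
    cooling kW drawn per kW of IT load."""
    it = it_load_kw * HOURS_PER_YEAR * KWH_PRICE
    cooling = it * cooling_ratio
    return it, cooling

it_cost, legacy_cooling = annual_cost(100, 2.0)   # overtaxed perimeter cooling
_, close_coupled = annual_cost(100, 0.5)          # the "less than half" ideal

print(f"IT: ${it_cost:,.0f}/yr; cooling: ${legacy_cooling:,.0f} vs ${close_coupled:,.0f}")
```

Under these assumptions the same 100 kW of IT load costs $175,200 a year to cool badly versus $43,800 to cool well, which is the economic case for the close-coupled approaches just described.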
Liquid-cooled servers: Today all servers use air to transfer heat out of the server. Several manufacturers are exploring building or modifying servers to use "fluid-based cooling." Instead of fans pushing air through the server chassis, liquid is pumped through the heat-producing components of the server (CPUs, power supplies, etc.). It is important to note that this method is different than using liquid (chilled water or glycol-based) for heat removal from the CRAC. However, this technology is still in the testing and development stage. Moreover, because of the danger that liquids leaking onto electronic systems pose, there may be low acceptance and uptake.

Facilities and IT
Like many other groups, IT people and facilities people do not see things the same way. Facilities personnel are usually the ones you call to address any cooling systems projects. They are primarily concerned with the overall cooling requirements of the room (in BTUs or tons of cooling) and the reliability of the systems that have been used in the past. They leave the racks to IT and just want to provide the raw cooling power to meet the entire heat load. The typical response from facilities is to add more of the same type of CRAC that is already installed (if there is space), which may partially address the problem, but inefficiently. However, some mutual understanding of the underlying issues is needed so that both sides can cooperate and optimize the cooling systems to meet the rising high-density heat load with a more efficient solution.

Best practice versus reality
One of the realities of any data center, large or small, is that computing equipment changes constantly. As a result, even the best-planned data center tends to have new equipment installed wherever there is space, usually without regard to the different levels of rack density, leading to cooling issues. The necessity to keep the old systems running while installing new systems sometimes means that expediency rules the day. It is unusual to be able to fully stop and reorganize the data center to optimize cooling.

Simple low-cost solutions for optimizing cooling in existing installations
Clearly the raised floor is the present standard and is not going to suddenly disappear. The good news is that some of the new cooling technology (inrow, overhead, and enclosed) can be added or retrofitted to existing data centers to improve the overall existing cooling, or it can be used only for specific "islands" to provide additional high-density cooling. Several techniques can be implemented to improve the cooling efficiency of data centers dealing with high-density servers. If you can take an unbiased look at your data center (avoid saying "That's how it has always been done"), you may find that many of the recommendations below can make a significant improvement to your cooling efficiency, many without disrupting operations.

Blanking panels: This is by far the simplest, most cost-effective and most misunderstood item that can improve cooling efficiency. Ensuring that the warm air from the rear of the rack cannot be drawn back into the front of the rack via open rack spaces will immediately improve efficiency. See raised floor airflow diagram showing how blanking panels improve efficiency.
Cable management: If the backs of your racks are cluttered with cables, chances are they are impeding the airflow and causing the servers to run hotter than necessary. Make sure that the rear heat exhaust areas of the servers are not blocked. Cables should be run together and tightly bundled for minimal airflow impact. Also, cabling under the floor causes a similar problem; many larger data centers have reserved 1 to 2 feet of depth under the floor just for cabling to minimize the effect on airflow.

Unwanted openings: Cables normally enter the racks through holes cut into the floor tiles. This opening is a great source of cooling inefficiency because it wastes the cold air by allowing it to enter the back of the rack, where it is totally useless. It also lowers the static air pressure in the floor, reducing the amount of cold airflow available for the vented tiles in front of the rack. Every floor tile opening for cables should be surrounded by an air containment device, typically a "brush" style grommet collar which allows cables to enter but blocks the airflow. See raised floor airflow diagram showing how extraneous floor openings impact cooling efficiency.

Floor tiles and vents: The size, shape, position, and direction of floor vents and the flow rating of perforated tiles affect how much cool air is delivered to where it is needed. Use different tiles, vents, and grates to match the airflow to the heat load of the area. A careful evaluation of the placement and the amount of airflow in relation to the highest-power racks is useful.

Cold aisle containment: A recent development is the cold aisle containment system, which is best described as a series of panels that span the top of the cold aisle from the top edges of the racks. It can also be fitted with side doors to further contain the cold air. This blocks the warm air from the hot aisle from mixing with the cold air and concentrates the cold air in front of the racks, where it belongs. See raised floor airflow diagram showing how cold air containment improves cooling efficiency.
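Matching tile and grate flow to the heat load, as recommended above, can be sanity-checked per cold aisle with the same sensible-heat relation used earlier (CFM = W × 3.412 ÷ (1.08 × ΔT)). The rack loads and tile flow ratings below are assumed illustration values, not measurements.

```python
def cfm_needed(watts: float, delta_t_f: float = 20.0) -> float:
    """CFM required to carry away `watts` at an assumed temp rise."""
    return watts * 3.412 / (1.08 * delta_t_f)

rack_loads_w = [4000, 6000, 12000, 5000]   # assumed racks fed by this aisle
tiles_cfm = [600, 600, 900, 600]           # assumed tile/grate flow ratings

demand = sum(cfm_needed(w) for w in rack_loads_w)
supply = sum(tiles_cfm)

print(f"demand {demand:,.0f} CFM vs supply {supply:,.0f} CFM")
if demand > supply:
    print("aisle is short of cold air: swap tiles for grates or move load")
```

In this example the aisle demands roughly 4,300 CFM but the tiles deliver only 2,700, so the highest-power rack will draw recirculated hot air; the fix is exactly the tile/grate rebalancing the article suggests.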
Temperature settings: It has always been the "rule" to use 68 to 70 degrees Fahrenheit as the set point to maintain the "correct" temperature in a data center. In reality, most servers will operate well at 75 degrees Fahrenheit at the intake, as long as there is adequate airflow (check with your server vendors to verify their acceptable range). The most important temperature is measured at the intake of the highest server in the warmest rack. It is possible to carefully raise the set point a few degrees and save a substantial amount of energy.

Humidity settings: Just as maintaining temperature is important, humidity is also maintained by the CRAC. The typical target set point is 50% humidity, with the hi-low range set at 60% and 40%. In order to maintain humidity, most CRACs use a combination of adding moisture and/or "reheating" the air. This can take a significant amount of energy. By simply broadening your hi-low set points to 75% and 25%, you can save a substantial amount of energy (check with your server vendors to verify their acceptable range).

Synchronize your CRACs: In many installations, each CRAC is not in communication with any other CRAC. Each unit simply bases its temperature and humidity settings on the temperature and humidity sensed in its own (warm) return air. Therefore it is possible (and even common) for one CRAC to be trying to cool or humidify the air while another CRAC is trying to dehumidify and/or reheat the air. It can easily be determined if this is the case, and you can have your contractor add a master control system or change the set points of the units to avoid or minimize the conflict. While each manufacturer is different, in many cases only one CRAC is needed to control humidity, while the others can have their hi-low points set to a much wider range and be used as a backup if the primary unit should fail. Resolving this can save significant energy over the course of a year and reduce wear on the CRACs.

Thermal survey: A thermal survey may provide surprising results and, if properly interpreted, can provide many clues to improving efficiency using any or all of the items or methods discussed here.

House fans: These do not really solve anything, but I have seen many of them in many futile attempts to prevent equipment overheating. See why fans aren't really an option.
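The CRAC "fighting" problem described above comes down to each unit acting on its own return-air reading against its own dead-band. This sketch is purely illustrative; the function name and thresholds are hypothetical, not taken from any real controller.

```python
def crac_action(rh: float, lo: float, hi: float) -> str:
    """Decide one unit's humidity action from its own return-air relative
    humidity (%) and its low/high dead-band limits."""
    if rh < lo:
        return "humidify"
    if rh > hi:
        return "dehumidify"
    return "idle"

# Two uncoordinated units, both on the typical 40%-60% dead-band, but each
# sensing slightly different return air:
print(crac_action(38, 40, 60))   # unit A: humidify
print(crac_action(62, 40, 60))   # unit B: dehumidify -- the units now fight

# Widening the backup unit's dead-band to 25%-75% keeps it out of the fight:
print(crac_action(62, 25, 75))   # unit B: idle
```

With tight overlapping dead-bands, unit A adds the moisture that unit B then burns energy removing; widening the backup's range, as the article recommends, leaves one unit in charge of humidity while the other stays in reserve.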
Location and climate: While we have discussed many of the issues and technologies within the data center, the location and climate can have a significant effect on cooling efficiency. A modern, large, multi-megawatt, dedicated Tier IV data center is designed to be energy efficient. It typically uses large water chiller systems with built-in economizer functions (see below) as part of the chiller system. This provides the ability to shut down the compressors during the winter months and use only the low exterior ambient air temperature to provide chilled water to the internal CRACs. In fact, Google built a super-sized data center in Oregon because the average temperature is low, water is plentiful, and low-cost power is available.

Not everyone operates in the rarified atmosphere of a Tier IV world. The tens of thousands of small to medium-sized data centers located in high-rise office buildings or office parks may not have this option. They are usually limited to the building-based cooling facilities (or lack thereof); some buildings do not operate or supply condenser water during the winter, and some do not have it at all. As a result, in such installations the data center is limited in its ability (or unable) to use efficient high-density cooling. Often when the office floor plan is being laid out, the data center is given the odd-ball space that no one else wants; sometimes the IT department has no say in its design, and the size and shape may not be ideal for rack and cooling layouts. In this case it is still necessary to meet the high-density cooling challenge. When your organization is considering a new office location, the ability of the building to meet the requirements of the data center should also be considered, not just how nice the lobby looks.

Economizer coils -- "free cooling": Most smaller and older installations used a single type of cooling technology for their CRACs. It usually involved a cooling coil that was cooled by a compressor located within the unit. It did not matter if it was hot or cold outside; the compressor needed to run all year to cool the data center. A significant improvement was added to this basic system: a second cooling coil connected by lines filled with water and antifreeze to an outside coil. When the outside temperature was low (50 degrees Fahrenheit or less), "free cooling" was achieved because the compressor could be used less or stopped entirely (below 35 degrees Fahrenheit). This simple and effective system was introduced many years ago, but was not widely deployed because of the increased cost and the requirement for a second outside coil unit. It is primarily used in areas with colder climates, where it can be a significant source of energy savings. The use of the economizer has risen sharply in the last several years, and in some states and cities it is even a requirement for new installations. While it is usually not possible to retrofit this to existing systems, it is highly recommended for any new site or cooling system upgrade. See economizer equipped cooling system diagram.

The bottom line
There is no one best solution to address the cooling and efficiency challenge; however, a careful assessment of the existing conditions and a variety of solutions and optimization techniques can substantially improve the cooling performance of your data center, whether it is 500 or 5,000 square feet. Some cost nothing to implement, while others have a nominal expense, but they all will produce a positive effect. Don't just think about "going green" because it is fashionable. If done correctly, your data center will improve in energy efficiency while lowering energy operating costs. It will also increase uptime, because the equipment will receive more cooling and the cooling systems will not be working as hard to provide it.

ABOUT THE AUTHOR: Julius Neudorfer has been CTO and a founding principal of NAAT since its inception in 1987. He has designed and project managed communications and data systems projects for commercial clients and government customers.