A Best Practice Review prepared for the Joint Information Services Committee (JISC)
May 27 2009
Contents
Introduction
1. Data Centres in Further and Higher Education
2. Energy and Environmental Impacts of Data Centres
2.1 Embedded Environmental Impacts
2.2 Energy Issues in Data Centres
2.3 Patterns of Energy Use in Data Centres
3. Data Centre Solutions – Strategy
4. Data Centre Solutions – Purchasing More Energy Efficient Devices
5. Data Centre Solutions – Changing Computing Approaches
5.1 Energy Proportional Computing
5.2 Consolidation and Virtualisation of Servers
5.3 More Energy Efficient Storage
6. Data Centre Solutions – More Efficient Cooling and Power Supply
6.1 More Effective Cooling
6.2 More Energy Efficient Power Supply
6.3 Reducing Ancillary Energy
6.4 Better Monitoring and Control
6.5 New Sources of Energy Inputs
7. Networking Issues
7.1 The Environmental Impacts of VoIP Telephony
7.2 Wiring and Cabling
8. Conclusions
Bibliography
Introduction
This paper provides supporting evidence and analysis for the discussion of data centres and servers in the main SusteIT report (James and Hopkinson 2009a). Most university and college computing today uses a more decentralised 'client-server' model. This involves a relatively large number of 'servers' providing services, and managing networked resources for, an even greater number of 'clients', such as personal computers, which do much of the actual computing 'work' required by users. The devices communicate through networks, both internally with each other, and externally through the Internet. A typical data centre, or 'server room', therefore contains:

• Servers, such as application servers (usually dedicated to single applications, in order to reduce software conflicts), file servers (which retrieve and archive data such as documents, images and database entries), and print servers (which process files for printing);
• Storage devices, to variously store 'instantly accessible' content (e.g. user files), and archive back-up data; and
• Routers and switches, which control data transmission within the data centre, between it and client devices such as PCs and printers, and to and from external networks.

This infrastructure has considerable environmental and financial costs, including those of:

• Energy use, carbon dioxide emissions and other environmental impacts from production;
• Direct energy consumption when servers and other ICT equipment are used, and indirect energy consumption for their associated cooling and power supply losses; and
• Waste and pollution arising from equipment disposal.

Making definitive judgments about these environmental impacts – and especially ones which aim to decide between different procurement, or technical, options – is difficult because:

• Data centres contain many diverse devices, and vary in usage patterns and other parameters;
• It requires the collection of information for all stages of the life cycle, which is very difficult in practice (see discussion for PCs in James and Hopkinson 2009b); and
• Technology is rapidly changing, with more efficient chips; new or improved methods of cooling and power supply; and new computing approaches such as virtualisation and thin client.

Caution must therefore be exercised when extrapolating any of the following discussion to specific products and models. Nonetheless, some broad conclusions can be reached, as described below. They are based on the considerable number of codes and best practice guides which have recently been published (for example, European Commission Joint Research Centre 2008; USEPA 2007a).
… 'carbon footprint'. It also creates a potential constraint on future plans in areas where the electricity grid is near capacity, such as central London. These changes are reflected in growing numbers of servers. The main SusteIT report estimates that UK higher education has an estimated 215,000 servers, which will probably account for almost a quarter of the sector's estimated ICT-related carbon dioxide (CO2) emissions of 225,000 tonnes, and ICT-related electricity bill of £61 million, in 2009 (James and Hopkinson 2009a). (Further education has only an estimated 23,000 servers, so its impact is much less in this area.) The SusteIT footprinting of ICT-related electricity use at the University of Sheffield also found that servers, high performance computing (HPC) and networks – most, though not all, of which would be co-located in data centres – accounted for 40% of consumption, with an annual bill of £400,000 (Cartledge 2008a – see also Table 1). Whilst these figures will be lower at institutions without HPC, they reinforce the point that the topic is significant.

Some responses are being made within the sector. However, Table 2 – which shows the prevalence of some of the key energy efficiency measures discussed in the remainder of this document – suggests that there is considerable scope for improvement. This is especially true given that the most common option, blade servers, is, whilst advantageous, not the most environmentally superior option, for reasons discussed below. More positively, 23% of responding institutions were expecting to take significant measures to minimise server energy consumption in the near future. If the sector is to have more sustainable ICT, it is therefore vital that the energy consumption and environmental footprint of data centres is minimised.

Table 1: Electricity Consumption of Non-Residential ICT at the University of Sheffield, 2007/8 (rounded to nearest 10) (Cartledge 2008a)

ICT Category               | Electricity Consumption (MWh/y) | %
PCs                        | 4,160 | 48%
Servers                    | 1,520 | 18%
High performance computing | 1,210 | 14%
Imaging devices            |   840 | 10%
Networks                   |   690 |  8%
Telephony                  |   200 |  2%
Audio-visual               |    60 |  1%
Total                      | 8,680 | 100%
Table 2: Results for survey question – Have you implemented any of the following innovations to reduce energy consumption in your data centre/server room(s)? Please choose all that apply. (Question asked of server room operators/managers only.) Results further analysed by institution.

Innovation                     | Number of responding institutions | % of institutions
Blade servers                  |  5 | 55
Server virtualisation          |  4 | 45
Power management features      |  4 | 33
Low power processors           |  3 | 22
High efficiency power supplies |  3 | 22
415V AC power distribution     |  2 | 18
Layout changes                 |  2 | 18
Water cooling                  |  1 |  9
Variable capacity cooling      |  0 |  0
Heat recovery                  |  0 |  0
Fresh air cooling              |  2 | 18
Other                          |  0 |  0
None of these                  | 11 |  –
Don't know                     |  5 |  –
Total institutions             | 23 |  –
Table 3: Weighted average power (Watts) of top 6 servers, by sales (Koomey 2007)

Server class | US 2000 | US 2003 | US 2005 | World 2000 | World 2003 | World 2005
Volume       |     186 |     207 |     217 |        183 |        214 |        218
Mid-range    |     424 |     524 |     641 |        423 |        527 |        638
High-end     |   5,534 |   8,647 |  10,673 |      4,824 |      5,815 |     12,682
Table 4: Increasing Power Density of Servers with Time (information from Edinburgh Parallel Computing Centre and Cardiff University)

Site                                              | Power density (kW/m2)
RCO Building, U Edinburgh                         | 0.5
Advanced Computer Facility (Phase 1), U Edinburgh | 2.5
ACF (Phase 2 – initial Hector), U Edinburgh       | 7
HPC Facility, Cardiff U                           | 20
ACF (final Hector), U Edinburgh                   | 10+
… costs will become the second highest cost in 70% of the world's data centres by 2009, trailing staff/personnel costs, but well ahead of the cost of the IT hardware (Gartner Consulting 2007). This is likely to remain the case, even after the price fallbacks of 2009. This is one reason why Microsoft is believed to be charging for data centre services on a per-watt basis, since its internal cost analyses demonstrate that growth scales most closely with power consumed (Denegri 2008).

Increasing energy consumption creates other problems. A US study concluded that, by the end of 2008, 50% of data centres would be running out of power (USEPA 2007b). Dealing with this is not easy, either in the US or in the UK, as power grids are often operating near to capacity, both overall and in some specific areas. Hence, it is not always possible to obtain connections for new or upgraded facilities – for example, in London (Hills 2007). The high loads of data centres may also require investment in transformers and other aspects of the electrical system within universities and colleges. Interestingly, Google and Microsoft are said to be responding to these pressures by moving towards a model of data centres using 100% renewable energy, and being independent of the electricity grid – a model which some believe will give them considerable competitive advantage in a world of constrained power supply, and discouragement of fossil fuel use through carbon regulation (Denegri 2008).

Table 5: Electricity Use in a Modelled 464m2 US Data Centre (Emerson 2007)

Category                                                      | Power Draw
Demand side                                                   | 52% (588 kW)
  Processor                                                   | 15%
  Server power supply                                         | 14%
  Other server                                                | 15%
  Storage                                                     | 4%
  Communication equipment                                     | 4%
Supply side                                                   | 48% (539 kW)
  Cooling power draw                                          | 38%
  Uninterruptible Power Supply (UPS) and distribution losses  | 5%
  Building switchgear/transformer                             | 3%
  Lighting                                                    | 1%
  Power Distribution Unit (PDU)                               | 1%
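Table 5 implies that every watt delivered to IT equipment carries nearly another watt of supply-side load. A minimal sketch of the arithmetic, in Python and using only the table's own figures:

    # Sketch: the supply-side "overhead" implied by Table 5 (Emerson 2007).
    demand_kw = 588   # IT load: processors, power supplies, storage, comms
    supply_kw = 539   # cooling, UPS/distribution losses, switchgear, lighting, PDU

    overhead = supply_kw / demand_kw                      # supply energy per unit of IT load
    total_per_it_kw = (demand_kw + supply_kw) / demand_kw # akin to a PUE figure

    print(f"Supply overhead: {overhead:.0%}")             # ~92%
    print(f"Total kW per IT kW: {total_per_it_kw:.2f}")   # ~1.92

This 92% figure is the same "overhead" measure used in the benchmarking discussion below.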
Table 6: Typical Server Power Use (USEPA 2007b)

Components | Power Use
PSU losses | 38 W
Fan        | 10 W
CPU        | 80 W
This apparent divergence between the UK and USA is credible because:

• The US sample includes many data centres in much hotter and more humid areas than the UK, which will have correspondingly greater cooling loads;
• Energy and electricity prices are higher in the UK than in most parts of the USA, so there are greater incentives for efficient design and use of equipment;
• Energy efficiency standards for cooling, power supply and other equipment are generally more stringent in the UK than in most areas of the USA; and
• US data centres are also improving – a detailed benchmarking exercise found that energy efficiency measures and other changes had reduced the average overhead from 92% in 2003 to 63% in 2005 (Greenberg, Mills, Tschudi, Rumsey, and Myatt 2006), and the recently opened Advanced Data Center facility near Sacramento achieved 27% (Greener Computing 2008).

Hence, a broadbrush estimate for achievable supply overheads in UK data centres is perhaps 40-60% in those without free cooling, and 25-40% for those with it, or equivalent energy efficiency features.

The ratio of infrastructure overheads to processing work done is much greater than these percentages because a) servers require additional equipment to operate, and b) they seldom operate at 100% of capacity. The latter is the case because:

• Server resources, both individually and collectively, are often sized to meet a peak demand which occurs only rarely; and
• Servers come in standard sizes, which may have much greater capacity than is needed for the applications or other tasks running on them.

Most estimates suggest that actual utilisation of the 365/24/7 capacity of a typical server can be as low as 5-10% (Fujitsu Siemens Computers and Knürr 2007). However, most servers continue to draw 30-50% of their maximum power even when idle (Fichera 2006). Cooling and UPS equipment also operates fairly independently of computing load in many data centres.

These figures suggest that there is considerable potential to increase the energy efficiency of most data centres, including those in UK further and higher education. Indeed, one US study has suggested that a complete optimisation of a traditional data centre could reduce energy consumption and floor space requirements by 65% (Emerson 2007). Some means of achieving this are summarised in Table 7 and Box 1, which represent two slightly differing views of prioritisation from a European and a North American source. In broad terms, the options fall into four main categories (a worked example of the utilisation problem follows the list):

• Purchasing more energy efficient devices;
• Changing computing approaches;
• Changing physical aspects such as layouts, power supply and cooling; and
• Modular development.
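To see why low utilisation combined with high idle power is so wasteful, consider a minimal sketch. The server figures are hypothetical, the utilisation and idle-draw ranges are those quoted above, and the linear power model is a deliberate simplification:

    # Sketch: the cost of low utilisation plus high idle power.
    max_power_w = 300      # hypothetical server at full load
    idle_fraction = 0.4    # idle draw as a share of max power (30-50% is typical)
    utilisation = 0.10     # share of capacity doing useful work (5-10% is typical)

    # Crude linear model: power rises from idle draw to max with utilisation.
    avg_power = max_power_w * (idle_fraction + (1 - idle_fraction) * utilisation)

    # Compare with a perfectly "energy proportional" machine that draws
    # power strictly in line with work done.
    proportional_power = max_power_w * utilisation
    penalty = avg_power / proportional_power

    print(f"Average draw: {avg_power:.0f} W")                       # 138 W
    print(f"Energy per unit of work: {penalty:.1f}x the ideal")     # 4.6x

Energy proportional computing (Section 5.1, and Barroso and Hölzle 2007) aims to close exactly this gap.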
Table 7: Most Beneficial Data Centre Practices, According to the EU Code of Conduct on Energy Efficient Data Centres (measures scoring 5 on a 1-5 scale) (European Commission Joint Research Centre 2008)

Category: Selection and Deployment of New IT Equipment

• Multiple tender for IT hardware – power: Include the energy efficiency performance of the IT device as a high priority decision factor in the tender process. This may be through the use of Energy Star or SPECpower type standard metrics, or through application or deployment specific user metrics more closely aligned to the target environment, which may include service level or reliability components. The power consumption of the device at the expected utilisation or applied workload should be considered in addition to peak performance per Watt figures.

• Deploy using grid and virtualisation technologies: Processes should be put in place to require senior business approval for any new service that requires dedicated hardware and will not run on a resource sharing platform. This applies to servers, storage and networking aspects of the service.

• Decommission unused services: Completely decommission and switch off, preferably remove, the supporting hardware for unused services.

• Virtualise and archive legacy services: Servers which cannot be decommissioned for compliance or other reasons, but which are not used on a regular basis, should be virtualised and the disk images then archived to a low power medium. These services can then be brought online when actually required.

• Consolidation of existing services: Existing services that do not achieve high utilisation of their hardware should be consolidated through the use of resource sharing technologies to improve the use of physical resources. This applies to servers, storage and networking devices.

Category: Cooling

• Air flow containment: There are a number of design concepts whose basic intent is to contain and separate the cold air from the heated return air on the data floor: hot aisle containment; cold aisle containment; contained rack supply, room return; room supply, contained rack return; and contained rack supply, contained rack return. This action is expected for air cooled facilities over 1 kW per square metre power density.

• Expanded IT equipment inlet environmental conditions (temperature and humidity): Where appropriate and effective, data centres can be designed and operated within air inlet temperature and relative humidity ranges of 5 to 40°C and 5 to 80% RH (non-condensing) respectively, and under exceptional conditions up to +45°C. The current, relevant standard is ETSI EN 300 019, Class 3.1.

• Direct air free cooling: External air is used to cool the facility. Chiller systems are present to deal with humidity and high external temperatures if necessary. Exhaust air is re-circulated and mixed with intake air to avoid unnecessary humidification/dehumidification loads.

• Indirect air free cooling: Re-circulated air within the facility is primarily passed through a heat exchanger against external air to remove heat to the atmosphere.

• Direct water free cooling: Condenser water chilled by the external ambient conditions is circulated within the chilled water circuit. This may be achieved by radiators or by evaporative assistance through spray onto the radiators.

• Indirect water free cooling: Condenser water is chilled by the external ambient conditions. A heat exchanger is used between the condenser and chilled water circuits. This may be achieved by radiators, evaporative assistance through spray onto the radiators, or evaporative cooling in a cooling tower.

• Adsorptive cooling: Waste heat from power generation or other processes close to the data centre is used to power the cooling system in place of electricity, reducing overall energy demand. In such deployments adsorptive cooling can be effectively free cooling. This is frequently part of a tri-generation combined cooling, heat and power system.
One option which also needs to be considered today is whether some or all planned data centres can either be outsourced to third party providers, or hosted within common data centres, in which several institutions share a single data centre which is under their control. This could be managed by the institutions themselves, but is more likely to be managed by a specialist supplier. The collaboration between the University of the West of Scotland and South Lanarkshire Council (who manage the shared centre) is one of the few examples in the sector, but several feasibility studies have been done on additional projects (see below). The main SusteIT report discusses some of the potential sustainability advantages of such shared services (James and Hopkinson 2009a).

Common data centres are made feasible by virtualisation, which breaks the link between applications and specific servers, and therefore makes it possible to locate the latter almost anywhere. The SusteIT survey found that 52% of respondents were adopting this to some degree, and it is important that the potential for it is fully considered (James and Hopkinson 2009c). The SusteIT case study on virtualisation of servers at Sheffield Hallam University demonstrates the large cost and energy savings that can be realised.

It is also important that all investment decisions are made on a total cost of ownership (TCO) basis, and that every effort is made to estimate the full costs of cooling, power supply and other support activities.
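As a minimal illustration of such a TCO calculation – all figures below are hypothetical placeholders rather than sector data, and a real appraisal would add maintenance, licensing, space and staff costs – electricity plus its cooling and power supply overhead can rival the purchase price over a typical ownership period:

    # Sketch: server TCO including the supply-side overhead. Hypothetical figures.
    def server_tco(purchase, it_kw, overhead, price_per_kwh=0.10, years=4):
        """Purchase cost plus electricity for the server and its share of
        cooling/power-supply overhead over the ownership period."""
        total_kw = it_kw * (1 + overhead)        # IT load plus supply overhead
        kwh = total_kw * 24 * 365 * years
        return purchase + kwh * price_per_kwh

    # The same hypothetical server in a poor facility vs an efficient one,
    # using the 40-60% and 25-40% overhead ranges suggested earlier.
    print(f"High overhead: £{server_tco(2500, 0.3, 0.60):,.0f}")   # ~£4,182
    print(f"Low overhead:  £{server_tco(2500, 0.3, 0.30):,.0f}")   # ~£3,867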
… procurement. However, there is debate about how effective it is likely to be, due to 'watering down' in response to supplier pressure (Relph-Knight 2008). As with cars, one problem is that manufacturers' data on power ratings is often based on test conditions, rather than 'real life' circumstances. According to the independent Neal Nelson Benchmark Laboratory, in early 2008 the widely used SPECpower test had a small memory footprint, a low volume of context switches, simple network traffic, and performed no physical disk input/output. Their own testing, based on what were said to be more realistic configurations, produced rather different figures and, in particular, found that 'while some Quad-Core Intel Xeon based servers delivered up to 14 percent higher throughput, similarly configured Quad-Core AMD Opteron based servers consumed up to 41 percent less power' (Neal Nelson 2008). A key reason is said to be the use of fully buffered memory modules in the Xeon, rather than the DDR-II memory modules of AMD. (Note that Intel does dispute these findings, and previous ones from the same company) (Modine 2007).

There is less disagreement on the energy efficiency benefits of both the AMD and Intel quad-core processors (i.e. four high capacity microprocessors on a single chip), compared to dual-core or single-core predecessors (Brownstein 2008). The benefits arise because the processors can share some circuitry and operate at a lower voltage, and because less power is consumed sending signals outside the chip. These benefits are especially great when the processors also take advantage of dynamic frequency and voltage scaling, which automatically reduces clock speeds in line with computational demands (USEPA 2007b).

A more radical approach being introduced into commercial data centres is that of blade servers. These involve a single chassis providing some common features, such as power supply and cooling fans, to up to 20 'stripped down' servers containing only a CPU, memory and a hard disk. They can be either self-standing or rack mounted (in which case a chassis typically occupies one rack unit). Because the server modules share common power supplies, cooling fans and other components, blade servers require less power for given processing tasks than conventional servers, and also occupy less space. However, they have much greater power densities, and therefore require more intense cooling. One study estimates that the net effect can be a 10% lower power requirement for blade than for conventional servers performing the same processing tasks (Emerson 2007). The two stage interconnections involved in blade servers (from blade to chassis, and between the chassis themselves) mean that they are not suitable for activities, such as high performance computing (HPC), which require low latency. Even in other cases, the higher initial cost arising from the specialist chassis, and the increased complexity of cooling, mean that they may not have great cost or energy advantages over alternatives for many universities and colleges. Certainly, installations such as that at Cardiff University (see SusteIT case) have achieved similar advantages of high power density from quad core devices, whilst retaining the flexibility and other advantages of having discrete servers.
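The benchmarking point above can be made concrete with a crude sketch: comparing machines on work per unit of energy at a realistic operating point, rather than on peak throughput alone. Both servers and all figures below are hypothetical:

    # Sketch: energy efficiency as throughput per watt at a realistic load.
    servers = {
        # name: (throughput at ~30% load, transactions/s; power at that load, W)
        "Server A": (5200, 310),
        "Server B": (4600, 215),
    }

    for name, (tps, watts) in servers.items():
        # transactions/s divided by joules/s == transactions per joule
        print(f"{name}: {tps / watts:.1f} transactions per joule")

    # Server A: 16.8; Server B: 21.4. The machine with lower peak throughput
    # delivers ~28% more work per unit of energy - echoing the Neal Nelson
    # finding that throughput and power efficiency can diverge.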
• Running more applications on the same server (but all utilising the same operating system); and
• Creating 'virtual servers', each with its own operating system, running completely independently of each other on the same physical server.

Analyst figures suggest that in 2007 the proportion of companies using server virtualisation was as little as one in 10 (Courtney 2007). However, Gartner figures suggest that by 2009 the number of virtual machines deployed around the world will soar to over 4 million (Bangeman 2007). Virtualisation has great potential because it potentially allows all of a server's operating capacity to be utilised. 'Basic' virtualisation involves running a number of virtual servers on a single physical server. More advanced configurations treat an array of servers as a single resource and assign the virtual servers between them in a dynamic way, to make use of available capacity. However, virtualisation does require technical capacity, is not suitable for every task, and may not therefore be suitable for every institution. Nonetheless, a number of institutions have applied it successfully, such as Sheffield Hallam University and Stockport College (see SusteIT cases).
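The energy case is easy to sketch. The figures below are purely illustrative (they are not drawn from the SusteIT cases), but show how consolidating many lightly loaded machines onto a few well-utilised hosts compounds with the cooling and power supply overhead discussed earlier:

    # Sketch: energy saved by consolidating lightly loaded servers onto
    # virtualisation hosts. All inputs are illustrative assumptions.
    n_physical = 20     # existing servers, each ~5-10% utilised
    avg_draw_w = 250    # average draw per existing server
    hosts_after = 3     # virtualisation hosts after consolidation
    host_draw_w = 400   # heavier, better-utilised hosts
    overhead = 0.5      # cooling/power-supply overhead on every IT watt

    before_kw = n_physical * avg_draw_w * (1 + overhead) / 1000
    after_kw = hosts_after * host_draw_w * (1 + overhead) / 1000
    saving_mwh = (before_kw - after_kw) * 8760 / 1000   # per year

    print(f"{before_kw:.1f} kW -> {after_kw:.1f} kW, "
          f"saving {saving_mwh:.0f} MWh/year")          # 7.5 -> 1.8 kW, ~50 MWh/year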
Taking these actions can also create other benefits, such as faster operation, deferral of hardware and software upgrades, and less exposure during RAID rebuilds due to faster copy times (Schulz 2007b). NetApp claims that the average enterprise uses only 25-40% of its storage capacity (Cohen, Oren and Maheras 2008). More effective utilisation can reduce capital and operating expenditure, and energy consumption. The data centre can also be configured so that data can be transferred directly to storage media without using a network, thereby avoiding energy consumption in routers, and bypassing network delays (Hengst 2007).

Storage in data centres typically involves storing data on a Redundant Array of Independent Disks (RAID). If data on one disk cannot be read, it can easily be retrieved from others and copied elsewhere. However, this approach has relatively high energy consumption, because disks are constantly spinning, and also because they are seldom filled to capacity. MAID (Massive Array of Idle Disks) systems can reduce this consumption by dividing data according to speed of response criteria, and powering down or switching off disks containing data for which rapid response is not required. Vendors claim that this can reduce energy consumption by 50% or more (Schulz 2008). Even greater savings can be obtained when infrequently accessed data is archived onto tapes and other media which require no energy to keep. Achieving this requires a more structured approach to information life cycle management, which involves classifying data by required longevity (i.e. when can it be deleted?) and availability requirements (i.e. how rapidly does it need to be accessed?).

Most university data centres also have storage requirements many times greater than the core data they hold. Different versions of the same file are often stored at multiple locations. As an example, a database will typically require storage for its maximum capacity, even though it has often not reached this. Different versions of the database will often be stored for different purposes, such as the live application and testing. At any point in time, each database will often exist in multiple versions (the live version; an on-line backup version; and one or more archived versions within the data centre, and possibly others utilised elsewhere). Over time, many legacy versions – and possibly duplicates, if the data is used by a variety of users – can also accumulate. In this way, one terabyte (TB) of original data can easily swell to 15-20TB of required storage capacity. In most cases, this is not for any essential reason. Hence, there is the potential for data deduplication, by holding a single reference copy with multiple pointers to it (Schulz 2007a). Some storage servers, e.g. from NetApp, offer this as a feature. The University of Sheffield has used this and other means to achieve deduplication, with 20-90% savings, depending on the type of data (Cartledge 2008b). (Generally, savings have been at the lower end of the spectrum.)
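The principle is simple enough to show in a few lines. The sketch below is a file-level toy – real deduplicating storage, such as the NetApp feature mentioned above, typically works on fixed- or variable-size blocks – and all names in it are hypothetical:

    # Sketch: "single reference copy with multiple pointers", via content hashes.
    import hashlib

    store = {}      # content hash -> the single stored copy
    pointers = {}   # file path -> content hash

    def save(path: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)   # keep one copy per unique content
        pointers[path] = digest          # every duplicate is just a pointer

    save("/live/db.bak", b"...database dump...")
    save("/test/db.bak", b"...database dump...")      # duplicate: nothing new stored
    save("/archive/db_v1.bak", b"...older dump...")

    print(f"{len(pointers)} files, {len(store)} stored copies")   # 3 files, 2 copies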
• Better separation of cooled and hot air by changing layouts (in a simple way through hot aisle/cold aisle layouts, and in a more complex way by sealing of floors and containment of servers), and by air management (e.g. raised plenums for intake air, and ceiling vents or fans) to draw hot air away;
• Reducing the areas to be cooled by concentrating servers, and by using blanking panels to cover empty spaces in racks; and
• Matching cooling to load more effectively through use of supplemental cooling units, and/or variable flow capability.

Supplemental cooling units can be mounted above or alongside equipment racks, and bring cooling closer to the source of heat, reducing the fan power required to move air. They also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment. Refrigerant is delivered to the supplemental cooling modules through an overhead piping system which, once installed, allows cooling modules to be easily added or relocated as the environment changes.

Air flow can also be reduced through new designs of air compressor and/or variable frequency fan motors which are controlled by thermal sensors within server racks. Variable drive fans can be especially beneficial, as a 20% reduction in fan speed can reduce energy requirements by up to 50%, giving a payback of less than a year when they replace existing fans. Minimising fan power in these and other ways has a double benefit, because it both reduces electricity consumption and reduces the generation of heat, so that the cooling system has to work less hard. Computational fluid dynamics (CFD) can also assist these measures by modelling air flows to identify inefficiencies and optimal configurations (Patel et al 2001).
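The 'up to 50%' figure follows from the fan affinity laws, under which fan power varies roughly with the cube of rotational speed. A minimal sketch of the arithmetic:

    # Sketch: fan affinity law behind "20% slower, up to 50% less energy".
    speed_ratio = 0.8                  # fan slowed by 20%
    power_ratio = speed_ratio ** 3     # affinity law: P2/P1 = (N2/N1)^3

    print(f"Power falls to {power_ratio:.0%} of original")  # ~51%, i.e. ~49% saving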
Free cooling is especially effective when it is combined with an expanded temperature range for operation. BT now allows its 250 or so sites to operate within a range of 5 to 40 degrees Celsius (compared to a more typical 20-24 degrees Celsius). This has reduced refrigeration operational costs by 85%, with the result that they have less than 40% of the total energy demand of a tier 3 data centre, with similar or greater reliability (O'Donnell 2007). Although there remains considerable concern amongst smaller operators about the reliability of such approaches, they are being encouraged by changes in standards, e.g. the TC9.9 standard of ASHRAE (a US body), which increases operating bands for temperature and humidity.
Box 2 – Free Cooling at the University of Edinburgh

The Hector supercomputing facility (High End Computing Terascale Resources) generates 18 kW of heat per rack. Free cooling is used for around 77% of the year, and provides all the cooling needed for about 9% of the year. This has reduced energy consumption by 26% annually. Further reductions have come from full containment of the racks, so that cooled supply air cannot mix with warmer room or exhaust air, and maximum use of variable speed drives on most pumps and fans. At early 2008 prices, the measures created annual savings of £453,953 compared to an older equivalent facility (see the short and long SusteIT case studies).
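How many hours a year free cooling can run is essentially a climate calculation. The sketch below is illustrative only – the thresholds and the synthetic temperature series are assumptions, not Edinburgh data:

    # Sketch: share of the year in which free cooling could operate, from
    # hourly outside temperatures and supply-air thresholds (both assumed).
    import random

    random.seed(1)
    hourly_temps = [random.gauss(9, 6) for _ in range(8760)]  # stand-in data, degC

    FULL_FREE_BELOW = 14     # outside air cold enough to do all the cooling
    PARTIAL_FREE_BELOW = 21  # cold enough to assist mechanical chillers

    full = sum(t <= FULL_FREE_BELOW for t in hourly_temps) / len(hourly_temps)
    partial = sum(FULL_FREE_BELOW < t <= PARTIAL_FREE_BELOW
                  for t in hourly_temps) / len(hourly_temps)

    print(f"Full free cooling: {full:.0%} of hours")
    print(f"Partial (assisted) free cooling: a further {partial:.0%} of hours")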
7. Networking Issues
As noted above, routers and other equipment connected with networks account for around 8% of ICT-related electricity consumption at the University of Sheffield. In addition, there will be further energy consumption related to Sheffield's use of the national JANET network. Generally speaking, network-related energy and environmental issues have received less attention than those relating to computing and printing, but it is clear that there is considerable scope for improvement (Baliga et al 2008; Ceuppens, Kharitonov and Sardella 2008). A new energy efficiency metric has also been launched for routers in the US (ECR 2008).
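Metrics of this kind normalise power against useful throughput. A minimal sketch of the idea – the router figures are hypothetical, and this is a simplification of the actual test method:

    # Sketch: an energy-per-throughput rating for network equipment -
    # watts consumed per gigabit per second forwarded. Hypothetical figures.
    router_power_w = 3400    # measured draw of a fully loaded chassis
    throughput_gbps = 320    # measured forwarding capacity

    print(f"{router_power_w / throughput_gbps:.1f} W per Gbps")  # lower is better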
IP telephony is also known as Internet telephony, broadband telephony, broadband phone, voice over broadband, and Voice over Internet Protocol (VoIP).
The relative impacts of PoE can also be reduced if its full potential to replace mains power for some other devices is exploited (Global Action Plan 2007). The energy overheads can also be shared with other applications, such as 'intelligent' building services (see main report).
8. Conclusions
It is clear that there are many proven technical options to make data centres much more energy efficient than is currently the norm. However, a crucial requirement for achieving this will be effective collaboration between Estates and IT departments, as cooling and power issues clearly involve both.

In the longer term, there is real potential to achieve 'zero carbon' data centres. Indeed, this may be required anyway in a few years. The UK Greening Government ICT initiative requires zero carbon in Government offices – and therefore in ICT and, in many cases, data centres – by 2012 (Cabinet Office 2008). The Welsh Assembly Government also requires all publicly funded new developments in Wales to be 'zero carbon' from 2011.
Data wiring and cabling is categorised by its transmission speed, with the lowest, Category 1, being used for standard telephone or doorbell type connections, and the highest, Category 6, being used for very high capacity connections, such as are required in data centres or for high performance computing.
Hence, a goal of zero carbon data centres could be a question more of bringing the inevitable forward than of radical trailblazing. Zero carbon data centres would fit well with the drive for more shared services within ICT. The greater freedom of location which could result from this could enable optimal siting for renewable energy and other relevant technologies, such as tri-generation and underground thermal storage, thereby achieving zero carbon targets in an exemplary fashion without excessive rises in capital cost.
0i!liography
Bangeman, E., 2007. Gartner: Virtualization to rule server room by 2010. Ars Technica, May 2007. [Online] Available at: http://arstechnica.com/news.ars/post/20070508-gartner-virtualization-to-rule-server-room-by-2010.html [Accessed 28 July 2008].

Barroso, L. and Hölzle, U., 2007. The Case for Energy-Proportional Computing. IEEE Computer, December 2007. [Online] Available at: http://www.barroso.org/publications/ieee_computer07.pdf [Accessed 31 December 2008].

Brownstein, M., 2008. Tips for Buying Green. Processor, Vol. 30, Issue 3, 18 January 2008. [Online] Available at: http://www.processor.com/editorial/article.asp?article=articles%2Fp3003%2F22p03%2F22p03.asp [Accessed 18 October 2008].

Cabinet Office, 2008. Greening Government ICT. London. [Online] Available at: http://www.cabinetoffice.gov.uk/~/media/assets/www.cabinetoffice.gov.uk/publications/reports/greening_government/greening_government_ict%20pdf.ashx [Accessed 28 July 2008].

Cartledge, C., 2008a. Sheffield ICT Footprint Commentary. Report for SusteIT. [Online] Available at: http://www.susteit.org.uk (under Tools) [Accessed 20 November 2008].

Cartledge, C., 2008b. Personal communication between Chris Cartledge, formerly University of Sheffield, and Peter James, 23 November 2008.

Ceuppens, L., Kharitonov, D. and Sardella, A., 2008. Power Saving Strategies and Technologies in Network Equipment: Opportunities and Challenges, Risks and Rewards. SAINT 2008: International Symposium on Applications and the Internet, 28 July - 1 August 2008.

Cohen, S., Oren, I. and Maheras, G., 2008. Empowering IT to Optimize Storage Capacity Management. NetApp White Paper, November 2008. [Online] Available at: http://media.netapp.com/documents/wp-7060-empowering-it.pdf [Accessed 31 December 2008].
Citel, undated. 5 Steps to a Green VoIP Migration. [Online] Available at: http://www.citel.com/Products/Resources/White_Papers/5_steps.asp [Accessed 5 June 2008].

Climate Group, 2008. SMART 2020: Enabling the Low Carbon Economy in the Information Age. Global eSustainability Initiative.
Fujitsu Siemens Computers and Knürr, 2007. Energy Efficient Infrastructures for Data Centers. White Paper, July 2007. [Online] Available at: http://sp.fujitsu-siemens.com/dmsp/docs/wp_energy_efficiency_knuerr_fsc.pdf [Accessed 23 June 2008].

Global Action Plan, 2007. An Inefficient Truth. December 2007. [Online] Available at: http://www.globalactionplan.org.uk/event_detail.aspx?eid=2696e0e0-28fe-4121-bd36-3620c07eda49 [Accessed 23 June 2008].

GoodCleanTech, 2008. Five Green IT Tips for Network Admins. Posted by Steven Volynets, 24 July 2008. [Online] Available at: http://www.goodcleantech.com/2008/07/kvm_firm_offers_green_it_tips.php [Accessed 5 November 2008].

Gralla, P., 2009. 'Energy Star for Servers: Not Nearly Good Enough'. Greener Computing, 21 May 2009. [Online] Available at: http://www.greenercomputing.com/blog/2009/05/21/energy-star-servers-not-nearly-good-enough [Accessed 22 May 2009].
Greenberg, S., Mills, E., Tschudi, B., Rumsey, P. and Myatt, B., 2006. Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers. Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings, Asilomar, CA. ACEEE, August, Vol. 3, pp. 76-87. [Online] Available at: http://eetd.lbl.gov/emills/PUBS/PDF/ACEEE-datacenters.pdf [Accessed 5 November 2008].

Greener Computing, 2008. New Data Center from ADC to Earn LEED Platinum Certification. 5 August 2008. [Online] Available at: http://www.greenercomputing.com/news/2008/08/05/adc-data-center-leed-platinum [Accessed 31 October 2008].

Green Grid, 2009. See www.greengrid.org.

Henderson, T. and Dvorak, R., 2008. Linux captures the 'green' flag, beats Windows 2008 power-saving measures. Network World, 6 September 2008. [Online] Available at: www.networkworld.com/research/2008/060908-green-windows-linux.html [Accessed 8 October 2008].

Hengst, A., 2007. Top 10 Ways to Improve Power Performance in Your Datacenter. 4 October 2007. [Online] Available at: http://www.itmanagement.com/features/improve-power-performance-datacenter-100407/ [Accessed 23 June 2008].

Hickey, A.R., 2007. Power over Ethernet power consumption: The hidden costs. Article for TechTarget ANZ, 20 March 2007. [Online] Available at: http://www.searchvoip.com.au/topics/article.asp?DocID=1248152 [Accessed 21 October 2008].

Hills, M., 2007. London's data-centre shortage. ZDNet, 1 May 2007. [Online] Available at: http://resources.zdnet.co.uk/articles/comment/0,1000002985,39282139,00.htm [Accessed 29 July 2008].

Hopper, A. and Rice, A., 2008. Computing for the Future of the Planet. Philosophical Transactions of the Royal Society A, 366(1881): 3685-3697. [Online] Available at: http://www.cl.cam.ac.uk/research/dtg/publications/public/acr31/hopper-rs.pdf [Accessed 29 October 2008].

IBM Global Technology Services, 2007. 'Green IT': the next burning issue for business. January 2007. [Online] Available at: http://www-935.ibm.com/services/uk/igs/pdf/greenit_pov_final_0107.pdf [Accessed 1 May 2008].

James, P. and Hopkinson, L., 2009a. Sustainable ICT in Further and Higher Education – A Report for the Joint Information Services Committee (JISC). [Online] Available at: www.susteit.org.uk [Accessed 31 January 2009].
James, P. and Hopkinson, L., 2009b. Energy and Environmental Impacts of Personal Computing. A Best Practice Review prepared for the Joint Information Services Committee (JISC). [Online] Available at: www.susteit.org.uk [Accessed 22 May 2009].

James, P. and Hopkinson, L., 2009c. Results of the 2008 SusteIT Survey. A Best Practice Review prepared for the Joint Information Services Committee (JISC), January 2009. [Online] Available at: www.susteit.org.uk [Accessed 22 May 2009].

James, P. and Hopkinson, L., 2009d. Energy Efficient Printing and Imaging in Further and Higher Education. A Best Practice Review prepared for the Joint Information Services Committee (JISC). [Online] Available at: www.susteit.org.uk [Accessed 29 May 2009].

Koomey, J.G., 2007. Estimating Total Power Consumption by Servers in the US and the World. February 2007. [Online] Available at: http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf [Accessed 23 June 2008].

Lawrence Berkeley Laboratories, undated. Data Center Energy Management Best Practices Checklist. [Online] Available at: http://hightech.lbl.gov/DCTraining/Best-Practices.html [Accessed 21 October 2008].

Modine, A., 2007. Researchers: AMD less power-hungry than Intel. The Register, 31 August 2007. [Online] Available at: http://www.theregister.co.uk/2007/08/31/neal_nelson_associates_claim_amd_beats_intel/ [Accessed 30 July 2008].

Neal Nelson and Associates, 2008. AMD Beats Intel in Quad Core Server Power Efficiency. White Paper. [Online] Available at: http://www.worlds-fastest.com/wfz986.html [Accessed 30 July 2008].

Newcombe, L., 2008. Data Centre Cooling. A report for SusteIT by Grid Computing Now!, October 2008. [Online] Available at: http://www.susteit.org.uk [Accessed 22 May 2009].

O'Donnell, S., 2007. The 21st Century Data Centre. Presentation at the seminar 'Information Age, Eco Responsibility in IT 07', London, November 2007. [Online] Available at: http://www.information-age.com/__data/assets/pdf_file/0005/184649/Steve_O_Donnell_presentation_-_ER_07.pdf [Accessed 23 April 2008].

Patel, C.D., Bash, C.E., Belady, C., Stahl, L. and Sullivan, D., 2001. Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications. Proceedings of the Pacific Rim ASME International Electronic Packaging Technical Conference and Exhibition (IPACK 2001). [Online] Available at: http://www.hpl.hp.com/research/papers/power.html [Accessed 20 November 2008].

Relph-Knight, T., 2008. 'AMD and Intel differ on Energy Star server specifications'. Heise Online, 28 June 2008. [Online] Available at: http://www.heise-online.co.uk/news/AMD-and-Intel-differ-on-Energy-Star-server-specifications--/1110118 [Accessed 30 July 2008].
Schulz, G., 2007a. Business Benefits of Data Footprint Reduction. StorageIO Group, 15 July 2007. [Online] Available at: http://www.storageio.com/Reports/StorageIO_WP_021507.pdf [Accessed 5 August 2008].

Schulz, G., 2007b. Analysis of EPA Report to Congress. StorageIO Group, 14 August 2007. [Online] Available at: http://www.storageio.com/Reports/StorageIO_WP_EPA_Report_Aug1407.pdf [Accessed 5 August 2008].

Schulz, G., 2008. MAID 2.0 – Energy Savings Without Performance Compromises. StorageIO Group, 28 January 2008. [Online] Available at: http://www.storageio.com/Reports/StorageIO_WP_Dec11_2007.pdf [Accessed 5 August 2008].

Trox, 2006. Project Imperial College London. [Online] Available at: http://www.troxaitcs.com/aitcs/service/download_center/structure/technical_documents/imperial_college.pdf [Accessed 5 August 2008].

US Environmental Protection Agency (USEPA), 2007a. ENERGY STAR® Specification Framework for Enterprise Computer Servers. [Online] Available at: http://www.energystar.gov/index.cfm?c=new_specs.enterprise_servers [Accessed 23 June 2008].

US Environmental Protection Agency (USEPA), 2007b. Report to Congress on Server and Data Center Energy Efficiency. August 2007. [Online] Available at: http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency_study [Accessed 23 June 2008].

US Environmental Protection Agency (USEPA), 2008. ENERGY STAR Server Stakeholder Meeting Discussion Guide, 9 July 2008. [Online] Available at: http://www.energystar.gov/ia/partners/prod_development/new_specs/downloads/Server_Discussion_Doc_Final.pdf [Accessed 30 July 2008].

US Environmental Protection Agency (USEPA), 2009a. ENERGY STAR® Program Requirements for Computer Servers, 15 May 2009. [Online] Available at: http://www.energystar.gov/ia/partners/product_specs/program_reqs/computer_server_prog_req.pdf [Accessed 22 May 2009].

US Environmental Protection Agency (USEPA), 2009b. EPA Memo to Stakeholders, 15 May 2009. [Online] Available at: http://www.energystar.gov/index.cfm?c=new_specs.enterprise_servers [Accessed 22 May 2009].
Worrall, B., 2008. A Green Budget Line. Forbes, 22 July 2008. [Online] Available at: http://www.forbes.com/technology/2008/07/22/sun-energy-crisis-tech-cio-cx_rw_0722sun.html [Accessed 5 August 2008].