You are on page 1of 40

Energy Efficient Data Centres in Further and Higher Education

A Best Practice Review prepared for the Joint Information Services Committee (JISC)
May 27 2009

Peter James and Lisa Hop inson


Higher Education En!ironmenta" Performance #mpro!ement Pro$ect% &ni!ersity of 'radford (ustain#)% &* Centre for Economic and En!ironmenta" De!e"opment

Contents
Introduction.....................................................................................................4 1. Data Centres in Further and Higher Education............................................5 2. Energy and Environmental Impacts of Data Centres................................... 2.1 Em!edded Environmental Impacts......................................................1" 2.2 Energy Issues in Data Centres.............................................................1" 2.# $atterns of Energy %se in Data Centres..............................................1# #. Data Centre &olutions ' &trategy...............................................................1( 4. Data Centre &olutions ) $urchasing *ore Energy Efficient Devices...........2" 5. Data Centre &olutions ) Changing Computing +pproaches........................22 5.1 Energy $roportional Computing...........................................................22 5.2 Consolidation and ,irtualisation of &ervers.........................................22 5.# *ore Energy Efficient &torage.............................................................2# -. Data Centre &olutions ) *ore Efficient Cooling and $o.er &upply.............25 -.1 *ore Effective Cooling.........................................................................25 -.2 *ore Energy Efficient $o.er &upply....................................................2 -.# /educing +ncillary Energy...................................................................#" -.4 0etter *onitoring and Control.............................................................#" -.5 1e. &ources of Energy Inputs.............................................................#" 2. 1et.or3ing Issues......................................................................................#2 2.1 4he Environmental Impacts of ,5I$ 4elephony...................................#2 2.2 6iring and Ca!ling..............................................................................## . Conclusions ...............................................................................................## 0i!liography...................................................................................................#5
#

Introduction
4his paper provides supporting evidence and analysis for the discussion of data centres and servers in the main &usteI4 report 78ames and Hop3inson 2""(a9. *ost university and college computing today uses a more decentralised :client) server; model. 4his involves a relatively large num!er of :servers; providing services< and managing net.or3ed resources for< an even greater num!er of :clients;< such as personal computers< .hich do much of the actual computing :.or3; re=uired !y users. 4he devices communicate through net.or3s< !oth internally .ith each other< and e>ternally through the Internet. + typical data centre< or :server room;< therefore contains? &ervers< such as application servers 7usually dedicated to single applications< in order to reduce soft.are conflicts9< file servers 7.hich retrieve and archive data such as documents< images and data!ase entries9< and print servers 7.hich process files for printing9@ &torage devices< to variously store :instantly accessi!le; content 7e.g. user files9< and archive !ac3)up data@ and /outers and s.itches .hich control data transmission .ithin the data centre< !et.een it and client devices such as $Cs and printers< and to and from e>ternal net.or3s. 4his infrastructure has considera!le environmental and financial costs< including those of? Energy use< car!on dio>ide emissions and other environmental impacts from production@ Direct energy consumption .hen servers and other IC4 e=uipment are used< and indirect energy consumption for their associated cooling and po.er supply losses@ and 6aste and pollution arising from e=uipment disposal. *a3ing definitive Audgments a!out these environmental impacts ' and especially ones .hich aim to decide !et.een different procurement< or technical< options ) is difficult !ecause? Data centres contain many diverse devices< and vary in usage patterns and other parameters@

It re=uires the collection of information for all stages of the life cycle< .hich is very difficult in practice 7see discussion for $Cs in 8ames and Hop3inson 2""(!9@ and 4echnology is rapidly changing .ith more efficient chips@ ne. or improved methods of cooling and po.er supply@ and ne. computing approaches such as virtualisation and thin client. 4herefore caution must !e ta3en .hen e>trapolating any of the follo.ing discussion to specific products and models. 1onetheless< some !road conclusions can !e reached< as descri!ed !elo.. 4hey are !ased on the considera!le num!er of codes and !est practice guides .hich have recently !een pu!lished 7for e>ample< European Commission 8oint /esearch Centre 2"" @ %& Environmental $rotection +gency 7%& E$+ 2""2a9.

1. Data Centres in Further and Higher Education


Data centres range in siBe from one room of a !uilding< one or more floors< or an entire !uilding. %niversities and colleges typically contain a small num!er of central data centres run !y the I4 department 7usually at least t.o< to protect against one going do.n9< !ut many .ill also have secondary sites providing specific services to schools< departments< research groups etc. 4he demand for greater data centre capacity in further and higher education is rising rapidly< for reasons .hich include? 4he gro.ing use of internet media and online learning< and demands for faster connectivity from users@ + move to .e! !ased interfaces .hich are more compute intensive to deliver@ Introduction of comprehensive enterprise resource planning 7E/$9 soft.are solutions .hich are much more compute intensive than earlier soft.are@ Increasing re=uirements for comprehensive !usiness continuity and disaster recovery arrangements .hich results in duplication of facilities@ Increasing digitisation of data@ and /apidly e>panding data storage re=uirements. 4he &usteI4 survey found that -#C of responding institutions .ere e>pecting to ma3e additional investments in housing servers .ithin the ne>t t.o years 78ames and Hop3inson 2""(c9. 4his has considera!le implications for future IC4 costs< and ma3es data centres one of the fastest gro.ing components of an institution;s 5

:car!on footprint;. It also creates a potential constraint on future plans in areas .here the electricity grid is near capacity< such as central Dondon. 4hese changes are reflected in gro.ing num!ers of servers. 4he main &usteI4 report estimates that %E higher education has an estimated 215<""" servers< .hich .ill pro!a!ly account for almost a =uarter of the sector;s estimated IC4)related car!on dio>ide 7C529 emissions of 225<""" tonnes< and IC4)related electricity !ill of F-1 million< in 2""( 78ames and Hop3inson 2""(a9. 7Further education has only an estimated 2#<""" servers< so their impact is much less in this area9. 4he &usteI4 footprinting of IC4)related electricity use at the %niversity of &heffield also found that servers< high performance computing 7H$C9 and net.or3s ' most< though not all< of .hich .ould !e co)located in data centres ) accounted for 4"C of consumption< .ith an annual !ill of F4""<""" 7Cartledge 2"" a ' see also 4a!le 19. 6hilst these figures .ill !e lo.er at institutions .ithout H$C< they reinforce the point that the topic is significant. &ome responses are !eing made .ithin the sector. Ho.ever< 4a!le 2 ' .hich sho.s the prevalence of some of the 3ey energy efficiency measures .hich are discussed in the remainder of this document ' suggests that there is considera!le scope for improvement. 4his is especially true given that the most common option< !lade servers< are< .hilst advantageous< not the most environmentally superior option for reasons discussed !elo.. *ore positively< 2#C of responding institutions .ere e>pecting to ta3e significant measures to minimise server energy consumption in the near future. If the sector is to have more sustaina!le IC4 it is therefore vital that the energy consumption and environmental footprint of data centres is minimised. )a+"e ,- E"ectricity Consumption of non.residentia" #C) at the &ni!ersity of (heffie"d 2007./ 0rounded to nearest ,01 0Cart"edge 200/a12 #C) Category $Cs &ervers High performance computing Imaging devices 1et.or3s 4elephony +udio),isual 4otal E"ectricity Consumption 0M3h4y1 4<1-" 1<52" 1<21" 4" -(" 2"" -" <- " 5 4 C 1 C 14C 1"C C 2C 1C 1""C

)a+"e 2- 6esu"ts for sur!ey 7uestion . Ha!e you imp"emented any of the fo""o8ing inno!ations to reduce energy consumption in your data -

centre4ser!er room0s19 P"ease choose a"" that app"y2 7Guestion as3ed to server room operatorsHmanagers only9. /esults further analysed !y institution. #nno!ation 0lade servers &erver virtualisation $o.er management features Do. po.er processors High efficiency po.er supplies 415, +C po.er distri!ution Dayout changes 6ater cooling ,aria!le capacity cooling Heat recovery Fresh air cooling other 1one of these Don;t 3no. )ota" #nstitutions :um+er of responding institutions 5 4 4 # # 2 2 1 " " 2 " 11 5 2# 55 45 ##22 22 1 1 ( " " 1 "

2. Energy and Environmental Impacts of Data Centres


+ccording to one forecast< the num!er of servers in the .orld .ill increase from 1 million in 2""2 to 122 million in 2"2" 7Climate Iroup and Ie&I 2"" 9. 4hese servers .ill also have much greater processing capacity than current models. 4he historic trend of rising total po.er consumption per server 7see 4a!le #9 is therefore li3ely to continue. 4his gro.th .ill create many adverse environmental effects< especially those arising from the? Energy< resource and other impacts of materials creation and manufacture .hich are em!edded .ithin purchased servers and other data centre e=uipment@ Energy consumption of data centres< and activities such as cooling and humidification that are associated .ith it@ and Disposal of end of life e=uipment. 5ne recent study has analysed these impacts in terms of their C5 2 emissions 7Climate Iroup and Ie&I 2"" 9. It forecasts that the glo!al data centre footprint< including e=uipment use and em!odied car!on< .ill more than triple from 2- million tonnes C52 e=uivalent emissions in 2""2< to 25( million tonnes in 2"2". 4he study assumed that 25C of these emissions .ere related to use. 4he totals represent a!out 14C and 1 C respectively of total IC4)related emissions. IC4)related C52 e=uivalent emissions are said to !e a!out 2C of the glo!al total 7Climate Iroup and Ie&I 2"" 9. Hence< data centres account for around ".#C of glo!al C52 e=uivalent emissions.

)a+"e ;- 3eighted a!erage po8er 03atts1 of top < ser!ers% +y sa"es 0*oomey 200712 (er!er c"ass ,olume *id)range High)end 2000 1 424 55#4 &( 200; 2"2 524 -42 200= 212 -41 1"-2# 2000 1 # 42# 4 24 3or"d 200; 214 522 5 15 200= 21 -# 12- 2

)a+"e > ? #ncreasing Po8er Density of (er!ers 8ith )ime 0#nformation from Edin+urgh Para""e" Computing Centre and Cardiff &ni!ersity12 (ite Date Po8er density 0 34m21 ".5 2.5 2 2" 1"KJ

/C5 0uilding< % Edin!urgh +dvanced Computer Facility 7$hase 19< % Edin!urgh +CF 7$hase 2 ' initial Hector9< % Edin!urgh H$C Facility< Cardiff% +CF 7final Hector9

1(22""4 2""2 2"" 2"1"J

2.1 Em!edded Environmental Impacts


&ervers and other devices in data centres are made from similar materials< and similar manufacturing processes< to $Cs. End of life issues are also similar to $Cs. +s !oth these topics are considered in detail in the parallel paper on The Sustainable es!top 78ames and Ho3inson 2"" !9 they are not discussed further here. Ho.ever< one important issue .ith regard to em!edded energy is its relationship to energy in use. If it is higher< it suggests that a :green I4; policy .ould see3 to e>tend the lives of servers and other devices to gain the ma>imum compensating !enefit from the environmental !urden created !y production. If lo.er ' and if ne. models of server can !e significantly more energy efficient than the ones they are replacing ' it .ould suggest that a more vigorous :scrap and replace; policy .ould !e appropriate. +s the parallel paper discusses< different estimates have !een produced for the em!eddedHuse energy ratio in $Cs< ranging from #?1 to 1?# 78ames and Hop3inson 2""(!9. 4he paper concludes that it is reasona!le to assume a 5"?5" ratio in %E non)domestic applications. 4his is even more li3ely to !e true of servers than $Cs as? *ost operate on a 24H2 !asis< and therefore have much higher levels of energy use 7per unit of processing activity9 than $Cs@ 4he intensity of use is increasing as more servers are virtualised@ 4he devices are stripped do.n to the !asic activities of processing and storing data< and are therefore less materials) 7and therefore energy)9 intensive than $Cs 7this effect may !e offset< !ut is unli3ely to !e e>ceeded< !y the avoidance of po.er consumption for peripherals such as monitors< graphics cards< etc.9@ and *anufacturers have reduced em!edded energy< !oth through cleaner and leaner production< and greater revalorisation of end of life e=uipment 7FuAitsu &iemens Computers and Enurr 2""29.

2.2 Energy Issues in Data Centres


4he energy consumption of data centres has greatly increased over the last decade< primarily due to increased computational activities< !ut also !ecause of increases in relia!ility< .hich is often achieved through e=uipment redundancy 7Hopper and /ice 2"" 9. 1o relia!le figures are availa!le for the %E !ut %& data centres consumed a total of -1 !illion 36h of electricity ) 1.5C of national consumption ) in 2""5 7%&E$+ 2""2!9. 4his consumption is e>pected to dou!le !y 2"11. 4his high energy consumption of course translates into high energy costs. Even !efore the 2"" price rises< the Iartner consultancy .as predicting that energy 1"

costs .ill !ecome the second highest cost in 2"C of the .orld;s data centres !y 2""(< trailing staffHpersonnel costs< !ut .ell ahead of the cost of the I4 hard.are 7Iartner Consulting 2""29. 4his is li3ely to remain the case< even after the price fall!ac3s of 2""(. 4his is one reason .hy *icrosoft is !elieved to !e charging for data center services on a per).att !asis< since its internal cost analyses demonstrate that gro.th scales most closely to po.er consumed 7Denegri 2"" 9. Increasing energy consumption creates other pro!lems. + %& study concluded that< !y the end of 2"" < 5"C of data centres .ould !e running out of po.er 7%&E$+ 2""2!9. Dealing .ith this is not easy< either in the %& or in the %E< as po.er grids are often operating near to capacity< !oth overall and in some specific areas. Hence< it is not al.ays possi!le to o!tain connections for ne. or upgraded facilities ' for e>ample< in Dondon 7Hills 2""29. 4he high loads of data centres may also re=uire investment in transformers and other aspects of the electrical system .ithin universities and colleges. Interestingly< Ioogle and *icrosoft are said to !e responding to these pressures !y moving to.ards a model of data centres using 1""C rene.a!le energy< and !eing independent of the electricity grid ' a model .hich some !elieve .ill give them considera!le competitive advantage in a .orld of constrained po.er supply< and discouragement of fossil fuel use through car!on regulation 7Denegri 2"" 9. )a+"e =- E"ectricity &se in a Mode""ed ><>m2 &( Data Centre 0Emerson 20071 Category Demand (ide $rocessor &erver po.er supply 5ther &erver &torage Communication e=uipment (upp"y (ide Cooling po.er dra. %niversal $o.er &upply 7%$&9 and distri!ution losses 0uilding &.itchgearH4ransformer Dighting $o.er Distri!ution %nit 7$D%9 Po8er Dra8 051 =2 0@ =// 31 15 14 15 4 4 >/ 0@ =;9 31 # 5 # 1 1

)a+"e <- )ypica" (er!er Po8er &se 0&(EPA 2007+1 Components $&% losses Fan C$% Po8er &se # 6 1"6 "6 11

*emory Dis3s $eripheral slots *other!oard 4otal

#-6 126 5"6 256 2516

12

2.# $atterns of Energy %se in Data Centres


&ervers re=uire supporting e=uipment such as a po.er supply unit 7$&%9< connected storage devices< and routers and s.itches to connect to net.or3s. +ll of these have their o.n po.er re=uirements or losses. 4a!les 5 and - present %& data on these from t.o sources 7Emerson 2""2@ %&E$+ 2""2!9< .ith the first focusing on all po.er consumed .ithin server rooms< and the second on the consumption of the servers themselves. 4he fact that< even allo.ing for their lac3 of compara!ility< Emerson estimates server po.er consumption to !e much greater than the E$+ illustrates some of the difficulties of analysing the topic. &ervers also generate large amounts of heat< .hich must !e removed to avoid component failure< and to ena!le processors to run most efficiently. +dditional cooling to that provided !y the server;s internal fans is usually re=uired. 4he need for cooling is increasing as servers !ecome more po.erful< and generate larger amounts of heat 7I0* Ilo!al 4echnology &ervices 2""2< see also 4a!le #9. Cooling also helps to provide humidity control through dehumidification. Humidification is also re=uired in some data centres and ' as it is achieved !y evaporation ' can consume additional energy. 4he ;mission critical; nature of many of their applications also means that data centres must have an :%ninterrupti!le $o.er &upply; 7%$&9 to guard against po.er failures or potentially damaging fluctuations. 5ne study 7Emerson 2""2 ' see also 4a!le 59 found that? 5nly #"C of the energy used for computing .as actually consumed in the processor itself@ and IC4 e=uipment accounted for only 52C of the total po.er consumption of 1122 36< i.e. there .as a support :overhead; of cooling< po.er supply and lighting of (2C. +lthough the situation has improved since then< the figures nonetheless demonstrate the potential for reducing energy efficiency. 4he figures are certainly rather high for many data centres in %E universities and colleges. For e>ample? 4he Hector supercomputing facility at the %niversity of Edin!urgh has an overhead of only #(C even on the hottest of days< and this falls to 21C in mid.inter< .hen there is 1""C :free cooling; 7see &usteI4 case study and !o> 2 in &ection -9@ and 4he %niversity of &heffield estimates the overhead on its o.n data centres to !e in the order of 4"C 7Cartledge 2"" a9. 1#

4his apparent divergence !et.een the %E and %&+ is credi!le !ecause? 4he %& sample includes many data centres in much hotter and more humid areas than the %E< .hich .ill have correspondingly greater cooling loads@ Energy and electricity prices are higher in the %E than most part of the %&+< so there are greater incentives for efficient design and use of e=uipment@ Energy efficiency standards for cooling< po.er supply and other e=uipment are generally more stringent in the %E than most areas of the %&+@ and %& data centres are also improving ' a detailed !enchmar3ing e>ercise found that energy efficiency measures and other changes had reduced the average overhead from (2C in 2""# to -#C in 2""5 7Ireen!erg< *ills< 4schudi< /umsey< and *yatt 2""-9< and the recently opened +dvanced Data Center facility near &acramento achieved 22C 7Ireener Computing 2"" 9. Hence< a !road!rush estimate for achieva!le supply overheads in %E data centres is perhaps 4")-"C in those .ithout free cooling< and 25)4"C for those .ith it< or e=uivalent energy efficiency features. 4he ratio of infrastructure overheads to processing .or3 done is much greater than these percentages !ecause a9 servers re=uire additional e=uipment to operate< and !9 they seldom operate at 1""C of capacity. 4he latter is the case !ecause? &erver resources< !oth individually and collectively< are often siBed to meet a pea3 demand .hich occurs only rarely@ and &ervers come in standard siBes< .hich may have much greater capacity than is needed for the applications or other tas3s running on them. *ost estimates suggest that actual utilisation of the #-5H24H2 capacity of a typical server can !e as lo. as 5)1"C 7FuAitsu &iemens Computers and Enurr 2""29. Ho.ever< most servers continue to dra. #")5"C of their ma>imum po.er even .hen idle 7Fichera 2""-9. Cooling and %$& e=uipment also operates fairly independently of computing load in many data centres. 4hese figures suggest that there is considera!le potential to increase the energy efficiency of most data centres< including those in %E further and higher education. Indeed< one %& study has suggested that a complete optimisation of a traditional data centre could reduce energy consumption and floor space re=uirements !y -5C 7Emerson 2""29. &ome means of achieving this are summarised in 4a!le 2 and 0o> 1< .hich represent t.o slightly differing vie.s of prioritisation from a European and a 1orth +merican source. In !road terms< the options fall into four main categories? $urchasing more energy efficient devices@ 14

Changing computing approaches@ Changing physical aspects such as layouts< po.er supply and cooling@ and *odular development.

0o> 1 ) /educing Energy Consumption Data Centres ' + &upplier ,ie.


Emerson suggest that applying the 1" !est practice technologies to data centres ' ideally in se=uence ) can reduce po.er consumption !y half< and create other !enefits. 4hese technologies are? 1. Do. po.er processors 2. High)efficiency po.er supplies #. $o.er management soft.are 4. 0lade servers 5. &erver virtualisation -. 415, +C po.er distri!ution 710 *ore relevant to the %&+ than the %E9 2. Cooling !est practices 7e.g. hotHcold aisle rac3 arrangements9 . ,aria!le capacity cooling? varia!le speed fan drives (. &upplemental cooling 1". *onitoring and optimisation? cooling units .or3 as a team.

15

)a+"e 7- Most 'eneficia" Data Centre Practices% According to the E& Code of Conduct on Energy Efficient Data Centres 0Measures scoring =% on a ,.= sca"e1 0European Commission Joint 6esearch Centre 200/1 Category &election and Deployment of 1e. I4 E=uipment )ype *ultiple tender for I4 hard.are ) po.er

Deployment of 1e. I4 &ervices

*anagement of E>isting I4 E=t and &ervices +s a!ove

Deploy using Irid and ,irtualisation technologies Decommission unused services ,irtualise and archive legacy services Consolidation of e>isting services

Description Include the Energy efficiency performance of the I4 device as a high priority decision factor i the tender process. 4his may !e through the use of Energy &tar or &$EC$o.er type standard metrics or through application or deployment specific user metrics more closely aligned to the target environment .hich may include service level or relia!ility components. 4he po.e consumption of the device at the e>pected utilisation or applied .or3load should !e considered in addition to pea3 performance per 6att figures. $rocesses should !e put in place to re=uire senior !usiness approval for any ne. service tha re=uires dedicated hard.are and .ill not run on a resource sharing platform. 4his applies to servers< storage and net.or3ing aspects of the service. Completely decommission and s.itch off< prefera!ly remove< the supporting hard.are for unused services

&ervers .hich cannot !e decommissioned for compliance or other reasons !ut .hich are not used on a regular !asis should !e virtualised and then the dis3 images archived to a lo. po.er media. 4hese services can then !e !rought online .hen actually re=uired

+s a!ove

E>isting services that do not achieve high utilisation of their hard.are should !e consolidate through the use of resource sharing technologies to improve the use of physical resources. 4his applies to servers< storage and net.or3ing devices.

Category +ir Flo. *anagement and Design

)ype Design ' Contained hot or cold air

4emperature and Humidity &ettings

Free and Economised Cooling +s a!ove +s a!ove

E>panded I4 e=t inlet environmental conditions 7temp and humidity9 Direct +ir Free Cooling Indirect +ir Free Cooling Direct 6ater Free Cooling Indirect 6ater Free Cooling

Description 4here are a num!er of design concepts .hose !asic intent is to contain and separate the co air from the heated return air on the data floor@ L Hot aisle containment L Cold aisle containment L Contained rac3 supply< room return L /oom supply< Contained rac3 return L Contained rac3 supply< Contained rac3 return 4his action is e>pected for air cooled facilities over 136 per s=uare meter po.er density. 6here appropriate and effective< Data Centres can !e designed and operated .ithin the air inlet temperature and relative humidity ranges of 5 to 4"MC and 5 to "C /H< non) condensing respectively< and under e>ceptional conditions up to K45MC. 4he current< relevant standard is E4&I E1 #"" "1(< Class #.1.

+s a!ove

+s a!ove

+dsorptive Cooling

E>ternal air is used to cool the facility. Chiller systems are present to deal .ith humidity and high e>ternal temperatures if necessary. E>haust air is re)circulated and mi>ed .ith inta3e air to avoid unnecessary humidification H dehumidification loads. /e circulated air .ithin the facility is primarily passed through a heat e>changer against e>ternal air to remove heat to the atmosphere. Condenser .ater chilled !y the e>ternal am!ient conditions is circulated .ithin the chilled .ater circuit. 4his may !e achieved !y radiators or !y evaporative assistance through spray onto the radiators. Condenser .ater is chilled !y the e>ternal am!ient conditions. + heat e>changer is used !et.een the condenser and chilled .ater circuits. 4his may !e achieved !y radiators< evaporative assistance through spray onto the radiators or evaporative cooling in a cooling to.er. 6aste heat from po.er generation or other processes close to the data centre is used to po.er the cooling system in place of electricity< reducing overall energy demand. In such deployments adsorptive cooling can !e effectively free cooling. 4his is fre=uently part of a 4 Ien com!ined cooling heat and po.er system. 12

#. Data Centre &olutions ' &trategy


+ strategic approach to data centre energy efficiency is re=uired to ensure that the approaches adopted< and the e=uipment purchased< meets institutional needs in the most cost effective and sustaina!le .ay possi!le. Compared to personal computing< data centres involve :lumpier; and larger scale investments< and so the scope for action .ill !e constrained !y circumstances. 4he 3ey strategic moment is clearly .hen significant ne. investment is !eing planned< for there .ill !e maAor opportunities to save money and energy consumption !y doing the right thing. 4he 3ey to effective action at this stage ' and a definite help in others ' is effective colla!oration !et.een I4 and Estates !ecause many of the 3ey decisions are around physical layout of !uilding< cooling and po.er supply< for .hich Estates are often :suppliers; to I4 customers. %nfortunately< communication ' or mutual understanding ' is not al.ays good and special effort .ill !e needed to try to achieve it. 4he &usteI4 cases on Cardiff %niversity and Gueen *argaret %niversity sho. that this can pay off ' in the former case through a very energy efficient data centre< and in the latter through perhaps the most advanced application of thin client .ithin the sector. 4hree 3ey topics then need to !e considered? Careful analysis of needs< to avoid over)provisioning@ E>amination of alternative approaches< such as shared services and virtualisation@ and 5vercoming !arriers. 4he traditional approach to designing data centres has !een to try and anticipate future needs< add a generous margin to provide fle>i!ility< and then !uild to this re=uirement. 4his has the maAor disadvantages of incurring capital and operating costs .ell in advance of actual need< and higher than necessary energy consumption !ecause cooling and po.er supply is over)siBed in the early years< and an ina!ility to ta3e advantage of technical progress. 4he E% Code of Conduct 7EC 8oint /esearch Centre 2"" 9 and other e>perts 7e.g. 1e.com!e 2"" 9 therefore advocate more modular approaches< so that ne. !atches of servers and associated e=uipment can !e installed on an :as needs; !asis. 5ver)provisioning can also !e avoided !y careful e>amination of actual po.er re=uirements< rather than manufacturer;s claims. 7+lthough on a fe. occasions< it may !e that e=uipment actually uses more energy and so additional provision is re=uired9.

5ne option .hich also needs to !e considered today is .hether some or all of planned data centres can either !e outsourced to third party providers< or hosted .ithin common data centres< in .hich several institutions share a single data centre .hich is under their control. 4his could !e managed !y the institutions themselves< !ut is more li3ely to !e managed !y a specialist supplier. 4he colla!oration !et.een the %niversity of the 6est of &cotland and &outh Danar3shire Council 7.ho manage the shared centre9 is one of the fe. e>amples in the sector !ut several feasi!ility studies have !een done on additional proAects 7see !elo.9. 4he main &usteI4 report discusses some of the potential sustaina!ility advantages of such shared services 78ames and Hop3inson 2""(a9. Common data centres are made feasi!le !y virtualisation< .hich !rea3s the lin3 !et.een applications and specific servers< and therefore ma3es it possi!le to locate the latter almost any.here. 4he &usteI4 survey found that 52C of respondents .ere adopting this to some degree< and it is important that the potential for it is fully considered 78ames and Hop3inson 2""(c9. 4he &usteI4 case study on virtualisation of servers at &heffield Hallam %niversity demonstrates the large cost and energy savings that can !e realised. It is also important that all investment decisions are made on a total cost of o.nership 74C59 !asis< and that every effort is made to estimate the full costs of cooling< po.er supply and other support activities.

4. Data Centre &olutions ) $urchasing *ore Energy Efficient Devices


4here is a .ide variation in energy efficiency !et.een different servers. Hence< !uying more energy efficient models can ma3e a considera!le difference to energy consumption. 4hree main options 7.hich are not mutually e>clusive9 are availa!le at present? &ervers .hich have !een engineered for lo. po.er consumption through design< careful selection of components 7e.g. ones a!le to run at relatively high temperatures9< and other means@ :Guad)core; servers 7i.e. ones containing four processors .ithin the same chassis9@ and :0lade servers;< 4here is considera!le disagreement on .hat constitutes an energy efficient server ' or indeed .hat constitutes a server 7/elph)Enight 2"" 9. 4he de!ate has !een stimulated !y the %& Environmental $rotection +gency;s attempt to develop an Energy &tar la!eling scheme for servers. 5nce completed< this .ill also !e adopted .ithin the European %nion< and could therefore !e a useful tool in server 2"

procurement. Ho.ever< there is de!ate a!out ho. effective it is li3ely to !e< due to :.atering do.n; in response to supplier pressure 7/elph)Enight 2"" 9. +s .ith cars< one pro!lem is that manufacturer;s data on po.er ratings is often !ased on test conditions< rather than :real life; circumstances. +ccording to the independent 1eal 1elson 0enchmar3 Da!oratory< in early 2"" the .idely used &$EC$o.er test had a small memory footprint< a lo. volume of conte>t s.itches< simple net.or3 traffic and performed no physical dis3 InputH5utput. 4heir o.n testing< !ased on .hat .ere said to !e more realistic configurations< produced rather different figures and< in particular< found that :.hile some Guad)Core Intel Neon !ased servers delivered up to 14 percent higher throughput< similarly configured Guad)Core +*D 5pteron !ased servers consumed up to 41 percent less po.er; 71eal 1elson 2"" 9. + 3ey reason is said to !e the use of Fully 0uffered memory modules in the Neon< rather than DD/)II memory modules of +*D. 71ote that Intel does dispute these findings< and previous ones from the same company9 7*odine 2""29. 4here is less disagreement on the energy efficiency !enefits of !oth the +*D and Intel =uad)core processors 7i.e. four high capacity microprocessors on a single chip9< compared to dual)core or single)core predecessors 70ro.nstein 2"" 9. 4he !enefits arise !ecause the processors can share some circuitry@ can operate at a lo.er voltage@ and !ecause less po.er is consumed sending signals outside the chip. 4hese !enefits are especially great .hen the processors also ta3e advantage of dynamic fre=uency and voltage scaling< .hich automatically reduces cloc3 speeds in line .ith computational demands 7%&E$+ 2""2!9. + more radical approach !eing introduced into commercial data centres is that of !lade servers. 4hese involve a single chassis providing some common features such as po.er supply and cooling fans to up to 2" :stripped do.n; servers containing only a C$%< memory and a hard dis3. 4hey can !e either self)standing or rac3 mounted 7in .hich case a chassis typically occupies one rac3 unit9. 0ecause the server modules share common po.er supplies< cooling fans and other components< !lade servers re=uire less po.er for given processing tas3s than conventional servers< and also occupy less space. Ho.ever< they have much greater po.er densities< and therefore re=uire more intense cooling. 5ne study estimates that the net effect can !e a 1"C lo.er po.er re=uirement for !lade than conventional servers for the same processing tas3s 7Emerson 2""29. 4he t.o stage interconnections involved in !lade servers 7from !lade to chassis< and !et.een the chassis;s themselves9 mean that they are not suita!le for activities such as high performance computing 7H$C9 .hich re=uire lo. latency. Even in other cases< the higher initial cost arising from the specialist chassis< and the increased comple>ity of cooling< means that they may not have great cost or energy advantages over alternatives for many universities and colleges. Certainly< installations such as that at Cardiff %niversity 7see &usteI4 case9< have achieved 21

similar advantages of high po.er density from =uad core devices< .hilst retaining the fle>i!ility and other advantages of having discrete servers.

5. Data Centre &olutions ) Changing Computing +pproaches


6ithin a given level of processing and storage demand< three !road approaches are availa!le? Energy proportional computing@ Consolidation of servers< through virtualisation and other means@ and *ore efficient data storage.

5.1 Energy $roportional Computing


+s noted a!ove< most current servers have a high po.er dra. even .hen they are not !eing utilised. Increasing attention is no. !eing paid to the o!Aective of scaling server energy use in line .ith the amount of .or3 done 70arroso O HolBle 2""2@ Hopper and /ice 2"" 9. 5ne means of achieving this is virtualisation 7see !elo.9. +nother is po.er management< .ith features such as varia!le fan speed control< processor po.erdo.n and speed scaling having great potential to reduce energy costs< particularly for data centres that have large differences !et.een pea3 and average utilisation rates. Emerson 72""29 estimates that they can save up to C of total po.er consumption. In practice< servers are often shipped .ith this feature disa!led< andHor users themselves disa!le them !ecause of concerns regarding response time 7%&E$+ 2"" 9. &oft.are products< such as ,erdiem< .hich ena!le net.or3 po.erdo.n of servers< also have limited mar3et penetration. 4his is certainly the case in %E universities and colleges< .here .e have found fe. e>amples of server po.er management occurring. Different soft.are can also have different energy consumption ' as a result of varying demands on C$%s< memory etc. ' and these may also !e easier to =uantify in future 7although most differences are li3ely to !e small compared .ith the range !et.een normal use and po.erdo.n9 7Henderson and Dvora3 2"" 9.

5.2 Consolidation and ,irtualisation of &ervers


&erver utilisation can !e increased 7and< therefore< the total num!er of servers re=uired decreased9 !y consolidating applications onto fe.er servers. 4his can !e done !y? 22

/unning more applications on the same server 7!ut all utilising the same operating system9@ and Creating :virtual servers;< each .ith its o.n operating system< running completely independently of each other< on the same physical server. +nalyst figures suggest that in 2""2 the proportion of companies using server virtualisation .as as little as one in 1" 7Courtney 2""29. Ho.ever< Iartner figures suggest that !y 2""( the num!er of virtual machines deployed around the .orld .ill soar to over 4 million 70angeman 2""29. ,irtualisation has great potential< !ecause it potentially allo.s all of a server;s operating capacity to !e utilised. 0asic; virtualisation involves running a num!er of virtual servers on a single physical server. *ore advanced configurations treat an array of servers as a single resource and assign the virtual servers !et.een them in a dynamic .ay to ma3e use of availa!le capacity.. Ho.ever< virtualisation does re=uire technical capacity< and is not suita!le for every tas3< and may not therefore !e suita!le for every institution. 1onetheless< a num!er of institutions have applied it successfully< such as &heffield Hallam %niversity and &toc3port College 7see &usteI4 cases9. .

5.# *ore Energy Efficient &torage


4he amount of data stored is increasing almost e>ponentially< !oth glo!ally< and .ithin further and higher education. *uch of this data is stored on dis3 drives and other devices .hich are permanently po.ered and< in many cases< re=uire cooling< and therefore additional energy consumption. + study of data centres !y the %& Environment $rotection +gency 72""29 found that storage .as around 4)5C of average IC4 e=uipment)related consumption< !ut another report has argued that this an underestimate< and that )1"C .ould !e more accurate 7&chulB 2""2a9. 0y one estimate< roughly t.o thirds of this energy is consumed !y the storage media themselves 7dis3 drives and their enclosures9< and the other third in the controllers .hich transfer data in and out of storage arrays 7&chulB 2"" 9. 4hree important means of minimising this consumption are? %sing storage more effectively@ Classifying data in terms of re=uired availa!ility 7i.e. ho. rapidly does it need to !e accessedJ9@ and *inimising the total amount of data stored.

2#

4a3ing these actions can also create other !enefits< such as faster operation< deferring hard.are and soft.are upgrades< and less e>posure during /+ID re!uilds due to faster copy times 7&chulB 2""2!9. 1et+pp claims that the average enterprise uses only 25) "C of its storage capacity 7Cohen< 5ren and *aheras 2"" 9. *ore effective utilisation can reduce capital and operating e>penditure< and energy consumption. 4he data centre can !e also !e configured so that data can !e transferred directly to storage media .ithout using a net.or3< there!y avoiding energy consumption in routers< and !ypassing net.or3 delays 7Hengst 2""29. &torage in data centres typically involves storing data on a /andom +rray of Independent Dis3s 7/+ID9. If data on one dis3 cannot !e read< it can !e easily !e retrieved from others and copied else.here. Ho.ever< this approach has relatively high energy consumption !ecause dis3s are constantly spinning< and also !ecause they are seldom filled to capacity. *+ID 7*assive +rray of Idle Dis3s9 systems can reduce this consumption !y dividing data according to speed of response criteria< and po.ering do.n or s.itching off dis3s containing those .here rapid response is not re=uired. ,endors claim that this can reduce energy consumption !y 5"C or more 7&chulB 2"" 9. Even greater savings can !e o!tained .hen infre=uently accessed data is archived onto tapes and other media .hich re=uire no energy to 3eep. +chieving this re=uires a more structured approach to information life cycle management< .hich involves classifying data !y re=uired longevity 7i.e. .hen can it !e deletedJ9< and availa!ility re=uirements 7i.e. ho. rapidly does it need to !e accessedJ9. *ost university data centres also have storage re=uirements many times greater than the core data they hold. Different versions of the same file are often stored at multiple locations. +s an e>ample< a data!ase .ill typically re=uire storage for its ma>imum capacity< even though it has often not reached this. Different versions of the data!ase .ill often stored for different purposes< such as the live application and testing. +t any point in time< each data!ase .ill often e>ist in multiple versions 7the live version@ a on)line !ac3up version@ and one or more archived versions .ithin the data centre< and possi!ly others utilised else.here9. 5ver time< many legacy versions ' and possi!ly duplicates< if the data is used !y a variety of users ' can also accumulate. In this .ay< one 4era0yte 7409 of original data can easily s.ell to 15)2"40 of re=uired storage capacity. In most cases< this is not for any essential reason. Hence< there is the potential for data deduplication !y holding a single reference copy< .ith multiple pointers to it 7&chulB 2""2a9. &ome storage servers offer this as a feature< e.g. 1etapp. 4he %niversity of &heffield has used this and other means to achieve deduplication< .ith 2")("C savings< depending on the type of data 7Cartledge 2"" !9. 7Ienerally< savings have !een at the lo.er end of the spectrum9.

24

-. Data Centre &olutions ) *ore Efficient Cooling and $o.er &upply


4here are five !road 3inds of cooling and po.er supply measure .hich can !e adopted .ithin data centres? *ore effective cooling@ +dopting more energy efficient means of po.er supply@ /educing ancillary energy@ 0etter monitoring and control@ and 1e. sources of energy inputs.

-.1 *ore Effective Cooling


Cooling issues are discussed in a separate &usteI4 paper 71e.com!e 2"" ' prepared in association .ith Irid Computing 1o.P9< and so are discussed only !riefly here.

-.1.1 *ore effective air cooling


4he conventional method of cooling servers and other e=uipment in dedicated data centres is !y chilling air in computer room air conditioning 7C/+C9 units and !lo.ing it over them. 4hree maAor 7and often inter)related9 sources of energy inefficiency associated .ith these methods are? *i>ing of incoming cooled air .ith .armer air 7.hich re=uires input temperatures to !e lo.er than other.ise necessary to compensate9@ Dispersal of cooled air !eyond the e=uipment that actually needs to !e cooled@ and 5ver)cooling of some e=uipment !ecause cooling units deliver a constant volume of air flo.< .hich is siBed to match the ma>imum calculated cooling load ) as this occurs seldom< if ever< much of the cool air supplied is .asted. +necdotal evidence also suggests that relatively crude approaches to air cooling can also result in higher failure rates of e=uipment at the top of rac3s 7.here cooling needs are greater !ecause hot air rises from lo.er units9. 4hese pro!lems can !e overcome !y?

25

0etter separation of cooled and hot air !y changing layouts 7in a simple .ay through hot aisleHcold aisle layouts< and in a more comple> .ay !y sealing of floors and containment of servers9< and !y air management 7e.g. raised plenums for inta3e air< and ceiling vents or fans9 to dra. hot air a.ay@ /educing areas to !e cooled !y concentrating servers< and !y using !lan3ing panels to cover empty spaces in rac3s@ and *atching cooling to load more effectively through use of supplemental cooling units< andHor varia!le flo. capa!ility. &upplemental cooling units can !e mounted a!ove or alongside e=uipment rac3s< and !ring cooling closer to the source of heat< reducing the fan po.er re=uired to move air. 4hey also use more efficient heat e>changers and deliver only sensi!le cooling< .hich is ideal for the dry heat generated !y electronic e=uipment. /efrigerant is delivered to the supplemental cooling modules through an overhead piping system< .hich< once installed< allo.s cooling modules to !e easily added or relocated as the environment changes. +ir flo. can also !e reduced through ne. designs of air compressor andHor varia!le fre=uency fan motors .hich are controlled !y thermal sensors .ithin server rac3s. ,aria!le drive fans can !e especially !eneficial as a 2"C reduction in fan speed can reduce energy re=uirements !y up to 5"C< giving a pay!ac3 of less than a year .hen they replace e>isting fans. *inimising fan po.er in these and other .ays has a dou!le !enefit !ecause it !oth reduces electricity consumption< and also reduces the generation of heat so that the cooling system has to .or3 less hard. Computational fluid dynamics 7CFD9 can also assist these measures !y modeling air flo.s to identify inefficiencies and optimal configurations 7Chandra3ant et al 2""19.

-.1.2 +dopting :free; cooling


Free cooling occurs .hen the e>ternal am!ient air temperature is !elo. the temperature re=uired for cooling ' .hich for most %E data centres< is the case for most nights< and many days< during autumn< .inter and spring. 4here is therefore the potential to either s.itch conventional refrigeration e=uipment off< or to run it at lo.er loads< during these periods. Cooler am!ient air can !e transferred directly into the data centre< !ut< even .ith filtration< this may create pro!lems from dust or other contamination. 4he t.o main alternatives are :air side economisers; and :.ater side economisers;. In the former< heat .heels or other 3inds of e>changer transfer :coolth; from am!ient air into internal air. In the latter< am!ient air is used to cool .ater< rather than circulating it through chillers. 4he &usteI4 case study of the HEC45/ facility at the %niversity of Edin!urgh provides an e>ample of this 7see !o> 29. 2-

Free cooling is especially effective .hen it is com!ined .ith an e>panded temperature range for operation. 04 no. allo. their 25" or so sites top operate .ithin a range of 5 and 4" degrees Celsius 7compared to a more typical 2")24 degrees Celsius9. 4his has reduced refrigeration operational costs !y 5C< .ith the result that they have less that 4"C of the total energy demand of a tier # data centre< .ith similar or greater relia!ility 75;Donnell 2""29. +lthough there remains considera!le concern amongst smaller operators a!out the relia!ility of such approaches< they are !eing encouraged !y changes in standards< e.g. the 4C(.( standard of +&H/+E 7a %& !ody9 .hich increases operating !ands for temperature and humidity.

22

-.1.# %sing alternative cooling media


+ir is a relatively poor heat transfer medium. 6ater is much more effective< so its use for cooling can greatly reduce energy consumption. Chilled .ater is used to cool air in many C/+C units !ut it can also !e used more directly< in the form of a sealed chilled .ater circuit !uilt into server rac3s. +s the &usteI4 case study on Cardiff %niversity sho.s< this can provide considera!le energy efficiency !enefits over conventional approaches. + less common< and more comple> ) !ut potentially more energy efficient 7as it can !e operated at 14 "C< rather than the oC .hich is normal .ith chilled .ater9 ) is use of car!on dio>ide as a cooling medium< as has !een adopted in Imperial College 74ro> 2""-9.

-.2 *ore Energy Efficient $o.er &upply


In 2""5 the %&E$+ estimated the average efficiency of installed server po.er supplies at 22C 7=uoted in Emerson 2""29. Ho.ever ("C efficient po.er supplies are availa!le< .hich could reduce po.er dra. .ithin a data centre !y 11C 7Emerson 2""29. *ost data centres use a type of %$& called a dou!le)conversion system .hich convert incoming po.er to DC and then !ac3 to +C .ithin the %$&. 4his effectively isolates I4 e=uipment from the po.er source. *ost %E %$&s have a 415, three) phase output .hich is converted to 24", single)phase +C input directly to the server. 4his avoids the losses associated .ith the typical %& system of stepping do.n 4 ", %$& outputs to 2" , inputs. Energy efficiency could !e further increased if servers could use DC po.er directly< there!y avoiding the need for transformation of %$& inputs into +C. 04< the largest data centre company in Europe< does this in its facilities< and have evidence that the mean time !et.een failure 7*40F9 of their sites is in e>cess of 1"<""" years 7!etter than tier 49 and energy consumption has dropped !y 15C as a result 75;Donnell 2""29. Ho.ever< there are fe. suppliers of the necessary e=uipment at present< and so no universities or colleges use this method.

0o> 2 ) Free Cooling at the %niversity of Edin!urgh 4he Hector supercomputing facility 7High End Computing 4erascale /esources9 generates 1 36 of heat per rac3. Free cooling is used for around 22C of the year< and provides all the cooling needed for a!out (C of the year. 4his has reduced energy consumption< !y 2-C annually. Further reductions have come from full containment of the rac3s so that cooled supply air cannot mi> .ith .armer room or e>haust air< and ma>imum use of varia!le speed drives on most pumps and fans. +t early 2"" prices< the measures created annual savings of F45#<(5# compared to an older e=uivalent facility 7see the short and long &usteI4 case studies9.

2(

-.# /educing +ncillary Energy


%sing remote 3ey!oardHvideoHmouse 7E,*9 units can reduce the amount of electricity used in these applications< especially monitors 7IoodClean4ech 2"" 9. Inefficient lighting also raises the temperature in the server room< ma3ing the cooling systems .or3 harder to compensate. %sing energy)efficient lights< or motion)sensitive lights that .on;t come on until needed< can cut do.n po.er consumption and costs 7Hengst 2""29.

-.4 0etter *onitoring and Control


5ne of the conse=uences of rising e=uipment densities has !een increased diversity .ithin the data center. /ac3 densities are rarely uniform across a facility and this can create cooling inefficiencies if monitoring and optimiBation is not implemented. /oom cooling units on one side of a facility may !e humidifying the environment !ased on local conditions .hile units on the opposite side of the facility are dehumidifying. /ac3 level monitoring and control systems can trac3 ' and respond locally to ' spot overheating or humidity issues rather than providing additional cooling to the entire data center 76orrall 2"" 9.

-.5 1e. &ources of Energy Inputs


4here are several synergies !et.een data centres and rene.a!le or lo. car!on energy sources. + considera!le proportion of data centre capital cost is concerned .ith protection against grid failures. &ome of this e>penditure could !e avoided !y on)site generation. Indeed< !oth Ioogle and *icrosoft are said to !e see3ing 1""C rene.a!le energy sourcing< and technical developments in a num!er of areas such as fuel cells< trigeneration 7.hen an energy centre produces cooling< electricity and heat from the same fuel source9 and ground source heat pumps are ena!ling this 7Denegri 2"" 9. Hopper and /ice 72"" 9 have also proposed a ne. 3ind of data centre< co)located .ith rene.a!le energy sources such as .ind tur!ines< .hich act as a :virtual !attery;. 4hey .ould underta3e fle>i!le computing tas3s< .hich could !e aligned .ith energy production< increasing .hen this .as high and decreasing .hen it .as lo.. Data centres also have affinities .ith com!ined heat and po.er 7CH$9< .hich ' although usually fossil fuelled< !y natural gas ' is lo.er car!on than conventional electricity and heat production. 4his is partly !ecause of the relia!ility effects of on) site generation< !ut also !ecause many CH$ plants discharge .aste .ater at sufficiently high temperatures to !e used in a!sorption chillers to provide cold .ater for cooling. 4his :trigeneration; can replace conventional chillers< and therefore reduce cooling energy consumption considera!ly<

#"

0o> # ) *easuring and 0enchmar3ing &erver and Data Centre Efficiency


4he ne. Energy &tar scheme for enterprise servers covers features such as efficiency of po.er supply@ po.er management@ capa!ilities to measure real time po.er use< processor utiliBation< and air temperature@ and provision of a po.er and performance data sheet 4he %& Environmental $rotection +gency claims that it .ill raise efficiency !y around #"C compared to the current average7%& E$+ 2""(a9. Ho.ever< it has !een criticised for ignoring !lade servers< and for only measuring po.er consumption during the idle stage 7Iralla 2""(9. Ho.ever< a forthcoming 4ier 2 is e>pected to set !enchmar3s for the performance of a server across the entire server load 7%&E$+ 2""(!9. In parallel the &tandard $erformance Evaluation Corp. 7&$EC9< a nonprofit organisation< is developing its o.n !enchmar3s for server energy consumption 7&$EC undated9. 4hese may form the !asis for a 4ier 2 Energy &tar 76u 2"" 9. 4he Ireen Irid 72""(9 has also pu!lished several metrics< including the $o.er %sage Effectiveness 7$%E9 inde>. 4his divides the centre;s total po.er consumption 7i.e. including cooling and po.er supply losses9 .ith the po.er consumed .ithin IC4 e=uipment. *easurements of 22 data centres !y Da.rence 0er3eley 1ational Da!oratory found $%E values of 1.# to #." 7Ireen!erg< *ills< 4schudi< /umsey< and *yatt 2""-9. + recent study has argued that 1.2 or !etter no. represents :state of the art; 7+ccenture 2"" 9. 4he ne. +DC facility near &acramento ' said to !e the greenest data centre in the %&< if not the .orld ' achieved 1.12 7see !o> 49. 4he European %nion 7E%9 has also developed a Code of Conduct for Energy

0o> 4 ) 4he 6orld;s Ireenest Data CentreJ


4he +dvanced Data Centers 7+DC9 facility< on an old air !ase near &acramento< com!ines green construction .ith green computing 7Ireener Computing 2"" 9. 4he !uilding itself has provisionally gained the highest< $latinum< rating of the %.&. Ireen 0uilding Council;s Deadership in Energy and Environmental Design 7DEED9 scheme 7the rough e=uivalent of 0/EE+* E>cellent in the %E9. Eey factors included reuse of a !ro.nfield site< and high use of sustaina!le materials and recycled .ater. Computing energy consumption has !een reduced !y :free cooling; 7using am!ient air to cool the ventilation air stream< rather than chillers9 for 25C of the year@ pressurising cool aisles and venting hot aisles to minimiBe air mi>ing@ using (2C energy efficient universal po.er supply 7%$&9 units@ and rotating them in and out of the cooled space so that< #1

2. 1et.or3ing Issues
+s noted a!ove< routers and other e=uipment connected .ith net.or3s account for around C of IC4)related electricity consumption at the %niversity of &heffield. In addition< there .ill !e additional energy consumption related to &heffield;s use of the national 8+1E4 net.or3. Ienerally spea3ing< net.or3)related energy and environmental issues have received less attention than those .ith regard to computing and printing !ut it is clear that there is considera!le scope for improvement 70aliga et al 2"" @ Ceuppens< Eharitonov and &ardella 2"" 9. + ne. energy efficiency metric has also !een launched for routers in the %& 7EC/ 2"" 9.

2.1 4he Environmental Impacts of ,5I$ 4elephony


5ne net.or3)related issue of gro.ing importance in universities and colleges is Internet $rotocol 7I$9 telephony< Conventional telephony involves dedicated circuits. Its phones operate on lo. po.er typically a!out 269 and< .hilst telephone e>change e=uipment consumes large amounts of energy< this has !een reduced through decades of improvement. 0y contrast< I$ telephony< .hich uses the Internet 7and therefore a variety of different circuits9 to transmit calls< can !e more energy intensive< .hen !ased on specialiBed phones. 1 4hese have relatively high po.er ratings 7often 126 or higher9< largely !ecause they contain microprocessors. It has !een estimated that on a simple per)phone !asis< running I$ telephony re=uires roughly #"C to 4"C more po.er than conventional phones 7Hic3ey 2""29. In institutions< their energy is usually supplied !y a special :$o.er over Ethernet; 7$oE9 net.or3 .hich operates at higher ratings than conventional net.or3s< and .hich has therefore has greater energy losses through heating as a result of resistance. 4he current $oE standard has roughly 156 per ca!le< and a proposed ne. standard could increase this to 45)5"6 .atts 7Hic3ey 2""29. 4he volume of calls also increases data centre energy usage< !oth .ithin the institution< and at those of its I$ telephony supplier< .hich ' as discussed a!ove ' is relatively energy intensive. 5verall< therefore< installing an I$ telephone system as the main user of as $oE net.or3 in a university or college is li3ely to increase electricity consumption. +s noted< the energy consumption of I$ telephony can !e reduced !y ma3ing ma>imum use of :softphones;< i.e. simple< lo. po.er< handheld devices .hich connect to a computer< .hich in turn underta3es call processing activities. Ho.ever< care is needed as the connections on a $C can a9 interfere .ith po.er management< and !9 potentially result in the $C !eing s.itched on< or in active mode< more than .ould other.ise !e the case. 6aste can also !e minimised !y adapting some conventional phones for ,5I$ use 7CItel undated9. 4his can avoid the need to replace .iring< and to operate $oE.
1 IP telephony is also known as Internet telephony, Broadband telephony, Broadband Phone, Voice over Broadband, and Voice over Internet Protocol (VoIP).


The relative impacts of PoE can also be reduced if its full potential to replace mains power for some other devices is adopted (Global Action Plan 2007). The energy overheads can also be shared with other applications, such as 'intelligent' building services (see main report).
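To give a feel for the per-phone figures quoted above, the sketch below compares the annual electricity use of conventional handsets with IP handsets powered over Ethernet. The 2W and 12W ratings come from the discussion above; the handset count, the assumed PoE distribution loss and the electricity price are hypothetical values chosen purely for illustration.

# Rough comparison of conventional vs PoE-powered IP handsets.
# Handset powers (2W, 12W) are the indicative figures quoted in the text;
# the handset count, assumed PoE loss and tariff are hypothetical.

HOURS_PER_YEAR = 8760

handsets = 2000              # assumed number of phones on campus
conventional_w = 2.0         # typical conventional handset draw (per text)
ip_handset_w = 12.0          # typical IP handset draw (per text)
poe_loss_fraction = 0.15     # assumed cable and supply losses in the PoE network
price_per_kwh = 0.10         # assumed electricity price, GBP/kWh

def annual_kwh(power_w: float) -> float:
    """Annual energy of one always-on device, in kWh."""
    return power_w * HOURS_PER_YEAR / 1000

conventional_total = handsets * annual_kwh(conventional_w)
ip_total = handsets * annual_kwh(ip_handset_w) * (1 + poe_loss_fraction)
extra_kwh = ip_total - conventional_total

print(f"Conventional handsets: {conventional_total:,.0f} kWh/year")
print(f"IP handsets over PoE:  {ip_total:,.0f} kWh/year")
print(f"Extra electricity:     {extra_kwh:,.0f} kWh/year "
      f"(~GBP {extra_kwh * price_per_kwh:,.0f})")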

7.2 Wiring and Cabling


Even a small university will have hundreds, possibly thousands, of miles of wires and cables to transmit data between, or supply power to, devices. Although often overlooked, this electrical and electronic nervous system has a number of environmental impacts. Several impacts arise from their bulk, which can be considerable, especially for high capacity data transmission (Category 6) or power supply cable. An IT-intensive university building, e.g. one containing a data centre, may well have sheaths of Category 6 cables with a cross section of several square metres, for example. As well as consuming considerable amounts of energy-intensive resources (mainly copper and plastics), and generating heat, these can reduce the efficiency of cooling and ventilation if they are located in ways which disrupt air flows. Poorly organised wiring and cabling can also make it difficult to reconfigure facilities, or to troubleshoot problems. This can make it more difficult to introduce some of the cooling approaches identified in section 6.1, and also result in considerable downtime, thereby reducing overall operational (and therefore energy) efficiency of the infrastructure. Structured wiring solutions, which provide common backbones for all connections, and route them in systematic ways, can reduce these problems, and are therefore an important element of sustainable IT.2

8. Conclusions
It is clear that there are many proven technical options to make data centres much more energy efficient than is currently the norm. However, a crucial requirement for achieving this will be effective collaboration between Estates and IT departments, as cooling and power issues clearly involve both. In the longer term, there is real potential to achieve 'zero carbon' data centres. Indeed, this may be required anyway in a few years. The UK Greening Government ICT initiative requires zero carbon in Government offices, and therefore in ICT and, in many cases, data centres, by 2012 (Cabinet Office 2008). The Welsh Assembly Government also requires all publicly funded new developments in Wales to be 'zero carbon' from 2011.
2 Data wiring and cabling is categorised by its transmission speed, with the lowest, Category 1, being used for standard telephone or doorbell type connections, and the highest, Category 6, being used for very high capacity connections, such as are required in data centres or for high performance computing.


:Bero car!on; from 2"11. Hence< a goal of Bero car!on data centres could !e a =uestion more of !ringing the inevita!le for.ard< than of radical trail!laBing. Qero car!on data centres .ould fit .ell .ith the drive for more shared services .ithin IC4. 4he greater freedom of location .hich could result from this could ena!le optimal siting for rene.a!le energy and other relevant technologies such as tri)generation and underground thermal storage< there!y achieving Bero car!on targets in an e>emplary fashion .ithout e>cessive rises in capital cost.


Bibliography
0angeman< E.< 2""2. "artner# $irtuali%ation to rule server room b& '()(. +/& 4echnica< *ay 2""2. R5nlineS +vaila!le at? http?HHarstechnica.comHne.s.arsHpostH2""2"5" )gartner)virtualiBation)to)rule) server)room)!y)2"1".html R+ccessed 2 8uly 2"" S. 0arroso< D. and HolBle< %.< 2""2. 4he Case for Energy)$roportional Computing * I+++ Compute*r Decem!er 2""2. R5nlineS +vaila!le at? http?HH....!arroso.orgHpu!licationsHieeeTcomputer"2.pdf R+ccessed #1 Decem!er 2"" S. 0ro.nstein< *.< 2"" . 4ips for 0uying Ireen. Processor< ,ol.#" Issue #< 1 8anuary 2"" . R5nlineS +vaila!le at? http?HH....processor.comHeditorialHarticle.aspJ articleUarticlesC2Fp#""#C2F22p"#C2F22p"#.asp R+ccessed 1 5cto!er 2"" S. Ca!inet 5ffice< 2"" . "reenin, "overnment ICT- R5nlineS Dondon. +vaila!le at? http?HH....ca!inetoffice.gov.u3HVHmediaHassetsH....ca!inetoffice.gov.u3Hpu!licati onsHreportsHgreeningTgovernmentHgreeningTgovernmentTictC2"pdf.ash>. R+ccessed 2 8uly 2"" S. Cartledge< C.< 2"" a. Sheffield ICT .ootprint Commentar&. /eport for &usteI4. R5nlineS +vaila!le at? http?HH....susteit.org.u3 7under tools9. R+ccessed 2" 1ovem!er 2"" S. Cartledge< C. 2"" !. $ersonal Communication !et.een Chris Cartledge< formerly %niversity of &heffield and $eter 8ames< 2# 1ovem!er 2"" . Ceuppens< D.< Eharitonov< D.< and &ardella< +.< 2"" . Power savin, Strate,ies and Technolo,ies in /etwor!- +0uipment 1pportunities and Challen,es* Ris! and Rewards. &+I14 2"" . International &ymposium on +pplications and the Internet< 8uly 2 ) +ug. 1< 2"" . Chandra3ant D. $atel< Cullen E. 0ash< 0elady C.< &tahl< D.< &ullivan D.< 2""1. Computational .luid &namics 2odelin, of 3i,h Compute ensit& ata Centers to Assure S&stem Inlet Air Specifications- /eprinted from the proceedings of the $acific /im +&*E International Electronic $ac3aging 4echnical Conference and E>hi!ition 7I$+CE 2""19. +vaila!le at? http?HH....hpl.hp.comHresearchHpapersHpo.er.html R+ccessed 2" 1ovem!er 2"" S. Cohen< &.< 5ren< I.< and *aheras I.<2"" . Empo.ering IT to 1ptimi%e Stora,e Capacit& 2ana,ement . 1et+pp 6hite $aper. 1ovem!er 2"" . R5nlineS +vaila!le at? http?HHmedia.netapp.comHdocumentsH.p)2"-")empo.ering)it.pdf R+ccessed #1 Decem!er 2"" S. #5

Citel, undated. 5 steps to a green VoIP migration. [Online] Available at: http://www.citel.com/Products/Resources/White_Papers/5_steps.asp [Accessed 5 June 2008].

Climate Group, 2008. SMART 2020: Enabling the Low Carbon Economy in the Information Age. Global eSustainability Initiative. [Online] Available at: http://www.theclimategroup.org/assets/resources/publications/Smart2020Report.pdf [Accessed 1 August 2008].


Courtney< *.< 2""2. Can server virtualisation gain .ider appealJ IT 7ee!< 22 1ov 2""2. +vaila!le at? http?HH....computing.co.u3Hit.ee3HcommentH22"4#21Hvirtualisation) gain).ider)#--(R+ccessed 21 8uly 2"" S. Energy Consumption /ating 7EC/9 Initative< 2"" . +ner,& +fficienc& for /etwor! +0uipment# Two Steps Be&ond "reenwashin, .R5nlineS +vaila!le at? http?HH....ecrinitiative.orgHpdfsHEC/T)T4&0IT1T".pdf R+ccessed 1 Decem!er 2"" S. Emerson< 2""2. +ner,& 6o,ic* Reducin, ata Center +ner,& Consumption b& Creatin, Savin,s that Cascade Across S&stems- +vaila!le at? http?HH....lie!ert.comHcommonH,ie.Document.asp>JidU " R+ccessed 5 8une 2"" S. European Commission 8oint /esearch Centre< 2"" . Code of Conduct on ata Centres +ner,& +fficienc&* $ersion )-(. #" 5cto!er 2"" . R5nlineS +vaila!le at? http?HHsun!ird.Arc.itHenergyefficiencyHhtmlHstand!yTinitiativeTdataC2"centers.htm R+ccessed 1" 1ovem!er 2"" S. Fichera< /.< 2""-. Power And Coolin, 3eat 8p The *arch 2""-. ata Center < Forrester /esearch<

Fujitsu Siemens Computers and Knürr, 2007. Energy Efficient Infrastructures for Data Centers. [Online] White Paper, July 2007. Available at: http://sp.fujitsu-siemens.com/dmsp/docs/wp_energy_efficiency_knuerr_fsc.pdf [Accessed 23 June 2008].

Global Action Plan, 2007. An Inefficient Truth. December 2007. Available at: http://www.globalactionplan.org.uk/event_detail.aspx?eid=2696e0e0-28fe-4121-bd36-3670c07eda49 [Accessed 23 June 2008].

GoodCleanTech, 2008. Five Green IT Tips for Network Admins. Posted by Steven Volynets, 24 July 2008. [Online] Available at: http://www.goodcleantech.com/2008/07/kvm_firm_offers_green_it_tips.php [Accessed 5 November 2008].

Gralla, P., 2009. 'Energy Star for Servers: Not Nearly Good Enough', Greener Computing, 21 May 2009. [Online] Available at: http://www.greenercomputing.com/blog/2009/05/21/energy-star-servers-not-nearly-good-enough [Accessed 22 May 2009].

Greenberg, S., Mills, E., Tschudi, B., Rumsey, P. and Myatt, B., 2006. Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers. Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings, Asilomar, CA. ACEEE, August. Vol 3, pp 76-87. [Online] Available at: http://eetd.lbl.gov/emills/PUBS/PDF/ACEEE-datacenters.pdf [Accessed 5 November 2008].

Greener Computing, 2008. New Data Center from ADC to Earn LEED Platinum Certification. 5 August 2008. [Online] Available at: http://www.greenercomputing.com/news/2008/08/05/adc-data-center-leed-platinum [Accessed 31 October 2008].

Green Grid, 2009. See www.greengrid.org.

Henderson, T. and Dvorak, R., 2008. Linux captures the 'green' flag, beats Windows 2008 power-saving measures. Network World, 6 September 2008. [Online] Available at: www.networkworld.com/research/2008/060908-green-windows-linux.html [Accessed 8 October 2008].

Hengst, A., 2007. Top 10 Ways to Improve Power Performance in Your Datacenter. 4 October 2007. Available at: http://www.itmanagement.com/features/improve-power-performance-datacenter-100407/ [Accessed 23 June 2008].

Hickey, A. R., 2007. Power over Ethernet power consumption: The hidden costs. 20 March 2007. [Online] Article for TechTarget ANZ. Available at: http://www.searchvoip.com.au/topics/article.asp?DocID=1248152 [Accessed 21 October 2008].

Hills, M., 2007. London's data-centre shortage. ZDNet, 1 May 2007. [Online] Available at: http://resources.zdnet.co.uk/articles/comment/0,1000002985,39282139,00.htm [Accessed 29 July 2008].

Hopper, A. and Rice, A., 2008. Computing for the Future of the Planet. Philosophical Transactions of the Royal Society A, 366(1881): 3685-3697. [Online] Available at: http://www.cl.cam.ac.uk/research/dtg/publications/public/acr31/hopper-rs.pdf [Accessed 29 October 2008].

IBM Global Technology Services, 2007. 'Green IT': the next burning issue for business. January 2007. Available at: http://www-935.ibm.com/services/uk/igs/pdf/greenit_pov_final_0107.pdf [Accessed 1 May 2008].

James, P. and Hopkinson, L., 2009a. Sustainable ICT in Further and Higher Education: A Report for the Joint Information Services Committee (JISC). [Online] Available at: www.susteit.org.uk [Accessed 31 January 2009].

8ames< $. and Hop3inson< D.< 2""(!. +ner,& and +nvironmental Impacts of Personal Computin,. + 0est $ractice /evie. prepared for the 8oint Information &ervices Committee 78I&C9. R5nlineS +vaila!le at? ....susteit.org.u3 R+ccessed 22 *ay 2"" S.8ames< $. and Hop3inson< D.< 2""(c. +ner,& +fficient Printin, and Ima,in, in .urther and 3i,her +ducation . + 0est $ractice /evie. prepared for the 8oint Information &ervices Committee 78I&C9. R5nlineS +vaila!le at? ....susteit.org.u3 R+ccessed 2( *ay 2""(S. 8ames< $. and Hop3inson< D.< 2""(c. Results of the '((? SusteIT Surve&- + 0est $ractice /evie. prepared for the 8oint Information &ervices Committee 78I&C9. 8anuary 2"" R5nlineS. +vaila!le at? ....susteit.org.u3 R+ccessed 22 *ay 2""(S. Eoomey< 8.< I.< 2""2< +stimatin, Total Power Consumption b& Servers in the 8S and the 7orld. Fe!ruary 2""2. R5nlineS +vaila!le at? http?HHenterprise.amd.comHDo.nloadsHsvrp.rusecompletefinal.pdf R+ccessed 2# 8une 2"" S. Da.rence 0er3eley Da!oratories< undated. Data Center Energy *anagement 0est $ractices Chec3list. R5nlineS +vaila!le at? http?HHhightech.l!l.govHDC4rainingH0est) $ractices.html R+ccessed 21 5cto!er 2"" S. *odine< +. 2""2. /esearchers? +*D less po.er)hungry than Intel. The Re,ister< #1 +ugust 2""2< R5nlineS +vaila!le at? http?HH....theregister.co.u3H2""2H" H#1HnealTnelsonTassociatesTclaimTamdT!eatsTi ntelH R+ccessed #" 8uly 2"" S. 1eal 1elson and +ssociates< 2"" . A2 Beats Intel in @uad Core Server Power +fficienc&- 5nline 6hite $aper. R5nlineS +vaila!le at? http?HH.....orlds) fastest.comH.fB( -.html R+ccessed 8uly #" 2"" S. 1e.com!e D.< 2"" . ata Centre Coolin,- A report for SusteIT b& "rid Computin, /owA< 5cto!er 2"" . R5nlineS +vaila!le at http?HH....susteit.org.u3 R+ccessed 22 *ay 2""(S. 5;Donnell< &.< 2""2. The ')st Centur& ata Centre. $resentation at the seminar< Information +ge< Eco /esponsi!ility in I4 "2< Dondon< 1ovem!er 2""2. R5nlineS +vaila!le at? http?HH....information) age.comHTTdataHassetsHpdfTfileH"""5H1 4-4(H&teveT5TDonnellTpresentationT) TE/T"2.pdf R+ccessed 2# +pril 2"" S. /elph)Enight< 4. 2"" . :+*D and Intel differ on Energy &tar server specifications;< 3eise 1nline< 8une 2 2"" . +vaila!le at http?HH....heise)online.co.u3Hne.sH+*D) and)Intel)differ)on)Energy)&tar)server)specifications))H111"11 R+ccessed? 8uly #" 2"" S.

&chulB< I.< 2""2a. Business Benefits of ata .ootprint Reduction < &torageI5 Iroup< 15 8uly 2""2. +vaila!le at http?HH....storageio.comH/eportsH&torageI5T6$T"215"2.pdf R+ccessed 5 +ugust 2"" S. &chulB< I.< 2""2!. Anal&sis of +PA Report to Con,ress< &torageI5 Iroup< 14 +ugust 2""2. +vaila!le at http?HH....storageio.comH/eportsH&torageI5T6$TE$+T/eportT+ug14"2.pdf R+ccessed 5 +ugust 2"" S. &chulB< I.< 2"" . 2aid '-( 5 +ner,& Savin,s 7ithout Performance Compromises < &torageI5 Iroup< 2 8anuary 2"" . +vaila!le at http?HH....storageio.comH/eportsH&torageI5T6$TDec11T2""2.pdf R+ccessed 5 +ugust 2"" S. 4ro> 2""-. $roAect Imperial College Dondon. http?HH....tro>aitcs.comHaitcsHserviceHdo.nloadTcenterHstructureHtechnicalTdocume ntsHimperialTcollege.pdf R+ccessed 5 +ugust 2"" S. %& Environment $rotection +gency 7%&E$+9< 2""2a. +/+R"9 STARB Specification .ramewor! for +nterprise Computer Servers- R5nlineS +vaila!le at? http?HH....energystar.govHinde>.cfmJcUne.Tspecs.enterpriseTservers R+ccessed 2# 8une 2"" S. %& Environment $rotection +gency 7%&E$+9< 2""2!. Report to Con,ress on Server and ata Center +ner,& +fficienc& . +ugust 2""2. R5nlineS +vaila!le at? http?HH....energystar.govHinde>.cfmJcUprodTdevelopment.serverTefficiencyTstudy R+ccessed 2# 8une 2"" S. %& Environment $rotection +gency 7%&E$+9< 2"" . 8S +nvironmental Protection A,enc&* +/+R"9 STAR Server Sta!eholder 2eetin, iscussion "uide < 8uly (< 2"" . R5nlineS +vaila!le at http?HH....energystar.govHiaHpartnersHprodTdevelopmentHne.TspecsHdo.nloadsH&er verTDiscussionTDocTFinal.pdf R+ccessed #" 8uly 2"" S. %& Environmental $rotection +gency 7%&E$+9< 2""(a. E1E/IY &4+/Z $rogram /e=uirements for Computer &ervers< 15 *ay 2""(. R5nlineS +vaila!le at http?HH....energystar.govHiaHpartnersHproductTspecsHprogramTre=sHcomputerTserve rTprogTre=.pdf R+ccessed 22 *ay 2""(S. %& Environmental $rotection +gency 7%&E$+9< 2""(!. E$+ *emo to &ta3eholders< 15 *ay 2""(. R5nlineS +vaila!le at? http?HH....energystar.govHinde>.cfmJ cUne.Tspecs.enterpriseTservers R+ccessed 22 *ay 2""(S.


Worrall, B., 2008. A Green Budget Line. Forbes, 22 July 2008. [Online] Available at: http://www.forbes.com/technology/2008/07/22/sun-energy-crisis-tech-cio-cx_rw_0722sun.html [Accessed 5 August 2008].

4"
