Published by: quocirca on Jan 07, 2014
Copyright Quocirca © 2014 Clive Longbottom, Quocirca Ltd Tel: +44 118 9483360 Email:
Musings on data centres – Volume 2

This report brings together a series of articles first published through ComputerWeekly.

January 2014

2013 continued the era of big changes for data centres. Co-location services continued to increase; cloud computing became more of a reality; “software defined” was applied to anything and everything. Keeping up to speed with the changes is difficult at the best of times – and that is why Quocirca has pulled together the articles it wrote for ComputerWeekly throughout 2013 as a single report.
Data centre resolutions – make them, and stick to them

Any New Year brings the opportunity to review and draw up a new list of priorities. Here are Quocirca’s main resolutions for data centre managers, which should help in creating a more fit-for-purpose IT platform within an organisation.
Data centre environmental monitoring and metrics
 
While IT equipment is covered by many vendors and systems management specialists, the implementation, monitoring and management of the environmental situation within a data centre is often less of a priority. This article looks at the various areas and what to watch out for.
Oh dear. Our provider has gone bust.

The cloud seems like a good idea. Just like hosting did. Or the application service provider (ASP) market. However, a “Plan B” has to be in place to cover what your organisation has to do if its provider goes bust.
Moving data and applications in the cloud
 
If you do go to the cloud, you will want as much flexibility as you can possibly get. For this, you must understand what can, and what cannot, be done when it comes to moving data and applications around in the cloud.
The Software Defined Data Centre – is it all rosy?

Software defined environments are all the rage. Networks, servers and storage each have their own “SDx” moniker, and now the software defined data centre (SDDC) has been mooted. Is the world ready for this?
Sweating IT assets – is it a good idea?

When times are hard, it is very tempting to try and get more life out of your IT equipment. However, this may not be a cost-effective way of managing your ICT platform.
Managing the software assets of an organisation

Even when you have a good level of control over the management and refresh of your hardware assets, there still remains the software. Many organisations have allowed their software to get out of control – here are some tips about regaining control, and recouping money in the process.
How to check the health of your data centre

Just how “healthy” is your data centre? With the way that technology and its use have morphed, an organisation’s IT platform is now far more like a living body. How can you carry out a proper health check across the complete platform?
The BYOD-proof data centre

Bring your own device (BYOD) is taxing the minds of many an IT director and data centre manager at the moment. Just how can a data centre be architected so as to embrace BYOD to the benefit of the organisation and users alike?
Is your data centre fit for high performance computing and high availability?

Do you need high performance computing (HPC)? More to the point, would you know if you didn’t need it? If you do need it, are you sure that your data centre is up to housing a continuously running HPC system?
Data centre highlights of 2013
As 2013 closed, it was time to look back across the year and discuss what Quocirca saw as being the major happenings and news.
Data centre resolutions – make them, and stick to them

It’s that time of year again, when resolutions are made and generally broken within a few days. However, from an IT perspective, maybe it’s time to make some that can be stuck to – not on an individual basis, but at a level that can help you better serve your business. Data centre infrastructure is critical, however it is provisioned: completely self-owned and operated, sourced from a co-location facility, or procured via on-demand services from cloud service providers. Ensuring that the overall IT platform remains fit for purpose and supports the business is an imperative – so here are six resolutions that will ensure this is the case.
First resolution – find those lost items

Like searching down the back of the sofa for lost change, it’s amazing what you can find lost in a data centre. Previous research carried out by Quocirca shows that it is common for an organisation’s asset database to be out by +/-20% on server numbers alone. So, in a data centre with 1,000 servers, there could be 200 that are missing or wrongly identified – and so are over- or under-licensed. Cost savings can be made by carrying out a proper asset audit – and the best way to do this is to implement an automated asset tracking system, so that the audit is carried out on a continuing basis rather than as a one-off, high-cost, ad-hoc activity.
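The core of such an audit is a reconciliation between what the asset database says exists and what discovery tooling actually finds on the network. As a minimal sketch (the server names and data sets here are invented for illustration; a real system would pull from a CMDB and an automated discovery tool):

```python
# Sketch: reconciling an asset database against a discovery scan.
# All server names and records here are hypothetical.

def reconcile(asset_db, discovered):
    """Return servers only in the asset DB ('ghosts') and servers only
    found on the network ('unrecorded') - both are licensing/cost risks."""
    ghosts = asset_db - discovered       # recorded but not found: may still be paying licences
    unrecorded = discovered - asset_db   # found but not recorded: may be under-licensed
    return ghosts, unrecorded

asset_db = {"srv-001", "srv-002", "srv-003", "srv-004"}
discovered = {"srv-002", "srv-003", "srv-005"}

ghosts, unrecorded = reconcile(asset_db, discovered)
print(sorted(ghosts))      # ['srv-001', 'srv-004']
print(sorted(unrecorded))  # ['srv-005']
```

Run continuously rather than as a one-off, the two output lists become the work queue for keeping the asset database honest.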
Second resolution – shed a few pounds

In many cases, the way data centres have been run over the long term has led to massive inefficiency in how equipment is utilised. Again, Quocirca research shows that many servers are running at less than 10% of their potential capacity, and storage systems are often less than 30% utilised. Consolidation of applications and virtualisation of IT platforms can drive usage rates up markedly. If a target of 50% utilisation for servers is set and achieved, that could free up 80% of existing physical servers. If nothing else, these can be turned off, saving large amounts on the electricity bill. Better still, decommission them and sell them on, saving on licensing and maintenance costs – perhaps keeping some of the more modern servers mothballed so that new server purchases can be put back for a while.
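The 80% figure above follows directly from the arithmetic: the same total load carried at 50% per-server utilisation instead of 10% needs a fifth of the machines. A quick sketch with the illustrative figures from the text:

```python
# Illustrative consolidation arithmetic (figures from the text above).

def servers_after_consolidation(current_servers, current_util, target_util):
    """Servers needed to carry the same total load at a higher utilisation target."""
    total_load = current_servers * current_util  # load in 'server-equivalents'
    return total_load / target_util

current = 1000
needed = servers_after_consolidation(current, current_util=0.10, target_util=0.50)
freed_fraction = 1 - needed / current

print(int(needed))              # 200 servers still required
print(f"{freed_fraction:.0%}")  # 80% of the estate freed up
```

Real estates are messier than a single average utilisation figure, but the sketch shows why even a modest utilisation target releases most of the hardware.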
Third resolution – exercise more control

Organisations that have consolidated and virtualised still find that things can get out of control. The biggest promise of virtualisation is that it is easier than before to provision new images of applications and functions. However, this is also its biggest issue, as developers and even system administrators in the run-time environment can find it very easy to provision a new virtual image – and then forget to decommission it after it has been used. Such “virtual sprawl” can lead to false readings of overall systems utilisation, as the CPU and storage being used by these images is perceived as part of the “live” load, yet they are carrying out no useful work. On top of this, every live image is using up licences that could be used elsewhere, or need not have been paid for in the first place. Putting in place application lifecycle management (ALM) tools will help in ensuring that such virtual sprawl is controlled.
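One way ALM tooling identifies sprawl is by flagging images with near-zero activity over a sustained period. A hypothetical sketch (the VM records, thresholds and field names are invented for illustration; real tooling would pull metrics from the hypervisor’s management API):

```python
# Hypothetical sketch: flagging candidate 'sprawl' images by inactivity.
# VM records and thresholds below are invented for illustration.

from datetime import date

vms = [
    {"name": "web-01",  "avg_cpu": 0.45, "last_login": date(2014, 1, 5)},
    {"name": "test-07", "avg_cpu": 0.01, "last_login": date(2013, 9, 2)},
    {"name": "dev-12",  "avg_cpu": 0.02, "last_login": date(2013, 7, 15)},
]

def sprawl_candidates(vms, today, cpu_threshold=0.05, idle_days=60):
    """Images with near-zero CPU and no recent logins are likely forgotten
    provisions - still consuming storage, CPU headroom and licences."""
    return [vm["name"] for vm in vms
            if vm["avg_cpu"] < cpu_threshold
            and (today - vm["last_login"]).days > idle_days]

print(sprawl_candidates(vms, today=date(2014, 1, 7)))  # ['test-07', 'dev-12']
```

Flagged images would then go through a decommissioning workflow rather than being deleted outright, since some genuinely dormant images (e.g. disaster-recovery standbys) are meant to be idle.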
Fourth resolution – get out more

The self-owned and operated data centre is no longer the only option. Co-location facilities and cloud computing have expanded the options for how IT functions can be provisioned and served. The mantra for the IT department should no longer be “how can we do this within the data centre?”, but “how can this be best provisioned?” In many cases, this will mean that new applications and functions will be brought in from outside third parties – and this will mean that overall network availability has to be more of a concern. Multiple connections to the internet are becoming more the norm, ensuring that overall systems availability is not compromised by the network connection being a single point of failure when connecting to the outside world.