WAN Speak Musings – Volume II

Over the last months, Quocirca has been blogging for Silver Peak Systems' independent blog site, http://www.WANSpeak.com. Here, the blog pieces are brought together as a single report.
May 2013

Further to WAN Speak Musings – Volume I, this report brings together more blog articles from the Quocirca team, covering a range of topics.

Clive Longbottom Quocirca Ltd Tel: +44 118 948 3360 Email: Clive.Longbottom@Quocirca.com

Bob Tarzey Quocirca Ltd Tel: +44 1753 855794 Email: Bob.Tarzey@Quocirca.com

Copyright Quocirca © 2013

WAN Speak Musings – Volume II

The Great 2013 Race – Ready, Get Net, Go.
This first piece for 2013 provides some ideas for New Year's resolutions – did you set any for your IT goals? If so, did you keep them? Maybe it is time to set some mid-year goals based on this piece.

Every platform needs good foundations
The focus from many seems to be at the top end of the IT stack, yet if the underlying foundations are not solid, the whole IT platform can come crashing down. SDN and OpenFlow may provide this solidity at the network level.

It is not a strategy to stick a pin in a map
The global positioning of data centres is becoming more of an issue, as focus on the impact of the US Patriot Act and FISAA brings issues of data ownership and overall security to the front of mind. Care has to be taken over where a data centre is physically located.

The end of the enterprise app
Cloud computing promises much, but can only deliver on this promise if the monolithic enterprise application dies, to be replaced with the composite application built dynamically from functional components available on demand.

OMG – MWC could lead to more problems, LOL.
Mobile World Congress is over – and much that was there was consumer focused. However, enterprises should watch what comes out of shows like MWC, as it will have an impact on what their employees do around bring your own device (BYOD).

Venn shall we three meet again?
Consumerisation of IT seems to have led to too much focus being placed on the individual – anything that they want to do via BYOD must be supported by the IT department. This needs balancing: the individual's, IT's and the organisation's needs must all be weighed carefully.

5 steps to mitigating OBEM syndrome
Security is a major concern for the vast majority of organisations, and much is spent on IT security. However, the majority of problems are caused not by super-intelligent hyper-ninja nerds, but by the average employee just getting something wrong. Controls can be put in place to avoid such natural security leakage.

Is the net creating the Global Village?
The internet has shrunk the globe, with small organisations able to operate globally through the reach it gives them. Unfortunately, it also gives global reach to any idiot with a view. Sifting the gold nuggets of truly useful information from the dross of the idiots is a tough job.

Betting on a "good enough" network?
Las Vegas – that surreal apparition in the middle of the Nevada desert – used to be the home of cutting-edge security systems and behavioural monitoring. With money in short supply, is Vegas running the risk of falling behind – and does this place any risk on those who use the hotels and frequent the casinos there? Will what happens in Vegas always stay in Vegas?

Big data – a job for a logistics company?
FedEx has the capability to carry more data on a daily basis than the internet. It may not be able to meet service levels when it comes to data latency, however. With growing data volumes, the logistics companies may still have a role to play in how we deal with big data.

Cloud standards – if you wait long enough, the one you want may come along.
Every day seems to spawn a new group looking at what standards are needed to support cloud computing. The problem is that the majority of these groups seem to believe that they are the only ones working on any standards – and this is leading to too many standards that are not compatible with each other.

The future of careers in IT
Are you a nerd, a geek or an IT worker? If one of the first two, get your CV polished up – the right place for you may well be inside a technical organisation. If the latter, brush up on your skills and become part of the business.

The Great 2013 Race – Ready, Get Net, Go.
So here we are – 2013 already. No matter what New Year's resolutions you have made, can I advise a few more to take on, so that your organisation is ready to face what promises to be a tough year ahead?

Firstly – review your current approach to data centre networking. Over time, it is likely that the data centre network topology has morphed into a bit of a monster: hierarchical systems of switches leading to complex routing of packets, with all the associated latency issues this can bring. Moving to a fabric-based approach can flatten the network, reducing not only the latencies involved in east-west traffic between enterprise applications trying to share data, but also helping the north-south traffic between the data centre and the various types of devices being used to access the applications. Bringing in top-of-rack and end-of-row switches where the network can be virtualised gives far greater control over what is happening at the transport level.

However, this may not be enough, so secondly, you should look at software-defined networking (SDN). SDN provides a means of abstracting many network functions from the proprietary environments of network operating systems, providing a more standardised capability to deal with areas such as security, availability, load balancing and so on. As these functions are moved away from dependencies on any specific vendor, SDN also enables a far more open approach to network hardware, and allows advances in networking functionality to be adopted more rapidly without the need for forklift upgrades at the hardware level.

Thirdly, prepare for the data centre to move further away from being the centre of your organisation's overall IT platform. With mobility and the rise of BYOD (bring your own device) meaning that employees are using a mix of devices in a range of ways, with network connectivity outside the direct control of your organisation, it is increasingly important to ensure that access to applications is maintained, wherever they are. This may mean a more strategic approach to the use of external networks: multiple, redundant connections from the data centre to the internet, and multiple wireless and broadband plans for employees, so that they can be pretty much assured of connectivity at all times. This will become even more important as organisations continue to embrace cloud computing.

An increasing amount of functionality will be provided from outside your own data centre, and ensuring that these disparate and decentralised bits of functionality are all available and performing well should be an imperative. So fourthly, you should ensure that the means to measure end-to-end performance are in place: to report trends and issues in a predictive manner, and to deal with any issues arising in as automated a way as possible, so that users remain unaware of any major problems. Such application performance monitoring (APM) tools are available from many vendors, as are tools to manage performance across multiple external networks through load balancing, caching, packet shaping, compression and so on (a minimal sketch of the idea appears at the end of this piece).

Globally, 2013 is likely to remain a tough year. Consumers are struggling with their finances, and this will feed all the way through to business-to-consumer (B2C) and business-to-business (B2B) organisations. The past couple of years have been about the survival of the fittest – 2013 has to be more than this.
It will be a race beyond survival, to the defining of the winners in vertical and geographic markets. Much of this will be predicated on how effectively an organisation's networks can deal with the pressures placed on them. Prepare now – and plan to win.
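As a footnote to that fourth resolution, here is a minimal sketch of the kind of end-to-end probing an APM tool automates. The service URLs and the 500 ms threshold are illustrative assumptions only – real tools trend these measurements over time rather than spot-checking:

```python
import time
import urllib.request

SERVICES = ["https://crm.example.com/health",     # hypothetical endpoints to watch
            "https://erp.example.com/health"]

def probe(url: str, timeout: float = 5.0) -> float:
    """Return round-trip time in milliseconds, or infinity if unreachable."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except OSError:
        return float("inf")                       # unreachable counts as worst case
    return (time.monotonic() - start) * 1000.0

for url in SERVICES:
    rtt = probe(url)
    flag = "SLOW" if rtt > 500 else "ok"          # crude threshold for illustration
    print(f"{url}: {rtt:.0f} ms [{flag}]")
```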

Every platform needs good foundations
As plans for 2013 kick in, more organisations are looking at adopting a new IT "platform" on which to carry their applications, functions and services. 2012 was the year of cloud computing – at least from a vendor point of view, talking the subject up but seeing mainly proof-of-concept (PoC) projects and some toe-in-the-water activity. It is now likely that 2013 will be when we see a lot of "real" cloud projects being implemented: true elasticity of resources, with those resources shared amongst multiple workloads hosted on the platform.

There is a key thing that is implicit – but often glossed over – in the previous paragraph. Elasticity of resources includes all resources, yet the focus is often just on servers and storage, avoiding the knotty problem of what needs to be done at the network level.

Historical network topologies have been hierarchical, with all the problems this can bring to a data centre. Much of the north-south traffic (that between an end user's device and a single application) is dealt with without too much of a problem, and the business therefore tends to believe that everything is fine. The problems start when trying to deal with the east-west traffic – that which flows between different applications in the data centre itself. In a hierarchical network configuration, data from one application has to flow up to a common switch and then back down to the other application, leading to high levels of latency, as well as major physical limitations introduced through the use of the spanning tree protocol (STP), which can rapidly lead to the loss of physical ports as these are shut down to prevent data loops. Link aggregation groups (LAGs) can be put in place to get around some of STP's issues, but these are really only suitable for a physical architecture – things get too complicated once virtualisation is thrown into the mix.

A move to a cloud platform involves massive virtualisation and a need for rapid changes in how the network supports the dynamic changes at the server and storage levels. STP and LAGs are not the solution to this – this is where software-defined networking (SDN) really kicks in. SDN enables the abstraction of the management and control planes of switches from the data plane. This enables networks to be flattened, cutting back on north-south hierarchical traffic, and enabling physical ports to be aggregated and partitioned as virtual ports at will. A cloud platform built on SDN can therefore be far better optimised: available network bandwidth can be allocated and used at higher utilisation levels; higher-priority traffic can be given greater bandwidth; and different data types can have quality and priority of service applied through software rules, rather than through more constraining firmware-based switch operating systems.

The main problem at the moment is that awareness of SDN is low in IT circles – and it is essentially unheard of at the business level. The good news is that the vast majority of network vendors have adopted SDN – mainly through an industry standard called OpenFlow – and therefore, just through natural replacement of existing network equipment, SDN capabilities will become available. SDN is also not a forklift upgrade – it can be backward compatible with existing switches, so it can be used in a mixed environment of old and new switches.

Cloud computing is probably the most important change to enterprise computing since client/server came in; however, if the basic foundations of the network are neglected, it will just be a move from one bad platform to another. SDN provides the needed foundations – and is available to all, now.
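For those who like to see what "software rules" look like in practice, below is a minimal sketch using Ryu, one of the open-source OpenFlow controller frameworks (my choice for illustration – the piece itself names no tools). When a switch connects, it installs a higher-priority flow entry for one class of traffic; the match fields are invented for the example:

```python
# A minimal OpenFlow 1.3 controller sketch using the Ryu framework
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PriorityFlowApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Software rule: give SIP signalling (UDP port 5060) priority over default traffic
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=17, udp_dst=5060)
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Run under ryu-manager, this is the essence of the idea: the control plane, in ordinary software, programming the data plane across whatever switches sit beneath it.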

It is not a strategy to stick a pin in a map
Cloud computing is touted as the answer to pretty much anything and everything. If you need elastic resource provision, use cloud. If you don't want the cost of a facility and hardware, plus maintenance, use cloud. If you want greater flexibility in on-boarding new employees and getting rid of old ones, use cloud. All told, much of this is true. But cloud is not a silver bullet, and one big issue is the location of the data centre in which the main part of the cloud resides.

What? I almost hear you cry. Surely cloud makes location immaterial – with modern approaches, latency can be minimised and performance optimised. Location is not the issue it once was. Again, true – to a certain extent. The problem is no longer the technology behind it all, but the politics and cultures. In earlier posts (Politics and networks may not mix and Avoiding a high speed data breach), Bob Tarzey and I touched on some of the issues here. The biggest issue for the burgeoning global cloud computing market is not a lack of standards, nor problems with overall performance, nor the multitudinous different platforms that can be chosen – it comes down to how much trust you can place in the company, and the country, in which the data centre facility – and therefore the data – resides.

There are certain areas of the world that the majority of Western organisations would steer clear of – it may not be a good idea to store mission-critical information in North Korea, for example, and places such as Russia (where encryption is illegal) and China should be worrying enough to make these a non-choice. However, the Patriot Act and the Foreign Intelligence Surveillance Amendments Act (FISAA) in the US send shivers down the spines of those who could be impacted by them – which is essentially any organisation operating in the US, any organisation using facilities operating in the US and any organisation using facilities operated by a US-headquartered organisation. Such worries about a "friendly" nation are beginning to have an impact on how things are done – look at the number of facilities around the world which may have what looks like a US company's name on them, yet are set up as completely separate, local organisations to be able to bypass the reach of the Patriot Act and FISAA.

Other companies, such as Calligo, operating out of the British Crown Dependency of Jersey in the Channel Islands, take a different view again. Jersey is not part of the European Union (EU), but is closely aligned with it, and has its own law-making capabilities, which are often more pragmatic and business-friendly than is the case in larger countries, while still offering strong data security and audit capabilities. Therefore, by operating facilities through such island jurisdictions, a more targeted approach to data security can be offered. The "big country, big brother" data laws can be bypassed to an extent, although in Jersey's case EU data protection law would still effectively apply. At the moment, the EU seems to be erring towards requiring that anything similar to the Patriot or FISAA acts be backed by a warrant provided by a court, which makes "fishing expeditions" – where investigations are carried out based on dodgy intelligence – a little harder to initiate.

In the end, your data's security is only as good as the trust you know you can place in the facility, its owners and the country it is in. Make sure that your organisation will not find itself suddenly caught in a data-trawling net that could bring the business down.


The end of the enterprise app
How well is your organisation supported by its existing enterprise apps? Do your ERP and CRM systems fully support the dynamic nature of the business, enabling processes to be changed as and when the business needs – or is it far more the case that your organisation has had to change its processes in order to fit in with what the application can do?

I've seen this time and time again. A given organisation finds it has a problem. IT researches what solutions are available and finds that there are, at best, solutions embedded in enterprise applications that solve 80% of the problem. Eventually, it is agreed that one of these will have to do; the application is procured, tested, implemented and finally run. This could be a year or more after the problem was first raised by the business – the "solution" is already out of date and constrains the business, rather than enabling it.

However, we now have cloud computing. Some see "cloud" as the same as "hosting" – just move the ERP or CRM application into the "cloud" and everything will be fine. Actually, without good network planning and engineering to cover availability and end-to-end systems response, just doing this could leave the situation a lot worse than it was before. No – cloud, when exploited well, should be a lot more than this.

True cloud offers discrete sets of functionality that can be integrated as required. Consider how many applications have typically evolved in the past: each starts out as a highly targeted system, then the functionality increases ("bloatware") until there is a high degree of crossover between what one application does and another. Information is duplicated in different data stores, and this can lead to errors – for example, one customer record in a CRM system being for "Mr. A.B. Person, 123 High St, Somewhere", and another being for "Alan Person, 123 High Street, S'Where". Computers find it difficult to see that these are actually the same – and chaos can ensue.

Cloud can break these problems down. A single data set – or, at the very least, a master data management approach around the data – can be implemented. Discrete functions, such as one to deal with stock availability, another for item ordering and one for delivery, can be wrapped around this, with each function targeted at a specific issue. By pulling together functions as required, a "composite application" can be created that is far more dynamic. By swapping out functions as required, the business is back in charge – it can change its processes to meet the demands of the market, and IT can reflect these needs in almost real time.

IT staff have to change their mindset – the IT department has to know where these functions can be sourced, at what price and with what capabilities. It has to understand how these functions can be integrated into the composite application rapidly and effectively, and create and manage a full audit trail of what has happened along the process itself. The IT team must also ensure that the right network capabilities are in place: multiple connections to the outside world to deal with any connectivity failures, plus performance monitoring tools and network acceleration technologies to ensure that the end-user experience is as good as many IT users have become accustomed to receiving as consumers. Support for mobility and BYOD is crucial.

Cloud computing is the biggest change in how IT can be provisioned, and it enables IT and the business to work together effectively.
But it needs a joined-up approach – not just at the technology level, but between the business and IT itself. Grab the opportunity – plan at a functional level, and start supporting the process needs of the business.
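To make the composite-application idea concrete, here is a minimal sketch. The three service endpoints are hypothetical stand-ins for independently sourced cloud functions; any of them could be swapped for a rival provider without touching the rest of the process:

```python
# Hypothetical sketch: a "composite application" assembled from discrete functions
import requests

STOCK_SVC = "https://stock.example.com/api/v1"        # invented stock-availability function
ORDER_SVC = "https://orders.example.com/api/v1"       # invented item-ordering function
DELIVERY_SVC = "https://delivery.example.com/api/v1"  # invented delivery function

def place_order(sku: str, qty: int, address: str) -> dict:
    """Compose three discrete cloud functions into one business process."""
    stock = requests.get(f"{STOCK_SVC}/availability/{sku}", timeout=5).json()
    if stock["available"] < qty:
        return {"status": "rejected", "reason": "insufficient stock"}
    order = requests.post(f"{ORDER_SVC}/orders",
                          json={"sku": sku, "qty": qty}, timeout=5).json()
    shipment = requests.post(f"{DELIVERY_SVC}/shipments",
                             json={"order_id": order["id"], "address": address},
                             timeout=5).json()
    # Each step's result is returned so a full audit trail of the process can be kept
    return {"status": "accepted", "order": order["id"], "shipment": shipment["id"]}
```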


OMG – MWC could lead to more problems, LOL.
As Mobile World Congress (MWC) passes on its weary way in Barcelona, those responsible for technology in organisations should be looking on to see what the goings-on at the event may mean for them – even if it is with a slight look of bemusement.

Firstly, even though MWC remains big, its direct impact on an organisation may be fading slightly. As MWC becomes more like the Consumer Electronics Show (CES), the use for many of the technologies on show may appear peripheral in an enterprise environment. However, its indirect impact can still be felt. Google's announcements on Google Glass and the new Chromebooks, more guesswork on what an Apple watch might actually look like (without Apple even being there), along with other things ranging from the boringly believable to the unfortunately unbelievable, may affect the enterprise in unexpected ways.

What is sure is that MWC (and the associated goings-on) shows how the hardware vendors are making more of a play to capture the new millennials – those end users coming through who (we are led to believe) wouldn't recognise an email if it jumped out of a device and bit them, and who see the use of a phone for anything other than instant messaging and running apps as a quaint action for the dinosaurs aged, well, over 30.

The advent of bring your own device (BYOD) is a tide that no one should try to hold back – preventing the use of an employee's own equipment has never worked in the past, and with the cost of end-user devices falling dramatically, BYOD will only grow in use. But is there a line that shouldn't be crossed when an upstart of a 21-year-old VP comes in wearing Google Glass and says "Make this work with SAP, will you? Oh, and while you're at it, this watch isn't synching with my Facebook account properly"? Of course there should be – but again, a stiff response of "there is no place for those in our organisation" is probably not the best way to keep the employee on board – more likely just bored. Far better to sit down with these people and try to figure out if they are on to something – and if not, to be able to discuss with them why embracing every new technological advance is not good for the organisation.

One of the areas that tends to get missed when individuals look at how technology helps them is that this is where the perceived productivity improvement stops – with 'me' the individual, not 'us' the organisation. Devices with their own app and cloud ecosystem can lead to yet another information silo, with some important data being stored off on a cloud somewhere because that is the way the new device works. This can then lead to bad decisions being made at a corporate level, as that piece of critical information could not be included in the corporate 'big data' world. If this is down to the dogmatism of the individual, it can be a case of time to polish up the resume – this is generally quite good at focusing the mind on whether the device is really useful or just a bit of bling. Other areas, such as information security and the impact on the overall performance of the organisation's applications, can also be brought to bear – but expect the glazed look of an "I don't care" to spread on these subjects.

So – better to be prepared. Look at the stuff that is being talked about on the internet around MWC.
Try to understand why anyone within your organisation would want to use the new devices and services being announced – and prepare to be able to judge whether they are really going to be useful and worth supporting in the long term or, more likely, a flash in the pan.


Venn shall we three meet again?
I was at an event recently, and the vendor up on stage was trying to show how their company was taking the idea of the consumerisation of IT seriously. From Quocirca's point of view, consumerisation is important – the continued growth of bring your own device (BYOD), combined with bring your own software (BYOS) through the downloading of apps to these devices, means that IT has to be far more intelligent in how it deals with this aspect of shadow IT – and what it can mean to the business. To show how the vendor was paying attention to the consumerisation issue, a Venn diagram along these lines was shown:

[Venn diagram: two overlapping circles, labelled "Employee" and "IT"]

This looks pretty good – the employee is the one driving consumerisation, and IT is the one that is left with the mess if it cannot control it. However, surely there is a circle missing from this Venn diagram? Sure – employees are crying out for the capability to use their own device in their own way with their own tools, and the IT department cannot hold back the BYOD tide. But just where is the actual organisation in all of this?

Quocirca research has shown time and time again that there is a chasm between the IT function and the business. IT is often technically focused, trying to keep a platform of hardware, operating system(s), application server(s) and applications running. The business doesn't really care about this – it will complain like mad if the platform isn't working but, while it is, it really couldn't care less if it is being run by a group of elves chanting incantations in caves under the organisation's HQ. What the business cares about is revenue, customer loyalty, monetising its intellectual property and so on. Meanwhile, the employee is bothered about keeping a roof over their head, paying for food, saving up for the next vacation and so on.

These three groups – users, IT staff and the business – are very disparate, yet have to work together for everything to be successful. Where a business has concerns over who can see certain information and requests multiple layers of security, the user sees a block to the way they want to work. IT sees yet another demand from the business for something that needs scoping out, developing, retro-testing, rolling out and supporting. It's not surprising, therefore, that many systems end up in a bit of a mess.

Therefore, the more accurate Venn diagram has to be as below:

[Venn diagram: three overlapping circles, labelled "Employee", "IT" and "Business"]

Ensuring that any chosen system or solution hits the correct point of the Venn diagram – i.e. the intersection of the three circles – means that all three groups should be happy. Concentrating on any one or two groups will lead to something where at least one group is unhappy – and so to a system that is bypassed or misused.

So, if a business has worries about who can see certain information, it should talk with IT to ensure that any complexities of a multi-level security system are hidden from the employee. This can be done through, for example, the use of single sign-on (SSO) systems tied into a corporate directory (say, Active Directory) that defines who has what access rights to which digital assets (see a Quocirca report on this subject here). The employee sees little difference in how they work; IT gets a funded SSO project to put in place something that cuts down on help-desk calls; and the business gets greater control over its intellectual property.

It seems to me that many vendors have moved too far towards a two-circle Venn diagram of IT and the employee. Bringing the business back into the equation is not just a "nice thing to do", but an imperative to ensure that technology, business and the individual work well together.
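As an illustration of that SSO point, here is a sketch of the kind of directory lookup such a layer performs behind the scenes, using the open-source ldap3 library. The domain controller, service account and directory layout are all invented for the example:

```python
# Hypothetical sketch: check a user's directory group before granting access,
# the sort of test an SSO layer tied to a corporate directory performs.
from ldap3 import Server, Connection, ALL

def user_in_group(username: str, group_dn: str) -> bool:
    server = Server("ldap://dc.example.com", get_info=ALL)   # invented domain controller
    conn = Connection(server, user="EXAMPLE\\svc_sso",       # invented service account
                      password="...", auto_bind=True)
    conn.search(search_base="dc=example,dc=com",
                search_filter=f"(&(sAMAccountName={username})(memberOf={group_dn}))",
                attributes=["cn"])
    return bool(conn.entries)   # non-empty result means the user holds the right

# e.g. user_in_group("aperson", "cn=Finance,ou=Groups,dc=example,dc=com")
```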


5 steps to mitigating OBEM syndrome
Organisations tend to worry about security based on thoughts of ninja blackhats doing the technical equivalent of rappelling through the roof of Fort Knox and running off with several billion dollars' worth of gold bullion. In technical terms, the ninja blackhat manages to bypass security and makes good their escape with the organisation's intellectual property and personally identifiable information.

However, for the average organisation, it is far more likely that the aggregate financial loss will come through the equivalent of lots of individuals losing their change down the back of the sofa. It is not the targeted, malicious attacks that matter, but the drip, drip, drip of accidental information loss: emails sent to the wrong person, information accidentally escaping through social media outlets, or the loss of end-user devices. The issue is – how best to deal with this?

1) Educate: the new employee is coming in from a world where the exchange of information at most levels is a given – amongst friends and even to unknown people. Organisations have a responsibility to educate their employees in issues that matter to everyone. Information loss is not just embarrassing – it impacts the performance of the organisation, and could put the individual's own job under pressure through the poor performance of the company.

2) Centralise: BYOD is pretty much a given now. To get around this, centralising desktops to a server-based approach can reduce the chance of company data ending up stored on personal devices. This also means trying to ensure that everything the employee needs is also centralised – using enterprise equivalents of app-based software that are just as easy to use (for example, using Box instead of Dropbox). The desktop can then be run in a sandboxed environment, giving much greater levels of control.

3) "Open" access: employees will use social media no matter what you do. You cannot block them, so you have to embrace them. Let them use the tools – based on the general guidelines you will have provided them with in step 1. But this is where the "OBEM" comes in. No matter how much education you do, you will come up against the "one born every minute" employee. You know the sort – a bit ditzy, tends to flap around a bit having pressed "send" on an email when they shouldn't have done, or spends the first half hour of the day worrying about "that" text they sent last night. Education doesn't work too well with them – it tends to go in one ear, rattle around and drip out of the other without much having been taken in. Therefore, we need step 4.

4) Information management technology: use data leak prevention (DLP) tools to recognise when something is happening that shouldn't be (a minimal sketch of such a check appears at the end of this piece). Use hard blocks or advisory messages (the CEO tends to get upset if you directly block them from doing something – let them know, nicely, that what they are about to do is being logged), but stop whatever you can. Centralising through server-based desktops with all the tools an employee could want gives you enough control over their daily work to mitigate the number of possible issues. This then leaves the final step.

5) Nuclear deterrent: this is really about dealing with those who have managed to get through all of the above. In most cases, this will no longer be "accidental" – we are in the world of "malicious" or "stupid" now.
It has to be in the terms of employment and the contract of work that, should someone share or otherwise divulge data that is valuable to the organisation or could damage its reputation, it is a disciplinary matter that could result in termination of employment. It is unfortunate that OBEM actually seems to be moving more towards OBES – one born every second. No single approach will deal with the problems of information security: a blended approach including the five steps above is required.
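The sketch promised in step 4: a deliberately crude, illustrative DLP-style check that flags outbound mail containing what looks like a payment card number, or going to an external domain. The domain name and patterns are assumptions; a real DLP product does far more:

```python
# Illustrative only: a naive outbound-mail check of the kind a DLP tool automates
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")    # naive card-number pattern
INTERNAL_DOMAIN = "example.com"                     # hypothetical company domain

def check_outbound(to_addr: str, body: str) -> list:
    warnings = []
    if not to_addr.lower().endswith("@" + INTERNAL_DOMAIN):
        warnings.append("recipient is outside the organisation")
    if CARD_RE.search(body):
        warnings.append("body appears to contain a card number")
    return warnings   # a real tool would block, log, or send an advisory message

print(check_outbound("someone@gmail.com", "Card: 4111 1111 1111 1111"))
```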


Is the net creating the Global Village?
There is no gainsaying that the use of the internet has made the world a smaller place. A small company in one part of the world can make its domain expertise available on a global basis by having the right web presence in place. Possible customers can see its capabilities and get in touch, broadening the company's reach and therefore its market. To this extent, we have a global marketplace, and large and small companies can mix just as they would in any village or town market. Logistics still remains a real-world problem, but electronic goods, know-how and goods that can be shipped cheaply all apply here.

But is this the whole of the comparison that can be made between the internetted world and a small village? In many cases, I feel the answer has to be "unfortunately, not". Village life can often be summed up as having a community where everyone knows what everyone else is up to, and where there is a lot of talk about what this could possibly mean to others. Facebook and Twitter have taken the village community from a few hundred to the hundreds of millions – what someone saw someone else doing is now capable of being spread around the world in short order. The art of Chinese whispers is also widespread – in a village, the chains are relatively short but can still end up with bad misunderstandings about what is actually happening. With hundreds of millions of people capable of chatting about areas from which they are several times removed, misunderstandings are bound to be rife, and can escalate into major issues when the main way of dealing with them is 140 characters or a swift poke in the Facebooks.

Then we have crowd sourcing. A wonderful idea, based on the 'fact' that a larger pool of resource will always lead to a better decision being made. Very clever people have discussed this at length and have done research showing that the final answer received from crowd sourcing does tend to be better than the answer received from a smaller group. However, the research does tend to have its faults. Most has been carried out where there is a defined, factual answer at the end – the village-show type of competition of stating how many sweets there are in the jar, or guessing the weight of the cake, are the sorts of things that are used, and the average of all responses tends to be closer to the real answer than that obtained from a smaller sample. But if a different subject is chosen, what sort of response would be obtained? Let's try something like "Are there aliens walking amongst us, and are they after our intelligence?" or "How did JFK REALLY die?" Would crowd sourcing give us a better, more reasoned response than we would get over a pint at the local bar, or is it more likely that the global nutters and weirdoes would take over the discussions and we'd all end up wearing tin-foil hats and calling for the indictment of the entire FBI for presidenticide?

The idea of a global village is great – it offers all the convenience of everything being close together and easy to get at. The problem with a global village is that it does seem to offer everything else that a village offers – the sniping, the gossip, the misunderstanding and everything else that goes with it. But the biggest problem is that it can also give us the village idiot.
Unfortunately, this will not be just the one idiot per village, but a straight-line correlation (hopefully not an exponential one) where the internet just leads to a lot of very loud people with little real value overwhelming the value that those with real domain expertise can offer. There is a strong need to filter the rubbish out – the real value often lies with those who are less vocal.
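For what it's worth, the "sweets in the jar" effect is easy to simulate. The numbers below are invented; the only point is that the mean of many unbiased guesses drifts towards the truth, which is precisely the condition that fails for the tin-foil-hat questions:

```python
# Toy wisdom-of-crowds simulation: averaging many noisy, unbiased guesses
import random

random.seed(1)
TRUE_COUNT = 740            # actual number of sweets in the jar (made up)

def crowd_guess(n_people: int) -> float:
    guesses = [random.gauss(TRUE_COUNT, 150) for _ in range(n_people)]
    return sum(guesses) / len(guesses)

print(f"5 guessers:    {crowd_guess(5):.0f}")
print(f"5000 guessers: {crowd_guess(5000):.0f}")   # usually much closer to 740
```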


Betting on a “good enough” network?
As I write this blog, I'm sitting 27 storeys above the Las Vegas Strip, looking down on the concrete jungle that makes up this surreal place. From my window, I can see several of the major casino hotels; several smaller ones hide in the depths. Around me are six of the ten largest hotels in the world, with a little under 33,000 rooms between them. For the majority who turn up here, the idea is a little bit of fun at the tables, leaving a little poorer than when they arrived, but with a feeling that it was all worth it. What they don't see is the massive amount of technology underpinning Las Vegas.

The hospitality market has long been a technology hot spot – taking hotel bookings, keeping guests happy and ensuring that the hotel runs with a semblance of order requires a technology platform that has spawned countless vendors in the space. Add a casino into the mix alongside the hotel, and everything gets considerably more complex.

If you have ever been in a casino, you may have noticed the dome cameras dotted around the ceilings. These are planned to ensure that as much of the casino floor as possible is covered, and it used to be the case that the live analogue video streams were monitored simply by people watching out for things out of the ordinary. Things have moved on; alongside government security forces, it is in the casino trade that you will find some of the most advanced technology available. These video streams have been brought into the digital era and are monitored automatically – facial and other pattern-recognition systems are in place to rapidly spot anomalies, raising events for the humans on the casino's security staff to concentrate on. Incoming data from the gaming tables and slot machines is fed as streams to high-powered compute clusters that watch for possible fraud. Details of guests who have been warned away from a casino are shared along the Strip, so that those who are not welcome in one casino are easily identifiable at the others. This cannot be done on the cheap – behind the kitsch façade of the casino floor lie large data centres and complex networks on which the casino has to depend to fully track what is going on.

However, Vegas has hit a sticky patch. The guests have fallen away; those who do come don't spend as much as they did. Hotels on the Strip lie with empty rooms; new builds are frozen. Building a new casino/hotel on the Strip comes in at more than $5b – the money has to come from somewhere. Without the money coming in, Las Vegas is putting on a good act, but it has deep problems.

These problems will feed through to its technology capabilities – a lack of money will hamper the adoption of newer technologies. The cheats are progressing and have better technology themselves; the external technology blackhats would love to break the casinos' systems. In such an internecine battle, money is required to keep systems at a level that can at least put up a good fight. OK – it is not like the Ocean's films, where a laptop can be used to cause mayhem in a casino by taking over the complete platform, allowing the baddies to waltz off with millions, but Las Vegas is running the risk of no longer being able to claim to be one of the most secure environments on the planet. Personally identifiable information could be increasingly at risk; financial details may be compromised.

Las Vegas is not quite running on empty, but it needs to see the good times roll again. If not, then, against the old saying, what is left in Vegas by the guests may not stay in Vegas.
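Purely to illustrate the kind of stream monitoring described above – not how any casino actually does it – here is a toy running-baseline check over per-table payout data, with every number invented:

```python
# Illustrative sketch: flag payouts that sit far outside the recent baseline
from collections import deque
import statistics

WINDOW = 200                       # recent payouts to baseline against
history = deque(maxlen=WINDOW)

def check_payout(amount: float, threshold: float = 4.0) -> bool:
    """Toy z-score test: True means raise an event for the security staff."""
    anomalous = False
    if len(history) >= 30:         # wait for a minimal baseline first
        mean = statistics.mean(history)
        spread = statistics.pstdev(history) or 1.0
        anomalous = abs(amount - mean) / spread > threshold
    history.append(amount)         # naive: every observation joins the baseline
    return anomalous
```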


Big data – a job for a logistics company?
There was a thread going around on Facebook a few days back. The discussion was around the "fact" that FedEx has more bandwidth for dealing with big data than the internet itself. OK – it sounds strange, but if all of FedEx's vehicles were stuffed to the max with disks full of data, it could move more data in one day than the internet can manage. On top of this, because storage densities are increasing faster than the capabilities of the internet, it doesn't look like FedEx will lose this (theoretical) crown any time soon.

All very interesting – apart from the latency, of course. Even if FedEx can do same-day delivery of the disks from London, UK to Christchurch, New Zealand, or from San Francisco, USA to Johannesburg, South Africa, the latency of the data delivery (24 hours, or around 86 million milliseconds) does tend to mess up the chances of using FedEx for transactional work.

But there is a point to all this (honest, there is). Many organisations are looking at moving data to the cloud – either for functional reasons (that's where the app is going to be) or for information availability reasons (it's where the archive/backup is going to be). If the data volume is relatively small, then no problem: a simple movement of the data across the internet will generally be OK. But what happens when your data volume is tens of terabytes? A few petabytes? Set off a backup and hope that, by the time it has finished transferring, the next synchronisation of changed data isn't so big that all that happens is a continuous spiral of trying to get the data up to date? No – although data speeds on the internet are improving, they are not keeping up with a business's capability to breed data like a virus. This is where the logistics companies come in.

The alternative is a "data pig". This is a pure storage unit: a large block of disks put together to create a vault big enough for the temporary storage of large volumes of data. The pig is dropped off at the source data centre site by a logistics company, and the high-speed LAN is used to transfer the data onto the pig as if it were a standard image backup to an on-site storage system. I would, of course, recommend that the data is encrypted on the pig – you don't want all the company's intellectual property being stolen during transportation. As soon as the data is on the pig, our dear logistics friends rush it over to the target external facility, where the data is retrieved onto the main storage systems there, again at LAN speeds. Even from one side of the globe to the other, this should take less than 48 hours from start to finish. Then all that is needed is to synchronise the two days' worth (or less) of changes between the two systems – which can be done via the internet in a short period of time.

I find it nice to see that old-world approaches can still make new-world technology work at times.
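The back-of-envelope arithmetic is easy to check. The link speed and data volume below are illustrative assumptions, not figures from the piece, but they show why the pig wins at scale:

```python
# When does shipping disks beat the wire? (illustrative numbers only)
DATA_TB = 100        # backup size to move, in terabytes
LINK_MBPS = 100      # sustained internet uplink, megabits per second

bits = DATA_TB * 1e12 * 8                 # total bits to transfer
seconds = bits / (LINK_MBPS * 1e6)
print(f"Over the wire: {seconds / 86400:.0f} days")   # ~93 days at 100 Mbit/s

COURIER_HOURS = 48   # the piece's door-to-door estimate for a "data pig"
print(f"By courier:    {COURIER_HOURS / 24:.0f} days, regardless of volume")
```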

Cloud standards – if you wait long enough, the one you want may come along.
Cloud computing – the provision of IT services through a massively shared, elastic platform – holds much promise. However, like many of the silver bullets that have come and gone in the past (client/server, CORBA, web services, service-oriented architectures), everything only hangs together if everyone plays the game and doesn't try to be too clever.

This is why standards are important – if cloud platforms cannot be integrated, then cloud becomes JAPP (just another proprietary platform), no different to the long-running debates around mainframe vs UNIX vs Windows vs whatever else a vendor wants to go on about.

The wonderful thing about cloud computing is that there are good standards out there. The really bad thing about cloud computing is that there are lots of standards out there. Take any area of cloud computing – the server, storage and network layers; security; software implementation; workload management; inter-application messaging – and you will find an industry body coming up with a standard (or three) around it. Ask how they are working together, and a perplexed look comes over their faces. Why would they have to look at what anyone else is doing? Surely the way that solid state drives (SSDs) work in the cloud is the very heart of the matter? Why should the body looking at the heuristic mapping of reads/writes to SSDs worry about workload management and data security?

The answer is, of course, that everything in the cloud is connected, and that such an approach to standards is like blindfolding ten builders, placing them ten feet apart and asking them to build you a single, solid wall. Things won't match up; the wall will be incomplete; and anyone who wants to ruin things will easily be able to bring everything down.

Ah, but surely that is where the higher-level groups come in, as guardians of all things IT-related? The Institute of Electrical and Electronics Engineers (IEEE) is, overall, one group that could say yea or nay to proposed standards and define top-level standards, while the Internet Engineering Task Force (IETF) puts out its requests for comments (RFCs). However, the World Wide Web Consortium (W3C) tends to come up with its own standards as well and, although these do tend to match up with IEEE standards, it is sometimes the case that not everything matches up quite as well as hoped for. Other bodies, comprising a list of acronyms too long to define here – OASIS, SNIA, CSA, ODCA and the OGF (via OCCI) – are all bringing other standards to the mix. Even with the IEEE sitting over the top of everything, reacting in a coherent manner to all these bodies is like wading through mud – everything slows down and ceases to react fast enough for the markets.

So then we get to the vendors. In the dash for open standards, they are signing up left, right and centre to be seen to be supporting the standards. However, reverting to type, many are then adding bits to the standards, either to support legacy functions in their own systems or to reflect what they see as the needs of their customers. So comes about the "120% standard" – the vendor nominally supports the actual standard, but its additions make the modified standard more functional in its own homogeneous environment.

However, the base levels of how standards interoperate are getting better. My advice is to stick wherever possible with the unmodified standards, unless there is a pressing need to adopt any variations. Be aware that the standards are still a moveable feast, and that there will be changes as the cloud matures. However, a lot of the work in cloud standards is still down to the user plugging them together. Unfortunately, the cloud is still a rough place, with lots of turbulence.


About Silver Peak Systems
Silver Peak software accelerates data between data centres, branch offices and the cloud. The company’s software defined acceleration solves network quality, capacity and distance challenges to provide fast and reliable access to data anywhere in the world. Leveraging its leadership in data centre class wide area network (WAN) optimisation, Silver Peak is a key enabler for strategic IT projects like virtualisation, disaster recovery and cloud computing. Download Silver Peak software today at http://marketplace.silver-peak.com.


REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today’s dynamic workforce. The report draws on Quocirca’s extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.

About Quocirca
Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With world-wide, native language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets. Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption – the personal and political aspects of an organisation’s environment and the pressures of the need for demonstrable business value in any implementation. This capability to uncover and report back on the end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises.

Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long-term investment trends, providing invaluable information for the whole of the ITC community. Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium-sized vendors, service providers and more specialist firms. Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com.

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner. Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice. All brand and product names are recognised and acknowledged as trademarks or service marks of their respective holders.