Optimising performance and security of web-based software – December 2013

Bob Tarzey, Analyst and Director, bob.tarzey@quocirca.com, +44 7900 275517
On-demand applications are often talked about in terms of how independent software vendors (ISVs) should be adapting the way their software is provisioned to customers. However, these days the majority of on-demand applications are being provided by end user organisations to external users: consumers, users from customer or partner organisations and their own employees working remotely.
A recent Quocirca research report, “In demand: the culture of online services provision”¹, found that 58% of northern European organisations (from the UK, Ireland and the Nordic region) were providing on-demand e-commerce services to external users. Not surprisingly, financial services topped the list, with 84% of organisations doing so (showing how ubiquitous the provision of online banking and the like now is). This was followed by the technology, utilities and energy sector and the retail, distribution and transport sector, with 79% and 70% respectively providing on-demand applications. However, there was plenty of such activity in other sectors: 61% of manufacturers were providing on-demand applications, most often to other businesses (think connected supply chain systems). For professional services the figure was 56%, again most often to other businesses. For educational organisations it was 37%. The public sector trailed with just 17%, which is surprising given the commitment of many governments to so-called e-agendas.

At one level this is good news: more direct online interaction with consumers, partners and other businesses should speed up processes and sales cycles and extend geographic reach, and those that do not offer it will be less competitive. However, there are two big caveats:

1. These benefits will only be gained if these applications perform well and have a high percentage of uptime (approaching 100% in many cases).

2. Any application exposed to the outside world is a security risk, vulnerable to attack, either as a way into an organisation’s IT infrastructure through software vulnerabilities or as a means to stop the application itself from running effectively (application-level denial of service/DoS), thus limiting a given organisation’s ability to carry on business and often damaging its reputation.

So, how does a business ensure the performance and security of its online applications?
The performance of online applications
Two things need to be achieved here. First, there needs to be a way of measuring performance and, second, there needs to be an appreciation of, and investment in, the technology that ensures and improves performance.

Testing the performance of applications before they go live can be problematic. Development and test environments are often isolated from the real world and, whilst user workloads can be simulated to test performance on centralised infrastructure, the real-world network connections users rely on, which are increasingly mobile ones, are harder to test. The availability of public cloud platforms helps, as run-time environments can be simulated even if the ultimate deployment platform is an internal one. This saves an organisation having to over-invest in its own test infrastructure.
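As a rough illustration of such workload simulation, the short Python sketch below fires a batch of concurrent requests at a test endpoint and reports latency percentiles; the URL, user counts and worker numbers are illustrative assumptions rather than recommendations.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # common third-party HTTP client

URL = "https://test.example.com/"  # hypothetical staging endpoint

def timed_request(_):
    """Issue one request and return (latency in seconds, HTTP status)."""
    start = time.monotonic()
    response = requests.get(URL, timeout=30)
    return time.monotonic() - start, response.status_code

# Simulate 200 users issuing one request each, 50 at a time.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_request, range(200)))

latencies = sorted(latency for latency, _ in results)
errors = sum(1 for _, status in results if status >= 500)
print(f"median latency: {statistics.median(latencies):.3f}s")
print(f"95th percentile: {latencies[int(len(latencies) * 0.95)]:.3f}s")
print(f"server errors: {errors}/{len(results)}")
```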
So, upfront testing is all well and good but, ultimately, the user experience needs to be monitored in real time after deployment. This is not just because it is not possible to test all scenarios before deployment, but because the load on an application can change unexpectedly, due to rising user demand or other issues, especially over shared networks. User experience monitoring was the subject and title of a 2010 Quocirca report², much of which is still relevant today; however, the biggest change since then has been the relentless rise in the number of mobile users. Examples of tools for the end-to-end monitoring of the user experience, covering both the application itself and the network impact on it, include CA Application Performance Management, Fluke Network’s Visual Performance Manager, Compuware APM and ExtraHop Networks (which has just released specific support for Amazon Web Services/AWS).
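The commercial tools above do far more, but the principle of real-time user experience monitoring can be sketched as a simple synthetic probe; the URL, latency budget and alerting stub below are illustrative assumptions.

```python
import time

import requests  # common third-party HTTP client

URL = "https://app.example.com/login"  # hypothetical user-facing page
LATENCY_BUDGET = 2.0                   # seconds; an assumed service-level target

def alert(message):
    """Stub: in practice, page the operations team or raise a ticket."""
    print(f"ALERT: {message}")

while True:
    start = time.monotonic()
    try:
        response = requests.get(URL, timeout=10)
        elapsed = time.monotonic() - start
        if response.status_code != 200:
            alert(f"{URL} returned HTTP {response.status_code}")
        elif elapsed > LATENCY_BUDGET:
            alert(f"{URL} took {elapsed:.2f}s (budget {LATENCY_BUDGET}s)")
    except requests.RequestException as exc:
        alert(f"{URL} unreachable: {exc}")
    time.sleep(60)  # probe once a minute
```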
It is all well and good being able to monitor and measure performance, but how do you respond when it is not what it should be? There are two issues here: first, the ability to increase the number of application instances and supporting infrastructure to support the overall workload and, second, the ability to balance that workload between these instances.

Increasing the resources available is far easier than it used to be, with the virtualisation of infrastructure in-house and the availability of external infrastructure-as-a-service (IaaS) resources. For many, deployment is now wholly on shared IaaS platforms, where increased consumption of resources by a given application is simply extended across the cloud service provider’s infrastructure. This can be achieved because, with many customers sharing the same resources, each will have different demands at different times. Global providers include AWS, Rackspace, Savvis, Dimension Data and Microsoft. There are many local IT service providers (ITSPs) with cloud platforms, for example, in the UK, Attenda, Nui Solutions, Claranet and Pulsant. Some ITSPs partner with one or more global providers to make sure they too have access to a wide range of resources for their customers. Even those organisations that choose to keep their main deployment on-premise can benefit from the use of ‘cloud-bursting’ (the movement of application workloads to the cloud to support surges in demand) to supplement their in-house resources. Indeed, in Quocirca’s “In demand”¹ report, those organisations providing on-demand applications to external users were considerably more likely to recognise the benefits of cloud-bursting than those that did not.
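The cloud-bursting idea can be sketched as a simple control loop; the thresholds are illustrative assumptions and the provider calls are deliberate stubs, since the real work would be done through an IaaS provider’s API.

```python
import random
import time

# Illustrative thresholds (assumptions, not figures from this report):
BURST_THRESHOLD = 0.8    # burst to the cloud when utilisation passes 80%
RELEASE_THRESHOLD = 0.4  # hand capacity back when it drops below 40%

cloud_instances = []

def onpremise_utilisation():
    """Stub: in practice this would query the monitoring system;
    a random value stands in for real measurements here."""
    return random.random()

def provision_cloud_instance():
    """Stub: in practice, a call to the IaaS provider's API."""
    instance_id = f"cloud-{len(cloud_instances)}"
    print(f"bursting: provisioned {instance_id}")
    return instance_id

def release_cloud_instance(instance_id):
    """Stub: in practice, terminate the instance via the provider's API."""
    print(f"load subsided: released {instance_id}")

for _ in range(10):  # a few control-loop iterations for illustration
    load = onpremise_utilisation()
    if load > BURST_THRESHOLD:
        cloud_instances.append(provision_cloud_instance())
    elif load < RELEASE_THRESHOLD and cloud_instances:
        release_cloud_instance(cloud_instances.pop())
    time.sleep(1)  # a real loop would re-evaluate every few minutes
```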
Being able to measure performance and having access to virtually unlimited resources to respond to demand is one thing, but how do you balance the workload across those resources? The key technologies for achieving this are application delivery controllers (ADCs). ADCs are basically next-generation load balancers and are proving to be fundamental building blocks for advanced application and network platforms. They enable the flexible scaling of resources as demand rises and/or falls and offload work from the servers themselves. They also provide a number of other services that are essential to the effective operation of on-demand applications; these include:
 
 
- Network traffic compression – to speed up transmission
- Data caching – to make sure regularly requested data is readily available
- Network connection multiplexing – making effective use of multiple network connections
- Network traffic shaping – a way of reducing latency by prioritising the transmission of workload packets and ensuring quality of service (QoS)
- Application-layer security – the inclusion of web application firewall (WAF) capabilities to protect on-demand applications from outside attack, for example application-level denial of service (DoS)
- Secure sockets layer (SSL) management – acting as the landing point for encrypted traffic and managing the decryption and rules for on-going transmission
- Content switching – routing requests to different web services depending on a range of criteria, for example the language settings of a web browser or the type of device the request is coming from
- Server health monitoring – ensuring servers are functioning as expected and serving up data and results that are fit for transmission (see the sketch after this list)
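To make the load-balancing and health-monitoring roles concrete, below is a minimal sketch of a round-robin HTTP proxy with background health checks; a production ADC does far more (compression, SSL offload, traffic shaping and so on), and the backend addresses here are illustrative assumptions.

```python
import itertools
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative backend pool (assumed addresses, not from the report).
BACKENDS = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]
healthy = set(BACKENDS)

def health_check_loop(interval=5):
    """Server health monitoring: probe each backend and pull failing
    ones out of the rotation until they recover."""
    while True:
        for backend in BACKENDS:
            try:
                urllib.request.urlopen(backend + "/health", timeout=2)
                healthy.add(backend)
            except OSError:
                healthy.discard(backend)
        time.sleep(interval)

rotation = itertools.cycle(BACKENDS)

def next_backend():
    """Round-robin load balancing, skipping unhealthy backends."""
    for _ in range(len(BACKENDS)):
        backend = next(rotation)
        if backend in healthy:
            return backend
    return None

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next_backend()
        if backend is None:
            self.send_error(503, "No healthy backends")
            return
        try:
            upstream = urllib.request.urlopen(backend + self.path, timeout=10)
        except OSError:
            self.send_error(502, "Backend request failed")
            return
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

threading.Thread(target=health_check_loop, daemon=True).start()
HTTPServer(("", 8000), Proxy).serve_forever()
```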
The best-known ADC supplier was Cisco; however, Cisco recently announced it would discontinue further development of its Application Control Engine (ACE) and recommends another leading vendor’s product instead: Citrix’s NetScaler. Other suppliers include F5, the largest dedicated ADC specialist, Riverbed, Barracuda, A10, Array Networks and Kemp.

So, you can measure performance, you have the resources to meet demand and, with ADCs, you have the means to balance the workload across those resources as well as offload some of the work; but what about security?
The security of online applications
The first thing to say about the security of online applications is that you do not have to do it all yourself. Use of public infrastructure puts the onus on the service provider to ensure security up to a certain level. Most have a shared security model; for example, AWS states that it takes responsibility for securing its facilities, server infrastructure, network infrastructure and virtualisation infrastructure, while the customer is free to choose its operating environment and how it should be configured, and to set up its own security groups and access control lists.
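As a sketch of the customer’s side of that shared model, the example below creates a security group admitting only HTTPS traffic; it uses boto3 (the AWS SDK for Python) as a present-day illustration, and the VPC ID is a placeholder.

```python
import boto3  # AWS SDK for Python; used here purely as an illustration

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Hypothetical VPC ID; a placeholder, not a real resource.
response = ec2.create_security_group(
    GroupName="web-app-sg",
    Description="Allow only HTTPS in to the on-demand application",
    VpcId="vpc-0123456789abcdef0",
)
sg_id = response["GroupId"]

# Permit inbound HTTPS from anywhere; all other inbound traffic
# stays blocked by the security group's default-deny behaviour.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```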
However, regardless of where the application is deployed, it will be open to attack. A 2012 Quocirca report³ underlined the scale of the application security challenge. The average enterprise tracks around 500 mission-critical applications; in financial services organisations it is closer to 800. The security challenge increases as more and more of these applications are opened up to external users. Beyond ensuring the training of developers, there are three main approaches to testing and ensuring application security:
1. Code and application scanning: thorough scanning aims to eliminate software flaws. There are two approaches: the static scanning of code or binaries before deployment and the dynamic scanning of binaries during testing or after deployment. On-premise scanning tools have been relied on in the past – IBM and HP bought two of the main
