
January 2013

Watch a webcast summary of the report findings at: http://www.shunra.com/resources/on-demand-webinars

What are the top concerns and challenges performance engineers face in 2013? In which stages of the software development life cycle (SDLC) should you engineer for performance? Which teams should focus on performance? What are the expected top challenges of your 2013 performance strategy? Our respondents answered these questions and more.

This report reveals findings from a Shunra survey about the top concerns and challenges performance engineers face in 2013. The results were released during a webcast on January 14, 2013. Based on the responses from 316 IT specialists, developers, architects, managers, and engineers, the survey uncovered how performance engineers plan to include performance in application development and deployment plans, important elements to include in 2013 performance strategies, and how to account for production network conditions before deployment. Confirming a growing trend toward an increased business focus on network virtualization and application performance engineering, 7% of the 316 IT job titles included the word "performance."

[Chart: respondents by job title – IT Manager 41%, Engineer 27%, IT Specialist 16%, QA 4%, Architect 4%, Analyst 4%, IT Executive 4%, Other 4%.]

The following pages detail the findings from each of the questions posed to our participants.

Responses to this question were categorized across three core areas of the SDLC: Design/Development, QA/Test, and Staging/Production. Respondents could select multiple stages of the application life cycle.

61.91% of the survey participants do not design or develop for performance today, but do test for performance in the later stages of the SDLC. This demonstrates recognition of the need to ensure performance in deployed applications, but a persistent failure to consider performance requirements in the earliest stages of the application life cycle. Conversely, 5.91% of survey participants only consider performance in the Design and/or Development stages. These particular companies show an understanding of the need to consider performance early, but do not take the necessary steps to validate those efforts.

[Chart: when performance is considered in the SDLC – Design or Development only: 5.91%; Do not design or develop for performance: 61.91%; the remaining options (During QA/Test only, Staging or Production only, All stages, Not at all) drew the remaining responses of 7.13%, 10.12%, 8.06%, and 16.80%.]

Application performance is the number one factor impacting application success and end user adoption. It also remains the number one cost when it comes to developing, managing, and maintaining applications.

Only 8.8% of companies surveyed showed the maturity to consider performance throughout the entire SDLC. Less than 10% of companies surveyed have made performance a policy and built performance into their development and deployment best practices. This represents a significant and costly gap, demonstrated by recent media and research indicating that applications continue to fail to live up to end user expectations.

When asked when performance is considered in different types of testing, load testing was, not surprisingly, most frequently selected. Over 90% consider performance in at least two different test types, demonstrating the growing trend to incorporate performance throughout testing methodologies. Industry experts tell us that testing in all four categories is the best practice, but only 12.65% chose all four types. Of concern is that less than half of survey participants consider performance with functional testing, especially considering the impact of performance on mobile app users.

[Chart: test types in which performance is considered – Load testing 83.70%; Capacity/Scalability 75.90%; Functional testing 42.30%; Single unit test 28.20%.]

Traditional testing methodologies are incomplete if conditions affecting application performance are not incorporated. For example, load testing in a pristine test environment will provide some insight into system scalability, but without an understanding of the network conditions experienced by different (virtual) user groups, assumptions made regarding capacity requirements will be misleading, as will test results. Such tests do not account for the dynamic nature of today's global networks and are therefore not reflective of real end-user experience, thus accounting for the large number of applications that fail in production but "worked fine" in test. If one of these virtual user groups is a mobile audience, it is imperative to account for the longer mobile user sessions that result from the inherent latency of mobile networks, and for how system resources will be affected.

In the same manner, functional testing in a pristine test environment is not indicative of how a user will experience an application or website. Functional test results may confirm that button A leads to transaction B, but if that transaction takes 30 seconds or longer in the real world due to network constraints, end users may not wait long enough to see the results. Performance thresholds must be set, and violations of those thresholds should be considered a functional fail.

In order to create a reliable testing environment, a "three-legged stool" approach is often employed. The seat of the stool represents any test environment – load, functional, single user, etc. The test platform is held up by three legs:

• User virtualization: Test environments should emulate the users or load that will be accessing the system. This is accomplished via load generators and load testing tools. Some organizations apply single user test automation, or even have manual users perform tests as real users in the production environment.

• Services virtualization: In production, applications often rely on third party resources or data feeds, such as external web services. These services can number in the hundreds for larger and more complex applications. It is not always practical, possible, or affordable to bring those services into the test environment, yet the performance of those services is critical to end user experience. Service virtualization emulates the behavior of specific application dependencies that developers or testers need to exercise in order to complete end-to-end transactions in test environments. Rather than virtualizing entire systems, it virtualizes only specific slices of dependent behavior critical to the execution of development and testing tasks.
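The "threshold violation is a functional fail" rule can be expressed directly in a test harness. The sketch below is illustrative only: the transaction stub and the 5-second limit are our own assumptions, not figures from the survey, and in practice the timed call would exercise a real transaction under virtualized network conditions.

```python
import time

# Illustrative threshold: transaction B must complete within 5 seconds.
# Both the stub transaction and the limit are hypothetical examples.
THRESHOLD_SECONDS = 5.0

def transaction_b():
    """Stand-in for the real transaction triggered by button A."""
    time.sleep(0.05)  # simulated work; in practice a real request is made
    return "ok"

def functional_check():
    """A functional test that also treats a slow response as a failure."""
    start = time.perf_counter()
    result = transaction_b()
    elapsed = time.perf_counter() - start
    assert result == "ok", "functional fail: wrong result"
    assert elapsed <= THRESHOLD_SECONDS, (
        f"functional fail: {elapsed:.2f}s exceeds {THRESHOLD_SECONDS}s"
    )
    return elapsed

if __name__ == "__main__":
    print(f"transaction passed in {functional_check():.2f}s")
```

The point of the design is that a correct-but-slow result fails the same assertion path as a wrong result, so functional and performance criteria are enforced by one test.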

• Network virtualization: As with services, network conditions play a critical role in user experience. Network conditions such as latency, packet loss, upstream and downstream bandwidth, and jitter are all critical factors that must be taken into account when testing or validating application performance. With mobile apps, for example, up to 70% of end user experience is dependent on network conditions. Network virtualization is a pre-production process for recreating network conditions from the production environment for use within the test environment. It enables connections between applications, services, dependencies, and end users to be accurately emulated in the test environment, improving the accuracy of test results.

This combination of "legs" stands up any test environment, and a test environment without all three aspects is incomplete. What happens if you try to rest your test platform on this stool without all three of the legs in place? You fall over.

Only 37.97% of survey respondents apply the industry best practice of accurately emulating the production environment by virtualizing all three legs of the stool. Showing some promise, 36.70% of organizations surveyed address two of the three legs. However, over 20% address just one leg of the stool, and an unfortunate 4.11% do not address even a single leg.

[Chart: User, Services, and Network Virtualization coverage – No legs: 4.11%; one leg: 21% (Users 9.06%, Services 6.81%, Networks 5.33%); two legs: 36.70% (Users and Services 18.87%, Users and Network 14.16%, Services and Network 3.67%); all three legs (Users, Services, and Network): 37.97%.]
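To see why the network leg changes results, consider a toy model of a "chatty" transaction replayed under two recorded network profiles. Every number here (latencies, loss rate, round-trip count, server time) is our own illustrative assumption, not data from the survey; real network virtualization tools record these conditions from production rather than inventing them.

```python
import random

# Two hypothetical recorded profiles for different virtual user groups.
PROFILES = {
    "LAN user":    {"latency_ms": 2.0,   "loss_pct": 0.0},
    "mobile user": {"latency_ms": 150.0, "loss_pct": 2.0},
}

def transaction_time_ms(profile, round_trips=20, server_ms=80.0, seed=7):
    """Estimate end-to-end time for a chatty transaction under a profile.

    Each application-level round trip pays the link latency once; a lost
    packet is modeled crudely as one extra round trip (a retransmission).
    """
    rng = random.Random(seed)
    total = server_ms
    for _ in range(round_trips):
        attempts = 1
        while rng.random() * 100.0 < profile["loss_pct"]:
            attempts += 1  # retransmit after a simulated packet loss
        total += attempts * profile["latency_ms"]
    return total

for name, profile in PROFILES.items():
    print(f"{name}: ~{transaction_time_ms(profile):.0f} ms")
```

Even in this crude model, the same transaction that "works fine" for the LAN group takes well over a second longer for the mobile group, which is precisely the gap that a pristine test environment hides.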

Over 95% of organizations have assigned performance responsibility to at least one team. More often than not, that team was QA/Testers, who comprised nearly 65% of all responses. One out of five respondents considers performance across the board (choices included architect, developers, QA/test, operations, and line of business owners). Industry best practices dictate that all teams have a focus on performance; however, only 19% of our respondents have a best practice in place regarding team focus on performance.

Only 24.4% of businesses identified line of business owners as having a focus on performance. A higher percentage was expected, as line of business owners have a responsibility for their business units' bottom line, which is heavily influenced by application adoption and use. As the mobile wave continues to grow, and as mobile apps continue to be initiated by individual business units, a larger percentage of line of business owners is expected to be assigned application performance responsibilities.

Regardless of the number of "virtualized legs" you use, the large majority understands the importance of load testing, with 81.30% factoring users/load testing into their environments today. Also worth noting from this question is that less than half (45.30%) consider testing the end user device. This is a top challenge for performance engineers in 2013.

However, 38.29% do not plan to test for network conditions or are going to test in production only: 16.4% do not plan to test for network conditions at all in 2013, and 21.89% are going to wait until they deploy. Testing in production only is not enough.

Understanding internal and/or external constraints was selected by 81.01% of our respondents. Of the responses related to these constraints, the ability to understand external system constraints, such as the use of external web services, CDNs, and cloud-based services, was deemed important to 58.40% of participants; in addition, having the ability to understand internal system constraints, such as contention, memory usage, database, and storage, was selected by 72.40% of participants.

Knowing how applications scale during peak usage periods is important to 71.80% of respondents, and assessing real-world network conditions and their impact on performance is important to 72.20% of respondents. Having the ability to understand inefficient use of network resources (chatty latency-dependent applications, duplicated/redundant requests, uncompressed content, etc.) was chosen by 68.20% of our respondents. Approximately one-third (35.44%) chose all the answers as important to their 2013 performance strategies.

[Chart: elements of 2013 performance strategies – Understanding internal and/or external system constraints 81.01%; Assessing real-world, end-user network conditions 72.20%; Understanding of how apps scale during peak usage periods 71.80%; Understanding inefficient use of network resources 68.20%; All are important 35.44%.]

Which of these did respondents feel would be challenges in 2013?

Top Challenges Performance Engineers Face in 2013

66.13% chose assessing real-world, end user network conditions and their impact on application performance, and/or understanding the inefficient use of network resources, as top challenges. The top two responses are related to the network, which could be related to the maturation of the mobile market. A year ago, performance engineers focused more on the device and the functional aspects of the application; now the growing realization is that functional testing will only capture part of the problem, and the impact of the network must be considered.

[Chart: top challenges – Assessing real-world, end-user network conditions 45.10%; Understanding inefficient use of network resources 40.30%; Understanding external system constraints 30.20%; Understanding internal system constraints 29.60%; Understanding of how apps scale during peak usage periods 25.80%.]

Why should you engineer for performance?
• Identifying and remediating production issues prior to deployment saves hundreds of thousands, if not millions, of dollars.
• Accurate performance testing enables organizations to meet and validate SLAs and SLOs.
• Reliably knowing how an application will behave empowers IT professionals with the foresight and knowledge needed to make informed decisions about application deployments.

Investments in performance engineering are shifting left in the development life cycle, as organizations continue to better understand the critical importance of designing and developing for performance. In this survey, 68.96% have increased budgets or their performance budget has remained intact from 2012. A very small amount, 6.03%, had a budget decrease from 2012 to 2013, and 25.01% do not have budgets in 2013.

[Chart: 2013 budget for designing for performance – Yes, and budget has increased: 31.33%; Yes, and budget has remained the same: 37.63%; Yes, and budget has decreased: 6.03%; No budget for designing for performance: 25.01%.]

The Cost of Poor Performance

Industry analysts like Forrester, EMA, and others agree that the average cost to a business of a production incident can exceed $45,000 per hour. This cost includes multiple factors that impact the business, including the cost of resources to resolve the issue along with the impact of the issue on business factors like revenue, employee productivity, and customer satisfaction. In another Shunra survey, our customers estimated the cost per incident at $80,000 for remediation, but this did not consider the impact on the business, like brand damage, revenue loss, or customer dissatisfaction. The impact to the business' bottom line varies with the function and criticality of the affected application.

Based on ROI input from customers and the average cost of a production incident as reported by industry analysts, we know that an incident that requires one day to remediate can cost a business $360,000. A three-day remediation quickly turns into more than $1M, and a six-day remediation can cost a business $2.1 million.
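The incident-cost figures above are mutually consistent under one assumption of ours: remediation billed at the analysts' $45,000-per-hour estimate across an 8-hour business day (the 8-hour day is not stated in the report; it is the assumption that reconciles the numbers):

```python
# Reconciling the report's incident-cost figures.
COST_PER_HOUR = 45_000   # analysts' per-hour estimate cited above
HOURS_PER_DAY = 8        # assumed business day

def remediation_cost(days: int) -> int:
    """Cost of an incident that takes `days` business days to remediate."""
    return COST_PER_HOUR * HOURS_PER_DAY * days

print(remediation_cost(1))  # 360000  -> the one-day figure ($360,000)
print(remediation_cost(3))  # 1080000 -> "more than $1M"
print(remediation_cost(6))  # 2160000 -> about $2.1 million
```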

Mobile is rapidly becoming important to employee productivity: 67.8% of responding organizations reported experiencing problems related to the performance of critical business applications accessed by employees via mobile devices. Only 6% of respondents reported "critical problems," though over 60% cited the occurrence of at least a few problems. Even these few problems are a concern when it comes to mobility, and a sign of inefficient testing practices and performance assurance. Just a fraction of a second delay in transaction response times means increased abandonment by mobile users, resulting in decreased revenue (in the case of m-commerce apps) or loss of productivity (for internal app use).

[Chart: mobile application performance problems – A few problems 41.70%; No problems 32.20%; Multiple problems 20.30%; Critical problems 5.80%.]

More troubling is the fact that only 44.3% have a 2013 plan in place to proactively assess the performance of these applications. So why are less than half of survey respondents doing something about this issue?

The World Quality Report supports these findings. Capgemini, one of the world's foremost providers of consulting, technology and outsourcing services, and Sogeti, its local professional services division, recently released the findings of the fourth annual World Quality Report, published in conjunction with HP. The report revealed that organizations are struggling to manage the challenges of the mobile era, with only one-third of those surveyed currently formally testing their mobile applications. While concerning, this data is also not entirely surprising: testing mobile apps and ensuring mobile performance is challenging, and there may be disparity between the recognition of the need, the expertise required, and the availability of necessary tools, especially when you add a requirement to have a mobile load scalability test.

This also brings to light the interpretation of severity: what may seem to be a trivial problem or minor nuisance to an organization may actually be much more meaningful, and frustrating, to an end user. Studies have shown that if you have a revenue-generating application, a failure can impact up to 10% of your revenue. Furthermore, according to a survey from CA Technologies, the loss associated with IT issues has been quantified as a 63% reduction in productivity, resulting in an average of 552 people hours lost per year per company.

Most performance engineers are juggling several projects in any given year, so it's not surprising respondents chose multiple projects of concern for 2013. Mobile projects are of highest concern for performance engineers in 2013, chosen by 65.5% of participants, again reflecting a likely expertise or tool gap that exists with mobile testing. Other projects of concern include Cloud migrations and third-party software deployments. Of note, data center relocations were among the top three project concerns, underscoring the need to consider performance not just with application deployments but also with infrastructure change. Shunra research shows that up to 39% of applications fail to meet user expectations after a data center move or consolidation.

[Chart: 2013 projects of concern – Mobile 65.5%; Cloud 60.4%; DCR (data center relocation) 42.0%; CRM deployment 17.1%; SAP deployment 16.4%.]

The largest response to this question indicated that limited to no budget is the hindrance, with 57.20% choosing this response as a reason companies are not proactively testing for performance. The second most popular reason is an inadequate testing environment, at 49.70%. Lack of management support came in third, with 26.10% choosing this response as a reason.

Agile methodologies continue to gain traction with development teams: of the 316 survey participants, 57.9% are using Agile. While Agile adoption is still growing, much has been written about the challenges of Agile and how performance fits into an Agile environment. According to the survey, only 25.6% of Agile organizations have performance as part of their requirements. Just 15.5% integrate performance acceptance tests into automated tests, and less than 14% fully automate performance and can compare each build to prior builds for response time and resource utilization. For more insight and explanation around the Agile-performance dilemma, please view the Shunra webcast "How does performance testing fit into Agile?" at http://ape.shunra.com/PerfTestandAgile.html

The impact and cost of failure for performance issues actually far outweighs the budget required for proactive performance testing. According to the National Institute of Standards and Technology, approximately 80% of the total cost of ownership of an application is spent on, and directly attributable to, finding and fixing performance issues post-deployment, costing businesses $60 billion annually in the United States alone. One-third of those costs could be avoided with better testing practices.

The gap which exists between knowing what must be done to ensure end user experience and actually employing best practices must be closed. This can only be accomplished by both a top-down and bottom-up approach, with all areas of the business recognizing the financial implication of poor performance and an enterprise-wide commitment to end user experience.
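Build-over-build comparison, which fewer than 14% of respondents fully automate, can be as simple as a gate in the CI pipeline. The sketch below is a hypothetical illustration: the metric, the history values, and the 10% tolerance are our own assumptions, not survey data.

```python
# Hypothetical build-over-build performance gate for a CI pipeline.
TOLERANCE = 1.10  # allow at most a 10% regression versus the prior build

def gate(previous_ms, current_ms, tolerance=TOLERANCE):
    """Return True when the new build's response time passes the gate."""
    return current_ms <= previous_ms * tolerance

history = [412.0, 405.0, 398.0]   # median response time (ms) per build
candidate = 455.0                 # the build under test

if gate(history[-1], candidate):
    print("pass: no significant response-time regression")
else:
    print("fail: response-time regression versus previous build")
```

The same pattern extends to resource utilization: record one number per build, compare against the prior build, and fail fast instead of discovering the regression in production.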

Application performance is a competitive differentiator and a clear indicator of business success. 80% of the costs associated with application development occur in remediating failed or underperforming applications after deployment, when the ineffective application has already had a negative impact on the end user or customer experience. One-third of those costs could be recovered with better testing practices. Shifting left in the software development lifecycle is the key – testing early and often will help organizations avoid the negative impact of performance failure while affording the opportunity to capitalize on end user demands for immediate and ubiquitous access to data.

For over a decade, Shunra has had the benefit of working with more than 2,500 organizations globally, and with some of the most complicated application infrastructures in the world, and has been fortunate to help our customers implement network virtualization solutions for software testing and incorporate application performance engineering policies that encompass best practices and proven strategies for improving and ensuring application performance. In that time, Shunra has witnessed a dramatic shift in awareness and focus on application performance and seen this gap continue to shrink. New methodologies and approaches like Agile and DevOps continue to move companies towards performance optimization, creating a significant competitive advantage for those organizations that are most successful in building performance into their application design, development, and deployment efforts. Great progress is being made, but greater awareness and performance policy across the enterprise are still needed in order to truly achieve performance Zen.

More than 2,500 organizations use Shunra Network Virtualization solutions to proactively test and validate application performance with real-world, virtualized network conditions before deploying to production. Network conditions such as latency, packet loss, limited bandwidth, and jitter are all critical factors that must be taken into account when testing or validating application performance, especially when considering the multiple network connections required to support 3rd party services and external resources. With Network Virtualization, you discover the real-world network conditions affecting end users and application services, virtualize network impairments (such as high jitter rate and limited available bandwidth) in the test lab so that the effect of the network on transaction response time can be reliably and accurately measured, and then remediate or optimize performance. Visit www.shunra.com for more information.