January 2013

This report reveals findings from a Shunra survey about the top concerns and challenges performance engineers face in 2013, based on responses from 316 IT specialists, developers, architects, managers, and engineers. Notably, 17.7% of the 316 IT job titles included the word "performance."

What are the top concerns and challenges performance engineers face in 2013? In which stages of the software development life cycle (SDLC) should you engineer for performance? Which teams should focus on performance? What are the expected top challenges of your 2013 performance strategy? Our respondents answered these questions and more, identifying the important elements to include in 2013 performance strategies.

Confirming a growing trend regarding an increased business focus on network virtualization and application performance engineering, the survey uncovered how performance engineers plan to include performance in application development and deployment plans, and how they plan to account for production network conditions before deployment.

[Chart: respondent job titles: IT Manager/IT Executive 41%, Engineer 27%, IT Specialist 16%, QA 4%, Architect 4%, Analyst 4%, Other 4%]

The results were released during a webcast on January 14, 2013. Watch a webcast summary of the report findings at: http://www.shunra.com/resources/on-demand-webinars

The following pages detail the findings from each of the questions posed to our participants.

Responses to this question were categorized across three core areas of the SDLC: Design/Development, QA/Test, and Staging/Production. Respondents could select multiple stages of the application life cycle.

[Chart: stages in which respondents engineer for performance: Design or Development only, During QA/Test only, Staging or Production only, All stages, Not at all]

61% of the survey participants do not design or develop for performance today, but do test for performance in the later stages of the SDLC. This demonstrates recognition of the need to ensure performance in deployed applications, but a persistent failure to consider performance requirements in the earliest stages of the application life cycle. Conversely, fewer than 8% of survey participants only consider performance in the Design and/or Development stages. These particular companies show an understanding of the need to consider performance early, but do not take the necessary steps to validate those efforts.

Only 8% of companies surveyed showed the maturity to consider performance throughout the entire SDLC, and less than 10% of companies surveyed have made performance a policy and built performance into their development and deployment best practices. This represents a significant and costly gap, demonstrated by recent media and research indicating that applications continue to fail to live up to end user expectations. Application performance is the number one factor impacting application success and end user adoption. It also remains the number one cost when it comes to developing, managing and maintaining applications.

When asked in which types of testing performance is considered, load testing was, not surprisingly, most frequently selected. Over 90% consider performance in at least two different test types, demonstrating the growing trend to incorporate performance throughout testing methodologies. Of concern is that less than half of survey participants consider performance with functional testing.

[Chart: test types in which performance is considered: Load testing, Capacity/Scalability, Functional testing, Single unit test]

Traditional testing methodologies are incomplete if conditions affecting application performance are not incorporated. For example, load testing in a pristine test environment will provide some insight into system scalability, but without an understanding of the network conditions experienced by different (virtual) user groups, assumptions made regarding capacity requirements will be misleading, as will test results. Such tests do not account for the dynamic nature of today's global networks, and are therefore not reflective of real end-user experience, thus accounting for the large number of applications that fail in production but "worked fine" in test. If one of these virtual user groups is a mobile audience, it is imperative to account for the longer mobile user sessions that result from the inherent latency of mobile networks, and for how system resources will be affected.
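The effect described above can be sketched with a simplified first-order model of transaction time: sequential application "turns" (request/response round trips) multiplied by round-trip time, plus payload transfer. The model, the traffic sizes, and the two network profiles below are illustrative assumptions, not figures from the survey:

```python
# Simplified first-order model of transaction response time over a network.
# Assumed inputs: a transaction is a series of sequential application "turns"
# (request/response round trips) plus a bulk payload transfer.

def response_time(turns, rtt_s, payload_bytes, bandwidth_bps, server_s=0.1):
    """Estimate end-to-end transaction time in seconds."""
    return turns * rtt_s + (payload_bytes * 8) / bandwidth_bps + server_s

# Pristine test LAN: ~1 ms RTT, 100 Mbit/s.
lan = response_time(turns=20, rtt_s=0.001, payload_bytes=500_000, bandwidth_bps=100e6)

# Same transaction over an assumed 3G mobile link: ~300 ms RTT, 2 Mbit/s.
mobile = response_time(turns=20, rtt_s=0.300, payload_bytes=500_000, bandwidth_bps=2e6)

print(f"LAN: {lan:.2f} s, mobile: {mobile:.2f} s")  # → LAN: 0.16 s, mobile: 8.10 s
```

The same transaction that feels instant in a pristine lab takes seconds on a high-latency link, which is why capacity assumptions drawn from clean-room load tests can mislead.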

Industry experts tell us that testing in all four categories is the best practice, yet well under half of respondents chose all four types. In the same manner that load testing in a pristine environment misleads, functional testing in a pristine test environment is not indicative of how a user will experience an application or website. Functional test results may confirm that button A leads to transaction B, but if that transaction takes 30 seconds or longer in the real world due to network constraints, end users may not wait long enough to see the results. Performance thresholds must be set, and violations of those thresholds should be considered a functional fail.

In order to create a reliable testing environment, a "three-legged stool" approach is often employed. The seat of the stool represents any test environment: load, single user, functional, etc. The test platform is held up by three legs:

User virtualization: Test environments should emulate the users or load that will be accessing the system. This is accomplished via load generators and load testing tools. Some organizations apply single user test automation, or even have manual users perform tests as real users would in the production environment.

Services virtualization: In production, applications often rely on third party resources or data feeds, such as external web services. These services can number in the hundreds for larger and more complex applications. It is not always practical, possible or affordable to bring those services into the test environment, yet performance of those services is critical to end user experience. Service virtualization emulates the behavior of specific application dependencies that developers or testers need to exercise in order to complete end-to-end transactions in test environments. Rather than virtualizing entire systems, it virtualizes only specific slices of dependent behavior critical to the execution of development and testing tasks.
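One concrete way to treat a threshold violation as a functional fail is to make the response-time budget part of the test assertion itself, so a slow-but-correct transaction fails the test. In the sketch below, `submit_order` and the 3-second threshold are hypothetical names chosen for illustration, not details from the report:

```python
import time

# Hypothetical sketch: a functional test that also enforces a response-time
# threshold. A transaction that is correct but too slow fails the test.

SLA_SECONDS = 3.0  # assumed threshold, not a figure from the survey

def submit_order():
    # Stand-in for the real transaction ("button A leads to transaction B").
    time.sleep(0.01)
    return {"status": "confirmed"}

def test_submit_order_functional_and_performant():
    start = time.perf_counter()
    result = submit_order()
    elapsed = time.perf_counter() - start
    assert result["status"] == "confirmed"   # functional check
    assert elapsed <= SLA_SECONDS, (         # threshold violation = functional fail
        f"functionally correct but took {elapsed:.2f}s (threshold {SLA_SECONDS}s)"
    )

test_submit_order_functional_and_performant()
```

Run under emulated network conditions rather than on a pristine LAN, a test like this catches the 30-second transaction that a correctness-only check would pass.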

Network virtualization: As with services, network conditions play a critical role in user experience; with mobile apps, up to 70% of end user experience is dependent on network conditions. Network conditions such as latency, packet loss, upstream and downstream bandwidth, and jitter are all critical factors that must be taken into account when testing or validating application performance. Network virtualization is a pre-production process for recreating network conditions from the production environment for use within the test environment. It enables connections between applications, services, dependencies and end users to be accurately emulated in the test environment, improving the accuracy of test results.

This combination of "legs" stands up any test environment. What happens if you try to rest your test platform on this stool without all three of the legs in place? You fall over. A test environment without all three aspects is incomplete.

[Chart: Users, Services, and Network Virtualization: No legs 4%; at least one leg 21% (Users 9%, Services 6%, Networks 5%); at least two legs 37% (Users and Services 18%, Users and Network 14%, Services and Network 3%); all three legs 38%]

Only 37.97% of survey respondents apply the industry best practice of accurately emulating the production environment by virtualizing all three legs of the stool. Showing some promise, 37.70% of organizations surveyed address two of the three legs. However, over 20% address just one leg of the stool, and an unfortunate 4.11% do not address even a single leg.
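Why latency and loss matter so much can be illustrated with the well-known Mathis et al. approximation for steady-state TCP throughput, roughly (MSS / RTT) × (C / √loss). This is a textbook rule of thumb offered here for intuition, not a formula from the report, and the two link profiles are assumed:

```python
from math import sqrt

# Mathis et al. approximation for steady-state TCP throughput (bits/s):
#   throughput <= (MSS * 8 / RTT) * (C / sqrt(loss_rate))
# C ~ 1.22 and an MSS of 1460 bytes are conventional assumptions.

def tcp_throughput_bps(rtt_s, loss_rate, mss_bytes=1460, c=1.22):
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

lan = tcp_throughput_bps(rtt_s=0.002, loss_rate=0.0001)  # clean lab LAN
wan = tcp_throughput_bps(rtt_s=0.150, loss_rate=0.01)    # lossy mobile/WAN path

print(f"LAN ceiling: {lan/1e6:.0f} Mbit/s, WAN ceiling: {wan/1e6:.2f} Mbit/s")
# → LAN ceiling: 712 Mbit/s, WAN ceiling: 0.95 Mbit/s
```

Even modest loss on a long-latency path caps a single TCP connection to under a megabit per second, regardless of how much raw bandwidth the lab network has, which is exactly the gap network virtualization is meant to expose before production.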

Regardless of the number of "virtualized legs" you use, testing in production only is not enough. 38.29% of respondents do not plan to test for network conditions or are going to test in production only: 16.4% do not plan to test for network conditions at all in 2013, and 21.89% are going to wait until they deploy. This is a top challenge for performance engineers in 2013. However, the large majority understands the importance of load testing, with 81.30% factoring users/load testing into their environments today. Also worth noting from this question is that less than half (45.30%) consider testing the end user device, which is concerning given the impact of performance on mobile app users.

Industry best practices dictate that all teams have a focus on performance, but only 19% of our respondents have a best practice in place regarding team focus on performance. Over 95% of organizations have assigned performance responsibility to at least one team; more often than not, that team was QA/Testers, who comprised nearly 65% of all responses. One out of five respondents considers performance across the board (choices included architect, developers, QA/test, operations, line of business owners).

Only 24.4% of businesses identified line of business owners as having a focus on performance. A higher percentage was expected, as line of business owners are responsible for their business units' bottom line, which is heavily influenced by application adoption and use. As the mobile wave continues to grow, and as mobile apps continue to be initiated by individual business units, a larger percentage of line of business owners is expected to be assigned application performance responsibilities.

Respondents also identified the important elements to include in 2013 performance strategies. Understanding internal and/or external system constraints was selected by 81% of respondents. Of the responses related to these constraints, the ability to understand internal system constraints covers factors such as contention, database, memory usage, and storage, while external system constraints, such as use of external web services, CDNs, and cloud-based services, were deemed important to 58% of respondents. Assessing real-world, end-user network conditions and their impact on performance is important to 72% of participants. Knowing how applications scale during peak usage periods is important to 71% of respondents. Having the ability to understand inefficient use of network resources (chatty latency-dependent applications, uncompressed content, duplicated/redundant requests, etc.) was chosen by 68% of participants. Approximately one-third (35%) chose all of the answers as important to their 2013 performance strategies.

[Chart: Which of these did respondents feel would be challenges in 2013? Understanding internal and/or external system constraints 81%; Assessing real-world, end-user network conditions 72%; Understanding of how apps scale during peak usage periods 71%; Understanding inefficient use of network resources 68%; All are important 35%]

Top Challenges Performance Engineers Face in 2013

[Chart: top challenges for 2013: Assessing real-world, end-user network conditions and Understanding inefficient use of network resources led (45.60% and 40.10%); Understanding external system constraints, Understanding internal system constraints, and Understanding how apps scale during peak usage followed (30.20%, 29.30%, 25.80%)]

The top two responses are related to the network: 66.13% chose assessing real-world, end-user network conditions and their impact on application performance and/or understanding the inefficient use of network resources as top challenges. A year ago, performance engineers focused more on the device and the functional aspects of the application. The growing realization now, which could be related to the maturation of the mobile market, is that functional testing will only capture part of the problem, and the impact of the network must be considered.

Why should you engineer for performance?

• Identifying and remediating production issues prior to deployment saves hundreds of thousands, if not millions, of dollars.
• Accurate performance testing enables organizations to meet and validate SLAs and SLOs.
• Reliably knowing how an application will behave empowers IT professionals with the foresight and knowledge needed to make informed decisions about application deployments.

Investments in performance engineering are shifting left in the development life cycle, as organizations continue to better understand the critical importance of designing and developing for performance. In this survey, 56.96% have increased budgets or their performance budget has remained intact from 2012. A very small amount, 6%, had a budget decrease from 2012 to 2013. The remainder, more than a third of respondents, have no budget for designing for performance in 2013.

[Chart: 2013 performance budget: Yes, and budget has increased; Yes, and budget has remained the same; Yes, and budget has decreased; No budget for designing for performance]

The Cost of Poor Performance

Industry analysts, like Forrester, EMA and others, agree that the average cost to a business of a production incident can exceed $45,000 per hour. This cost includes multiple factors that impact the business, including the cost of resources to resolve the issue along with the impact of the issue on business factors like revenue, employee productivity and customer satisfaction; the impact to the business' bottom line varies with the function and criticality of the affected application. In another Shunra survey, our customers estimated the cost per incident at $80,000 for remediation, but this did not consider the impact on the business, like brand damage, revenue loss, or customer dissatisfaction.

Based on ROI input from customers and the average cost of a production incident as reported by industry analysts, we know that an incident that requires one day to remediate can cost a business $360,000 (eight working hours at $45,000 per hour). A three-day remediation quickly turns into more than $1M, and a six-day remediation can cost a business $2.1 million.
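The per-incident figures above follow from simple arithmetic. The eight-hour business day is our assumption, chosen because it reproduces the report's $360,000 one-day figure:

```python
# Back-of-the-envelope remediation cost using the analyst figure cited above
# ($45,000+ per hour) and an assumed 8-hour business day.

COST_PER_HOUR = 45_000
HOURS_PER_DAY = 8

def remediation_cost(days):
    return days * HOURS_PER_DAY * COST_PER_HOUR

for days in (1, 3, 6):
    print(f"{days}-day remediation: ${remediation_cost(days):,}")
# → 1-day remediation: $360,000
# → 3-day remediation: $1,080,000
# → 6-day remediation: $2,160,000
```

The three-day and six-day totals line up with the "more than $1M" and roughly "$2.1 million" figures quoted in the text.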

Mobile is rapidly becoming important to employee productivity. Capgemini, one of the world's foremost providers of consulting, technology and outsourcing services, and Sogeti, its local professional services division, recently released the findings of the fourth annual World Quality Report, published in conjunction with HP. The report revealed that organizations are struggling to manage the challenges of the mobile era, with only one-third of those surveyed currently formally testing their mobile applications. While concerning, this data is also not entirely surprising: testing mobile apps and ensuring mobile performance is challenging, and when you add a requirement such as a mobile load scalability test, there may be disparity between the recognition of the need, the expertise required, and the availability of necessary tools.

In our survey, 67.8% of responding organizations reported experiencing problems related to the performance of critical business applications accessed by employees via mobile devices. Only 6% of respondents reported "critical problems," though over 60% cited the occurrence of at least a few problems.

[Chart: mobile performance problems reported: No problems ~33%, A few problems ~41%, Multiple problems ~20%, Critical problems ~6%]

Even a few problems are a concern when it comes to mobility. Just a fraction of a second delay in transaction response times means increased abandonment by mobile users, resulting in decreased revenue (in the case of m-commerce apps) or loss of productivity (for internal app use). So why are less than half of survey respondents doing something about this issue? More troubling is the fact that only 44.3% have a 2013 plan in place to proactively assess the performance of these applications, a sign of inefficient testing practices and performance assurance, again reflecting a likely expertise or tool gap that exists with mobile testing. The World Quality Report supports these findings. This also brings to light the interpretation of severity.

What may seem to be a trivial problem or minor nuisance to an organization may, to an end user, actually be much more meaningful and frustrating. Furthermore, the loss associated with IT issues has been quantified: according to a survey from CA Technologies, IT issues cause a 63% reduction in productivity, resulting in an average of 552 people hours lost per year per company. Studies have shown that if you have a revenue-generating application, a failure can impact up to 10% of your revenue.

Most performance engineers are juggling several projects in any given year, so it is not surprising respondents chose multiple projects of concern for 2013. Mobile projects are of highest concern for performance engineers in 2013, selected by 65.5% of participants. Other projects of concern include cloud migrations (60.4%) and third-party software deployments. Of note, data center relocations (DCR) were among the top three project concerns, at 42.1%. Shunra research shows that up to 39% of applications fail to meet user expectations after a data center move or consolidation, underscoring the need to consider performance not just with application deployments but also with infrastructure change.

[Chart: 2013 projects of concern: Mobile 65.5%, Cloud 60.4%, DCR 42.1%, CRM deployment 17.4%, SAP deployment 16.0%]

According to the survey, the largest response to this question indicated that limited to no budget is the hindrance, with 57.10% choosing this response as the reason companies are not proactively testing for performance, if at all. The second most popular reason survey respondents are not proactively testing for performance is an inadequate testing environment, at 49.20%. Lack of management support came in third, at 26.70%.

According to the National Institute of Standards and Technology, approximately 80% of the total cost of ownership of an application is spent on, and directly attributable to, finding and fixing issues post-deployment, costing businesses $60 billion annually in the United States alone; one-third of those costs could be avoided with better testing practices.

Agile methodologies continue to gain traction with development teams, and much has been written about the challenges of Agile and how performance fits into an Agile environment. While Agile adoption is still growing, performance practices lag behind it: of the 316 survey participants, only about a quarter report using Agile, and among Agile organizations not all have performance as part of their requirements. Just 15.5% integrate performance acceptance tests into automated tests, and less than 14% fully automate performance so that each build can be compared to prior builds for response time and resource utilization. For more insight and explanation around the Agile-performance dilemma, please view the Shunra webcast: How does performance testing fit into Agile? http://ape.shunra.com/PerfTestandAgile.html

The gap which exists between knowing what must be done to ensure end user experience and actually employing best practices must be closed. This can only be accomplished by both a top-down and bottom-up approach, with all areas of the business recognizing the financial implication of poor performance and an enterprise-wide commitment to end user experience.
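A minimal sketch of the "compare each build to prior builds" practice is below. The function names, the stored baseline, and the 20% tolerance are our illustrative assumptions: measure the build's median transaction time and fail the pipeline if it regresses past the prior build's baseline:

```python
import statistics
import time

# Assumed policy for illustration: fail the build if >20% slower than the
# prior build's recorded response time.
TOLERANCE = 1.20

def measure_transaction(fn, samples=5):
    """Median wall-clock time of a transaction across several samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def within_baseline(current_s, baseline_s, tolerance=TOLERANCE):
    """True if the current build's response time is acceptable."""
    return current_s <= baseline_s * tolerance

# `baseline` would be loaded from the prior build's stored results.
baseline = 0.250
current = measure_transaction(lambda: time.sleep(0.01))
print("pass" if within_baseline(current, baseline) else "performance regression")
```

Wiring a check like this into the automated suite turns response time into an acceptance criterion per build, rather than a one-off investigation after users complain.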
The impact and cost of failure for performance issues actually far outweigh the budget required for proactive performance testing.

Application performance is a competitive differentiator and a clear indicator of business success, creating a significant competitive advantage for those organizations that are most successful in building performance into their application design, development and deployment efforts. 80% of the costs associated with application development occur in remediating failed or underperforming applications after deployment, when the ineffective application has already had a negative impact on the end user or customer experience. One-third of those costs could be recovered with better testing practices. Shifting left in the software development lifecycle is the key: testing early and often will help organizations avoid the negative impact of performance failure while affording the opportunity to capitalize on end user demands for immediate and ubiquitous access to data.

Network conditions such as latency, packet loss, limited bandwidth, and jitter are all critical factors that must be taken into account when testing or validating application performance, especially when considering the multiple network connections required to support 3rd party services and external resources. Network Virtualization solutions virtualize network impairments (such as high jitter rate and limited available bandwidth) in the test lab so that the effect of the network on transaction response time can be reliably and accurately measured. With Network Virtualization, you discover the real-world network conditions affecting end users and application services, and then remediate or optimize performance.

For over a decade, Shunra has had the benefit of working with more than 2,500 organizations globally, with some of the most complicated application infrastructures in the world, and has been fortunate to help our customers implement network virtualization solutions for software testing and incorporate application performance engineering policies that encompass best practices and proven strategies for improving and ensuring application performance. In that time, Shunra has witnessed a dramatic shift in awareness of and focus on application performance, and seen this gap continue to shrink. New methodologies and approaches like Agile and DevOps continue to move companies towards performance optimization. Great progress is being made, but greater awareness and performance policy across the enterprise are still needed in order to truly achieve performance Zen.

More than 2,500 organizations use Shunra Network Virtualization solutions to proactively test and validate application performance with real-world, virtualized network conditions before deploying to production. Visit www.shunra.com for more information.
