Software Testing
MAGAZINE
April 2013 | $10.95
Navigating Continuous Change and Developer Tools
A Tester In An Ocean of Developer Tools
Making the transition from waterfall to agile
TestKIT OnDemand Sessions
If you can't be live, be virtual.
Anywhere, anytime access to testing and test automation sessions
>>>> Available anywhere you have an internet connection <<<<
>>>> Learn about automation tools, frameworks & techniques <<<<
>>>> Explore mobile, cloud, virtualization, agile, security & management topics <<<<
>>>> Connect & communicate with testing experts <<<<
ONDEMAND.TESTKITCONFERENCE.COM
Automated Software Testing
April 2013, Volume 5, Issue 1
Contents
Continuous Change and Developer Tools
A test automator in a fast-paced environment is faced with little time for automation and unclear information about non-standard, non-GUI systems, with no comprehensive tool for dealing with them. This issue is dedicated to the approaches necessary for successful automation in these types of environments, including close coordination with developers and the use of tools that may not traditionally be used by testers.
Features
12 A Tester In An Ocean of Developer Tools - This article describes one team's journey from a waterfall environment to an agile-like environment where testers were more greatly exposed to developer processes and tools.
Columns
10 Open Sourcery
36 I Blog To U - Read featured blog posts from the web.
38 Go On A Retweet
Editorial
A test automator is therefore faced with little time for automation and unclear information about non-standard, non-GUI systems for which they have no comprehensiveive tool for quickly or effectively dealing with. Test automation implementation in this type of situation often relies on a little ingenuity, close coordination with developers and the use of tools that may not traditionally be used by testers. This issue of the magazine focuses on automation under these circumstances. The first feature, entitled "A Tester in An Ocean of Developer Tools" by Michael Albrecht, describes one team's journey from a waterfall environment to an agile-like environment where testers were more greatly exposed to developer processes and tools. Next, we are plunged even further into an environment that is often less conventional for software testers. Entitled "Overcoming the Unique Challenges of Test Automation on Embedded Systems," this feature, written by David Palm, addresses how to effectively adjust your automation techniques when faced with a non-conventional system such as an embedded system. Finally, we address adjusting your automation approaches to fit into agile development, where multi-tasking is critical. In this article, Bo Roop offers a roadmap for test automation implementation when test automation is not your only task.
Automated Software Testing
A PUBLICATION OF THE AUTOMATED TESTING INSTITUTE
Managing Editor: Dion Johnson
Contributing Editors: Donna Vance, Edward Torrie
Director of Marketing and Events: Christine Johnson

Contributors: Michael Albrecht, Darren Madonick, David Palm, Bo Roop

Bo Roop is employed as a senior software quality assurance engineer at the world's largest designer and manufacturer of color measurement systems. He is responsible for testing their retail paint matching systems, which are created using an agile-based software development methodology. Bo helps gather and refine user requirements, prototypes user interfaces, and ultimately performs the software testing of the final product. Testing is a passion of Bo's, and he is involved with local software groups as well as a few online forums. He's an advocate for software that meets the customer's needs and expectations and can frequently be heard trying to redirect teams toward a more customer-centric point of view.
5th Annual TestKIT 2013 Conference
September 23-25
http://www.testkitconference.com
The Automated Software Testing (AST) Magazine is an Automated Testing Institute (ATI) publication. For more information regarding the magazine visit http://www.astmagazine.automatedtestinginstitute.com
Shifting Trends
GorillaLogic is no stranger to victories in the ATI Honors, with its FlexMonkey tool winning first place in the Best Functional Automated Test Tool Flash/Flex subcategory in both 2010 and 2011, while also being named the runner-up in the Best Functional Automated Test Tool Overall subcategory in 2011, coming in just behind the ever-popular Selenium. This organization seems to have truly hit its stride, however, in the 4th Annual awards with FlexMonkey's successor tool, known as MonkeyTalk. MonkeyTalk not only picked up where FlexMonkey left off by winning the Best Functional Automated Test Tool Flash/Flex subcategory, it also swept all subcategories in the newly added Best Mobile Automated Test Tool category. This included the Android, iOS, and Overall subcategories.

Google lost. How many times do you get to say that? Well, in this year's ATI Honors, this is an accurate statement, as Google lost the crown it held for two years in the Best Open Source Unit Automated Test Tool C++ subcategory. ATF, a tool that entered the fray as the runner-up in this category last year, pulled an upset by beating Google for the number one spot. I'm sure Google is not too concerned by this minor setback (if they are aware of it at all), but our community has spoken and made their voices clear.
There seems to be no definitive favorite in the Best Open Source Functional Automated Test Tool Java/Java Toolkits subcategory. FEST, a finalist that made its first appearance this year, has taken the top spot, but it had better watch its back, because there has been a different winner each year since this subcategory was introduced. The jury is still out on whether the community will eventually lock into a long-term favorite, but for now, FEST is the champion.
The ATI Honors tells us a lot about current and future tool trends
Much like the Best Open Source Functional Automated Test Tool Java/Java Toolkits subcategory, the Best Commercial Functional Automated Test Tool Web subcategory has also experienced its fair share of turnover. This seems largely due to the in-and-out nature of QuickTest Professional (QTP), aka HP Functional Tester. Since this tool seems to only have an eligible release every other year, it has only been named a finalist in the awards every other year. In years that it has been a finalist, it has dominated this and other subcategories. Years that QTP has not been a finalist seem to be open season for multiple other tools to shine. The 2nd Annual awards saw SilkTest win this subcategory in the absence of QTP. This year saw newcomer Automation Anywhere win the subcategory. Congratulations to Automation Anywhere, at least for now.

This is the first year that the top performance tool, LoadRunner, has not been a finalist, due to the fact that it had no eligible release. SilkPerformer was the clear beneficiary of the high-profile absence. SilkPerformer has had a steady ascent to the top over the years, coming in as the runner-up to LoadRunner in the Best Commercial Performance Automated Test Tool Overall subcategory in the 2nd Annual and 3rd Annual awards. With LoadRunner out of the picture, SilkPerformer was unrelenting in its quest for number one, and it finally achieved the spot this year.
Crowdamation: Crowdsourced Test Automation
It … operate out of different locations, address the challenges of different platforms introduced by mobile and other technologies, all while still maintaining and building a cohesive, standards-driven automated test implementation that is meant to last.
Open Sourcery
Mobile Open Source Tools that Made Their First Appearance in the ATI Honors
As the testing community gears up for the 5th Annual ATI Automation Honors, let's take a look at the current open source finalists and winners that made their first entry into the Honors during the 4th Annual Awards.
Mobile automated test tool categories were added for consideration during the 4th Annual ATI Automation Honors, which brought several new tools to the forefront for recognition in the awards. These tools are highlighted in this article.
MonkeyTalk
While MonkeyTalk has not technically been in the ATI Honors before, it is not totally new to the awards. MonkeyTalk was in the awards under one of its former names: FlexMonkey. Its other former name was FoneMonkey. FoneMonkey and FlexMonkey have now combined to form MonkeyTalk, a free and open source, cross-platform functional testing tool from GorillaLogic that supports test automation for native iOS and Android apps, as well as mobile web and hybrid apps. Its name change was apparently well received, as it claimed the top prize in each of the new mobile subcategories.
Frank
The dog inside of a bun trotted into the ATI Automation Honors as the runner-up in the Best Open Source Mobile Automated Test Tool iOS subcategory. Frank is an iOS integration test framework that leverages undocumented iOS APIs for easy automation of iOS apps. It is a tool for writing structured acceptance tests and requirements using Cucumber and having them execute against an iOS application.

Zucchini
Frank is linked to our next finalist, Zucchini, through a series of associations. While discussing the Frank automated tool, a tool by the name of Cucumber was mentioned. As you are probably already aware, a cucumber is not only a tool, but the name of a food as well. A food that is often mistaken for a cucumber is a zucchini. Ta-da! Maybe this association is how our next tool was able to make its first appearance in the ATI Honors. Or maybe it's because the community likes the way this tool uses natural language for interacting with and developing automated tests for iOS-based applications.

Calabash
Most of the mobile tools in the ATI Honors supported either iOS or Android, but not both. Although only nominated in the Android category, Calabash is one of the few tools that supports both mobile platforms. In addition, like a couple of the other tools, this LessPainful-supported tool also supports Cucumber for developing automated scripts.

Robotium
The final mobile tool that entered the ATI Honors, in the Best Open Source Mobile Automated Test Tool Android subcategory, is Robotium. With Robotium, test automators can write functional, system and acceptance test scenarios spanning multiple Android activities. Robotium was not only a finalist in the Android subcategory, but was also the runner-up in the Overall subcategory behind MonkeyTalk.
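As an illustration of the kind of test Robotium supports, here is a minimal sketch of a test that crosses an activity boundary. The LoginActivity and HomeActivity classes and the UI labels are hypothetical stand-ins for a real application under test; only the Solo calls are actual Robotium API.

import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

public class LoginFlowTest extends ActivityInstrumentationTestCase2<LoginActivity> {
    private Solo solo;

    public LoginFlowTest() {
        super(LoginActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity()); // Solo drives the UI
    }

    public void testLoginOpensHomeScreen() {
        solo.enterText(0, "testuser");  // first EditText on the screen
        solo.enterText(1, "secret");    // second EditText
        solo.clickOnButton("Log in");   // launches the second activity
        // Robotium follows the test across the activity boundary.
        solo.assertCurrentActivity("Expected home screen", HomeActivity.class);
        assertTrue(solo.searchText("Welcome"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities(); // close opened activities between tests
        super.tearDown();
    }
}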
A Tester In an Ocean of Developer Tools
by Michael Albrecht

For years we'd been walking in the protected world of waterfall projects, far away from requirement negotiations and acceptance tests, and suddenly awoke to a project with short iterations (two weeks) and in the lap of the customer.
Imagine an organization used to a slow pace, annual deliveries, always with a Graphical User Interface (GUI), and very little test automation. Suddenly that organization is faced with short iterations (two weeks), no GUI and very high performance requirements. This was the situation my test team faced, forcing us to deal with the fact that our testing approaches were no longer going to be effective. As a result, we formed a small, technical group to assess our approaches and identify new quality assurance tactics. This article follows our journey to agility prior to the modern popularity of agile.
All requirements started with a picture describing the basic flow, followed by short informative text explaining the flow throughout the system. The requirements were divided into groups:
- Basic flow
- Alternative flows
- Error flows
- XML structure expected from the system
Writing requirements for a technical automator is very easy; finding all that knowledge in one person is very hard. So we discarded the traditional project approach and got everyone together.
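Since one of the requirement groups was the XML structure expected from the system, checks of that structure could be automated early. Here is a minimal sketch using only JDK XPath; the element names (order, customer, items) are hypothetical, not from the article.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XmlStructureCheck {
    public static void main(String[] args) throws Exception {
        String response =
            "<order><customer id=\"42\"/><items><item sku=\"A1\"/></items></order>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(response)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Each expression encodes one structural expectation from the spec.
        String[] required = { "/order/customer/@id", "/order/items/item" };
        for (String expr : required) {
            Object node = xpath.evaluate(expr, doc, XPathConstants.NODE);
            System.out.println(expr + (node != null ? " OK" : " MISSING"));
        }
    }
}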
Retrospective
… very good business knowledge. The lack of development skills within the test group was cured by getting expert help from the developers within the team.
The company had no earlier experience with performance tests, and the customer now had very precise and demanding transaction time requirements. No environment except for production was sufficient for executing the tests. Running tests in production was not a big issue, since we agreed to limit the performance tests to search functions, and not updates. Doing the tests at night limited outside interference.

The challenge was in building our own performance tool. Once again we could thank team spirit for solving this. The developers implemented extended logging in databases and APIs, together with a simple GUI to control parameters such as the number of concurrent users, time intervals between executing tests, and sequence order. As testers, we developed functional tests and scenarios that could be run independently and as part of load/performance scenarios. The test cases were created both in Java code and as saved batches in our SOAP test tools. The tricky part came when we wanted to measure transaction times throughout the system during performance test execution. In Excel, we collected data from the database, API and GUI logging, connected each transaction to the log entries, and created a macro that calculated the duration for each and every one. While we did not spend any money on tools, we did spend a LOT of time. In our situation it was easier to explain (hide) time than tool costs.

The absence of a GUI increased the need for more technical skills.
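To make the shape of such a home-grown tool concrete, here is a minimal Java sketch of a load driver along the lines described: a configurable number of concurrent users repeatedly invoke a search-only transaction and record durations. The SearchClient stub is purely hypothetical and stands in for the real search API; the database, API and GUI logging the team correlated in Excel is omitted here.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleLoadDriver {
    static class SearchClient {                  // stub for the real system under test
        static void search(String query) throws InterruptedException {
            Thread.sleep(20);                    // simulate one transaction
        }
    }

    public static void main(String[] args) throws Exception {
        int concurrentUsers = 10;    // the parameters the team's control GUI exposed
        int iterationsPerUser = 50;
        long pauseMillis = 200;      // interval between executions

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        List<Future<List<Long>>> perUser = new ArrayList<>();
        for (int u = 0; u < concurrentUsers; u++) {
            perUser.add(pool.submit(() -> {
                List<Long> durations = new ArrayList<>();
                for (int i = 0; i < iterationsPerUser; i++) {
                    long start = System.nanoTime();
                    SearchClient.search("search term");
                    durations.add((System.nanoTime() - start) / 1_000_000);
                    Thread.sleep(pauseMillis);
                }
                return durations;
            }));
        }

        long total = 0, count = 0;
        for (Future<List<Long>> f : perUser) {
            for (long d : f.get()) { total += d; count++; }
        }
        pool.shutdown();
        System.out.printf("%d transactions, average %d ms%n", count, total / count);
    }
}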
Training Announcement!
ATI Europe is organizing a TABOK training class in Rotterdam from June 17-18! If you are interested in this training, please contact us at training@automatedtestinginstitute.com.
ATI's Local Chapter Program is established to help better facilitate the grassroots, global discussion around test automation. In addition, the chapter program seeks to provide a local base from which the needs of automation practitioners may be met. The program's goals are to:
- Help provide comprehensive, yet readily available resources that will aid people in becoming more knowledgeable and equipped to handle tasks related to testing and test automation
- Offer training and events for participation by people in specific areas around the world
Overcoming the Unique Challenges of Test Automation on Embedded Systems
Challenges of, techniques for, and return on investment from automation of embedded systems
By David Palm
Test automation on an embedded system presents a unique set of challenges not encountered when automating tests in more conventional computing environments. If these differences are recognized and managed, the benefits of such automation, seen in terms of both expanded test coverage and time savings, can be great.
Speaking broadly, an embedded system is a computer system designed to interface with, and control, some sort of electromechanical device(s). That amalgamation of computing power with an interface to external devices creates special challenges when it comes time to test the system software. Most software shares certain potential anomalies
in common: incorrect logic, math, algorithm implementation, program flow [branching, looping, etc.], bad data, data boundary issues, initialization problems, mode switching errors, data sharing, etc. Techniques to discover these software anomalies are well documented in the software testing field. Embedded systems, however, are unique.
What Makes Test Automation on Embedded Systems Unique?
Embedded systems introduce many factors that can result in anomalous system behavior. These factors include:
- Processor loading
- Watchdog servicing
- Power modes (low, standby, sleep, etc.)
- Bad interfacing to external peripherals
- Peripheral loading (e.g. network traffic, user interface requests)
- Signal conditioning anomalies (e.g. filtering)
- Thread priority inversion
- Noise conditions
While it is necessary to address more conventional software defects, it is also necessary to consider these additional factors when creating tests. Otherwise the test coverage will be inadequate and the system will likely ship with an unacceptable number of potentially serious defects. While most, if not all, of these obstacles can be overcome, given enough time and resources, the fact remains that addressing them requires effort above and beyond what would be required in a more conventional computing system. Test automation on an embedded system requires three things: special tools, a customized interface or test harness between the tester and the system under test, and special automation techniques to cover not only common software defects but also those that are unique to embedded systems.

Special Tools
First, choose a tool for test automation on an embedded system. This tool should include provisions to manipulate physical analog and binary inputs and outputs interfaced to the system being tested. And often an embedded system will utilize one or more communications protocols; the automation tool will need to be able to support these as well. This may, in fact, require a separate automation tool. For example, the tester might evaluate the portions of code that manipulate hardware input/output (I/O) using one automation tool and then utilize a different automation tool to test portions of code that communicate using communications protocols such as BACnet or ModBus. The ideal situation, however, is when a given automation tool can handle all of the system inputs and outputs together, whether hard-wired or communicated. In a similar vein, if the embedded system includes a user display it may be possible to automate here as well, but frequently this will require a separate tool specifically designed to test user interfaces.

Debugging an automated test script for an embedded system presents many of the same challenges as debugging the system software. So the same kind of tools that the software developers use to manipulate system inputs and view the outputs in real time will be needed.

An automation tool must also produce manageable and maintainable test artifacts. Test automation is, after all, just software created to test software, so it runs into many of the same maintenance difficulties faced in more conventional software development. Specifically, there are a number of test automation tools that utilize graphical programming languages. These can be extremely useful for rapid prototyping and easy comprehension of specific test steps. But more than one test developer has found that once these graphical programs grow beyond a certain size, the task becomes daunting even for the original developer, let alone somebody else, to understand, modify and extend.

The way inputs and outputs are handled in the test tool should be abstracted from their particular implementation in hardware. A test script should not care whether a temperature setpoint, for example, comes from a thermocouple, a thermistor, or an RTD. It should not care if a communicated value comes from a BACnet network, a LAN, or the Internet. Otherwise, a change in system implementation will break all of the scripts. It is also useful for post-test analysis if the tool bundles both the script and the results from a specific run of that test and archives them in a single file. This eliminates any confusion that might arise as the test script is updated or expanded; there will always be a record of exactly what test steps yielded a given set of results. A number of commercial tools on the market cover some or all of these criteria. But the embedded test automation market has some notable gaps and could be better served with specialized tools.
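A minimal sketch of that abstraction principle in Java, assuming hypothetical type names: the test script depends only on an interface, so the sensor (or protocol) behind a value can change without breaking any scripts.

public interface TemperatureInput {
    double readCelsius();
}

class ThermocoupleInput implements TemperatureInput {
    public double readCelsius() { /* read and linearize the hardware channel */ return 21.5; }
}

class BacnetTemperatureInput implements TemperatureInput {
    public double readCelsius() { /* fetch the value over the network */ return 21.5; }
}

class SetpointTest {
    // The script sees only TemperatureInput; swapping a thermistor for an
    // RTD, or a LAN value for a BACnet one, requires no script changes.
    boolean holdsSetpoint(TemperatureInput sensor, double setpoint, double tolerance) {
        return Math.abs(sensor.readCelsius() - setpoint) <= tolerance;
    }
}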
Special Interfaces
Because embedded systems represent an amalgamation of a computer system with external devices, a complete test automation system will require some sort of interface or harness between the automated test tool and the system under test. Developing this test harness can be both complex and expensive. This time and monetary cost has to be factored into the project in order to get an accurate test schedule and return-on-investment calculation. It is necessary to work closely with both software and hardware engineers to design this test harness, particularly if expertise in those disciplines is insufficient.

Be sure to consider one important, overarching principle before planning and beginning work: an automated test harness should not, if at all possible, require any special hooks in the software or any special modifications to the hardware. Both software hooks and hardware modifications automatically mean that what is being tested is not the same as what the customer will be using. Special software hooks add overhead and therefore affect the performance of the system under test. They can also result in a Catch-22: if the hooks have to be taken out of the software just before shipment, the software has been changed in a fundamental way while the ability to test it has been lost. And hardware modifications to facilitate interfacing to an automated test system mean that standard, production hardware cannot be used for tests. This can open the door to shipping the product with subtle defects that appear on production hardware but not on the modified system. They also require spending precious time and money acquiring and modifying hardware for use in the test system. This can become especially burdensome if the hardware itself is going through numerous revisions. Sometimes software hooks and hardware modifications cannot be avoided, and the payoff may be more than sufficient to justify their use, as long as the potential pitfalls are fully understood. But in general, try to avoid these technical compromises.

Here are some more challenges that may be encountered when designing a test harness for embedded automation:
- High voltages and currents in the system require due attention to the safety of both human beings and the system under test.
- The interface to each input or output from the system under test may need to be conditioned in order to interface with the available test hardware. For example, an analog voltage may need to be divided down before it is applied to an analog input, or there may need to be optical isolation on some or all connections between the test harness and the system under test. (A worked example of the divider arithmetic follows this list.)
- Non-linear sensors such as thermocouples can be notoriously difficult to mimic, especially if a very high degree of accuracy is necessary. Achieving accuracy to 0.5 °C over the entire operating range may not be too difficult, but 0.01 °C is probably going to be very difficult.
- Presenting a system with a simple DC voltage (e.g. 0-10 VDC) or current (e.g. 4-20 mA) is not difficult with off-the-shelf hardware. But presenting it with high voltage, variable resistance, or variable capacitance will be significantly more difficult and will likely require some custom hardware development.
- End-points and extreme values may be difficult to reproduce with the test harness. For example, when using a simple resistor voltage divider to condition an analog output to interface with an analog input, it often is not possible to drive the input all the way to its extremes (especially on the high side) to simulate a shorted or open input condition.
- Complex and fast communications protocols are a challenge to automate.
- User intervention is often still necessary via keypads, touch screens, etc. These things can be automated, but it may not be cost-effective. On the other hand, the user interface may be the only part of an embedded system that can be cost-effectively automated, and this may be well worth doing.
- While the subsystems may be manageable on a case-by-case basis, the ability to service all of the system inputs and outputs simultaneously can require a prohibitive amount of processing power in the automated test tool.
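As a worked example of the divider arithmetic mentioned in the list above (the component values are illustrative, not from the article): with Vout = Vin x R2 / (R1 + R2), choosing R1 = 20 kOhm and R2 = 10 kOhm maps a 0-10 V system output onto a 0-3.33 V test-hardware input range.

public class DividerCheck {
    // Vout = Vin * R2 / (R1 + R2)
    static double vout(double vin, double r1, double r2) {
        return vin * r2 / (r1 + r2);
    }
    public static void main(String[] args) {
        System.out.println(vout(10.0, 20_000, 10_000)); // prints 3.333...
    }
}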
That last point brings up yet another factor that must be considered when designing a complete embedded test automation system. The automation system will have to run fast enough to sample inputs at a sufficient rate and assert outputs in a timely fashion. What that means varies from system to system. In the HVAC industry, for example, being able to respond within one second is usually quite sufficient, with many events taking place in the 5 to 10 second range. This makes test automation very feasible. On the other hand, something like an automobile engine controller or a flight guidance system may need to process inputs hundreds or thousands of times per second and assert outputs within milliseconds of detecting a given condition. A test automation system capable of that level of performance may be prohibitively difficult and expensive. But even faced with such a scenario, can useful testing be done at reduced speeds? If so, some automation may still be possible and warranted.

The bottom line is that it is necessary to factor test harness development, fabrication, and testing of the harness itself into the project schedule. It is an added bonus if the test harness is designed to be generic and/or expandable, so that it can be applied to more than one product. This can enhance the long-term return on investment, so watch for these opportunities. And given that there may be technical obstacles that would prevent test automation on the entire system, it may still be worthwhile to automate even a portion of a project, provided that the return on the investment of time and effort promises a payoff.

Special Automation Techniques: Some Typical Gotchas in Embedded Software Test Automation
Once appropriate automation tools have been selected and a test harness for the embedded system has been designed and built, it is time to create some test scripts. Here again there are a number of special considerations that should be factored into the automation effort on an embedded system.

First, embedded systems can be vulnerable to initialization problems. You can write scripts and have them pass ordinarily, just because some system input is typically sitting at a given value. But if a prior test script left that system input at a nonstandard value, suddenly a subsequent script may fail. So the same test on a different test set-up, facility, etc. can fail unexpectedly because a less than comprehensive initialization has been performed. To address this, try to have a comprehensive initialization sequence that can be called by all test scripts. Make it a matter of policy that this initialization sub-script is called at the start of each script. Yes, people are going to complain that it seems to be a waste of time to execute all of these steps at the start of every single test script. But in the end the time will be well spent, since chasing errant conditions caused by initialization problems will be avoided.
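A minimal sketch of such an initialization sub-script, assuming a hypothetical TestHarness API and made-up input names; the only point is that every script begins from a single, known system state.

interface TestHarness { // hypothetical harness API; a real tool provides its own
    void setAnalogInput(String name, double value);
    void setBinaryInput(String name, boolean value);
    void clearCommunicatedValues();
    void resetFaultLog();
    void waitForSteadyState(long millis);
}

public final class TestInit {
    private TestInit() {}

    // By team policy, the first call in every test script.
    public static void initializeSystem(TestHarness harness) {
        harness.setAnalogInput("supply_voltage", 24.0);    // nominal supply
        harness.setAnalogInput("zone_temperature", 22.0);  // benign ambient value
        harness.setBinaryInput("door_switch", false);      // door closed
        harness.clearCommunicatedValues();                 // no stale network data
        harness.resetFaultLog();                           // start with a clean log
        harness.waitForSteadyState(5_000);                 // let filters settle (ms)
    }
}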
Managing tolerances is crucial to successful embedded automation. Real-world systems do not lend themselves well to absolutes. It is not useful for a system requirement to say that a system needs to control to a setpoint of 72 °F. It is only useful to say that the control must control to the setpoint plus or minus some tolerance. Automated tests need to be written to handle the tolerances rather than absolutes. Otherwise numerous testing errors will be logged when the real-world system deviates, even slightly, from those absolutes.

Race conditions are caused specifically by timing tolerances. "A race condition is a flaw in an electronic system or process whereby the output or result of the process is unexpectedly and critically dependent on the sequence or timing of other events. The term originates with the idea of two signals racing each other to influence the output first" (http://en.wikipedia.org/wiki/Race_condition). In the case of test automation, it most often manifests itself in a condition in which the test script execution gets to a check point first, perhaps even by just a millisecond, and fails the step because the process it's checking has not caught up. Conversely, the process on the embedded system may have just completed and moved on, so the test script fails to detect the desired process state because it has already moved on. Fortunately, race conditions are relatively easy to avoid. Tests can use a simple Wait While followed by a Wait For construct, as sketched below. As long as the timing requirements for the event that's being tested are understood, this combination will not only prevent false errors because of the race condition, but will also verify that the system is working inside of its formal timing requirements.
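A minimal sketch of that construct in Java; the helper names waitWhile and waitFor are hypothetical, not from any particular tool.

import java.util.function.Supplier;

public class WaitUtil {
    // Poll until the condition turns false, or fail when the timeout expires.
    public static void waitWhile(Supplier<Boolean> condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (condition.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("condition still true after " + timeoutMs + " ms");
            }
            Thread.sleep(10); // poll interval
        }
    }

    // Poll until the condition turns true, or fail when the timeout expires.
    public static void waitFor(Supplier<Boolean> condition, long timeoutMs)
            throws InterruptedException {
        waitWhile(() -> !condition.get(), timeoutMs);
    }
}

For example, after commanding a (hypothetical) fan on, a script might call waitFor(() -> fan.isOn(), 3000): the call absorbs the race and fails only if the fan misses its formal timing requirement.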
A very big potential gotcha in automated testing on an embedded system occurs when the test is not comprehensive enough to catch an unexpected glitch on a system output that might seem to fall outside of the specific test case. The difficulty is that a given system may have dozens or even hundreds of outputs. It is usually impossible to check the status of all of them in every test step. At the very least, be sure that each test case explicitly checks the status of all known critical values. But on the flip side, so as not to add unnecessary execution overhead, if it's truly a "don't care" then don't include it. Formal script reviews are the solution here; other test engineers, hardware engineers and software developers may identify system outputs that were not considered, but that really should be included in the test case.
To Automate or Not to Automate: Finding the Return on Investment
The first question is whether the embedded system testing should be fully automated. The answer is no, generally not. At the very least, relying completely on automated tests is probably a bad idea. As mentioned above, in a system of any significant complexity there are simply too many inputs and outputs for the tests to be absolutely comprehensive. Many times manual tests run by individuals with significant understanding of the system will catch defects that would have been missed by a more narrowly scripted automated test. Total reliance on automated testing will generally not result in sufficient coverage. There are aspects to most embedded systems that will defy full coverage through automation without enormous effort. And in certain embedded systems there are human and machine safety considerations; in these cases, although the safety tests can be automated, they should also be run manually so that a human being verifies the safety of the system.
… if the software is defective. For example, signal conditioning algorithms such as piece-wise filtering and linearization applied to analog inputs can have bugs at the transition points that are relatively difficult to detect but can throw the input value wildly out of range. It is easy to create a test that sweeps the entire range of analog values in small increments looking for these anomalies. Such a test would be daunting to run manually, is easy to automate, and can catch software defects that could cause catastrophic problems in the embedded system. (But note that, in this example at least, a good code inspection would go pretty far in eliminating the risk of such a software defect.)

Another way embedded system test automation can have a huge payoff is to reproduce faults that require large numbers of iterations to occur, so many that manual testing would be impractical or impossible. For example, I once worked on a serious field issue that occurred very infrequently and at just a few job sites. The software engineers eventually came up with a set of conditions they thought could reproduce the problem. An automated test was developed to repeatedly present those conditions to the system, and it turned out that on average the error would occur approximately every 300 presentations. The ability to reproduce the error, even that infrequently, enabled the software engineers to craft a fix. The test was then run for thousands of cycles and we were able to calculate, to a statistically exact level of confidence, just how certain we were that the remediation actually fixed the problem. The payoff of the automation was a little difficult to quantify in dollar terms, but the payoff in increased management confidence in the competence of the engineering group was very high.
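A minimal sketch of such a sweep test, assuming a hypothetical harness API; the input name, settle time and engineering limits are illustrative only.

public class AnalogSweepTest {
    interface Harness { // hypothetical; a real tool provides its own API
        void setAnalogInput(String name, double volts);
        double readConditionedValue(String name);
    }

    // Step the input across its full range in small increments, flagging any
    // wildly out-of-range reading around the piece-wise transition points.
    static void sweep(Harness h) throws InterruptedException {
        double min = 0.0, max = 10.0, step = 0.01; // volts
        for (double v = min; v <= max; v += step) {
            h.setAnalogInput("pressure_sensor", v);
            Thread.sleep(50);                       // let the filter settle
            double out = h.readConditionedValue("pressure_sensor");
            if (out < -5.0 || out > 505.0) {        // plausible engineering limits
                System.out.printf("Anomaly at %.2f V: conditioned value %.2f%n", v, out);
            }
        }
    }
}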
Remember that partial automation of a given test may still be worthwhile. Even if an automated test has to stop execution to prompt a user for certain intervention, the test might still provide better coverage, better reporting, better consistency, and be less mind-numbing (and therefore more likely to be run accurately during regression) than a fully manual test.
Conclusion
Test automation on an embedded system presents a unique set of challenges not encountered when automating tests in more conventional computing environments. Test automation on embedded systems requires a unique set of software tools. And since embedded systems involve an amalgamation of hardware and software, a specific tester-to-controller interface is required. Developing this interface can be complicated, challenging, and costly. The test professional must factor in the cost and time needed to create the automation interface, or the testing schedule is incomplete. Because of the real-time nature of embedded systems, the test professional must also employ specific automation techniques. Being aware of these unique challenges will greatly decrease the time needed to debug automated tests, which will result in successful automation attempts and a greater likelihood of management satisfaction. Test automation on an embedded system can greatly expand the scope of testing and eliminate defects that would have been virtually impossible to identify using manual testing alone. Awareness of the unique challenges posed by embedded systems can help the test professional to decide on an appropriate scope of automation, avoid pitfalls during test development, and deliver a successful product.
Agile Development
by Bo Roop
The Beginning
How does automated software testing fit into the big picture of agile software development? In my case, it was more about the tester than it necessarily was about the testing. I was the newest member of an existing eXtreme Programming (XP) team. This software team had been working together for a few years, but had just begun its transition into using the agile methodologies. The company I worked for had a standardized testing team that was used as a shared resource among each of the individual software development teams. Its members (re)learned each software package as it came, but they were following the developers at the end of the software development cycle only. There was no early testing integration, and the software quality was suffering because of the waterfall approach.

I was asked to join this new agile team as a tester, but quickly found that my role would be so much more. Once I got up to speed on using the software and understanding our customers' goals, I found that I had a better understanding of the whole system than the developers, who were focused on just the small areas for which they were writing code. So I transitioned into a role of product champion and customer advocate. I worried about the whole product and how our customers would use it, like it, and recommend it.

Our development team had a dedicated XP customer in our marketing person, who knew what he wanted the software to do, but he couldn't write a realistic requirement. So I spent many hours each iteration taking his broad requirements and changing them into achievable software tasks for the team to implement. When we started using him as a customer, we'd receive requirements like "make the software save faster." This was not truly helpful, nor necessarily achievable. While we could have made it save faster, it still might not have met his desired speed improvement, since it was never clearly defined. So I was tasked with morphing that ambiguous requirement into something achievable: "Make the saving of new customer records complete in less than 50 percent of the current rate." We benchmarked results from the existing software to establish a baseline, and then aimed at making the software faster (a sketch of such a benchmark follows below).

Once the new software pieces were implemented, I then performed ad-hoc and exploratory testing on the new builds, and I found bugs. This testing was performed within the two-week iteration, and the feedback to the development staff members was almost instantaneous. The developers immediately corrected the problems and moved on to the next task, and I, of course, verified their fixes.
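Returning to the "save faster" example, here is a minimal sketch of how that benchmark could be automated. RecordStore and its save() method are hypothetical stand-ins for the real application code.

public class SaveSpeedBenchmark {
    interface RecordStore { void save(String customerRecord); }

    static long medianSaveMillis(RecordStore store, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            store.save("customer-" + i);
            samples[i] = (System.nanoTime() - start) / 1_000_000;
        }
        java.util.Arrays.sort(samples);
        return samples[runs / 2]; // median resists outliers
    }

    // The requirement, pinned down: saves must take less than 50 percent
    // of the baseline measured against the previous release.
    static void assertFastEnough(RecordStore store, long baselineMillis) {
        long now = medianSaveMillis(store, 25);
        if (now >= baselineMillis / 2) {
            throw new AssertionError("save took " + now
                    + " ms; requirement is < 50% of baseline " + baselineMillis + " ms");
        }
    }
}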
Agile Automation
At the beginning of a new iteration, I would spend the first two to three days running the existing automated testing suite to verify we didn't have any regression issues during the previous iteration, and once all the existing tests passed I'd take off automating the new features from the last two weeks. If they didn't all pass, either because of regression issues or changes in the way the code behaved, I'd go through and modify the automated scripts, or I would work with the developers to help correct the regression issues. By sharing the results of the automated test runs with the developers in the early stages of the new iteration, I could get bugs fixed quicker. Those regression issues were knocked out before we built on them and made them unmanageable. We sometimes found incomplete areas of the software that automated testing was able to reveal even though the application hid it from the user. Finding "Todo:" comments in the code always raised a red flag, and set me off on a more in-depth hunt. Our development team manager also appreciated the prompt feedback, which included metrics like code coverage and pass/fail statistics. Those are the visible items that upper management liked to track, even though we constantly reminded everyone that those were just a few metrics of interest, not all.

Within the first three days of the new iteration I would have automated the last iteration's newly created features, passed the results on to management and the rest of the team, and then begun looking at the new features the developers were working on over the past few days in the current iteration. We found that running with the software automation a week behind gave us a few benefits. The code to be automated had already been manually tested and verified (more on that later), and it was already in a shippable state at the end of the last iteration. Remember that we were following strict XP practices. Stable software is always easier to automate than software that's in a state of flux. These automation tasks also filled that gap when the new code hadn't been developed yet, and manual testing would simply be repeating tests from the previous iteration.

After the software automation tasks were completed, I utilized exploratory testing models to find the bugs in the new code. The new features were tested manually before any of the existing code was regression tested again. The developers needed the feedback on newly implemented features while that code was still fresh in their minds. During our daily standup meetings, I received information from the developers telling me which areas of the code were in flux or in need of greater attention. We were working together toward a common goal.
I then performed ad-hoc testing as a hostile user. We had customers who were forced to use our software by their managers, so they would try to find problems with our software or create reasons to not use it. Since our software provides feedback on the quality of their work in a production environment, many viewed the software as a threat to their jobs instead of as a tool to make their jobs easier. It was a difficult task for our team to improve the perception of our software with those customers. So if they were going to try to break the software, I had to try to beat them to the punch. I had to try to find the areas where the problems existed before they did. By using the software in the same fashion as our destructive customers, it became very solid.

Requirements
After a few days of performing manual testing of the new builds, I'd change gears and begin working with our internal XP customer, the marketing guy, to refine his requirements into something usable. I had a number of hurdles to overcome with him as our internal customer: He had no background in software development and didn't understand how engineers implemented new code. While he was great at creating brochures and marketing campaigns, he was not as gifted with time management and interpersonal skills. In his haste to get the new requirements completed, he would rush, meaning we would frequently receive concept-only requirements. We'd hear things like "we need to make the software pretty." Pretty? Really?

So I would get started with him during the second week of the iteration and figure out what his "make it pretty" requirement really meant. He didn't like the way the software looked, and he wanted it to be more closely aligned with the Windows operating system. So we changed his requirement into "make the software use some of the newer Windows look and feel; stuff like rounded buttons, gradients and transparency." This was better, but it was still very ambiguous. But by removing some of our developers' creative license, the software started looking prettier (in his eyes and ours); and by taking baby steps toward a real requirement, we were at least moving in the right direction. My goal was to get all of the requirements from the marketing guy translated into developer-speak, and ready to share during the next iteration planning meeting. In the beginning, it took about five to eight days to get everything defined. Toward the end, we were able to knock out the requirements much faster.

New Features/Controls
While we were working to get requirements refined, I would still grab the new builds and manually test the new features. On the project board, anything that was moved into the "ready for test" column was fair game. Sometimes it was complete and ready, and other times it was not. So that was another balancing act I had to learn. Some developers have thicker skin than others, and appreciate the immediate feedback; others despise being told their code is broken (especially if it's not 100 percent feature-complete). After the next iteration's requirements were delivered, I'd change over to running the regression test suite, and continue my manual testing of new builds. These last few days of the iteration allowed me to test some of the undocumented requirements that needed attention.

Before beginning automation of the new controls and interfaces, I'd take a dry run at learning the new controls into the automated software package. I needed to ensure that controls could be scriptable. They needed to be properly and consistently named in order to keep the automated scripts as readable as possible. Hotkeys and shortcuts needed to be unique, there needed to be stability in the code, and the software needed to be ready to be released at the end of each iteration.

I would also pick a few specialized types of testing to focus in on during the tail end of the iteration. Sometimes I would focus on the ease of use of the overall software, finding out if it was easy to learn and contained clear and useful warning and/or error messages. Other times I would focus on Windows-specific platform support. While much of my automation was done on one or two platforms, the software had to run on many other computer systems. Windows 2000, XP and Vista, as well as a large list of foreign languages, were all officially supported. It was my job to maintain these test environments, and to test the software on each of the platforms with the latest service packs, running each of the languages. Now throw different hardware configurations into the mix, and my testing matrix continued to grow. More or less RAM, larger hard drives, filled disk space, no virtual memory, authenticated on a corporate domain or not, missing drive letters: none of these things were ever specifically called out in the testing requirements. They were all sourced from customers' desires or problems reported from the field. These items were specifically called out by the XP customer or the software manager as corporate goals, but were not usually given individual story or task cards. We had to just work them in whenever we could.

Documentation
If these areas were found to be working at the end of the iteration, I'd switch over to worrying about how the documentation team was being integrated into the development team. Could the software be quickly and easily translated? Would it work within the confines of their translation tools? Could we support field translations? Did our documentation staff understand how the new features worked? If not, I would provide them with training. If we weren't ready for documentation and translation, I could test the software's logging capabilities to ensure that our technical support team would be able to support the product in the field.

Teamwork
I loved it as a tester because I had complete knowledge of the requirements (how the software should behave); I was helping write them! The developers loved it because I went through the hassle of refining the vague, ambiguous or incomplete requirements into something useful. Our development team rule was that the software developers didn't have to work on a story or task if it was ambiguous. The pressure to make difficult decisions was on the marketing guy. You want it faster? How much faster? Would you like it two seconds or 30 seconds faster? What if the team can't achieve that speed? Shall we timebox this research to a half day? What's it worth to you? By talking about these issues ahead of time, during the previous iteration, there was time to fill the gaps, make the necessary changes, and get the team useful requirements. It made for some great requirements and very happy team members. The marketing guy loved it because he got exactly what he wanted with little-to-no arguing with the developers. I had a calmer demeanor with him and could get the answers out of him without bringing in a lot of people ... which meant the developers could continue developing while I went to the meetings. Finally, our manager loved it because everyone on his team was happy and engaged and the customers were getting great software from us. Win-win-win.

Conclusion
So to recap, my two-week iterations (10 working days per iteration) would look something like this:
Days 1-3: Automate the new features that were coded over the past two weeks
Days 4-5: Perform manual testing of the newly added features as they are completed
Days 6-7: Refine requirements with marketing
Days 8-10: Run regression tests; continue working to strive toward shippable software at the end of the iteration, focusing on the non-defined testing requirements
And then we'd start over again.
I Blog To U
Automation blogs are one of the greatest sources of up-to-date test automation information, so the Automated Testing Institute has decided to keep you up-to-date with some of the latest blog posts from around the web. Read below for some interesting posts, and keep an eye out, because you never know when your post will be spotlighted.
Blog Name: ATI User Blog Post
Post Date: January 2, 2013
Post Title: Top 5 things you should consider
Author: Sudhir G Patil
Blog Name: ATI User Blog Post
Post Date: February 4, 2013
Post Title: Think Your Automation Framework is Better
Author: Patrick Quilter
Software changes are more frequent and demand stringent Quality parameters which enforce a highly efficient and automated development and quality process. Test Automation for this reason has seen a sea change in its adoption levels in the recent years. Though there are multiple factors that are responsible for delivering successful test automation, the key is selecting the right approach.
You think your framework is better than mine? This is the result of the framework stage or those that expand on structured programming to build robustness into their programming efforts. This stage produces the best results by modularizing code into reusable functions, components, and parameterizing test data. User-friendliness is an important characteristic so it can be handed off to system analysts and alleviate the amount of expensive programmers.
Blogosphere
Blog Name: MyLoadTest
Post Date: December 29, 2012
Post Title: LoadRunner Password Encoder
Author: Stuart Moncrieff

If you ever need to disguise a password in a VuGen script, you will no doubt have used the lr_decrypt() function. If you have stopped to think for a second or two, you will have realised that encrypting the password in your script doesn't make it more secure in any meaningful way. Anyone with access to the script can decode the password with a single line of code.

Blog Name: Agile Testing
Post Date: January 29, 2013
Post Title: IT stories from the trenches #1
Author: Grig Gheorghiu

On one of these (production) servers I typed "ci /etc/passwd" instead of "vi /etc/passwd". This had the unfortunate effect of invoking the RCS check-in command-line utility ci, which then moved /etc/passwd to a file named /etc/passwd,v. Instead of trying to get back the passwd file, I panicked and exited the ssh shell. Of course, at this point there was no passwd file, so nobody could log in anymore. Ouch. I had to go to my boss, admit my screw-up ...
Read More at:
http://agiletesting.blogspot.com/2013_01_01_archive.html
Go On A Retweet
Paying a Visit To The Microblogs

Microblogging is a form of communication based on the concept of blogging (also known as web logging) that allows subscribers of the microblogging service to broadcast brief messages to other subscribers of the service. The main difference between microblogging and blogging is that microblog posts are much shorter, with most services restricting messages to about 140 to 200 characters. Popularized by Twitter, microblogging is also offered by numerous other services, including Plurk, Jaiku, Pownce and Tumblr, and the list goes on and on. Microblogging is a powerful tool for relaying an assortment of information, a power that has definitely not been lost on the test automation community. Let's retreat into the world of microblogs for a moment and see how automators are using their 140 characters.

Twitter Name: CaitlinBuxton2
Post Date/Time: Mar 27
Topic: Dev & Testing
"The more I learn about #testing (or anything else really) the more I realise I havent even scratched the surface #keeplearning"

Twitter Name: shubhi_barua
Post Date/Time: Mar 18
Topic: Test Data Visualization
"This is so freaking awesome visualisation of test data coverage. Kind courtesy of @Hexawise at Moolya! pic.twitter.com/CBXWcfmIGL"

Twitter Name: onloadtesting
Post Date/Time: Dec 26
Topic: Load Test with Virtual Users
"This demo clip shows step by step how to design a test with different types of virtual users: http://www.youtube.com/watch?v=EjZoXwTAELs"

"Parameterizing Selenium WebDriver Tests using TestNG - A Data Driven Approach http://wp.me/p2RSUo-jx"
… The second step is to find which pieces or steps of your test cases are reusable between each other, and can accept parameterization to fulfill the task. For instance, automating the selection of an item or link on the main screen of your app or landing page: maximize reuse by engineering a parameter to accept different values, and reuse it across each test case. Although you may need to individually determine what type of verification you will use to achieve this on a per-platform or per-device level, you will save time in the long run when you write additional test cases. The third step is to then group those pieces or steps together by device screens or pages. This way, as you write the test cases you have an organizational structure that is easy to identify by where you are within the app or site and where you need to navigate to next. Following these steps will provide a structure that can be grown to accommodate new features within an app or new sections within a mobile web site. As mobile devices become easier to automate against, this structure can easily adapt to emerging technologies that allow for greater reuse across platforms.
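A minimal sketch of those two steps in Java, assuming a hypothetical driver interface: a single parameterized selection step, grouped into a class per screen or page.

public class LandingPageSteps {
    interface MobileDriver { // hypothetical; a real tool provides its own API
        void tap(String elementId);
        boolean exists(String elementId);
    }

    private final MobileDriver driver;

    public LandingPageSteps(MobileDriver driver) { this.driver = driver; }

    // One step, engineered to accept different values, reused by every
    // test case that starts from the landing page. Per-platform or
    // per-device verification could be hooked in here.
    public void selectItem(String itemName) {
        String id = "menu_item_" + itemName;
        if (!driver.exists(id)) {
            throw new AssertionError("No such item on landing page: " + itemName);
        }
        driver.tap(id);
    }
}

A test case then reads as new LandingPageSteps(driver).selectItem("Settings"), with only the parameter varying between cases (the "Settings" item is, again, hypothetical).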
As a registered user you can submit content directly to the site, providing you with content control and the ability to network with like-minded individuals.
>> Community Comments Box - This comments box, available on the home page of the site, provides an opportunity for users to post micro comments in real time.
>> Announcements & Blog Posts - If you have interesting tool announcements, or you have a concept that you'd like to blog about, submit a post directly to the ATI Online Reference today. At ATI, you have a community of individuals who would love to hear what you have to say. Your site profile will include a list of your submitted articles.
>> Automation Events - Do you know about a cool automated testing meetup, webinar or conference? Let the rest of us know about it by posting it on the ATI site. Add the date, time and venue so people will know where to go and when to be there.
Automation Events
http://www.googleautomation.com