
An AUTOMATED TESTING INSTITUTE Publication - www.automatedtestinginstitute.com

Automated Software Testing MAGAZINE
March 2012 $8.95

Test Automation Experiences
Selenium vs. Watir: Selenium and Watir's Merger with WebDriver

So Easy Even A Child Can Use It

How Usable Is Your Software Test Automation Framework?

The Value of Continuous Quality Assurance

Business Value Through Increased Quality

ATI Automation Honors


Celebrating Excellence in the Discipline of Software Test Automation. Nominations Open April 1!
www.atihonors.automatedtestinginstitute.com

4th Annual

Automated Software Testing
March 2012, Volume 4, Issue 1

Contents
Cover Story
Test Automation Experiences 30
This article highlights the importance of test automation case studies by allowing multiple voices from the community to be heard on the subject. Read this article from the contributing authors of Experiences of Test Automation to learn the stories behind their case studies and the value they place on automation case studies in general. By Various

Features
So Easy Even A Child Can Use It! 16
Is your software test automation architecture so easy, even a child can use it? Get in touch with your inner child, then read this article to find ways to ensure that your framework maximizes its usability potential. By Fredrick Rose

The Value of Continuous Quality Assurance 24


Continuous QA uses Continuous Integration to forward the goals of functional correctness; it is a part of Continuous Integration, but with a focus on automated functional testing. Read this article to learn more about how Continuous QA can offer business value for your organization. By Ed Schwartz

Columns & Departments


Editorial
Guidance and Experiences: How collecting past experiences can aid in making current-day decisions for test automation implementation.

TestKIT Tip 8
Cells, Selenium, XPath and Labels: Learn how to handle dynamic object behavior seen through practices such as obfuscation.

Open Sourcery 12
Selenium vs. Watir... And the Winner Is WebDriver: Gain insight into the reasons for, and consequences of, two top open source tools merging with the new kid on the block.

I Blog to U 42
Read featured blog posts from the web.

Go On A ReTweet 44
Read featured microblog posts from the web.

Authors and Events
Learn about AST authors and upcoming events.

Case Studies Are Hot! 46
Why ATI's Case Studies Wiki is important for reading and creating automation case studies.

Updates & Corrections 46
Updates to previous issues.

The Automated Software Testing (AST) Magazine is an Automated Testing Institute (ATI) publication. For more information regarding the magazine visit http://www.astmagazine.automatedtestinginstitute.com


Editorial

Guidance and Experiences


by Dion Johnson
What percentage of tests should be automated? What is the best framework to use? What tool should I use? How long should it take to automate a test? What is a reasonable return on investment? I can handle all of these questions, and other questions like these, with a one-word answer. That answer is: "Depends." That's right, it depends. What percentage of tests should be automated? Well, it depends. What is the best framework to use? Hmm. It all depends. What tool should you use? It definitely depends.

The answers to these questions depend on a lot of factors: what types of systems you are testing, how long it takes to perform the testing manually, what the overall goal for testing is, what the skill level of the team is, what the nature of the testing task is, etc. In truth, once you have all of this information, there still won't be a single, definitive answer to any of your automation questions. There are no hard and fast formulas that you can plug values into for answers to these types of questions. The best that one can do is use guidance and experience to come up with helpful answers. Guidance can come in the form of things such as the ATI Automation Honors, which provide feedback from the community on the tools to begin considering in a search to find the right choice for your project (by the way, the 4th Annual ATI Automation Honors nominations are opening soon!). Guidance also comes in the form of training classes and sessions that provide useful information for addressing these types of questions.

Guidance may also be obtained from personal experiences. Learning from what worked and/or did not work in the past is an excellent tool for making real-time decisions in the present. Another great thing about experiences is that they don't have to be your own. You can learn from others' experiences as well. Collecting your own experiences along with the experiences of others is a great way to dramatically grow your own personal TestKITs. In honor of this concept, this issue of the AST Magazine is dedicated to a discussion of test automation experiences.

The Test Automation Experiences cover story highlights the importance of examining test automation experiences by spotlighting the new test automation case studies book by Mark Fewster and Dorothy Graham entitled Experiences of Test Automation. The Test Automation Experiences article, like the book it highlights, has several contributors: the same contributors that wrote chapters in the book. Next, The Value of Continuous Quality Assurance is a featured article by Ed Schwarz that looks at his experience-based thoughts on how to gain business value through employing continuous quality assurance practices. Finally, the So Easy, Even a Child Can Use It article continues our quality attribute series with an experience- and research-based discussion of framework usability.


Software Test Automation Training


www.training.automatedtestinginstitute.com

Training That's Process Focused Yet Hands On. Public and Virtual Training Available.

Public courses: Software Test Automation Foundations; Automated Test Development & Scripting; Designing an Automated Test Framework; Advanced Automated Test Framework Development

Virtual courses: Automated Test Development & Scripting; Designing an Automated Test Framework; Advanced Automated Test Framework Development

Come participate in a set of test automation courses that address both fundamental and advanced concepts from a theoretical and hands-on perspective. These courses focus on topics such as test scripting concepts, automated framework creation, ROI calculations and more. In addition, these courses may be used to prepare for the TABOK Certification exam.

Authors and Events


Who's In This Issue?

Ed Schwarz has more than 25 years of software development experience. Before co-founding Gorilla Logic, he was founder of the global e-Business consulting organization at Sun Microsystems. Prior to Sun, Schwarz was a principal at Abstraction Programming, Inc. (API), a consulting firm specializing in applying object-oriented technology in the financial industry. He also held the position of Vice President of Municipal Bond Research Systems at Moody's Investors Services, where he was responsible for all technology used by Moody's Municipal Bond Analysts. Schwarz holds a B.A. in Music from Princeton University.

Automated Software Testing
Managing Editor: Dion Johnson
Contributing Editors: Donna Vance, Edward Torrie
Director of Marketing and Events: Christine Johnson
A PUBLICATION OF THE AUTOMATED TESTING INSTITUTE

Fredrick Rose has 9+ years of experience testing software. He is currently a test automation engineer and technical lead for the government, responsible for developing and implementing software testing strategies for large Internet applications, and for training and mentoring teams in the development of strategies and processes for testing new technologies. Rose has published several articles and given multiple presentations on various software testing processes and techniques.

CONTACT US AST Magazine astmagazine@automatedtestinginstitute.com ATI Online Reference contact@automatedtestinginstitute.com

Experiences of Test Automation Authors

The authors below took time to follow up their contributions to Dorothy Graham and Mark Fewster's new book by writing for AST's Test Automation Experiences article. The authors that contributed to the article are as follows:

Dorothy Graham, Jonathan Kohl, Jon Hagar, Henri van de Scheur, Nick Flynn, Bo Roop, Simon Mills, Christian Ekiza Lujua, Celestina Bianco, Ken Johnston, Stefan Mohacsi, Jonathon Wright, Seretta Gamba, Ursula Friede, John Fodeh, Lars Wahlberg, Lisa Crispin, Ross Timmerman

ATI and Partner Events


March 23-24, 2012
Test Automation Bazaar
http://watir.com/test-automation-bazaar/

April 1, 2012
4th Annual ATI Automation Honors Awards Nominations Begin
www.atihonors.automatedtestinginstitute.com

April 5, 2012
The Real Skills of an Automated Test Professional Webinar
http://www.computer.org/portal/web/webinars/lockheed

June 21, 2012
Test Automation Day
http://testautomationday.com/

October 15-17, 2012
TestKIT Conference
http://www.testkitconference.com


The KIT is Coming
October 15-17, 2012
http://www.testkitconference.com

TestKIT Tip

Cells, Selenium, XPath and Labels


Using Labels To Identify Dynamic Objects

TestKIT is the name used by ATI for describing one's testing toolkit. A TestKIT is filled with knowledge, information and tools that go with us wherever we go, allowing our projects and organizations to quickly reap the benefits of the practical elements that we've amassed. This section provides tips to add to your TestKIT.

In the last issue of the AST Magazine, ATI's Dion Johnson provided an article entitled "Obfuscation: Avoiding Friendly Fire in the Battle for Security" that discussed a software development technique known as obfuscation and its effects on test automation. In that article, obfuscation was defined as the practice of making something more confusing, unclear, and/or difficult to understand. Obfuscated code was then defined as code that has been modified, typically by a program known as an obfuscator, to be more convoluted and difficult for humans to understand and follow. The obfuscated code is functionally indistinguishable from the original code, but the details surrounding the implementation of those functions have changed. Obfuscating code adds a level of security that makes it difficult for bad guys to get hold of your code and reengineer it for purposes undesirable to its producers. A negative by-product of this enhanced security, however, as the obfuscation article revealed, is that the good guys also suffer. The good guys being referred to are the test automators responsible for testing the applications that have obfuscated code. Obfuscation makes the code more dynamic and obscure, which is never an easy thing to deal with when implementing test automation. The most common headache associated with such dynamics is constantly changing object properties, particularly if those properties change in a non-standard way.

To further explain this phenomenon, let's look at Figure 1, which is also a figure used in the Obfuscation article. This figure has three commonly seen elements: a Username textbox, a Password textbox and a Login button. Automated tests identify these three elements by property values that have been assigned to them in the application code. For example, each object may have an ID property that defines it. The username textbox may have an ID equal to "uname", the password textbox may have an ID equal to "pword", while the Login button may have an ID equal to "login". These property names are descriptive and probably fairly constant, thus helping the page remain relatively automatable and maintainable.

Figure 1: Login screen (Username textbox: ID property = uname; Password textbox: ID property = pword; Login button: ID property = login)

An obfuscated user interface (UI) will likely change these IDs in a manner that will negatively affect their ability to be consistently accessed by automated tests. Table 1 reveals how the ID properties may be affected by obfuscation. Not only do the properties follow no set pattern, they may change from build to build, or as often as each time the application is invoked. This type of UI instability has brought down many test automation efforts, because it makes the test automation un-maintainable, particularly as the automated test bed grows. In truth, obfuscation is not the only technique that causes these types of property fluctuations, which is why so many automation projects are plagued by them.

Table 1: Obfuscated Properties (ID property = erlwlkjf4; ID property = pewtlj90; ID property = alsjfafs3)

Figure 2: Obfuscated login screen

The Obfuscation article in the Volume 3, Issue 5 (September 2011) edition of the magazine offered several solutions for handling these types of application dynamics, including:

- Pre-obfuscation automation
- Label-based identification
- Obfuscation map utilization
- Image-based automation

The label-based identification was then broken down into the following three approaches:

- Location-based identification
- Cell-based identification
- Property-based identification

This article focuses on offering concrete examples for employing one of those solutions: the cell-based identification solution. It will be demonstrated in 4 basic steps using Selenium, JUnit and XPath:

1. Assess the structure of the application pages and how best to locate a label on the page
2. Identify the structural relationship between a label and its associated field
3. Construct the XPath that will be used for identifying the desired field
4. Use the XPath in the appropriate statement

Step 1. Assess the Structure of the Application Pages and How Best to Locate a Label on the Page

The way in which the structure of an application page is assessed will largely depend on the type of application it is. In our example, the application is web-based, so the pages can be assessed by viewing the source HTML from the browser. Figure 3 illustrates how the source HTML for the screen in Figure 2 may be constructed. This code reveals that the page elements are contained in an HTML table, and the labels (i.e., Username and Password) are contained within SPAN tags with no special properties associated with them. Therefore, the best way to locate a label is by its text. The XPath for accomplishing this may appear as follows:

//span[text()='Username']

If there is a concern about leading and/or trailing spaces or other text in the label name, the XPath can be written to be less restrictive:

//span[contains(text(),'Username')]

This statement will find the SPAN element as long as "Username" exists in its text, regardless of what other characters may exist.

Figure 3: Login screen HTML

Step 2. Identify the Structural Relationship Between a Label and Its Associated Field

The next step in the process is figuring out how the field that you wish to manipulate is structurally related to its label. The HTML in Figure 3 shows that the label and its associated field are in adjacent cells of the table, as illustrated in Figure 4. Therefore, a label and its associated field will have the same table row as a parent. Understanding this hierarchical relationship will make it relatively simple to identify a field without using the cryptic, dynamic properties of that field.

Figure 4: Login screen HTML table cells (table row 1, column 1 | table row 1, column 2; table row 2, column 1 | table row 2, column 2)

Step 3. Construct the XPath That Will Be Used for Identifying the Desired Field

The next step in the process involves construction of the XPath that locates the desired field. Part of this work was done in Step 1, where the XPath for locating the label was constructed. Now, this XPath needs to be modified to locate the associated field instead. Step 2 revealed that the common denominator shared by a label and its field is the row element, represented in the HTML by the TR tag. Therefore, constructing the XPath is a matter of:

a. Locating the label
b. Using the label to find the parent row element
c. Finding the desired child field of the row element

The label can be located with the following XPath statement from Step 1:

//span[text()='Username']

This XPath can then be modified to get to the parent row element. To do this, it is important to note that the row (TR) element is not the direct parent of the SPAN element. The column element, represented by the TD tag, is the direct parent of the SPAN element, and the TR element is the parent of the TD element. Therefore, to get to the TR element, we must travel two levels up the parent hierarchy. This can be done with the following statement:

//span[text()='Username']/../..

Adding /.. moves the reference up one level from the SPAN element (the label) to the SPAN element's direct parent. Since we need to go up two levels, we can do this with /../... Once we have access to the parent row element (TR), we now need to update the statement to get the child textbox of that row, which is represented in the HTML by the INPUT tag. This is done by modifying the previous XPath statement to appear as follows:

//span[text()='Username']/../..//input

Since there is only one input element that is a child of the row element, there is no need to use any properties to further identify the input element.

Wait! There's more! Need a more detailed XPath tutorial? Visit the following site: http://www.w3schools.com/xpath/

Step 4. Use the XPath in the Appropriate Statement

The final step is to use this XPath within the appropriate Selenium statements. When using Selenium with JUnit, the statement will likely appear as follows:

driver.findElement(By.xpath("//span[text()='Username']/../..//input"))

driver is the variable reference to the instance of the browser that has been opened by the script. A more complete script in which this statement may be placed is illustrated in Figure 5.

Figure 5: Selenium/JUnit script
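The four steps above can be exercised without a browser by running the same XPath against the page markup with the JDK's built-in XPath engine. Since the article's Figure 3 HTML is not reproduced in this text, the PAGE string below is a hypothetical reconstruction of that table-based login form, and the obfuscated IDs are taken from Table 1; treat this as a sketch of the technique, not the article's actual script.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class LabelXPathDemo {
    // Hypothetical stand-in for the Figure 3 HTML: labels in SPAN tags inside
    // table cells, input fields carrying obfuscated, unstable IDs (Table 1).
    static final String PAGE =
        "<html><body><table>"
      + "<tr><td><span>Username</span></td><td><input id='erlwlkjf4'/></td></tr>"
      + "<tr><td><span>Password</span></td><td><input id='pewtlj90'/></td></tr>"
      + "</table></body></html>";

    // Steps 1-3: locate the label by its text, climb two levels to the TR,
    // then descend to the row's lone INPUT.
    static String fieldIdForLabel(String label) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(PAGE.getBytes(StandardCharsets.UTF_8)));
        Element input = (Element) XPathFactory.newInstance().newXPath().evaluate(
            "//span[text()='" + label + "']/../..//input", doc, XPathConstants.NODE);
        return input.getAttribute("id");
    }

    public static void main(String[] args) throws Exception {
        // The field is found by its stable label even though its ID is obfuscated.
        System.out.println(fieldIdForLabel("Username")); // erlwlkjf4
        System.out.println(fieldIdForLabel("Password")); // pewtlj90
    }
}
```

In a real Selenium test, the same expression would simply be handed to driver.findElement(By.xpath(...)) as shown in the Step 4 statement; the label, not the throwaway ID, is what the test depends on.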


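Two hedged variants on the same idea, again checked with the JDK's XPath engine against an assumed page layout rather than the article's actual Figure 3: the contains() form from Step 1 tolerates extra whitespace around the label text, and the ancestor axis reaches the parent row without hardcoding how many levels up it sits. The ancestor axis is an editor's alternative to /../.., useful when the label is wrapped in extra formatting tags; it is not something the article itself uses.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class LabelXPathVariants {
    // Assumed layout for illustration: the label text now has padding, and the
    // SPAN sits inside an extra B tag, so it is three levels below the TR and
    // the fixed /../.. climb from the article would stop short at the TD.
    static final String PAGE =
        "<html><body><table>"
      + "<tr><td><b><span> Username: </span></b></td><td><input id='erlwlkjf4'/></td></tr>"
      + "</table></body></html>";

    static String fieldId(String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(PAGE.getBytes(StandardCharsets.UTF_8)));
        Element input = (Element) XPathFactory.newInstance().newXPath()
            .evaluate(xpath, doc, XPathConstants.NODE);
        return input.getAttribute("id");
    }

    public static void main(String[] args) throws Exception {
        // contains() shrugs off the padding; ancestor::tr finds the row at any depth.
        System.out.println(
            fieldId("//span[contains(text(),'Username')]/ancestor::tr//input")); // erlwlkjf4
    }
}
```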

The KIT is Coming
If you thought ATI's 2011 event was good, wait until you see 2012.
http://www.testkitconference.com

Open Sourcery

Selenium vs. Watir... And the Winner Is WebDriver


A Discussion of Selenium and Watir's Merger with WebDriver

Years ago, there was much talk about the Selenium and Watir open source tools being competing solutions for functional automation of web browsers. This is not to imply that the open source projects that produced the tools were themselves competing, but many in the community often pitted one tool against the other as they were attempting to make a personal decision on which tool to use. And while Selenium definitely seems to have reached a higher level of prominence and popularity in recent years (see the sidebar on the following page for Bret Pettichord's thoughts on Watir's popularity), both tools have still remained relatively popular. For this reason, we found it extremely curious that both Selenium and Watir have merged with another open source competitor that also offers a solution for functional automation of web browsers. This competitor is known as WebDriver, and it is now the driving force behind both Selenium and Watir. This may evoke many questions, such as "what is the nature of these mergers?", "if WebDriver is so great, then why not just use WebDriver?", and "what do these mergers mean for each project moving forward?"

The mergers are different for Selenium and Watir, and they mirror the relationship each of these tools held with WebDriver prior to the mergers. The Selenium-WebDriver combination is a much deeper merger than the Watir-WebDriver combination, which is in line with the relationship seen among the tools over the years. Selenium-WebDriver (Selenium 2.0) is a merger of each respective tool's project along with an integration of the two code bases. This almost seemed inevitable given the connection between Selenium and WebDriver that seems to have existed almost from WebDriver's inception. In 2007, the same year the initial code for WebDriver was released (according to The Architecture of Open Source Applications), Simon Stewart, initial committer to WebDriver, and Selenium project founder Jason Huggins even teamed up to conduct a joint talk at the Google Test Automation Conference (GTAC) called "Selenium RC vs. WebDriver: Steel Cage Knife Fight" (http://www.youtube.com/watch?v=Vlz-WmcrBL8). The connection makes more sense when you recognize that both projects started out of a company known as ThoughtWorks. The Selenium project was started in 2004 while Huggins was working at ThoughtWorks on their in-house Time and Expenses (T&E) system. And according to WebDriver creator and Google engineer Simon Stewart, in the book The Architecture of Open Source Applications, "While Selenium was being developed, another browser automation framework was


Bret Pettichord's Thoughts on Tool Popularity


Recently Bret Pettichord, leading member of the Watir development team and co-organizer of the Watir Bazaar being conducted March 23-24 in Austin, Texas, spoke with Dion Johnson for an ATI Podcast interview. The interview focused on Watir, and during the discussion Pettichord addressed questions about whether Watir was increasing or decreasing in popularity. This is what he had to say on the subject:

"I don't really like to look at the popularity. I like to look at how successful your users are. I think we have a very, very high success rate with Watir. I would argue that it's higher than with some of the more popular tools; which is to say there's a lot of people using them, but they're not able to get what they want out of them on a day-to-day basis. One of the reasons I think we have that, frankly, is because the lack of a recorder for Watir means that people that don't have the skills to be successful with automation can't be lulled into thinking they might be. And so, that's one of the reasons why I have discouraged development of a recorder. I feel like it helps people fool themselves around what they can do and by not having people fool themselves, you have a higher degree of success."

Find the full podcast at podcasts.automatedtestinginstitute.com

Figure 1: Simon Stewart on Watir-WebDriver


brewing at ThoughtWorks: WebDriver." The SeleniumHQ website also identifies ThoughtWorks as the birthplace of WebDriver in a statement reading, "Simon Stewart at ThoughtWorks had been working on a different web testing tool called WebDriver" (http://seleniumhq.org/about/history.html). The two projects were also connected by virtue of the fact that Stewart has been a core committer for Selenium (http://seleniumhq.org/about/contributors.html). The Watir-WebDriver combination is not so tightly coupled. There is an integration of the code bases that allows Watir's Ruby API to use WebDriver's engine to drive multiple browser types. But despite expressed support for the initiative by Simon Stewart, in comments such as the one found on a blog at the Watirmelon site (http://watirmelon.com/2010/04/10/watir-selenium-webdriver/) and in a joint podcast with Stewart and Watir-WebDriver developer Jari Bakken (http://watirpodcast.com/31-jari-bakken-and-simon-stewart-on-watir-2-0-selenium-and-webdriver-celerity-and-htmlunit/), there is little evidence that the same amount of collaboration exists as with Selenium and WebDriver. Just as with the nature of the mergers, the reasoning behind the mergers also seems to be different for each tool. For Selenium it was a matter of addressing weaknesses in the tool. In a statement released to the Selenium and WebDriver communities in 2009, Simon Stewart revealed the following:
"Why are the projects merging? Partly because webdriver addresses some shortcomings in selenium (by being able to bypass the JS sandbox, for example. And we've got a gorgeous API), partly because selenium addresses some shortcomings in webdriver (such as supporting a broader range of browsers) and partly because the main selenium contributors and I felt that it was the best way to offer users the best possible framework." (http://seleniumhq.org/docs/01_introducing_selenium.html#brief-history-of-the-selenium-project)

Looking a little bit deeper, however, it seems as though the intent was really for Selenium to absorb the bulk of the changes in the merger with WebDriver. Paul Hammant, credited with beginning the discussion about the open sourcing of Selenium, as well as with defining a driven mode of Selenium [Selenium RC] (Selenium History: http://seleniumhq.org/about/history.html), stated in a blog that:

"As a side note, when Jason and I charted the course for Driven Selenium (which became Selenium-RC) we noted that it was an ill-advised idea. We were committing to porting driver code to half a dozen languages, and maintaining a JavaScript hairball that was the in-browser core runner. We shuddered at the scale of the Continuous Integration build needed to make that. Simon's fresh start with WebDriver was no less [ill-conceived] in terms of the hodgepodge of technologies needed to complete it. The reverse take over nature of the merger back then gave us much relief because we knew that sooner or later the 1.x codeline would be dead and we would toast that overdue demise!"

The "reverse takeover" terminology implies that although Selenium was the more publicly known of the two, in actuality the merger was really just WebDriver absorbing the Selenium project. Paul Hammant goes on to say:

"The Selenium-1.x team (Jason Huggins, Pat Lightbody, Dan Fabulich, & many more) and new committers and friends have helped production harden WebDriver to the extent where it's an admirable replacement for Selenium-RC (1.x)."

This statement further bolsters the notion that the merger is largely a takeover and repackaging of Selenium by WebDriver. This assertion is also supported by the 2007 GTAC talk by Stewart and Huggins. In it, the two open source tool contributors discuss what "sucks" and what "rocks" about each tool. At one point Stewart indicates that WebDriver can solve problems that Selenium has no hope of ever solving "because I can use the awesome power of the native operating system." Among the things that Huggins touted as benefits of Selenium were the multi-language and multi-browser support and the popularity of the tool. In addition, Huggins indicated that Selenium has an active user community, to which Stewart asks, "What's a user community?", alluding to the fact that WebDriver's community paled in comparison to Selenium's. Combine these statements with the fact that the WebDriver API is clearly the driving force behind how Selenium now communicates with browsers, and the fact that the Selenium-WebDriver (Selenium 2.0) syntax has changed from what it was in Selenium 1.x, and it becomes clear that Selenium brought its community and name to the merger (given that the tool is now largely referred to as Selenium 2.0), while WebDriver set much of the technological direction. The "Brief History of the Selenium Project" section of the SeleniumHQ site even states that Selenium had "massive community and commercial support, but WebDriver was clearly the tool of the future." (http://seleniumhq.org/docs/01_introducing_selenium.html)

The reasons behind Watir's merger, or integration, with WebDriver are similar but not the same. As with Selenium, the Watir project seemed to feel that the way in which WebDriver communicates with web browsers was very effective and efficient for long-term maintenance, and thus decided to leverage that. Unlike in Selenium's case, however, the tool will remain largely unchanged as far as the users are concerned, because it will continue to use the existing Watir API for its scripting language. The Watir project feels that the strength of Watir is its user-friendly syntax that was designed by testers for testers, so this syntax will remain intact while still leveraging the power of WebDriver behind the scenes. The WatirMelon blog sums the merger up as follows:

"If you're a Watir user, it doesn't really make that much difference. If you think of automated web testing as a car, Watir is the steering wheel and dashboard, which interact with the engine. Allowing Watir users to use WebDriver is like providing an additional engine choice, but keeping the steering wheel and dash the same."

Figure 2: Illustration of how Watir works with WebDriver (image from the WatirMelon blog, http://watirmelon.com/2010/04/10/watir-selenium-webdriver/)

While an engine replacement is no trivial change, it can be asserted that the tool's individual identity remains relatively intact since the user's experience will remain mostly unchanged. In a recent ATI Podcast interview with Bret Pettichord, leading member of the Watir development team, he had the following to say on the matter:

"Watir-WebDriver is a Gem which supports the Watir API, but using the WebDriver technology to actually execute the script. So you get all the benefits of the Watir API, which is I think what people agree is the most appealing part of Watir, and you get the cross-browser capabilities of WebDriver, which people I think also agree is one of the most attractive features of WebDriver. So it's really kind of the best of both worlds."

So, in moving forward, Selenium and WebDriver are essentially a single project and tool, while in Watir's case, WebDriver is one of the engines and will likely be the primary engine behind the tool's operations. Thus, while the community will likely continue to discuss a battle between Selenium and Watir, it can be argued that ultimately WebDriver won the war.
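The "steering wheel and engine" picture above is essentially an adapter architecture: a fixed, tester-facing API delegating to an interchangeable browser-driving back end. Here is a minimal sketch of that idea, written in Java for consistency with this issue's other listings (Watir itself is Ruby); every name in it is invented for illustration and none of it is actual Watir or WebDriver code.

```java
// The tester-facing "steering wheel": its API never changes.
// BrowserEngine is the swappable "engine" underneath it.
public class EngineSwapDemo {

    interface BrowserEngine {                       // e.g., a classic IE driver or WebDriver
        String fillIn(String locator, String value);
    }

    static class WatirStyleApi {                    // the stable scripting surface
        private final BrowserEngine engine;
        WatirStyleApi(BrowserEngine engine) { this.engine = engine; }
        String textField(String label, String value) {
            return engine.fillIn("label=" + label, value);
        }
    }

    static String script(BrowserEngine engine) {
        // The "test script" is identical no matter which engine is plugged in.
        return new WatirStyleApi(engine).textField("Username", "tester");
    }

    public static void main(String[] args) {
        // Swapping engines changes only construction, never the scripting API.
        System.out.println(script((loc, val) -> "classic -> " + loc + "=" + val));
        System.out.println(script((loc, val) -> "webdriver -> " + loc + "=" + val));
    }
}
```

This is why, as the article notes, a Watir user's scripts carry over largely unchanged: only the engine behind the API was replaced.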

Wait! There's more!

Interested in hearing more from Bret Pettichord? Listen to the full podcast at http://www.podcasts.automatedtestinginstitute.com


Are You Contributing Content Yet?


The Automated Testing Institute relies heavily on the automated testing community in order to deliver up-to-date and relevant content. Thats why weve made it even easier for you to contribute content directly to the ATI Online Reference! Register and let your voice be heard today!

As a registered user you can submit content directly to the site, providing you with content control and the ability to network with like minded individuals.

Community Comments Box

>> Community Comments Box - This comments box, available on the home page of the site, provides an opportunity for users to post micro comments in real time. >> AnnounCements & Blog Posts - If you have interesting tool announcements, or you have a concept that youd like to blog about, submit a post directly to the ATI Online Reference today. At ATI, you have a community of individuals that would love to hear what you have to say. Your site profile will include a list of your submitted articles. >> AutomAtion events - Do you know about a cool automated testing meetup, webinar or conference? Let the rest of us know about it by posting it on the ATI site. Add the date, time and venue so people will know where to go and when to be there.

Announcements & Blog Posts

Automation Events

Learn more today at http://www.about.automatedtestinginstitute.com



So Easy, Even a Child Can Use It!
Is Your Test Automation Architecture Usable?

by Frederick Rose

When you hear the salesperson say, "It's so easy, a child can use it," they are not implying that you are only as smart as a child, but that their merchandise is so easy to operate that anyone can use it without reading a manual or attending a class. Is your software test automation architecture so easy, even a child can use it?

Think about the telephone that you have at home. You hear a ringing, beeping or buzzing, lift the receiver and talk with someone calling from anywhere in the world. The receiver has two ends, and only feels comfortable used one way, with the speaker to your ear and the microphone to your mouth. When my kids were much younger and I would call home with a quick question for my wife, my 2-year-old would answer the phone and want to talk. Telephones are truly so easy a child can use them. Isn't easy-to-use software great? Wouldn't it be nice to have a software automation architecture that is easy to use?

Most software test automation projects rely on some type of framework or architecture to operate efficiently, and we, the Test Engineers, are often developers of these software test automation architectures. As we design and develop these architectures, we must follow good development practices, which include the following six quality characteristics as identified in Software Engineering Product Quality (ISO 9126-1): functionality, reliability, usability, efficiency, maintainability and portability.

Usable Automation
Software usability is defined as: "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." (http://en.wikipedia.org/wiki/Software_usability) By Bruno P. Kinoshita

This article will focus on the quality attribute of usability. Usability is defined by the International Organization for Standardization (ISO) as: "A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users." (ISO 9126: 1991, 4.3)

Why is usability an important attribute of a software test automation architecture? First and foremost, an automated testing architecture is a tool, but it is also a software package that needs to be easy for users to set up and quickly become productive and proficient with. We have seen many instances where crude automation architectures have been created that only a skilled automation engineer can use or, worse, that only the developer can understand and use. This means that these skilled engineers will need to be the sole users of the architecture, when ideally an automation architecture should be so easy that anyone can use it and be productive, freeing the architects to spend their time developing new solutions or enhancing the tool.

At the highest level, usability can be broken into two major categories:
- Complexity of the design
- Quality of the documentation

How complex is your automation architecture?

We always want more functionality in our software but often forget that the more functionality we include, the more complex the system becomes. There is often a trade-off between functionality and complexity, but many times we cannot avoid creating complex systems if we are to include all of the required functionality. The usability and user-centered design profession has been maturing at an ever-increasing rate over the past few years and has developed many useful metrics for assessing the usability of software. These usability metrics can and should be applied to our automation architectures, since we are building software packages to test software. Table 1 offers insight into determining the complexity of your automated testing architecture. These attributes can be applied to most software packages.

Without instructions and documentation most of us can figure things out, but we are not able to take full advantage of a software package's full functionality and features. Instructions and help systems are needed so that users can be more productive and proficient with a tool. Table 2 gives insight into determining the thoroughness of your automated testing architecture documentation. Once again, these attributes can be applied to most software packages.

Usability can be expressed in the following equation:

    Usability = Complexity of the Design / Quality of the Documentation or Instruction

Assume that complexity of design can be scored from 1 to 10, with 10 being very complex, and the quality of the documentation can be scored from 1 to 10, with 10 being very high quality. The closer to 0 the usability measure, the more usable the software is.

A complex software package requires much more documentation than a simple software package. But who can say how much is enough? This becomes a judgment call, and it depends on the users and their level of knowledge. Getting back to the telephone example, a simple phone that you purchase and hang on your wall at home comes with a single page of instructions, because it probably only makes and receives calls and has a few other simple features. Your cell phone, conversely, probably came with a small book of instructions and tips, because these phones can do everything a small computer can do. A simple architecture may be able to get by with minimal documentation, but the more complex the architecture is, the more documentation or instruction is required for users to use it productively.

The rest of this article will discuss these sub-characteristics of usability in detail, and how to improve the usability of an automation architecture. Usability has three sub-characteristics:
- Learnability
- Understandability
- Operability

Table 1: Metrics Associated With Complexity of Design (Measurable Concept: Usability)
Adapted from "Usability Metrics for Software Components" by Manuel F. Bertoa and Antonio Vallecillo

Attribute | Indicator | Indirect Metric
Design Legibility (Readability) | Meaningful Names | Proportion of Functional Elements with Meaningful Names
Interfaces Understandability | Functional Elements Understandability | Proportion of Functional Elements Used Without Errors
I/O Understandability | I/O Message Understandability | Proportion of Exceptions Correctly Understood; Proportion of Arguments Correctly Understood
Ease of Learning | Time to Use | Average Time to Use Correctly the Component
Ease of Learning | Time to Expertise | Average Time to Master the Component Functionality
Customizability | Customizability Ratio | Configurable Parameters Per Interface Ratio
Quality of Error Messages | Error Message Suitability | Error Message Per Functional Element Density
Quality of Error Messages | Error Message Clearness | Proportion of Error Messages Correctly Understood
Interfaces Complexity | Interface Density | Operations Per Interface Density; Events Per Interface Density
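The usability equation above is simple enough to script. Below is a minimal Python sketch under the article's assumptions (both factors are judgment-call scores from 1 to 10; the function name is illustrative, not part of any standard):

```python
def usability_measure(design_complexity: float, documentation_quality: float) -> float:
    """Usability = complexity of the design / quality of the documentation.

    Both inputs are subjective scores from 1 (low) to 10 (high).
    The closer the result is to 0, the more usable the architecture.
    """
    if not (1 <= design_complexity <= 10 and 1 <= documentation_quality <= 10):
        raise ValueError("scores must be between 1 and 10")
    return design_complexity / documentation_quality

# A simple architecture with excellent documentation scores near 0 (very usable);
# a complex architecture with poor documentation scores high (hard to use).
print(usability_measure(2, 10))  # simple, well documented -> 0.2
print(usability_measure(9, 3))   # complex, poorly documented -> 3.0
```

The point of the sketch is only that the two factors pull in opposite directions: adding functionality raises the numerator, so the documentation in the denominator has to keep pace.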
Learnability

Learnability is defined as: "Attributes of software that bear on the users' effort for learning its application (for example, operation control, input, and output)." (ISO 9126: 1991, A.2.3.2)

How long does it take to learn your automation architecture and become proficient in its use? Time to become productive may be measured in hours of training, but it is more than just training. Becoming productive requires a help system and documentation that assist the user in figuring things out. This effort may differ depending on the user's skill level: novice, operator, automator or architect. The learnability of your automation architecture can be expressed and measured in a variety of ways. For these metrics you can measure tasks that the architecture performs, functions that are included in the architecture, or even keywords in an automation architecture.

Table 2: Metrics Associated with Quality Documentation
Adapted from "Usability Metrics for Software Components" by Manuel F. Bertoa and Antonio Vallecillo

Quality of Manuals (Contents of Manuals; Effectiveness of Manuals):
- Manuals Coverage: Proportion of Functional Elements Described in Manuals
- Manuals Consistency: Proportion of Functional Elements Incorrectly Described in Manuals
- Completeness of Manuals: Difference Between the Component Version and the Manual Version
- Manuals Legibility: Ratio of Figures Per Manual Page; Ratio of Tables Per Manual Page; Ratio of UML Diagrams Per Manual Page
- Size of Manuals: Average Pages Per Functional Element
- Manual Suitability Effectiveness Ratio: Proportion of Functional Elements Correctly Used After Reading the Manual
- Understandability Ratio: Proportion of Functional Elements Correctly Understood After Reading the Manual

Quality of Demos (Content of Demos):
- Demonstration Coverage: Proportion of Functional Elements Shown in Demos
- Demonstration Consistency: Difference Between Demo Version and Component Version

Quality of Help System (Contents of Help System; Effectiveness of Help System):
- Help System Coverage: Proportion of Functional Events Shown in Help System
- Help System Consistency: Proportion of Functional Elements Incorrectly Described in the Help System
- Completeness of the Help System; Size of Help System: Help System Word Ratio
- Help System Suitability Effectiveness Ratio: Proportion of Functional Elements Correctly Used After Using the Help System
- Help System Usability Ratio: Proportion of Functional Elements Correctly Understood After Using the Help System

Quality of Marketing Information (Contents of Marketing Information):
- Coverage of Marketing Information: Number of Marketing Information Elements Described
- Completeness of Marketing Information: Proportion of Services Understood After Reading the Component Description
- Marketing Information Consistency: Difference Between the Component Version and the Marketing Information Version
- Marketing Information Understandability Ratio: Proportion of Services Understood After Reading the Component Description

Ease of Learning

Is your automation architecture easy to learn?

Ease of Learning = Mean time taken to learn the task, function or keyword correctly

or, across the whole architecture:

Ease of Learning = Total time to learn to perform all tasks, functions or keywords / Number of tasks, functions or keywords

The lower the mean time to learn to perform the task, function or keyword, the easier the architecture is to learn.

Completeness of the Help System

Does your help system provide complete help for each task, function or keyword in your architecture?

Completeness of the help system = Number of tasks for which correct online help is available / Number of tasks

The closer to 1 you are, the more complete your help system is.

Effectiveness of Help System

Does your help system provide correct help for each task, function or keyword in your architecture?

Effectiveness of help system = Number of tasks successfully completed after using online help / Number of tasks

The closer to 1 you are, the more effective your help system is.

Getting back to the telephone example, last year I was deep in the Shenandoah Valley visiting relatives, and I had to use a pay phone since I couldn't get reception in the valley. I went to the phone and found instructions that were easy to read and understand:

    Local Calls: Deposit 25¢. Listen for dial tone. Dial the number.
    Long Distance Calls: Dial 1 + Area Code + Number.

How can you build learnability into your automation architecture? You can increase the learnability of your automation architecture by creating training and developing a help system. Like any other software package, an automation architecture requires training. You may want to create different training sessions targeted at each user group: architects, automators and operators. The architects are the core development team and may not need much training at all, since they developed the architecture; they may instead be the trainers of the automators and operators. The automators are experienced Software Test Engineers who know and understand automated testing, but need specific training on the proper usage and standards for building and executing tests most effectively. Finally, the operators may be lower-level automators or even manual testers whose job is to execute automated tests and interpret the results of existing tests. These operators can really play a big part in freeing up the automators and architects to focus on enhancing the architecture. The training should include hands-on practice exercises, but it is really your preference depending on the users' needs.

The help system provides direction or instruction for using the software. You wouldn't buy a software package that didn't come with instructions, so why would you push an automation architecture on your colleagues with no instructions? In the past, many software packages came with big manuals or volumes of books; now they come with electronic versions as part of the software package, or even online resources. You can create your help system any way you want, but it must be developed, maintained, complete and useful. There are many documentation software packages that you can use, but the one that I like is the Microsoft HTML Help Workshop, which is a free download. Figure 1 (HTML Help Workshop Help) is a graphic
showing the help screen similar to ones you may have seen in software packages. You can download it from: http://www.microsoft.com/downloads/details.aspx?FamilyID=00535334-c8a6-452f-9aa0-d597d16580cc&displaylang=en. The benefits of using a software package like the HTML Help Workshop are that your documentation becomes searchable and indexed, so documents are quickly and easily found, and it looks professional.

Understandability

Understandability is defined as: "Attributes of software that bear on the users' effort for recognizing the logical concept and its applicability." (ISO 9126: 1991, A.2.3.1)

Can the users of your automated architecture understand how your architecture works? Is the architecture intuitive? The understandability of your automation architecture can be expressed and measured in a variety of ways. You can measure tasks that the architecture performs, functions included in the architecture, or even keywords.

Function Understandability

Can the users correctly define the functions that are included in the architecture, or even its keywords?

Function understandability = Number of interface functions whose purpose is correctly described by the user / Number of functions available from the interface

The closer to 1 you are, the more functions the user can identify.

Consider again the phone. When we went looking for a new phone for our house, we shopped around and found there were many different makes, models and styles, and technically there were many distinctions, such as the frequency used to communicate with the base station. But when it came down to it, they all had a touch pad with the same buttons in the same order, a speaker to put to our ear and a microphone to speak into, meaning they are all used the same way. That makes them all very easy to understand and use.

Completeness of Function Understood

Examine how well the architecture users understand the tasks the architecture performs, functions, or even keywords. Are they intuitive to the user?

Completeness of function understood = Number of functions understood / Total number of functions

The closer to 1 you are, the more understandable your architecture is.

Evident Functions

Look at how well the architecture users can identify or find all of the functions and/or features. Do the software users know all of the functionality that the software provides?

Evident functions = Number of functions identified by user / Total number of actual functions

The closer to 1 you are, the more functions the user can identify.

How can you build understandability into your automation architecture? There are a few ways to add understandability to your automation architecture, including reusable components, functions and features, code documentation and external documentation. Through the use of reusable components in your automation architecture, you are able to increase understandability, since the same parts are reused for multiple tests. There are two ways to accomplish this. First, develop your automation architecture with reusable components, such as core driver scripts, functions, and the reporting mechanism, which can be common or global and kept constant for all tests. Components such as object repositories, scenario or design documents, and data tables will remain application-specific and will not be used for all tests. The second way to build understandability into your automation architecture is through documentation. Documentation can be divided into two parts: internal, which is inside or with the code; and external, which is in the user documentation. The internal documentation is nothing new. It is simply putting good coding practices to use by including comments with your code. The amount of documentation is up to you, but you should develop standards on the detail and content of the comments. Figure 2 is a sample of how you may want to include headers, indicating the purpose of the function, in all of your functions. The information should also include the original author and creation date, and finally information on any modifications to the function, by whom, and when.

Figure 2: Sample Code Comments

    This function is used to press the Continue button on a web page.
    Author: John Doe          Date: 11/1/2010
    Modified: Jane Doe        Date Modified: 12/1/2010
    Added code to send the elapsed time to the Results Report.
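In a Python-based architecture, a header like the one in Figure 2 maps naturally onto a docstring, which also makes it available to the architecture's built-in help without opening the source. A hedged sketch (the function, its body and the header layout are illustrative assumptions, not a prescribed standard):

```python
def press_continue_button(page):
    """Press the Continue button on a web page.

    Author:   John Doe     Date: 11/1/2010
    Modified: Jane Doe     Date: 12/1/2010
              Added code to send the elapsed time to the Results Report.
    """
    # Implementation omitted; the point is the standardized, searchable header.
    pass

# The header travels with the code, so any user (or a generated help file)
# can read the purpose of the function programmatically:
print(press_continue_button.__doc__.splitlines()[0])
```

Keeping the header in the docstring rather than a plain comment means a help-system generator can harvest it automatically, the same way the HTML Help Workshop indexes external documents.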

Figure 3: Sample Keyword Usage Example

    Purpose: This keyword function is used to close a browser, and compares the closed browser to the title of the browser that is active.

    Usage (example):
    Keyword       | Page   | ObjType | Object | Value | Data Drive | Data File | Recovery | Comments
    Close_Browser | Page 1 |         |        |       |            |           |          |


Documentation from the code can also be external to the actual code. One way to do this is by creating documents such as a keyword glossary, or a dictionary of keyword definitions and usage. This documentation allows anyone to use these keywords and create tests without looking through the code to see what each keyword does or how to use it. Figure 3 shows an example of how to document keywords or functions to make them more understandable. By creating documentation like this, anyone can quickly use your architecture.
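A keyword glossary like the one in Figure 3 can even be generated from the code itself, so it never drifts out of date. A minimal sketch, assuming keywords are registered in a dictionary and documented with docstrings (the Close_Browser wording follows Figure 3; the registry and helper names are invented for illustration):

```python
def close_browser(page, **kwargs):
    """Close a browser, and compare the closed browser to the title of the active browser."""
    pass  # playback logic omitted

# Keyword registry: maps the name used in test tables to the implementation.
KEYWORDS = {"Close_Browser": close_browser}

def build_glossary(keywords):
    """Render a keyword dictionary (name: purpose) straight from the code,
    so the external documentation always matches the implementation."""
    lines = []
    for name, func in sorted(keywords.items()):
        purpose = (func.__doc__ or "UNDOCUMENTED").strip().splitlines()[0]
        lines.append(f"{name}: {purpose}")
    return "\n".join(lines)

print(build_glossary(KEYWORDS))
```

Generating the glossary as part of the build turns "keep the documentation current" from a chore into a side effect of writing the keywords themselves.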
Operability

Operability is defined as: "Attributes of software that bear on the users' effort for operation and operation control." (ISO 9126: 1991, A.2.3.3)

Can all of the users of your automation architecture interpret the results and outputs to operate and control test execution? The operability of your automation architecture can be expressed and measured in a variety of ways.

Customizability

Can your automation architecture easily be customized? This is very important, since most tests need some type of customization; it makes the architecture more agile and flexible.

Customizability = Number of functions successfully customized / Number of attempts to customize

The closer to 1, the easier your automation architecture is to customize.

Input Undo-ability

Can the user easily understand and correct input errors in your automation architecture?

Input undoability = Number of input errors which the user successfully corrected / Number of attempts to correct input errors

The closer to 1, the easier it is to correct an input error in your automation architecture.

Error Undo-ability

Can the user of your automation architecture easily correct errors reported?

Error undoability = Number of errors that were successfully corrected / Number of error conditions tested

The closer to 1, the easier your automation architecture recovers from an error situation.

Self-Explanatory Error Messages

Are the error messages that your automation architecture presents easy for anyone to interpret? Does the user understand what the error message wants them to do?

Self explanatory error message = Number of error conditions for which the user proposes the correct recovery action / Total number of error conditions tested

The closer to 1, the more helpful your automation architecture's error messages are.

Referring to the telephone, everyone knows the four distinct sounds a phone makes. When you hear a phone ring, you know someone is calling. You pick up the receiver and say "Hello." When making a call, the first thing you do is lift the receiver and listen for a dial tone indicating that the line is available. You enter a phone number, and either hear a busy signal or ringing indicating that the call is going through and you are waiting for someone to answer. These four tones are very clear and distinct prompts to which you react.

How can you build operability into your automation architecture? The best way to add operability to your automation architecture is by providing the users with clear, concise and accurate results reports and error messages. By giving the user more insight into how the test execution concluded, or what created an issue, the user can better determine what to do next.
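The clear, concise reporting described above can be standardized in a small helper that always records what was expected and what actually happened. A hedged Python sketch (the class and report format are illustrative assumptions, not part of any particular tool):

```python
class ResultsReport:
    """Record each test step as (step, status, expected, actual) so every
    failure message is self-explanatory."""

    def __init__(self, test_name):
        self.test_name = test_name
        self.records = []

    def check(self, step, expected, actual):
        """Compare expected vs. actual and log the outcome of one step."""
        status = "PASS" if expected == actual else "FAIL"
        self.records.append((step, status, expected, actual))
        return status

    def render(self):
        """Produce a plain-text report listing every step's outcome."""
        lines = [f"Test: {self.test_name}"]
        for step, status, expected, actual in self.records:
            lines.append(f"  {status}  {step}  expected={expected!r}  actual={actual!r}")
        return "\n".join(lines)

report = ResultsReport("Login smoke test")
report.check("Verify page title", "Welcome Home", "Welcome to Page 2")
print(report.render())
```

Because the expected and actual values are captured at the moment of comparison, the report tells the operator not just that a step failed, but exactly what the application did instead.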

The Internet is full of graphic software error messages that make no sense or give the user nothing useful to troubleshoot the issue. Figure 4 is an example of an error message that does not help the user at all. Taken to the other extreme, Figure 5 shows a message that gives the user details of what happened and how they can troubleshoot and correct the issue.

Figure 4: Bad Error Message
Figure 5: Useful Error Message

Providing the user with detailed, concise results reports is also a great way to promote operability in your automation architecture. Most automation tools do a poor job of indicating how an automated test has failed, and this information is essential for troubleshooting the issue. Figure 6 shows an example of a results report that gives the user valuable information, including the test date, test case name and test description. Further, each step in the test is reported in the following format: the test step; the status; the expected value; and the actual value. Notice that the third step failed. If you look at the actual results, you see that the page title was NOT "Welcome Home" but was "Welcome to Page 2". This piece of information assists the user in troubleshooting the issue. It is clear what page was displayed and what page was expected to be displayed. This type of report messaging should be written into the functions, with the expected and actual results written to the report, making troubleshooting easier. Additionally, you can add functionality to your architecture that will save a screen shot upon failure, so that the user can know exactly what was going on during the failure and what was displayed on the screen.

Figure 6: Helpful Test Results Report

In conclusion, we all successfully use telephones daily, but few of us have read the instructions. Most telephones are very usable and intuitive. Your automation architecture should be as well. You can make your test automation architecture more usable by planning for and building in learnability, understandability, and operability, so that it can be so easy a child can use it.

The Value of Continuous Quality Assurance

by Ed Schwartz

Continuous QA uses Continuous Integration to forward the goals of functional correctness; it is a part of Continuous Integration, but with a focus on automated functional testing (what is currently done manually for the most part) using UI playback tools like Selenium and FoneMonkey.

There is a process and Agile project-management tie-out for Continuous QA, having to do with capturing, validating, and maintaining the test suites, and in traceability of changes to stories or defects, which is specific to Continuous QA.

Are there really more apps being developed now than ever before? I will assert that more applications are under development right now than at any time before 2008, illustrating the mobile-app asteroid impact. Not only that, these applications are targeted at the most challenging and demanding audience there is: international consumers, especially young ones. Oh, and did I mention that each app really is a co-operating ecosystem, made up of bits of technology connected to one another via an avalanche of configurations? So to say there is a need for better approaches to assuring a great user experience is like saying it's time to look into getting something besides a staircase for those skyscrapers we're building. We're way past the time when manual testing approaches have the slightest hope of keeping up.

Enter Continuous QA, which isn't the whole enchilada, but it does start to use the same kind of tool-driven, cycle-oriented, end-to-end practices that have made Continuous Integration a basic building block of modern, agile systems delivery. Basically, Continuous QA focuses on using automated functional testing - recorded user interactions, with validated results - as a basic building block of the build-test-deploy cycle. Moving functional testing way up in the development cycle - close to where unit testing typically is intended to fit - has dramatic benefits in quality. It also frees critical team members to focus on understanding and testing new features and functionality, and provides a way to increase the surface area of the tests radically without significant cost increases.

These ideas aren't all new. What has made this a key time to revisit them, though, is a confluence of trends. The first is the explosion in the need to test, as just discussed. Another is the availability - and scriptability - of low-cost, on-demand, dynamic, virtual, cloud-based instances of entire system infrastructures. A third is the widespread adoption of software practices designed to manage continuous production of software by distributed teams, along with the co-evolution of software tools purpose-built for those practices. These trends combine to make now the time to make the "better" part of "better, faster, cheaper" a true reality.

"Zarro boogs" is a facetious meta-statement about the state of development. Bug trackers, used to monitor the state of problems with a software project, readily describe how many bugs are outstanding. The response "zarro boogs" (instead of "zero bugs") is intended as a buggy statement itself, implying that even when no bugs have been identified, software is still likely to contain bugs that haven't been identified yet. (http://en.wikipedia.org/wiki/Zarro_boogs)


Goals

The goal of Continuous QA is to provide direct business value through significantly increased quality of deliverables, as measured by end-user defect reports, without decreasing delivery speed or significantly increasing delivery costs. In a more vernacular rendering:
- Zarro boogs
- No speed bumps
- Business value = low cost

It's all about automated UI testing, and about running those tests as part of Continuous Integration (or Continuous Delivery, even better) of the project. The rationale is the same as it is for Continuous Integration generally, and it has to do with reduced cycle time - turbo-charged in this case by the Deadly Curve of the Cost of Delayed Defect Detection. The tool-driven approach also is critical because it allows the size of the test suite to increase throughout the project, while paying for this increase with the cheapest resource available - processing time. This principle drives a strong alliance of the Continuous QA team with the tool chain workflows in the Agile project structure (sometimes called build/deploy or release management). (The benefit of running a real end-user regression suite every build - or at least every day - is very tangible, even for folks accustomed to a TDD, unit-test focused environment. The confidence of developers in the harness allows for earlier, bolder refactoring.)
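The CI hookup itself can start very small: a script the build server runs after each successful build, executing the recorded UI suite and failing the build on any regression. A minimal Python sketch (the registry and test names are invented; real entries would play back recorded interactions with a tool such as Selenium or FoneMonkey):

```python
def run_suite(tests):
    """Run each recorded UI test (a zero-argument callable) and return the
    failure count, so a CI job can fail the build on any regression."""
    failures = 0
    for name, test in tests:
        try:
            test()
            print(f"PASS {name}")
        except AssertionError as exc:
            failures += 1
            print(f"FAIL {name}: {exc}")
    return failures

# Illustrative placeholders for recorded end-user interactions.
SUITE = [
    ("login_shows_home_screen", lambda: None),
    ("news_button_lists_headlines", lambda: None),
]

# In CI, a wrapper would do: sys.exit(1 if run_suite(SUITE) else 0)
# so a non-zero exit code marks the build as failed.
run_suite(SUITE)
```

The exit-code contract is the whole integration: any CI server (Hudson, Jenkins, a maven integration-test phase) already knows how to fail a build on a non-zero status.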

Principles

These are just the ones I have been mulling over as this topic has been percolating, so we can consider them provisional:

User/Business Point-of-View

These are QA tests of end-user functionality, typically interactions with the UI. They should read like good user stories or defect reports - which are their typical underlying sources. Ideally, they will not change if the implementation of the system is re-factored. (This is in sharp distinction to unit tests, which are meant to test a particular API.) This places part of Continuous QA close to the requirements/backlog/issues workflows in an Agile project structure - similar to where traditional, manual QA has its focus.

Repeatable/Reportable

Serious QA thrives on metrics, and it's a key part of Continuous QA to deliver a repeatable, reportable process which not only informs about a particular test run, but also can be used to look at improvements over time. By folding user experience testing into the Continuous world, Continuous QA brings defect detection and code changes closer together than ever, making root cause analysis for defects much more efficient. Equally close integration with release and


defect/issue management systems can give project teams the telemetry they need to keep targeting the most critical aspects of an evolving system.

Workflows of Continuous QA

Continuous QA sits at the nexus of the requirements and tool chain workflows. Interestingly, although it requires developer skills (to maintain the test suites), Continuous QA doesn't really need much coupling with developer workflows on the project. Because of its user/business point of view, what ultimately becomes test scripts begins as user stories or defect reports - actual narratives of some interaction with the system. Capturing these from the actual product owners or users, and understanding them in context, is the traditional domain of the BA; understanding those captures well enough to repeat them and judge the results is the domain of QA. These skills remain critical when Continuous QA is brought into the mix.

As the word implies, the Continuous part is where the tool chain workflow becomes critical. In an ideal Continuous QA project, every successful build and unit test would be followed by a provisioning (or clean) of a dedicated testing environment, execution of the tests with as much context capture as possible, and an archiving of the results, cross-indexed to the CI build that spawned it. Making this happen, and keeping it happening, is the tool chain engineer's domain. For example, a lot of this is anticipated in the integration-test phases in maven - a real tool chain engineer's tool if ever there was one.

Bridging these two workflows is the critical, ongoing creation and refining of the real artifacts of Continuous QA - test scripts. Lots of projects don't do functional test automation, and there really isn't much standard practice around how it's done - for which you may read "there isn't a good open-source tool in this area." But this lack of standard practice is also because it sits between existing workflows. Moreover, while practitioners in this role need development skills, the workflows they sit between are not developer-centric. The result is that the practice and community are either defined by their tools (e.g., the Rational/ClearCase world) or are cobbled together using bits of tool sets from related workflows (e.g., VersionOne, FlexUnit, Jenkins, and svn to track a defect/resolution). The gap around test-suite creation and maintenance remains one of the great challenges for realizing Continuous QA's goals.

"So to say there is a need for better approaches to assuring a great user experience is like saying it's time to look into getting something besides a staircase for those skyscrapers we're building."

Oh, yeah - the Agile part

In the real world, the entire methodology, tool chain, and practice don't all sync up perfectly in every project. Critical to actual use of Continuous QA is that it be usable by folks without demanding a broad ideological and tool-configuration commitment. How do we get there?

Recording

Every system is actually tested at some point by someone actually using it. Tools like FoneMonkey that allow recording of user interactions can bootstrap the process without forcing business users or manual testers to go through long documentation sessions.

Narrative Scripting

Use cases, user stories and defect reports tend to commonly share a form where an interaction is narrated: "I go to the login screen and enter my username/password and hit enter. When the Home screen appears, I click on the News button, I should see the list of headlines, and then I enter my query in the search box." Scripting frameworks which allow a simple, step-by-step syntax for describing interactions at this level allow for simple transcription in a form that all stakeholders can understand and work with.

Tool chain standards

Continuous QA tools need tie-ins to the existing tool chain platforms - like ant, maven, Hudson, Jenkins, xUnit, puppet, etc. These tools provide the junction points that allow the multiple artifacts of the project to rendezvous, so simplifying the coupling and configuration here is critical.

It starts with a name

The reality is that Continuous QA is already delivering value to folks who have put the pieces together. At Gorilla Logic, we believe in the value pretty strongly. It was our own experience with automated UI testing that drove us to create FoneMonkey and FlexMonkey, our open-source tools for UI automation. In fact, we believe in it so strongly that in addition to creating the tools, we are contributing this new name, a bit of jargon for all to use: Continuous QA. It's through the availability and adoption of a name like this - and tools like the ones we make, and the rest of the Continuous Delivery tool chain - that Continuous QA can become a standard practice for development, which will help everyone who uses computers, and also those who love them.
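The step-by-step narrative syntax described for narrative scripting might be sketched like this - a hypothetical illustration in Python, not the API of FoneMonkey or any other real framework:

```python
# A minimal sketch of "narrative scripting": the test reads like the
# narrated interaction itself. Step names and flow are invented for
# illustration; this is not any particular framework's API.

def narrate(*steps):
    """Run a narrated interaction one step at a time, returning the log."""
    log = []
    for description in steps:
        # In a real framework each step would also drive the application
        # under test; here we only record and print the narration.
        log.append(description)
        print(description)
    return log

news_search = narrate(
    "I go to the login screen and enter my username/password and hit enter",
    "When the Home screen appears, I click on the News button",
    "I should see the list of headlines",
    "I enter my query in the search box",
)
```

The point is the shape: each step is a sentence a product owner could have written, so transcription from a user story or defect report is nearly mechanical.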

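The ideal Continuous QA loop - provision a clean environment after each successful build, execute the tests with context capture, archive results cross-indexed to the CI build - can be outlined roughly as follows; every function name and stand-in result below is invented for illustration, where a real pipeline would call the actual tool chain (CI server, provisioner, test runner):

```python
# Sketch of one Continuous QA cycle, as described in the article.
# All bodies are stand-ins; nothing here is a real tool's API.

archive = {}  # results cross-indexed by CI build id

def provision_clean_environment(build_id):
    """Stand-in for provisioning (or cleaning) a dedicated test environment."""
    return {"build": build_id, "state": "clean"}

def run_functional_tests(env):
    """Stand-in test run; a real runner would capture screenshots, logs, etc."""
    return {"passed": 41, "failed": 1, "context": ["app.log", "screen-07.png"]}

def continuous_qa_cycle(build_id):
    env = provision_clean_environment(build_id)
    results = run_functional_tests(env)
    archive[build_id] = results  # cross-indexed to the build that spawned it
    return results

continuous_qa_cycle("build-184")
```

Keeping this cycle running on every green build is what turns functional testing from an event into a continuous signal.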



The diverse case studies in this book may be used for making contemporary decisions regarding engagement in software test automation.


TEST Automation Experiences

"This offering by Fewster and Graham is a highly significant bridge between test automation theory and reality. Test automation framework design and implementation is an inexact science begging for a reusable set of standards that can only be derived from a growing body of precedence; this book helps to establish such precedence. Much like predecessor court cases are cited to support subsequent legal decisions in a judicial system, the diverse case studies in this book may be used for making contemporary decisions regarding engagement in, support of, and educating others on software test automation framework design and implementation."

- Dion Johnson, quote from Experiences of Test Automation

This is the quote from ATI's own Dion Johnson that you will find near the beginning of Experiences of Test Automation: Case Studies of Software Test Automation, the newest book by Mark Fewster and frequent ATI contributor Dorothy Graham. After being tapped as an early reviewer of the book, Dion Johnson gladly provided this quote for a book that was well on its way to joining Graham and Fewster's bestselling, decade-old offering, Software Test Automation, as a staple in the discipline of test automation. This book is significant for the same reason that the community has deemed the Automated Testing Institute to be significant: rather than lecturing to the community about what's important about and for test automation, it is a mouthpiece of the community that shares useful information that is organic and reality based. Experiences of Test Automation is a collection of diverse case studies from multiple test automation practitioners that represent multiple systems, organizations, continents and experiences. It is crowdsourcing's answer to book writing, and

the discipline will surely benefit from what it has to offer. It has even helped to inspire ATI's new Test Automation Wiki site, which invites the testing community, in a much broader way, to add their own test automation experiences and case studies to broaden the dialog about what works and doesn't work in test automation. Read the What's Hot article for more information on this. In deciding to do this article, which highlights the importance of test automation case studies, AST was originally going to have a single author. In the spirit of Experiences of Test Automation, however, we decided to let multiple voices from the community be heard on the subject. Therefore, we've reached out to the contributing authors of the Experiences of Test Automation book to explain the importance of test automation case studies in their own words. In addition, this article provides some insight into the various case studies and how some of the authors were selected to contribute to the book.


Dorothy Graham
Coauthor of Experiences of Test Automation

A story is an account of events that have happened to people, told in an entertaining way (according to the dictionary). Real life can be quite different from theory in all aspects of life - test automation is no exception. In producing this book of case studies, we wanted to pull together representatives of a wide variety of industries, applications and environments to see what is really happening in test automation today. Knowing what actually works in practice gives the best guidance to other people in a similar situation, not only through what has gone well but also in the way that problems and difficulties were handled, and in understanding the reasons for failure of automation efforts. We selected the stories in our book from submissions sent to us in response to personal contact, an appeal on my web site, and people we met at conferences. The book took 2.5 years to produce, and Mark and I spent around 1,000 hours between us, not counting the time spent by the book's contributors. There are decades of experience and knowledge in these pages! We have been very impressed by the ingenuity, persistence and solution-seeking of the book's contributors, as well as the pervasiveness of test automation. As the saying goes, "Learn from the mistakes of others; life is too short to make them all yourself." The expertise encapsulated in this book should save you months or years of time, and help you take your own optimum route to success in test automation.

Jonathan Kohl
Book Chapter: 19
Chapter Title: There's More to Automation than Regression Testing: Thinking Outside the Box
Chapter Quote: Jonathan Kohl takes us through a set of short stories, each illustrating a variation on the theme of automating things other than what people usually think of, commonly known as "thinking outside the box." The stories together demonstrate the value of applying ingenuity and creativity to solve problems by automating small things, or things other than test execution.

I was approached by Dorothy (Dot) Graham to contribute to a new book that she and Mark Fewster were working on. Dot explained that they were talking to people from their 1999 book, Software Test Automation, to discover the lessons people had learned since the book was published. She further explained that these discussions had grown into an idea for a collection of case studies, and she wanted me to talk about my non-traditional experience in test automation. Traditionally, when people think of test automation they think about automation of GUI regression tests, but this didn't describe much of my experience. I often work for short times on various projects, so my automation experience often falls under the theme of "automation assistance" rather than regression test automation. I use the term automation assistance to describe automation of test setup, automation to aid manual exploratory testing, use of non-GUI interfaces to speed up test execution, use of simulators to create real-world conditions, etc. These tasks aid primarily in increasing tester productivity with tools that require relatively low maintenance. In discussing this with Dot, she expressed a particular interest in a story that I relayed about saving a tester roughly 50% of her time through 50 lines of Ruby code. This effort did not involve the automation of any tests, but rather the automation of tasks that could be observed during testing, allowing the tester to focus on making real-time, manual assessments regarding the existence of problems in the system under test. Dot loved it and wanted it in the book. One of the factors in my decision to participate in the book was Dot's passion, along with her and Mark's insistence on honest and experience-based content, both good and bad. It was clear that this book wasn't going to be yet another test automation cheerleading publication.

Case studies are incredibly important, particularly in an industry where processes, practices, tools and approaches are touted as superior with little to no empirical evidence. Without scientific studies that actually prove the claims made about popular practices and approaches, the next best thing is case studies that talk about the good and the bad of implementing these tools and approaches. I applaud the authors of Experiences of Test Automation, who were as quick to point out automation disasters as they were the successes.
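Kohl's "automation assistance" pattern - automating the chores around testing rather than the tests themselves - might look something like the sketch below. It is written in Python rather than the Ruby his story involved, and the log lines and patterns are invented for the example:

```python
# Automation assistance, not test automation: flag suspicious log lines
# so the tester reviews a short list instead of reading everything,
# while still making the pass/fail judgment herself.

import re

# Invented patterns; a real helper would use whatever the team's logs contain.
SUSPICIOUS = re.compile(r"ERROR|Exception|timed out", re.IGNORECASE)

def flag_suspicious(log_lines):
    """Return (line_number, line) pairs a tester should look at."""
    return [(n, line) for n, line in enumerate(log_lines, start=1)
            if SUSPICIOUS.search(line)]

session_log = [
    "INFO  user logged in",
    "ERROR payment service timed out",
    "INFO  retry scheduled",
]
hits = flag_suspicious(session_log)
print(hits)  # the tester reviews these instead of the whole log
```

A tool like this requires almost no maintenance, which is exactly the trade-off Kohl describes.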

Jon Hagar
Book Chapter: 29
Chapter Title: Test Automation Anecdotes
Chapter Quote: An anecdote is a short account of an incident (especially a biographical one). Numerous people told us short stories (anecdotes) of their experiences, and because they merit retelling but don't constitute full chapters, we collected them in this chapter.

Understanding history helps you repeat the good and learn from the mistakes, and case studies provide such history. But many cases in other books are in the information technology domain, while a large amount of software is in the embedded space. So when I was approached about test automation, I wanted to represent this segment of test automation, which is often forgotten but in my experience has common themes testers should know.


Bo Roop
Book Chapter: 4
Chapter Title: The Automator Becomes the Automated
Chapter Quote: Bo Roop takes us on a guided tour of attempting to automate the testing of a test automation tool. It's one of the first questions to ask a tool vendor: "Do you test the tool using the tool?" But it isn't as straightforward as you might think!

I first met Dot after a presentation she delivered at the Great Lakes Software Excellence Conference (GLSEC) in Grand Rapids, MI. The following year at the GLSEC, I had the opportunity to speak with her during the lunch break. She asked about my testing background, and upon sharing the story of how I began my testing career, she mentioned that my experience and story would be a welcome addition to her (and Mark Fewster's) book. I was happy to contribute my story in the form of a case study for the book, largely due to the excitement that she expressed when I was sharing it. Her attitude about testing, sharing and collaboration is contagious! I have to admit that my contribution to Experiences of Test Automation was my first foray into professional writing, and I had a few reservations about how well my chapter would fit in with the other, bigger-name authors already on board to contribute to the book. Some of the others were already well-known authors, and I even owned many of the books written by them! But I think that's part of the benefit of this book. Each of us has our story (or multiple stories) that we've learned, experienced and lived. This isn't just a collection by thinkers; it's a collection by doers. The main benefit to me in reading the book from cover to cover is the wealth of knowledge that was accumulated by the 40+ authors that are spread out over close to 30 chapters. There are many parallels to be found in the stories, and many other one-off lessons that can be applied to the responsibilities I'm tasked with accomplishing in my job.

Henri van de Scheur
Book Chapter: 2
Chapter Title: The Ultimate Database Automation
Chapter Quote: Henri van de Scheur tells a story that spans half a dozen years, showing how they developed a tool for testing databases in multiple environments.

If I remember correctly, Dorothy sent out a mail in August 2009 to her contacts about planning a new edition of the book she and Mark Fewster had written in 1999, called Software Test Automation. I thought I had a good story to tell, because I used their book a lot in my effort to improve testing in my previous company. In addition, I knew it would be a bit different from other possible stories, since this focused on testing databases, so I thought it important to share my experiences with others. I hoped to help people in the way that Software Test Automation helped me. I've already read Experiences of Test Automation and immediately found some stories and experiences that I was able to successfully implement in my daily work.

Nick Flynn
Book Chapter: 25
Chapter Title: System-of-Systems Test Automation at NATS
Chapter Quote: Mike Baxter, Nick Flynn, Christopher Wills and Michael Smith describe the automation of testing for NATS (formerly National Air Traffic Services), which amongst other responsibilities controls the airspace over the North Atlantic Ocean. Testing a safety-critical system where lives are at stake requires a careful approach, and their requirements included special technical factors, human factors and commercial considerations.

Dot was given our contact details by a test tool vendor. She already possessed a story describing how Google had used the product we'd also used, and ideally wanted another example so as to give prospective readers a balanced presentation of the pros and cons in different environments. Testing ATC systems presents unique challenges, and because we're operating in a niche market segment, many of the tools we tend to rely upon result from bespoke development. The SAATS system-of-systems test automation project was one of those relatively rare occasions when a generic commercial tool aligned well with our requirements straight out of the box. Because of the nature of the business we're in, we have to get it right the first time. That makes it especially important to seek out and apply best practices wherever we happen to find them. Being able to conveniently refer to a well-balanced collection of case studies gives you a great yardstick with which to figure out if you're behind the maturity curve or about to embark on what might end up becoming a poor strategy that's costly to correct.



The KIT is Coming
October 15-17, 2012
http://www.testkitconference.com

Simon Mills
Book Chapter: 10
Chapter Title: Ten Years On, And Still Going
Chapter Quote: Simon Mills brings his case study from our previous book, Software Test Automation (1999, Addison-Wesley), up to date. Still automating ten years on is a significant achievement! The original story is included in full and is full of excellent lessons and good ideas.

Very nearly 20 years ago, I embarked upon Ingenuity, a plan to have a testing practice which set out to unashamedly merge quality-led testing processes with high-dependability computer-aided software testing. Relatively early in the journey, some 5 years in, I was proud to be invited to contribute to the original Software Test Automation by Dot and Mark. That book has, quite rightly, earned itself the reputation of being a standard work on the subject. The years have moved on, as has the wider acceptance of test automation, and it became the natural thing for Dot and Mark to revisit the subject. Again, I was delighted to be asked to contribute to the latest work, in part because my passion and dream for Ingenuity has continued to thrive as a business, with many points upon which to reflect. There is no doubt that I can afford a certain sense of validation by seeing how many of the early decisions continue to pay off - as much relief as pride! For me, though, the really interesting part is the wealth of additional experiences that have been collected, from which we all have so much to gain. To be a member of such a fine group of contributors is an immense privilege, and I have no doubt that, once again, Dot and Mark have been the catalyst of yet another standard work. Every minute you spend reading this book will not be a minute wasted!

Christian Ekiza Lujua
Book Chapter: 20
Chapter Title: Software for Medical Devices and Our Need for Good Software Test Automation
Chapter Quote: Even if you are not working with medical devices, this chapter, written by Albert Farré Benet, Christian Ekiza Lujua, Helena Soldevila Grau, Manel Moreno Jiménez, Fernando Monferrer Pérez, and Celestina Bianco, tells of their experiences in automating testing, with many interesting lessons for anyone in automation.

When we learned that Dorothy (Graham) was looking for contributions for an upcoming book on test automation experiences, we sat down and discussed the possibility of participating in it. We felt that our work in such a heavily regulated field (medical devices) could bring an interesting perspective to the book and bring to light the subject of regulation-related requirements in test automation. We had a few projects to draw experiences from, and upon examination of each we found that while all of them had similarities, each was absolutely unique in its development and outcome. We eventually went for a full disclosure and presented the story of automation efforts in 4 independent projects. All of these efforts started with the strong belief that test automation done right - the right approach at the right moment - can bring quality in a project to the next level. Still, different paths and different obstacles lead to different outcomes. The results were different for each project: from the one where cancelling the automation efforts was the right choice to make, to the one where automation grew strong and stable and is still running. Our goal was to detail not only the successes and what worked for us, but more importantly what did not work, and to try to assess why it did not work. As they say, to err is human, to learn is wise. It turned out that this message was being shared throughout all the chapters. Authors have approached this collection of case studies with a willingness to share both good and bad experiences, and with the hope that better knowledge and insight into the mechanisms that make test automation move forward would rise from this book. I do think that we have all succeeded individually and collectively (a big thank you to Dorothy and Mark here), and that the experience gathered in this book is priceless to anyone involved in test automation.


Stefan Mohacsi
Book Chapter: 9
Chapter Title: Model-Based Test Case Generation in ESA Projects
Chapter Quote: Stefan Mohacsi and Armin Beer describe their experience in using Model-Based Testing (MBT) for the European Space Agency (ESA). This took significant effort to set up, but eventually was able to generate automated tests very quickly when the application changed.

When I heard from my friend and mentor Armin Beer that Dorothy Graham was collecting show cases for a new book, I was immediately enthusiastic about contributing. I felt that our experiences in the somewhat obscure field of model-based testing could be useful to other people. Also, I hoped that the space domain in which our projects took place would exert the same fascination for the readers as it does for me. Looking at the completed book, I am very satisfied with the result. This wealth of practical experience cannot be found in any textbook. The various show cases explain not only how to do things but also how not to do them, which can be equally important. I am convinced that even the most experienced test specialist can learn something new from this book.

Celestina Bianco
Book Chapter: 20
Chapter Title: Software for Medical Devices and Our Need for Good Software Test Automation
Chapter Quote: Even if you are not working with medical devices, this chapter, written by Albert Farré Benet, Christian Ekiza Lujua, Helena Soldevila Grau, Manel Moreno Jiménez, Fernando Monferrer Pérez, and Celestina Bianco, tells of their experiences in automating testing, with many interesting lessons for anyone in automation.

At Systelab we produce and verify medical device software. The test team provides the testing for internal projects and outsourced testing services. Test engineers, holding university degrees, are responsible for any project testing task, including STA. We do not have a specialized, fully dedicated STA team. After collaborating on papers about automatic testing at SQS 2008 and CISTI 2009, and publishing a work in Testing Experience magazine, we were contacted by Dorothy Graham with regard to participation in her new book about STA. We accepted the request to share our experiences and the specificity of applying STA in healthcare software. We have chosen examples with added value, both stories of success and of partial failure, from projects in embedded and UI software. Whilst STA is valued by our customers, achieving the balance with the formalisation needed is sometimes hard work.
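As a generic illustration of the model-based testing idea Mohacsi describes (not his or Beer's actual approach or tooling - the model, states, and transitions below are invented), test sequences can be generated mechanically from a simple state model:

```python
# Toy model-based test generation: a state machine model of an application,
# from which test paths are derived automatically. When the application
# changes, only the model changes; the test cases are regenerated.

MODEL = {
    "Login":  [("valid credentials", "Home")],
    "Home":   [("open news", "News"), ("log out", "Login")],
    "News":   [("go back", "Home")],
}

def generate_paths(state, depth):
    """Enumerate all transition sequences of the given length from a state."""
    if depth == 0:
        return [[]]
    paths = []
    for action, target in MODEL.get(state, []):
        for rest in generate_paths(target, depth - 1):
            paths.append([(state, action, target)] + rest)
    return paths

tests = generate_paths("Login", 3)
for t in tests:
    print(" -> ".join(f"{s} [{a}]" for s, a, _ in t))
```

Real MBT tools work from far richer models (data, guards, expected results), but the regeneration-on-change benefit is the same.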

Jonathon Wright
Book Chapter: 29
Chapter Title: Automation Anecdotes
Chapter Quote: Numerous people told us short stories (anecdotes) of their experiences, and because they merit retelling but don't constitute full chapters, we collected them in this chapter.

Ever wanted to know how to avoid the 60% failure rate in test automation projects? Ever wished there was a book about the valuable lessons others have learned and the mistakes they made along the way? When I started my career, I was lucky enough to read Dorothy and Mark's first book, Software Test Automation. This guided me through the minefield that is test automation and helped me develop an approach to finding test automation solutions that can be executed from day one. Thirteen years later, I've had the privilege of contributing towards their long-awaited sequel, Experiences of Test Automation: Case Studies of Software Test Automation, which builds on the solid foundations in their first book and captures the proven approaches that have evolved since then.

Ken Johnston
Book Chapter: 3
Chapter Title: Moving to the Cloud: The Evolution of TiP, Continuous Regression Testing in Production
Chapter Quote: Ken Johnston and Felix Deschamps from Microsoft describe how they moved their automated testing from being product-based to service-based, with the implementation of the automation in the cloud.

The call for a case study on testing services came to me in the spring of 2010. Instantly I thought of my friend Felix in Exchange and the work he'd been doing to take the existing Exchange test automation and move it to the cloud. I'd written book chapters and white papers on broad swaths of services testing techniques, but this was a chance to dive deep into a real-world example and show how those techniques can be successfully applied at scale - and, in this case, while also shifting to the cloud.


Seretta Gamba
Book Chapter: 21
Chapter Title: Automation Through the Back Door (By Supporting Manual Testing)
Chapter Quote: Seretta Gamba tells of the difficulties they had in trying to progress automation, even though it had a good technical foundation.

I have just begun to read the new book from Dorothy Graham and Mark Fewster, Experiences of Test Automation. It is particularly interesting for me because I provided one of the case studies, but I'm also quite curious about the other ones. I must say that the first chapters are very interesting and informative; I'm quite proud to be in such good company. How did it start for me? I must go back some ten years, to when I was charged with introducing test automation in my company. It was a completely new domain for me, so the first thing I did was read all the books available on the subject. And from the first test automation book by Dorothy and Mark (Software Test Automation, 1999) I got ideas for how to implement our test automation framework. That original book was structured with many case studies, similar to this new book, so I was able to examine which best matched our requirements and start from there. No need to reinvent the wheel! Fast forward to 2009. I had just given a talk at EuroSTAR about an enhancement my team made to our test automation framework that served the purpose of better supporting manual testing. That year the program chair was Dorothy, and she told me that she had recommended the selection of my talk for the conference. And she did more. She asked me if I would be willing to include this experience in a new book about test automation. What a question! Of course I was willing, and I soon sent her the first draft of what would become chapter 21. It has been an interesting experience, and if you are interested in learning more about all the case studies, just go and get the new book. It's definitely worth it!

Ursula Friede
Book Chapter: 23
Chapter Title: Automated Testing in an Insurance Company: Feeling Our Way
Chapter Quote: Ursula Friede describes the experience of feeling their way to better automation. They didn't plan the steps that they ended up taking, but it would have been a good plan if they had. They began by just experimenting with a tool but soon realized the limitations, so they decided to address the most pressing problems by changing their automation.

Whilst I was attending a conference in Edinburgh, I happened to meet Dorothy Graham. At the conference she stated that she and Mark Fewster were doing a book about test automation and wondered whether people would like to contribute a chapter about their own experiences with test automation. I volunteered to write a chapter regarding test automation that I had done for a company in Germany previously. I felt that the work I had done for this company would make a relevant contribution to the book that Dorothy was proposing. During the process of developing this test automation pack, we solved many problems and learned from our mistakes, which I hope helps people who read this book. Collections of case studies are important because they present many different ideas and solutions to test automation in a rapidly expanding and developing IT world.

John Fodeh
Book Chapter: 24
Chapter Title: Adventures with Test Monkeys
Chapter Quote: John Fodeh tells of his experiences with automated random test execution, also known as "monkey testing," for medical devices used for diagnosis and in therapeutic procedures.

Test monkeys explore the system under test in a new way each time the test is run, often finding bugs that otherwise would not have been detected. In my chapter I describe how test monkeys were applied to enhance existing testing capabilities, detect defects and evaluate the reliability of the system under test. I believe Experiences of Test Automation is a valuable book because it presents real-world experiences spanning multiple industries and technologies. While there are several books that describe the theory, this book focuses on pragmatic solutions and approaches that can help the reader bridge the gap between theory and practice.
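The test-monkey idea Fodeh describes - random exploration made reproducible by seeding - can be sketched roughly as follows; the actions and the fake system under test below are invented for illustration:

```python
# A minimal "test monkey" sketch: random exploration of a system under
# test, seeded so that any failure can be replayed exactly.

import random

ACTIONS = ["tap", "swipe", "type", "rotate", "back"]

def fake_sut(action, step):
    """Stand-in system under test; a real monkey drives a live application."""
    if action == "rotate" and step % 7 == 0:
        raise RuntimeError("layout crash on rotate")

def monkey_run(seed, steps=100):
    rng = random.Random(seed)  # seeded: the same run can be reproduced
    for step in range(1, steps + 1):
        action = rng.choice(ACTIONS)
        try:
            fake_sut(action, step)
        except RuntimeError as err:
            # Report seed and step so the exact failing walk can be replayed.
            return {"seed": seed, "failed_at": step, "action": action,
                    "error": str(err)}
    return {"seed": seed, "failed_at": None}

print(monkey_run(seed=42))
```

Recording the seed is the crucial detail: without it, a monkey-found defect cannot be reproduced for diagnosis.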


automation experiences larS WahlBerG


Book Chapter: 18 Chapter Title: Automated Tests for Market Place Systems: Ten Years and Three Frameworks Chapter Quote: Lars Wahlberg gives an insight into his ten years of test automation for market place systems, including the development of three automation frameworks. I read their first book when I started to work with market place systems in 1999 and was very happy when I got an email from Dorothy and Mark in August 2009 with an offer to participate as a contributor in the book Experiences of Test Automation. The frameworks that I had been working with were very similar in the overall design. Sometimes we tried to do things smarter, but it very often ended up being too complicated and ultimately abandoned. As we say in aerospace KISS (Keep It Simple St). I felt that this story would be good to share with others. It feels great to finally be able to hold the Experiences of Test Automation book in my hand. All the case studies are unique, but there are many similarities and common lessons to be learned, all based on real experience, and many hours of hard work.

roSS timmerman
Book Chapter: 26 Chapter Title: Automating Automotive Electronics Testing Chapter Quote: Ross Timmerman and Joseph Stewart tell about the in-house testing tools that they developed over several years to test automotive electronic systems at Johnson Controls.

I met Dorothy Graham in November, 2009, after she delivered an excellent opening keynote presentation at the Great Lakes Software Excellence Conference (GLSEC) in Grand Rapids, Michigan. Later in the day, as I was rehearsing my presentation out in a lounge area, in walked up to Dorothy. I immediately took the opportunity to say hello and share how much I enjoyed her keynote. After I told her that I was also a Calvin College graduate, I learned that she was writing a case study for her book Experiences of Test Automation. I told her about the work my colleagues had done to automate the software testing of automotive industry embedded electronics modules, and explained the benefits that we achieved through the software test automation. Because my experience came from the embedded software development area, Dorothy asked me to contribute a case study. I said yes, and that is how Joe Stewart and I became authors. In todays quickly changing world of technology, starting something from scratch is not fast enough or efficient enough to keep the investment pipeline open. Software test automation, done correctly, offers significant improvements in both efficiency and cost. The case studies in Dorothys book, Experiences of Test Automation, provide the knowledge needed to get it right the first time. No matter what software development industry you are in, there are multiple gems for you in this book.

Lisa Crispin
Book Chapter: 1
Chapter Title: An Agile Team's Test Automation Journey: The First Year
Chapter Quote: Lisa Crispin describes what happened when an agile team decided they had to automate their testing. Given Lisa's expertise in agile, you will not be surprised to see that this team really was agile in practice.

Dot Graham told me she was doing a new book on test automation and asked if I'd be interested in contributing a case study from my own team. I really like Dot's earlier book, and I love books with real-world stories, so I agreed. I've been automating tests for a long time, but I wanted to tell my current team's story, because I feel most teams could benefit from our Whole Team approach to automating tests. I'm so excited about Experiences of Test Automation, because the contributors come from so many backgrounds, and there are stories about a huge variety of domains and types of testing. Curious about testing in the cloud? There's a real-world example of that. Need to automate tests for embedded software? Read how someone else accomplished that. What's really cool is that these aren't all success stories; some are more along the lines of lessons learned. Sharing these experiences means we don't have to reinvent the wheel; each of us can move forward and forge new frontiers in test automation.

Wait! There's more! Like case studies? Now you have an opportunity to read more test automation case studies from practitioners involved in the discipline. You also have the opportunity to contribute case studies for others in the community to learn from. The Automation Case Studies wiki is a new feature offered by the Automated Testing Institute to provide practitioners with useful resources for accomplishing their goals and contributing to the community. Find this wiki at http://automatedtestinginstitute.com/casestudies

March 2012

www.automatedtestinginstitute.com

Automated Software Testing Magazine

39


Latest from the Blogosphere

I Blog to U

Automation blogs are one of the greatest sources of up-to-date test automation information, so the Automated Testing Institute has decided to keep you up-to-date with some of the latest blog posts from around the web. Read below for some interesting posts, and keep an eye out, because you never know when your post will be spotlighted.
Blog Name: Jonathan Kohl's Blog
Post Date: February 10, 2012
Post Title: Experiences of Test Automation
Author: Jonathan Kohl

For years, it seems that test automation writing is dominated by cheerleading, tool flogging, hype, and hyperbole. (There are some exceptions, but I still run into exaggerations, and claims that automation is an unquestioned good, far too often.) The division between the promoters of the practice (i.e., those who make a lot of money from it), the decision makers they convince, and the technical practitioners is often deep.

Read More at: http://www.kohl.ca/blog/archives/000235.html

Blog Name: Eusebiu Blindu Blog
Post Date: February 19, 2012
Post Title: ...Test Automation, Automated Checking
Author: Eusebiu Blindu

Testing is the activity performed by the person, not by the tool; it includes decisions and the use of skills. If I write scripts, that's not testing, that's scripting. Testing is when I make the decision to create a tool that will help me. Automation can be the process of using scripts to help with testing. Populating a database with data every week or day for me to test, that is automation. A script to install the application every day on my computer, that's automation.

Read More at: http://www.testalways.com/2012/02/19/testing-automation-test-automation-automated-checking/

Blog Name: MyTechFinds
Post Date: November 28, 2011
Post Title: Choosing the right UI test automation tool
Author: Ajay Majgaonkar

I can fairly say that people looking for this topic have quite a bit of (bitter) experience with UI automation. Let's not talk about the good and bad of UI automation and keep the focus on the tool selection process. Yes, it sounds pretty heavy, but it is indeed a process in itself before you decide to go with a particular UI automation tool for your project. Of course, only if you want to succeed!

Read More at: http://mytechfinds.com/articles/software-testing/6-test-automation/35-choosing-the-right-ui-test-automation-tool

Blog Name: Xceptance Blog
Post Date: February 25, 2012
Post Title: Handle authentication during WebDriver testing
Author: Rene

Sometimes authentication is necessary before a test case can be executed. While HtmlUnit-based tests can easily enter and confirm authentication requests, most browser-based tests cannot work around the dialog. This is a browser security measure to prevent automated data capture and/or data entry. WebDriver for Firefox delivers a solution for that problem, but IE and Chrome rely on a manual interaction with the browser before the test automation can run.

Read More at: http://blog.xceptance.com/2012/02/25/handle-authentication-during-webdriver-testing/
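The Xceptance excerpt above describes HTTP Basic authentication dialogs blocking browser-based WebDriver tests. One common workaround of that era, sketched here rather than taken from the post itself, is to embed the credentials directly in the URL, a form that Firefox and HtmlUnit accepted for Basic auth. The helper below builds such a URL using only Python's standard library; the `driver.get(...)` shown in the comment is a hypothetical WebDriver call for illustration, not code from the article.

```python
from urllib.parse import urlsplit, urlunsplit, quote

def with_basic_auth(url: str, user: str, password: str) -> str:
    """Embed HTTP Basic credentials in a URL: http://user:pass@host/path"""
    parts = urlsplit(url)
    # Percent-encode the credentials so characters like '@' or ':'
    # inside them cannot be confused with URL delimiters.
    netloc = f"{quote(user, safe='')}:{quote(password, safe='')}@{parts.netloc}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

# Hypothetical WebDriver usage (names assumed, not from the article):
#   driver.get(with_basic_auth("http://example.com/app", "tester", "s3cret"))
print(with_basic_auth("http://example.com/app", "tester", "s3cret"))
# -> http://tester:s3cret@example.com/app
```

Note that this trick only applies to Basic auth and to browsers that accept userinfo in URLs; it does not help with NTLM or form-based logins, where a proxy or a pre-authenticated session is usually needed instead.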


Go On a ReTweet

Paying a Visit to the Microblogs

Microblogging is a form of communication based on the concept of blogging (also known as web logging) that allows subscribers of a microblogging service to broadcast brief messages to other subscribers of the service. The main difference between microblogging and blogging is that microblog posts are much shorter, with most services restricting messages to about 140 to 200 characters. Popularized by Twitter, microblogging is also offered by numerous other services, including Plurk, Jaiku, Pownce, and Tumblr, and the list goes on and on. Microblogging is a powerful tool for relaying an assortment of information, a power that has definitely not been lost on the test automation community. Let's retreat into the world of microblogs for a moment and see how automators are using their 140 characters.

Twitter Name: automatedtest
Post Date/Time: Mar 5
Topic: TestKIT Conference

We've extended the deadline for the TestKIT 2012 conference proposals. So you still have time: http://www.testkitconference.com

Twitter Name: TestingMentor
Post Date/Time: Feb 16
Topic: Code Coverage

If you're not analyzing untested code, why would you spend cycles measuring code coverage? Fuzzy blanket, same comfort as pointless metric.

Twitter Name: automatedtest
Post Date/Time: Feb 22
Topic: ATI Automation Honors

Join the 4th Annual ATI Automation Honors Committee. Learn more at http://www.newsletter.automatedtestinginstitute.com

Twitter Name: AntJHowell
Post Date/Time: Feb 16
Topic: Automate Early

Always think "How can we automate this?" when considering new projects. The earlier, the better.

Twitter Name: AntonyMarcano
Post Date/Time: Feb 16
Topic: Testing Decisions

Don't understand why so many developers will automate aspects of invoice processing, stock trading, shopping, etc. but not aspects of testing.

Twitter Name: lanettecream
Post Date/Time: Jan 25
Topic: Selenium

I thought #Selenium was for testers, & in Seattle we believe it is. There in SJ it is SO dev centric, but it's a TESTING tool. Totally odd.


Hot Topics in Automation

Case Studies Are Hot!

This entire issue of the magazine has been focused on the importance of test automation experiences such as those found in case studies. If you're like many others in the community, you're probably excited by this focus, because test automation case studies are increasingly becoming an extremely hot topic, and rightly so. From case studies you can gain inspiration, you can gain support for building a business case for automation in your own organization, and you can update your personal TestKIT with a model from which to begin building your own test automation implementation. The community is also excited about something else, something new: something that lets automators benefit from the personal experiences of others on a much larger scale. That new thing is ATI's new Test Automation Case Studies Wiki site. This wiki

Why ATI's Case Studies Wiki Is Important For Test Automation

provides a place for test automators from different projects, working on different technologies, using different tools, and even residing in different countries to come to read and share information about test automation implementation successes and lessons learned. The site provides guidance for entering information in a standard yet flexible way that allows cases to be searched by a number of key parameters, such as enterprise type, type of automation, tools used, primary tool license types, application under test (AUT) types, locations, development lifecycle, and more. This site gives members of the test automation community the potential to learn from automation case studies on a scale that can't be matched by any other single source, while also allowing people to share their own cases and thereby become active participants in the test automation discussion.

Updates and Corrections

It was brought to ATI's attention that the September 2011 AST article entitled Make Your Own Data-Driven Architecture Using Selenium & TestNG! by Jailton Alkimin Louzada contained some uncredited references from a blog by Felipe Knorr Kuhn in the original publication of that issue of the magazine. The back issue of the magazine has since been updated to give Kuhn the appropriate credit.



The KIT Is Coming

If you thought ATI's 2011 event was good, wait until you see 2012.
http://www.testkitconference.com