GUI Path Oriented Test Generation Algorithms

Izzat Alsmadi, Kenneth Magel
Department of Computer Science, North Dakota State University
{izzat.alsmadi, Kenneth.magel}@ndsu.edu

ABSTRACT

Testing software manually is a labor-intensive process. Efficient automated testing can significantly reduce the overall cost of software development and maintenance. Graphical User Interface (GUI) code has characteristics that distinguish it from the rest of the project code. Generating test cases from the GUI requires different algorithms from those usually applied in test case generation. We developed several automated GUI test generation algorithms that need no user involvement, except in defining the test inputs and pre-conditions, and that ensure test adequacy in the generated test cases. The test cases are generated from an XML GUI model, or tree, that represents the GUI structure. This work contributes to the goal of developing a fully automated GUI testing framework.
General Terms: User interfaces, automatic test case generation.

Keywords: Test Automation, GUI Testing, Test Case Generation.

1. INTRODUCTION

Testing tries to answer the following questions (3): Does the system do what it should do, i.e., does its behavior comply with its functional specifications (conformance testing)? How fast can the system perform its tasks (performance testing)? How does the system react if its environment does not behave as expected (robustness testing)? And how long can we rely on the correct functioning of the system (reliability testing)?

User interfaces have steadily grown richer, more interactive, and more sophisticated over time. In many applications, one of the major improvements suggested for new releases is an improved user interface. Test cases can be generated from requirements, design, or the actual GUI implementation. Although those three should be consistent and related, they have different levels of abstraction. Requirements and design are usually at too high a level of abstraction to generate test cases from. On the other hand, generating test cases from the GUI implementation model must be delayed until the GUI is implemented, which usually occurs late in development. Delaying GUI testing should not be a problem, given that a tool automates the generation and execution process. We designed a tool in C# that uses reflection to serialize the GUI control components or widgets. Certain control properties are selected to be serialized.

These properties are relevant to the user interface. The application then uses the produced XML file to build the GUI tree, or event flow graph, and to generate the test cases. Test case generation takes the tree structure into consideration. Test cases are selected with the effectiveness of the resulting test suite in mind; we will study the fault detection effectiveness of our test case selections. The algorithms developed to generate test cases from the GUI are novel. Two factors shaped the suggested algorithms: first, generating valid test scenarios in which each edge is a legal edge in the actual GUI model; second, ensuring a certain level of effectiveness in the generated test scenarios.
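The serialization step can be sketched as follows. The actual tool uses C# reflection over the AUT's assembly; in this illustrative Python sketch, a hypothetical dictionary-based widget tree stands in for the reflected controls, and only a few assumed properties (name, type, enabled, visible) are kept:

```python
import xml.etree.ElementTree as ET

def serialize_control(control, parent_elem):
    """Recursively write a widget and its children into XML, keeping
    only GUI-relevant properties."""
    elem = ET.SubElement(parent_elem, "Control",
                         name=control["name"], type=control["type"],
                         enabled=str(control.get("enabled", True)),
                         visible=str(control.get("visible", True)))
    for child in control.get("children", []):
        serialize_control(child, elem)

# Toy widget tree standing in for the controls reflection would find.
gui = {"name": "NotepadMain", "type": "Form", "children": [
    {"name": "File", "type": "MenuItem", "children": [
        {"name": "Save", "type": "MenuItem"},
        {"name": "Exit", "type": "MenuItem"}]}]}

root = ET.Element("GUI")
serialize_control(gui, root)
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

The nesting of `Control` elements mirrors the parent-child structure of the GUI, which is what later makes legal-edge test generation a simple tree walk.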

The next section introduces the related work. Section 3 lists the goals of this research and describes the work done toward them. Section 4 summarizes the developed GUI Auto tool. Section 5 presents the conclusion and future work.
2. RELATED WORK

Software testing is about checking the correctness of the system and confirming that the implementation conforms to the specifications. Conformance testing checks whether a black-box Implementation Under Test (IUT) behaves correctly with respect to its specification. The work in this paper is related to test case generation algorithms, automatic test case generation, and GUI test case generation. Several approaches have been proposed for test case generation, mainly random, path-oriented, goal-oriented, and intelligent approaches (5), and domain testing, which includes equivalence partitioning, boundary-value testing, and the category-partition method (7). Path-oriented techniques generally use control flow information to identify a set of paths to be covered and generate the appropriate test cases for those paths. These techniques can be further classified as static and dynamic. Static techniques are often based on symbolic execution, whereas dynamic techniques obtain the necessary data by executing the program under test. Goal-oriented techniques identify test cases covering a selected goal, such as a statement or branch, irrespective of the path taken. Intelligent techniques rely on complex computations to identify test cases. The real challenge in test generation is generating test cases that are capable of detecting faults in the IUT. We list some of the work related to this paper. Goga (2) introduces an algorithm based on a probabilistic approach; it suggests combining test generation and test execution in one phase. Tretmans (3) studied test case generation algorithms for implementations that communicate via inputs and outputs, based on specifications using Labelled Transition Systems (LTS). In the MulSaw project (4), the team uses two complementary frameworks, TestEra and Korat, for specification-based test automation. To test a method, TestEra and Korat automatically generate all non-isomorphic test cases from the method's pre-condition and check its correctness using its post-condition as a test oracle. We have a similar approach that focuses on GUI testing. As explained earlier, one of the goals of our automatic generation of test scenarios is to produce non-isomorphic test scenarios. We also check the results of the tests by comparing the output

results with event tables generated from the specification. Those event tables are similar to pre/post-condition event tables. Williams (6) presented an overview of model-based software testing using UML. Prior to test case generation, we develop an XML model tree that represents the actual GUI, serialized from the implementation. Test cases are then generated from the XML model. Turner and Robson (8) suggested a technique for the validation of object-oriented programs that emphasizes the interaction between the features and the object's state. Each feature is considered a mapping from its starting, or input, states to its resultant, or output, states, affected by any stimuli. Tse, Chan, and Chen (9, 11) introduce normal forms for an axiom-based test case selection strategy for object-oriented programs, and equivalent sequences of operations as an integration approach for object-oriented test case generation. Orso and Silva (10) introduce some of the challenges that object-oriented technologies added to the process of software testing. Rajanna (12) studies the impact and usefulness of automated software testing tools during the maintenance phase of a software product, citing pragmatic experience gained from the maintenance of a critical, active, and very large commercial software product as a case study. That work demonstrated that most of the error patterns reported during maintenance are due to inadequate test coverage, which is often the outcome of manual testing, by relating the error patterns to the capability of various test data generators at detecting them. Stanford (13) is an example of using formal methods to define specifications through an object specification tool that checks for some

properties like correctness. It is hoped that the application produced by this project will form the groundwork for another tool capable of producing small, adequate test sets that can successfully verify that an implementation of the produced specification is correct. In the specific area of GUI test case generation, Memon (14) has several papers about automatically generating test cases from the GUI using an AI planner. The process is not totally automatic and requires the user to set current and goal states; the AI planner then finds the best way to reach the goal states given the current state. Another issue with that work is that it does not address the huge number of states that a GUI in even a small application can have, and hence it may generate too many test cases. The idea of defining the GUI state as the collective state of each control, where a change of a single property in one control leads to a new state, is valid, but it is the reason for the huge number of possible GUI states. We considered an alternative definition of a GUI state in our research. By generating an XML tree that represents the GUI structure, we can define the GUI state as embedded in this tree: if the tree structure changes, which can be checked automatically, we consider that a GUI state change. Although we generate this tree dynamically at run time, so any change in the GUI is reflected in the current tree, this definition can be helpful in cases where we want to trigger some events (as in regression testing) when the GUI state changes. Mikkolainen (15) discusses some issues related to GUI test automation

challenges. Ames and Jie (16) present the concept of critical path testing for GUI test case generation. They define critical paths as those paths that have "repeated" edges or events across many test cases. The approach utilizes test cases created manually with a capture/playback tool. Although this is expected to be an effective way of defining critical paths, it is not automatically calculated. As an alternative to defining critical paths at run time, one of our algorithms defines static critical paths through the use of metric weights. The metric weight is calculated by counting all the children or grandchildren of a control. Other ways of defining critical paths are measuring delay time during execution, or manually locating critical paths from the specification; from the specification, a critical path can be a path that calls an external API, or saves to or calls an external file.

3. GOALS AND APPROACHES

The goals of this work are to produce GUI test generation algorithms and critical path selection techniques. The following is a summary of the progress completed in this area.

GUI Test Generation Algorithms: As explained earlier, the algorithms created are heuristics. The goal is to generate unique test cases that represent legal test scenarios in the GUI tree.

1. Random legal sequences. In this algorithm, the tool selects, for example, a random first-level control. It then randomly selects a child of that control, and so on. In a Notepad-like example, if the Notepad main menu is randomly selected at the first level, its children are File, Edit, Format, View, and Help. Then, if the File control is

randomly selected from those children, the children of File (Save, SaveAs, Exit, Close, Open, Print, etc.) are the valid next-level controls to select from, and so on.

2. Random selection excluding previously selected controls. In this algorithm, controls are randomly selected as in the first algorithm. The only difference is that if the current control was selected previously (in the test case just before this one), the control is excluded.

3. Excluding previously generated scenarios. Rather than excluding a previously selected control, this algorithm excludes all previously generated test scenarios and verifies that each newly generated test scenario is unique. A scenario is generated, and if the test suite already contains it, it is excluded and the process of generating a new scenario starts again. With this algorithm, the application may stop before reaching the requested number of test cases if there are no more unique test scenarios to create.

4. Weight selection technique. Rather than giving the same selection probability to all candidate children, as in the previous algorithms, here any child randomly selected from the current node has its weight, or probability of being selected next time, reduced by a certain percentage. If the same control or node is selected again, its weight is reduced further, and so on.

Both algorithms 3 and 4 are designed to ensure test adequacy in the generated test suite. We are developing test execution and verification methods to measure the effectiveness of the selected algorithms.
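Algorithms 1 and 4 can be sketched over a toy GUI tree. This is an illustrative Python sketch, not the tool's C# implementation; the tree contents and the 0.5 decay factor are assumptions (the paper only says the weight is reduced "by a certain percentage"):

```python
import random

# Toy GUI tree: each control maps to its legal child controls.
# Controls absent from the dict are leaves.
TREE = {
    "NotepadMain": ["File", "Edit", "Format", "View", "Help"],
    "File": ["Save", "SaveAs", "Exit", "Close", "Open", "Print"],
    "Edit": ["Find", "Replace"],
}

def random_legal_sequence(root="NotepadMain"):
    """Algorithm 1: walk from the root taking only legal edges,
    until a leaf control is reached."""
    path, node = [root], root
    while TREE.get(node):
        node = random.choice(TREE[node])
        path.append(node)
    return path

def weighted_sequence(weights, root="NotepadMain", decay=0.5):
    """Algorithm 4 sketch: every time a child is picked, its selection
    weight is multiplied by `decay`, so later scenarios favor
    less-visited controls."""
    path, node = [root], root
    while TREE.get(node):
        children = TREE[node]
        w = [weights.setdefault(c, 1.0) for c in children]
        node = random.choices(children, weights=w, k=1)[0]
        weights[node] *= decay  # reduce its chance of reselection
        path.append(node)
    return path

weights = {}
suite = [weighted_sequence(weights) for _ in range(5)]
```

Because the walk only ever moves from a control to one of its children, every generated scenario is a legal path in the GUI model by construction.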

We also define a test suite effectiveness measure that can be calculated automatically in the application in order to measure the quality of the suggested algorithms. Test suite effectiveness is defined as the ratio of the total number of edges discovered to the actual total number of edges in the Application Under Test (AUT). Figure 1 shows test effectiveness for the 4 algorithms explained earlier.
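Under that definition, effectiveness is a simple set ratio. A minimal sketch, where the AUT's edge set and the sample suite are hypothetical:

```python
def suite_effectiveness(test_suite, all_edges):
    """Effectiveness = edges covered by the generated scenarios,
    as a percentage of all edges in the AUT's GUI tree."""
    covered = {(a, b) for path in test_suite for a, b in zip(path, path[1:])}
    return 100.0 * len(covered & all_edges) / len(all_edges)

# Hypothetical AUT with four edges; the suite covers three of them.
all_edges = {("Main", "File"), ("File", "Save"),
             ("File", "Exit"), ("Main", "Edit")}
suite = [["Main", "File", "Save"], ["Main", "File", "Exit"]]
print(suite_effectiveness(suite, all_edges))  # 75.0
```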

1. Critical path through node weights. In this approach, each control has a metric weight that represents the count of all its children. For example, if the children of File are Save, SaveAs, Close, Exit, Open, Page Setup, and Print, then its metric weight is 7 (an alternative is to count all children and grandchildren). For each generated scenario, the weight of that scenario is calculated as the sum of the weights of all its individual selected controls. Figure 2 is a sample output that calculates the weights of some test scenarios for an AUT.
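The per-scenario weight computation can be sketched as follows; the child counts below are illustrative assumptions, not Notepad's actual ones:

```python
# Illustrative metric weights (child counts); leaves default to 0.
CHILD_COUNT = {"NotepadMain": 5, "File": 7, "Print": 4, "PrintTab": 6}

def scenario_weight(path):
    """Scenario weight = sum of the metric weights of the controls it
    visits (a control's weight is its number of children)."""
    return sum(CHILD_COUNT.get(node, 0) for node in path)

w = scenario_weight(["NotepadMain", "File", "Print", "PrintTab", "PrintButton2"])
print(w)  # 22 (= 5 + 7 + 4 + 6 + 0)
```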
Test Sequence                                              Weight
NOTEPADMAINFILEPRINTPRINTTABPRINTBUTTON2                   28
NOTEPADMAINFILEPRINTPRINTTABPRINTLABEL3                    28
NOTEPADMAINFILEPRINTPRINTTABPRINTLABEL1                    28
NOTEPADMAINFILEPRINTPRINTTABPRINTBUTTON1                   28
NOTEPADMAINFILEPRINTPRINTTAB                               28
NOTEPADMAINFILEPRINT                                       28
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPLABEL7      34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPLABEL6      34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX3    34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPLABEL8      34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX1    34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX4    34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX2    34



Figure 1: Test suite effectiveness for the 4 algorithms explained earlier (y-axis: % effectiveness; x-axis: number of test cases generated, from 10 to 40,000).

The figure above shows that the last two algorithms reach about 100% effectiveness at around 300 generated test cases.

Critical Path Selection: Here are some examples of critical paths:
1. A path through which an external API or a command line interface accesses the application.
2. Paths that occur in many tests (in a regression testing database).
3. The most time-consuming paths.

We developed two algorithms to calculate critical paths automatically in the AUT. Part of future work is to calculate the effectiveness of the suggested algorithms.

Figure 2: Test scenario weights.

The algorithm then randomly selects one of the scenarios that share the same weight value. An experiment should be conducted to test whether scenarios that have the same weight can be represented by a single test case. Another alternative is to set a minimum weight required to select a test scenario and then generate all test scenarios whose weight is higher than the selected cutoff. The two factors that affect the critical path weight are the number of nodes the path consists of and the weight of each edge. This technique can help us find the longest or deepest paths in an application. For example, all the weight values of 40 in the Notepad application belong to the node that contains Page Setup; this is because it is the deepest one. Figure 3 shows the reduction percentage of selected scenarios using the above selection, given that same-weight scenarios can be represented by a single one, as explained earlier.

AUT            Total number of test scenarios    Reduction percentage (100 - selected scenarios/all scenarios) %
Notepad        174                               94.25
FP Analysis    28                                82.14
WordNet        8                                 75
Gradient       153                               92.81
GUI Controls   51                                88.23
Hover          10                                90

Figure 3: Weight algorithm reduction percentage.

2. Critical path level reduction. In this approach, we select a test scenario randomly; then, for the lower levels of the selected controls, we exclude from selection all controls that share a parent with the selected control. This reduction should not exceed half of the tree depth. For example, if the depth of the tree is 4 levels, we exclude controls from levels 3 and 4. We assume that each test scenario starts from the main entry point and that 3 controls is the minimum for a test scenario (like Notepad - File - Exit). We select 5 test scenarios one after another using the same reduction process described above. Figure 4 is a sample output for measuring test case reduction using this algorithm; the result scenarios are listed, with total test reduction percentages of 65.1, 41.67, and 51.56%.

Test Scenarios:
NOTEPADMAIN, PRINTER, PRINTERBUTTON1
NOTEPADMAIN, SAVE, SAVELABEL7
NOTEPADMAIN, EDIT, FIND, TABCONTROL1, TABFIND, FINDTABBTNNEXT
NOTEPADMAIN, FILE, PRINT, PRINTTAB, PRINTLABEL7
NOTEPADMAIN, SAVE, SAVELABEL5
NOTEPADMAIN, FILE, PRINT, PRINTTAB, PRINTLISTBOX1
NOTEPADMAIN, FONT, FONTLABEL2
NOTEPADMAIN, HELPTOPICFORM, HELPTOPICS, SEARCH, BUTTON1
NOTEPADMAIN, FONT, FONTTEXTBOX2
NOTEPADMAIN, PRINTER, PRINTERBUTTON2
NOTEPADMAIN, FILE, PRINT, PRINTTAB, PRINTGROUPBOX1
NOTEPADMAIN, PAGESETUP, PRINTER
NOTEPADMAIN, FONT, FONTLISTBOX2
NOTEPADMAIN, OPEN, OPENFILELABEL4
NOTEPADMAIN, SAVEAS, SAVEFILECOMBOBOX2

Figure 4: Level reduction sample test results.
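The level-reduction idea can be sketched in Python under stated assumptions (a toy tree of depth 4, with pruning applied only below half the depth; the tree contents are illustrative, not Notepad's real structure):

```python
import random

# Toy GUI tree of depth 4; controls absent from the dict are leaves.
TREE = {
    "NotepadMain": ["File", "Font"],
    "File": ["Print", "Save"],
    "Print": ["PrintTab"],
    "PrintTab": ["PrintButton1", "PrintButton2"],
    "Font": ["FontLabel2"],
}

def level_reduced_suite(n, root="NotepadMain", depth=4):
    """After each control chosen in the lower half of the tree,
    exclude its siblings from all later scenarios."""
    excluded = set()
    suite = []
    for _ in range(n):
        path, node, level = [root], root, 1
        while TREE.get(node):
            choices = [c for c in TREE[node] if c not in excluded]
            if not choices:
                break  # reduction exhausted this branch
            nxt = random.choice(choices)
            level += 1
            if level > depth // 2:  # prune only at the lower levels
                excluded.update(c for c in TREE[node] if c != nxt)
            path.append(nxt)
            node = nxt
        suite.append(path)
    return suite

for scenario in level_reduced_suite(5):
    print(" - ".join(scenario))
```

Keeping the upper levels unpruned preserves the breadth of entry points, while the lower-level exclusions are what produce the reduction percentages reported above.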
4. GUI AUTO: THE DEVELOPED GUI TEST AUTOMATION FRAMEWORK TOOL

We are developing GUI Auto as an implementation of the suggested framework. In the first stage, the GUI Auto tool generates an XML file from the assembly of the AUT. It captures the GUI controls and their relations to each other. It also captures selected properties of those controls that

are relevant to the GUI. The generated XML file is then used to build a tree model that represents the tested application's user interface. Several test case generation algorithms are used to generate test cases automatically from the XML model. Test case selection and prioritization are developed to evaluate test adequacy when generating a certain number of test cases. Test execution can be triggered automatically to execute the output of any test case generation algorithm. We are currently building the test results verification process to compare the output of the executed test suites with the expected results. The generated files are in XML or comma-delimited formats that can be reused on different applications. A recent version of the tool can be found at http://www.cs.ndsu.nodak.edu/~alsmadi/GUI_Testing_Tool/.

5. CONCLUSION AND FUTURE WORK

In this paper we explained some GUI test generation algorithms and critical path test selection techniques. We studied test effectiveness statically by measuring the ratio of discovered parts to total parts. Future work includes developing test execution and verification to measure the overall fault detection effectiveness of the generated test cases. Measuring the performance of GUI test case generation algorithms is another proposed future track.

6. REFERENCES
1. Pettichord, Bret. Homebrew Test Automation. ThoughtWorks, Sep. 2004. www.io.com/~wazmo/papers/homebrew_test_automation_200409.pdf.
2. Goga, N. A Probabilistic Coverage for On-the-fly Test Generation Algorithms. Jan. 2003. fmt.cs.utwente.nl/publications/files/398_covprob.ps.gz.

3. Tretmans, Jan. Test Generation with Inputs, Outputs, and Quiescence. TACAS 1996: 127-146.
4. Software Design Group, MIT Computer Science and Artificial Intelligence Laboratory. 2006. http://sdg.csail.mit.edu/index.html.
5. Prasanna, M., S.N. Sivanandam, R. Venkatesan, and R. Sundarrajan. A Survey on Automatic Test Case Generation. Academic Open Internet Journal, Vol. 15, 2005.
6. Williams, Clay. Software Testing and the UML. ISSRE 1999. http://www.chillarege.com/fastabstracts/issre99/.
7. Beizer, Boris. Software Testing Techniques, Second Edition. New York: Van Nostrand Reinhold, 1990.
8. Turner, C.D. and D.J. Robson. The State-based Testing of Object-Oriented Programs. Proceedings of the 1993 IEEE Conference on Software Maintenance (CSM-93), Montreal, Quebec, Canada, Sep. 1993.
9. Tse, T.H., F.T. Chan, and H.Y. Chen. An Axiom-Based Test Case Selection Strategy for Object-Oriented Programs. University of Hong Kong, 1994.
10. Orso, Alessandro, and Sergio Silva. Open Issues and Research Directions in Object-Oriented Testing. AQUIS 1998.
11. Tse, T.H., F.T. Chan, and H.Y. Chen. In Black and White: An Integrated Approach to Object-Oriented Program Testing. University of Hong Kong, 1996.
12. Rajanna, V. Automated Software Testing Tools and Their Impact on Software Maintenance - An Experience. Tata Consultancy Services.
13. Stanford, Matthew. Object Specification Tool Using VTL. Master's dissertation, University of Sheffield, 2002.
14. Memon, Atif. Hierarchical GUI Test Case Generation Using Automated Planning. IEEE Transactions on Software Engineering, Vol. 27, 2001.
15. Mikkolainen, Markus. Automated Graphical User Interface Testing. 2006. www.cs.helsinki.fi/u/paakki/mikkolainen.pdf.
16. Ames, Alexander K., and Haward Jie. Critical Paths for GUI Regression Testing. www.cse.ucsc.edu/~sasha/proj/gui_testing.pdf.