
Automation:

Science or Fiction?

Author:
Pronil Sengupta (pronil@adobe.com)
Dinesh Kukreja (dinesh@adobe.com)
Adobe Systems India Pvt. Ltd. I-1A, City Center, Sector – 25A,Noida (U.P.) - 201301

Robots are science, but robots with emotional intelligence, or robots that can reproduce, are fiction. We can all easily tell what is fiction and what is science. In the testing industry today, automation is somehow believed to be a science, although many parts of that belief are actually fiction. We should automate, but automate correctly. In this paper we try to separate reality from imagination, science from fiction, by discussing the correct ways to automate, the importance of manual testing, and the territory of manual testing that automation should not enter. After all,
Automation + Manual testing = Science
but
Automation + Automation (Manual testing) = Fiction.

Preface

In 1985, the government creates a microcomputer physically indistinguishable from a ten-year-old boy, designed for an artificial intelligence experiment. The boy escapes and is adopted by a couple. He baffles everyone with his superhuman capabilities. One day the government finds him and takes him back, but the boy has developed emotions and an attachment to his parents. The government scraps the project and tries to terminate him, but after all the superhuman thrills, the boy reaches his parents in the end and stays with them ever after. A boy, a robot, but with artificial intelligence and emotion? Does that sound like science or fiction?

Many of you would agree that this is nice fiction. Yes, indeed, it is the story of the Hollywood movie D.A.R.Y.L. (Data Analyzing Robot Youth Lifeform), written by David Ambrose, Allan Scott and Jeffrey Ellis, and directed by Simon Wincer.

So, what is fiction? Science that has broken the bounds of imagination, or an idea that does not look real today but sounds attractive; in short, hyped science is fiction.

In the world of testing, the scientific approach has given birth to automation. As automation gained pace over time, it started breaking the bounds of imagination, and perceptions are developing that turn automation into fiction. Perceptions such as "automation can replace manual testing completely" or "automation takes 100% control over testing" are what make automation a fiction. Automation and manual testing have their own strengths and limitations, and they actually complement each other. We should understand what the science of automation is and what kinds of perceptions can make it fiction, and focus our effort on the real strengths of both automation and manual testing.

Scope

In this paper we put forward the idea that neither automation nor manual testing is an overhead. Automation cannot replace manual testing, nor can manual testing replace automation. Each has its own set of advantages and limitations, and one needs to understand the benefits of each to get the best out of both methodologies of testing. Through this paper we try to build an understanding of the actual benefits of automation through the following topics:

1. What is Science and what is Fiction?


2. What is Fiction of automation?
3. What is Science of automation?
4. Beware of Automation’s Benefit and Limitations
5. How should we automate effectively?
6. What should we manually do?
7. What should we manually test?
8. Conclusion & Summary

It is important to build an understanding around automation so as to keep it from becoming fiction. Innovation in automation should be seen as vertical rather than horizontal: by vertical we mean innovating and expanding automation into different areas such as development, coding and management, rather than confining it to testing alone (horizontal).

What is Science and what is Fiction?
Robotics is science. Artificial intelligence is an open research area, and artificial intelligence taken to the extent where robots can imitate true human emotions and think on their own is fiction. If we need examples of fiction in robotics, we would probably list statements like:
- Robots can feel
- Robots can reproduce
- Robots can suspect
- Robots can inspect
- Robots can adapt
Someone may go into a long argument to prove these fictions to be science, and may even win a point or two with logic, but the bottom line is that a robot may manage a part of each, yet it will never imitate a human 100%.
Robotics is science, and the scientific statements around robotics would be:
- Robots can jump
- Robots can run
- Robots can work
- Robots can execute the commands they are programmed for
Here lies the difference between robots and humans. Robots will no doubt increase efficiency, increase accuracy and give humans space to think more. We should not forget the real strength of humans, and we should remember that robots cannot feel, reproduce, suspect, inspect or adapt, but humans can.

What is Fiction of Automation?


In line with robotics, automation is the science of the testing field. When expectations from automation rise to the point of impossibility, they can mislead the testing community. We should therefore understand clearly what automation can do and what manual testing does better, and recognize the statements that make automation a fiction:
- Automation can adapt
- Automation can evolve
- Automation can feel
- Automation can suspect
- Automation can inspect
Such hypotheses or beliefs should be considered fiction.

What is Science of Automation?


We should now be able to distinguish science from fiction in terms of automation. While going through the limitations of automation, one must not forget its real benefits. The statements that are purely scientific for automation would be:
- Automation removes redundancy
- Automation saves cost
- Automation saves time
- Automation increases coverage
- Automation increases confidence
- Automation increases precision
The benefits and limitations of automation lie between its science and its fiction.

Beware of Automation’s Benefits and Limitations


To get the best out of automation, one needs to understand its real strengths. Automation helps reduce effort, cost and time by speeding up testing activities and executing them without human intervention. Automation removes redundancy by automating repetitive tasks, for example sanity checks or daily build acceptance. Automation can also be applied to legacy feature testing and regression testing, which increases coverage and thus confidence.

At the same time, one should be aware of the limitations of automation: what automation cannot do but a human can. Automation cannot feel, adapt, evolve, suspect or inspect, but a manual tester can. During manual testing, when a tester finds a bug, he investigates, hunts for the root cause, and then suspects that there could be more bugs. An automation system cannot have this suspicion. An automation system cannot make itself aware of its surroundings and generate ideas that bring innovation; testers doing manual testing can. These are some of the limitations of automation where manual testing adds value.

The fact that manual testing is prone to human error should not be the only reason to automate. Testing earned its place for more reasons than syntactic or semantic errors in code. Thus, automation and manual testing go hand in hand.

How should we automate effectively?


Having already talked about the benefits of automation and its limitations, we are now at the stage where we need to understand how to automate effectively. To achieve realistic and effective results, we suggest dividing the automation lifecycle into the following phases:
- Goal
- Plan
- Execute -> Inspect -> Correct
- Retrospect
- Maintain
Goal:
To achieve effective automation, a clear, measurable goal needs to be defined. The goal could be to automate 30% of the test cases by the end of the quarter, but it needs to be backed by proper reasoning: why automate, why only 30%, why this quarter, and so on. To define a goal, one needs to answer all the associated "why?" questions. The goal could also be to reduce cost, save time, improve efficiency or shorten the release cycle. We need to determine whether automation will help us achieve it, or whether other methods can achieve the same goal with less effort.

It is important to understand that automation should not be the goal of testing. The purpose of automation is to help us achieve the testing goal.

Plan:
Once the goal is set, the automation needs to be planned. Automation projects often struggle with deciding the right time to start, defining the right scope and selecting the right approach. While planning automation, we should remember the following:

Which test cases need to be automated?


Firstly, one needs to mark the test cases that are viable for automation as automation candidates. While selecting test cases as automation candidates, one must consider:
Technical feasibility of the test cases.
For a successful automation suite, it is important not to choose complex test cases that are not technically feasible to automate. Depending on the tool being used or the validation required in a test case, some test cases may not be feasible to automate at all. It is better to leave these test cases aside than to waste effort trying to find workarounds.
Execution frequency of the test cases.
While selecting test cases for automation, it is important to identify how frequently they will be executed. The higher the execution frequency, the more worthwhile it is to automate, since automation then saves the effort we would otherwise spend testing manually.
Effort required to automate the test cases.
Some test cases require a lot of scripting effort to automate. Since the ROI on these test cases is not in proportion to the effort spent, and the time and resources in a project are fixed, it is advisable to select test cases where the effort is worth spending in terms of ROI.
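The ROI criterion can be made concrete with a simple break-even calculation. The sketch below is illustrative only; the hour figures are assumptions, not data from any real project.

```python
import math

def break_even_runs(scripting_hours, manual_run_hours, automated_run_hours):
    """Number of runs after which automating a test case pays off.

    Each automated run saves (manual_run_hours - automated_run_hours);
    the script pays for itself once those savings cover the one-time
    scripting effort.
    """
    saving_per_run = manual_run_hours - automated_run_hours
    if saving_per_run <= 0:
        return None  # automation never pays off for this case
    return math.ceil(scripting_hours / saving_per_run)

# Assumed figures: 16 hours to script a case that takes 30 minutes
# manually and about 5 minutes when automated.
runs = break_even_runs(16.0, 0.5, 5 / 60)  # pays off after 39 runs
```

If the expected execution count over the project's life falls below this break-even number, the case is a better fit for manual testing.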

Fine-tune the automation candidate test cases.


The test cases need to be fine-tuned for automation. Some ways to fine-tune a test case are:
- Elaborate, clear test steps
- No dependency on other test cases
- Availability of test data (files) at the proper places
- Specified special test requirements for the test cases
- Verified test specifications (priority, language, platform, etc.) for the test cases

Define automation script writing best practices


Your plan should act as a guideline for scripting. The following are some best practices to follow while creating automation scripts.
Creating independent objects for reuse.
At the very outset, the steps common to different test cases should be identified and converted into reusable objects or functions. Such objects/functions can be placed in a common file, from where different test cases can call them, passing suitable parameters as needed. This encourages code reuse and saves effort and time. Besides, these functions can be reused when newer test cases are added to the automation suite at a later stage.
Consider extensibility while automating.
The automation suite should be written in a manner that allows additional test cases to be added. The additional test cases may cover enhanced functionality of an existing feature as well as new features incorporated in the application/product.
Proper generation of logs.
A common problem is what to do when automated tests fail; failure analysis is often difficult. Hence, the automation suite must generate clear, user-friendly logs of its own. Logs should be created in a manner that facilitates statistical analysis of the results, which means the log file should hold results in a format from which useful statistics can be generated. A good automation suite with ambiguous logs is worse than manual testing.
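One way to meet this requirement, sketched here with Python's standard `csv` module (the schema and case IDs are invented for illustration), is to write each result as a fixed-format row so that statistics can be computed directly from the log file:

```python
import csv
import io
from datetime import datetime, timezone

def write_result(log, case_id, status, detail=""):
    """Append one result as a fixed-schema row: timestamp, id, verdict, detail."""
    csv.writer(log).writerow(
        [datetime.now(timezone.utc).isoformat(), case_id, status, detail]
    )

def pass_rate(log_text):
    """Statistics fall out directly because the log is structured."""
    rows = list(csv.reader(io.StringIO(log_text)))
    return sum(1 for r in rows if r[2] == "PASS") / len(rows)

log = io.StringIO()
write_result(log, "TC-101", "PASS")
write_result(log, "TC-102", "FAIL", "timeout waiting for Save dialog")
rate = pass_rate(log.getvalue())  # 0.5
```

The same rows feed trend reports across builds without any hand-parsing of free-form log text.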
Error handling and error recovery routines.
Error recovery routines enable the test suite to run continuously, unattended. The function of these routines is to anticipate possible errors, decide on corrective actions, log the error and proceed with the next test if possible. For example, if the application under test terminates unexpectedly, the routine should recognize this and restart the application. In a nutshell, this ensures that failures in test execution are effectively trapped and interpreted, and the suite continues to run.
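A skeletal recovery wrapper might look like the following; `restart_app` and the test callables are placeholders for whatever your tool actually provides:

```python
def run_suite(tests, restart_app):
    """Run every test even if some raise: log the error, recover, continue."""
    results = []
    for name, test in tests:
        try:
            test()
            results.append((name, "PASS", ""))
        except Exception as exc:
            # Anticipated failure: take the corrective action and move on.
            restart_app()
            results.append((name, "FAIL", str(exc)))
    return results

def crashing_test():
    raise RuntimeError("application terminated unexpectedly")

restarts = []
results = run_suite(
    [("TC-1", lambda: None), ("TC-2", crashing_test), ("TC-3", lambda: None)],
    lambda: restarts.append("restarted"),
)
```

TC-2's crash is trapped and logged, the application is restarted once, and TC-3 still executes, which is exactly the unattended behavior described above.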

Identify right tool for automation


There are a number of automation tools available today. Diligent effort has to go into deciding which tool would be most suitable for automating the testing of one’s product/application. Some questions that can help make this decision:
- Is platform independence required?
- Is the testing to be automated UI-based or functionality-based?
- Would a suitable scripting language suffice instead of the automation tools available in the market?
- Is there a need to develop an in-house automation tool that might provide the best solution?
Ensure the stability of the product/application/feature.
The first thing that needs to be ensured is that the feature/product to be automated is fairly stable in terms of functionality. There is no sense in automating the testing of a product whose functionality is expected to change.

Execution -> Inspection -> Correction:
Once the detailed automation plan has been created on the basis of the various best practices, it is time to start writing the automation scripts using the identified automation tool. The best practices defined in the test plan, such as reusability of objects, proper log generation, error-handling mechanisms and extensibility, need to be observed while writing the scripts.

Once the automation scripts are ready, a trial run should be performed on actual builds of the product. The results of the trial run need to be analyzed over a few builds to make sure that the automation is not producing false positives: there are occasions when the scripts themselves work fine but some other glitch produces incorrect results. Once the results stabilize over a few builds, we need to dig into the actual automation failures and analyze them manually. There are three main reasons why an automation script may fail:

- An actual bug found by the automation.
- A bug in the script itself.
- A known or unknown limitation of the automation tool.
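The build-over-build analysis described above can be sketched as a small script; result values and case IDs are invented for illustration. A case whose verdict flips between builds is a candidate false positive (or a flaky script), while a case that fails consistently deserves root-cause analysis:

```python
def unstable_cases(runs):
    """runs: one {case_id: "PASS"/"FAIL"} dict per build, oldest first."""
    verdicts = {}
    for run in runs:
        for case, result in run.items():
            verdicts.setdefault(case, set()).add(result)
    # A case with more than one distinct verdict flipped between builds.
    return sorted(case for case, seen in verdicts.items() if len(seen) > 1)

runs = [
    {"TC-1": "PASS", "TC-2": "FAIL", "TC-3": "PASS"},
    {"TC-1": "PASS", "TC-2": "FAIL", "TC-3": "FAIL"},
    {"TC-1": "PASS", "TC-2": "FAIL", "TC-3": "PASS"},
]
flaky = unstable_cases(runs)  # ["TC-3"]: inspect before trusting its failures
```

Here TC-2 fails on every build and so is worth manual root-cause analysis, while TC-3's flip-flopping suggests a script or environment glitch rather than a product bug.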

It is important to take corrective action as soon as the root cause of a failure has been analyzed. If corrective action is not taken at the first instance of a failure, the failure will keep recurring each time the automation is executed, resulting in unnecessary waste of time. The corrective actions that can be taken are:
- If it is an actual bug, log the defect and record it against the result, so that the next time you see the same failure you can look at the previous result and confirm that it is a known defect.
- If it is an automation script failure, correct the script for logical or syntax errors.
- If it is caused by a known or unknown limitation of the automation tool, it is better to keep that test case as a manual test case.

Retrospection:
Once the automation has gone through several passes on daily builds, it is time to pause, look back and analyze whether we are on track to achieve the defined goal. It is important to answer the following questions to measure the effectiveness of the automation:
- Is the automation giving the desired results we aimed for?
- What went well?
- What went badly?
- What are the backlog items?
- What have we learned from the project?
- Where is there scope for improvement?

Maintenance:
Maintenance of the automation suite is an integral part of the overall automation project. At this stage the automation has become an asset, and it is important to keep that asset updated as and when required. It is necessary to monitor regularly for instances when the automation scripts need to be modified. Several possible triggers:
- A major change in functionality.
- A change in the UI of the application.
- Baselines needing to be updated.
- Added support for languages.
- Added support for platforms.

What should we manually do?


So far we have understood the real benefits of automation and how to automate, and we now know that automation does not replace manual testing but enhances the quality of testing. Now the question arises: if automation handles regression and redundant testing better, where should we focus manually, or what should we do manually?

Automate – Have testers automate:


First and foremost, if we talk about automation, then you need people to automate. One option is to have developers write the automation scripts and then hand them over to the test team for execution. What we suggest instead is to have testers automate rather than developers. Enhance the skill set of your testers, help them learn scripting and coding, and have them design the automation scripts. Testing is a specialized field and needs a different way of thinking to find bugs and devise cases. Having developers write automation scripts is as good as having developers test, and we are all aware of the limitations of that.

Process improvement:
Manually look for gaps in the process and do continuous process improvement. Process improvement should be a focus of every organization: a correct process can stop some bugs from being introduced at all. The automation process, as suggested in the retrospection phase above, can be assessed for gaps and then enhanced to close them.

Review:
Involve testers in the initial phase of the project and have them review the requirements. Review of every document created should be carried out as a practice throughout the project. Any bug caught in the review phase saves the cost of discovering it at a later phase.

Test case writing:


Test case writing is an integral task that testers will need to continue doing. Testers, whether black box or white box, automation or manual, should be able to write good test cases. Test cases are the base of manual as well as automated testing, so they should be written manually.

Manually test:
Finally even after implementing automation, what you should manually do is manually test.

What should you manually test?


Manual testing should never be replaced by automation. Rather, automation gives manual testers room to focus on the areas where manual testing works better. Manual testing works better in areas like:

Bug Investigation & Root cause analysis:


Bug investigation and root cause analysis need to be performed manually. It is the tester who knows about the interaction of the feature with other functionality, any prior fixes related to the same issue, and any change that came in recently, any of which might be the root cause of the bug. An automation tool is not capable of doing investigation and root cause analysis in such situations.

New Features:
Newly developed features need to be tested manually, since they are not yet stable and there will be many bugs in their initial stages. Changes to a feature also keep coming build over build as bugs get fixed and functionality changes.

Non-Regressive Cases:


Low-priority cases, which probably need to be executed only once in the project life cycle, should be tested manually if automating them costs more than executing them manually. Even if a test case is not low priority, if its automation cost (which also includes maintenance effort) is high, it is better to test it manually.

Adhoc/Exploratory Testing:
Adhoc/exploratory testing is the kind of testing that finds lots of bugs in a feature. Even if a particular feature is 100% automated, we should perform adhoc/exploratory testing at regular intervals. This is the type of testing where we exercise a feature in integration with various other features and try to create complex situations. It helps us find real bugs that could be very annoying to our customers if left undiscovered.

Unstructured Testing:
Apart from adhoc/exploratory testing, there are several other unstructured testing techniques, such as bug hunts, feature swap testing, workflow testing and customer-site testing, which we should keep doing for our features even if they are automated. These techniques continuously improve the quality of our features by finding potential bugs that automation cannot catch.

Test Automation System/Audit:


Although we exercise due diligence while creating our automation test suite so that it properly validates the test conditions, there are circumstances when something goes wrong in the functionality and yet our scripts keep passing. Improper validation can cause a similar problem. So, at regular intervals, we should pick such suspicious automated test cases and run them manually to ensure that the automation is not giving incorrect results. This process can be termed auditing of our automation test suites.
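A simple way to pick audit candidates, sketched below with an invented result history, is to flag cases that have passed many consecutive runs, since a test that never fails may be validating nothing:

```python
def audit_candidates(history, streak=20):
    """history: {case_id: ["PASS"/"FAIL", ...], newest result last}."""
    suspicious = []
    for case, results in history.items():
        recent = results[-streak:]
        # Enough history, and not a single failure in it: audit manually.
        if len(recent) == streak and all(r == "PASS" for r in recent):
            suspicious.append(case)
    return sorted(suspicious)

history = {
    "TC-1": ["PASS"] * 25,             # never fails: re-check it manually
    "TC-2": ["PASS"] * 19 + ["FAIL"],  # recently failed: clearly exercising something
    "TC-3": ["PASS"] * 10,             # too little history to judge
}
to_audit = audit_candidates(history)  # ["TC-1"]
```

The streak length is an assumed threshold to tune per project; the point is to make the selection of "suspicious" cases systematic rather than ad hoc.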

Conclusion & Summary
Daryl had a happy ending, and it was fiction. Fiction does open doors for new lines of thought and imagination, and research may one day turn some of it into science that makes Daryl a reality. But we would say it is wiser to use the available science on your project than to do research with a live project. In conclusion, automation and manual testing both have their own benefits: use automation where it is most effective, and use humans where they are most effective.

Watch fiction, read fiction but practice science.

References:
http://www.imdb.com/title/tt0088979/
http://www.fast-rewind.com/daryl.htm

Biography:
Pronil Sengupta has been working as a Lead Quality Engineer at Adobe for more than 4 years and has a total of 8 years of experience in software testing. Pronil has varied experience in testing applications, application installers and an online travel portal.
Academically, Pronil is a management graduate. He has also co-authored and presented a paper, “Product Vaccination and Quality Upbringing”, at STC2009.
Email: pronil@adobe.com

Dinesh Kukreja is a Lead Quality Engineer at Adobe with 5 years of experience there and a total of 6 years of experience in software testing.
Academically, Dinesh holds a Bachelor’s degree in IT. He has also co-authored and presented a paper, “Continuous Quality Improvement through Unstructured Testing”, at STC2009.
Email: dinesh@adobe.com

Appendix
Explained inline
