
Traceability Matrix (IEEE)

A matrix that records the relationship between two or more products; e.g., a matrix that
records the relationship between the requirements and the design of a given software
component.
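As a minimal sketch (all requirement and component names here are hypothetical), such a matrix can be represented as a simple mapping in Python and queried for coverage gaps:

    # Minimal sketch of a requirements-to-design traceability matrix.
    # All requirement and component names are hypothetical.
    requirements = {"REQ-1", "REQ-2", "REQ-3"}

    # Each design component lists the requirements it addresses.
    design = {
        "AuthModule":   {"REQ-1"},
        "SessionStore": {"REQ-1", "REQ-2"},
    }

    covered = set().union(*design.values())

    # Requirements with no corresponding design element are coverage gaps.
    uncovered = requirements - covered
    print("Uncovered requirements:", sorted(uncovered))  # -> ['REQ-3']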

When to stop testing?

Testing is potentially endless. We cannot test until every defect has been unearthed and
removed; that is simply impossible. At some point, we have to stop testing and ship the
software. The question is when.

Realistically, testing is a trade-off between budget, time, and quality, and it is driven by
profit models. The pessimistic, and unfortunately most common, approach is to stop testing
whenever any of the allocated resources (time, budget, or test cases) is exhausted. The
optimistic stopping rule is to stop testing when either reliability meets the requirement,
or the benefit of continued testing no longer justifies the testing cost. This usually
requires reliability models to evaluate and predict the reliability of the software under
test, and each evaluation repeats the cycle: gather failure data, fit the model, predict.
This method does not fit well for ultra-dependable systems, however, because real field
failure data would take too long to accumulate.
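As one hedged illustration of the optimistic rule (the model choice and the failure counts below are invented for the example, not prescribed above), a simple reliability growth model such as Goel-Okumoto's mu(t) = a(1 - e^(-bt)) can be fit to cumulative failure data to estimate how many defects remain:

    # Sketch: fit the Goel-Okumoto NHPP model mu(t) = a * (1 - exp(-b*t))
    # to cumulative failure counts, then estimate remaining defects.
    # The weekly failure data here is made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # a: expected total defects, b: per-defect detection rate
        return a * (1.0 - np.exp(-b * t))

    weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    cum_failures = np.array([12, 21, 27, 32, 35, 37, 38, 39], dtype=float)

    (a, b), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=[40.0, 0.5])
    remaining = a - cum_failures[-1]
    print(f"estimated total defects: {a:.1f}, remaining: {remaining:.1f}")

    # Illustrative stopping rule: stop once the predicted number of
    # remaining defects falls below a threshold set by the reliability goal.
    print("stop testing" if remaining < 2.0 else "keep testing")

The cycle in the text maps onto the sketch directly: gathering failure data (the arrays), modeling (the fit), and prediction (the remaining-defect estimate).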

Testing automation

Software testing can be very costly, and automation is a good way to cut time and cost.
Yet software testing tools and techniques usually suffer from a lack of generic
applicability and scalability, and the reason is straightforward: to automate the process,
we need a way to generate oracles from the specification and to generate test cases that
exercise the target software against those oracles to decide correctness. Today we still
don't have a full-scale system that has achieved this goal. In general, a significant
amount of human intervention is still needed, and the degree of automation remains at the
automated test script level.
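As a hedged sketch of the oracle idea (the function under test and its spec are invented examples), a generator can produce inputs and an executable fragment of the specification can judge each output:

    # Sketch of spec-driven automation: generate inputs, run the target,
    # and judge each output with an oracle derived from the specification.
    # The function under test and its spec are hypothetical examples.
    import random
    from collections import Counter

    def function_under_test(xs):
        return sorted(xs)  # stand-in for the target software

    def oracle(inp, out):
        # Executable spec fragment: the output is ordered and is a
        # permutation of the input.
        ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
        return ordered and Counter(inp) == Counter(out)

    for trial in range(1000):
        inp = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert oracle(inp, function_under_test(inp)), f"oracle failed: {inp}"
    print("1000 generated cases passed the oracle")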

The problem is lessened in reliability testing and performance testing. In robustness
testing, for example, the simple specification and oracle of "doesn't crash, doesn't hang"
suffices; similarly simple metrics can be used in stress testing.
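A minimal sketch of that oracle (the target command is a placeholder): feed random bytes to the program under test and report only crashes and hangs:

    # Robustness-testing loop whose only oracle is "doesn't crash,
    # doesn't hang". The target executable is a hypothetical placeholder.
    import random
    import subprocess

    TARGET = ["./program_under_test"]
    TIMEOUT_SECONDS = 5

    for trial in range(100):
        fuzz_input = bytes(random.randrange(256) for _ in range(1024))
        try:
            result = subprocess.run(TARGET, input=fuzz_input,
                                    capture_output=True,
                                    timeout=TIMEOUT_SECONDS)
            if result.returncode < 0:  # killed by a signal => crash
                print(f"trial {trial}: crash (signal {-result.returncode})")
        except subprocess.TimeoutExpired:
            print(f"trial {trial}: hang (> {TIMEOUT_SECONDS}s)")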

Manual vs. Automation

Pros of Automation
• If you have to run a set of tests repeatedly, automation is a huge win for you
• It gives you the ability to run automation against frequently changing code and
across mainstream scenarios, catching regressions in a timely manner
• Aids in covering a large test matrix (e.g., different languages on different OS
platforms). Automated tests can run at the same time on different machines, whereas
manual tests would have to run sequentially; a sketch follows this list.
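As an illustrative sketch (the matrix entries and the run_test stub are hypothetical; in practice each combination would be dispatched to a machine or container with that configuration), automated runs can fan out across the language/OS matrix in parallel:

    # Sketch of running a language x OS test matrix concurrently.
    from concurrent.futures import ThreadPoolExecutor
    from itertools import product

    LANGUAGES = ["en", "de", "ja"]
    PLATFORMS = ["windows", "linux", "macos"]

    def run_test(language, platform):
        # Placeholder: dispatch the automated suite to a matching machine.
        return (language, platform, "PASS")

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_test, lang, plat)
                   for lang, plat in product(LANGUAGES, PLATFORMS)]
        for f in futures:
            lang, plat, verdict = f.result()
            print(f"{lang}/{plat}: {verdict}")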

Cons of Automation
• It costs more to automate. Writing the test cases and writing or configuring the
automation framework you’re using costs more up front than running the tests
manually.
• Visual checks can’t be automated; for example, if you can’t determine the font color
via code or the automation tool, it has to be a manual test.

Pros of Manual
• If a test case only runs twice per coding milestone, it should most likely be a
manual test; that costs less than automating it.
• It allows the tester to perform more ad hoc (random) testing. In my experience,
more bugs are found via ad hoc testing than via automation, and the more time a
tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual
• Running tests manually can be very time-consuming
• Each time there is a new build, the tester must rerun all required tests, which
after a while becomes mundane and tiresome.

Automated Testing Life Cycle Methodology
