
Analysis of Software Artifacts (ASA)


Henrique Madeira,
Departamento de Engenharia Informática
Faculdade de Ciências e Tecnologia da Universidade de Coimbra
2022/2023


Software testing overview


Software testing topics

• Dynamic approaches: running the code to test it.


– Unit testing, integration testing, system testing, etc.
– Coverage criteria
– Black-box and white-box testing
– Model-based testing
– Defect tracking and ODC (Orthogonal Defect Classification)


Software testing in a nutshell

[Diagram: Input → SW component under test → Output]

Input domain: could be very large or infinite.
– Valid data input
– Non-valid data input (robustness testing)

Test case → (input, expected outcome)


Test suite:
input_1, expected outcome_1
input_2, expected outcome_2
input_3, expected outcome_3
input_4, expected outcome_4
…
input_n, expected outcome_n

Test coverage? Oracle?
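A minimal sketch (not from the original slides) of a test suite written as (input, expected outcome) pairs. The function classify_triangle and the specific cases are assumptions chosen only to make the idea concrete; note that one case exercises non-valid input (robustness testing).

# Minimal sketch: a test suite as (input, expected outcome) pairs.
# classify_triangle is a hypothetical component under test.

def classify_triangle(a: int, b: int, c: int) -> str:
    """Classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"          # non-valid input (robustness)
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Test suite: each test case is (input, expected outcome).
test_suite = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 4), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 1, 10), "not a triangle"),      # non-valid data input
]

for inputs, expected in test_suite:
    actual = classify_triangle(*inputs)
    assert actual == expected, f"{inputs}: expected {expected}, got {actual}"
print("all test cases passed")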



Software testing in a nutshell

[Diagram: Input → SW component under test → Output]
Black-box testing:
• Input partitioning classes
• Boundary value analysis

White-box testing:
• Control flow testing
• Data flow testing

Coverage?
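A hedged illustration (my own example, not from the slides) of the two black-box techniques: the hypothetical is_valid_age is assumed to accept ages 0 to 120 inclusive, and test cases are chosen one per equivalence class plus values at and around the boundaries.

# Black-box test design for a hypothetical is_valid_age(age),
# assumed specification: valid iff 0 <= age <= 120.

def is_valid_age(age: int) -> bool:
    # Component under test.
    return 0 <= age <= 120

# Equivalence (input partitioning) classes: one representative per class.
partition_cases = [
    (-5, False),   # class: below range
    (40, True),    # class: within range
    (200, False),  # class: above range
]

# Boundary value analysis: values at and next to each boundary.
boundary_cases = [
    (-1, False), (0, True), (1, True),
    (119, True), (120, True), (121, False),
]

for age, expected in partition_cases + boundary_cases:
    assert is_valid_age(age) == expected, f"failed for age={age}"
print("black-box test cases passed")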


Test coverage: how to measure it?

Test coverage can be measured along three dimensions:

• Code coverage: method coverage, statement coverage, branch coverage, path coverage
• Data coverage: object coverage, attribute coverage, value coverage, state coverage
• Functional coverage: function coverage, function outcome coverage, function chain coverage, function state coverage

(In each dimension, the stronger criteria improve reliability but also increase cost.)
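To make the code-coverage dimension concrete, here is a small sketch (my own example) where a single test reaches 100% statement coverage of a function but only half of its branch outcomes; a tool such as coverage.py can report both metrics.

# Statement coverage can be 100% while branch coverage is not.

def adjust(x: int) -> int:
    if x > 0:          # branch: can be taken (True) or not taken (False)
        x = x + 1      # statement only reached on the True branch
    return x

# A single test case executes every statement above...
assert adjust(5) == 6      # True branch: 100% statement coverage already

# ...but never exercises the False branch; branch coverage needs a second case.
assert adjust(-3) == -3    # False branch: now both branch outcomes are covered

# With coverage.py, e.g.:
#   coverage run --branch this_file.py && coverage report
# would report statement and branch coverage for these runs.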


Testing phases

• Unit testing: testing of individual units of source code. The definition of "unit" for test purposes depends on the actual software. It is usually done by the programmer.

• Integration testing: units are combined and tested as a whole. Typically, it uses black-box testing.

• System testing (and Alpha testing): the testing team plays the role of end-users. The test is executed at the developers' site.

• Beta testing: a beta version of the software is released to be used by external users (beta testers).

• Acceptance testing: determined by contract. Confirms that the solution works for the customer.
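A minimal sketch (not from the slides) contrasting a unit test with an integration test, using Python's standard unittest framework and two hypothetical components, parse_amount and apply_discount.

# Unit test vs. integration test for two hypothetical units.
import unittest

def parse_amount(text: str) -> int:
    # Unit 1: parse a money amount in cents from a string like "12.50".
    euros, _, cents = text.partition(".")
    return int(euros) * 100 + int(cents or 0)

def apply_discount(cents: int, percent: int) -> int:
    # Unit 2: apply a percentage discount, rounding down to whole cents.
    return cents - (cents * percent) // 100

class UnitTests(unittest.TestCase):
    def test_parse_amount_alone(self):
        # Unit test: exercises a single unit in isolation.
        self.assertEqual(parse_amount("12.50"), 1250)

class IntegrationTests(unittest.TestCase):
    def test_parse_then_discount(self):
        # Integration test: the two units combined and tested as a whole.
        self.assertEqual(apply_discount(parse_amount("10.00"), 20), 800)

if __name__ == "__main__":
    unittest.main()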
(Adapted from slides by Prof. Mancoridis and Prof. Vokolos, Drexel University, PA, USA)

Types of Testing
• Functional
• Non-functional
– Performance: test the run-time performance of the software.
– Stress: execute a system in a manner that demands resources in abnormal quantity,
frequency, or volume.
– Usability: attempt to identify discrepancies between the user interfaces of a product
and the human engineering requirements of its potential users.
– Security: verify that protection mechanisms built into a system will, in fact, protect it
from improper penetration
– Dependability: operate the system for long periods of time and estimate the likelihood that the requirements for failure rates, mean-time-between-failures, and so on, will be satisfied → very, very, very difficult!
– Specific non-functional elements: error detection, checkpointing, recovery, intrusion detection, logging, etc.
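As one invented illustration of a non-functional test, the sketch below is a crude performance smoke test: it asserts that a hypothetical handle_request stays under an assumed latency budget. Real performance and stress testing would use dedicated load-generation tools rather than a loop like this.

# Non-functional test sketch: a performance smoke test with a latency budget.
import time

def handle_request(payload: str) -> str:
    # Stand-in for the operation whose run-time performance is under test.
    return payload.upper()

def test_latency_budget(max_seconds: float = 0.01, repetitions: int = 1000) -> None:
    start = time.perf_counter()
    for _ in range(repetitions):
        handle_request("ping")
    elapsed = time.perf_counter() - start
    # Functional tests check *what* is computed; this checks *how fast*.
    assert elapsed / repetitions < max_seconds, \
        f"too slow: {elapsed / repetitions:.6f}s per call"

test_latency_budget()
print("performance smoke test passed")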


Software faults: a persistent problem

• Software reliability is mainly based on fault avoidance using good software engineering methodologies.

• In real systems (i.e., not toys) → fault avoidance is not enough → fault tolerance is needed, unless the impact of failures is acceptable.

• Rule of thumb for fault density in software (Rome Labs, USA):
– 10-50 faults per 1,000 lines of code → for good software
– 1-5 faults per 1,000 lines of code → for critical applications using highly mature software development methods and having intensive testing
– (Worked example: by this rule of thumb, a "good" 100,000-line system can be expected to ship with roughly 1,000 to 5,000 residual faults.)


Different types of software faults

• In complex systems, the failures caused by software bugs may appear in different ways, defining a first broad classification of software faults (bugs):

• Bohrbugs
– Bugs that cause failures deterministically
– Easiest to find during testing
– Fault tolerance → design diversity and redundancy

• Mandelbugs
– Re-execution after a failure caused by a Mandelbug will generally not cause another failure
– Very difficult to find and correct during testing
– Fault tolerance → simple retries and recovery-oriented computing using checkpointing

• Aging-related bugs
– Bugs that tend to be activated and cause failures after long periods of system run-time
– Difficult to find during testing (but static code analysis is effective for some of them)
– Fault tolerance → software rejuvenation
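A contrived sketch (my own example) of an aging-related bug: a per-request leak that only causes failure after long run-time, which is why short test runs rarely activate it, together with a "rejuvenation" step that proactively resets the accumulated state.

# Aging-related bug sketch: state accumulates until failure after long run-time.
_cache = []  # grows forever: the aging-related defect

def serve_request(data: str, limit: int = 1_000_000) -> str:
    _cache.append(data)              # leaked reference on every call
    if len(_cache) > limit:          # only after enough run-time: resource exhaustion
        raise MemoryError("resource exhaustion after long run-time")
    return data[::-1]

def rejuvenate() -> None:
    # Software rejuvenation: proactively reset the accumulated state
    # (in practice, a planned process or VM restart) before failure occurs.
    _cache.clear()

# A short test run never activates the bug, which is why it is hard to find in testing:
for i in range(10):
    serve_request(f"req-{i}")
rejuvenate()
print("short run completed; the aging defect was never activated")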


Myths about bugs

• Benign bug hypothesis: bugs are nice, tame, and logical.

• Bug locality hypothesis: a bug discovered within a component affects only that
component behavior.

• Control bug dominance: most bugs are in the control structure of programs.

• Corrections optimism: a corrected bug remains correct.

• Silver bullets: language, design method, development process, environment, etc. grant immunity from bugs.

(Adapted from slides by Prof. Mancoridis and Prof. Vokolos, Drexel University, PA, USA)

Software testing axioms


“An axiom is a premise or starting point of reasoning.” (Wikipedia)

1. It is (normally) impossible to test a program completely.


2. Software testing is a risk-based exercise.
3. Testing cannot show the absence of bugs.
4. The more bugs you find, the more bugs there are.
5. Not all bugs found will be fixed.
6. It is difficult to say when a bug is indeed a bug.
7. Specifications are never final.
8. Software testers are not the most popular members of a project.
9. Software testing is a disciplined and technical profession.

(Adapted from slides by Prof. Mancoridis and Prof. Vokolos, Drexel University, PA, USA)


Test case, test suite, oracle

• Test case: inputs to test the program and the predicted outcomes (according to
the specification). Test cases are formal procedures:
– inputs are prepared;
– outcomes are predicted;
– tests are documented;
– commands are executed;
– results are observed and evaluated.

• Test suite: A collection of test cases


• Testing oracle: A mechanism (a program, process, or set of data) that helps us
determine whether the program produced the correct outcome or not.
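A hedged sketch (my own example) of a testing oracle: rather than hard-coding the expected outcome for every input, the oracle is a small program that decides whether an output is correct, here for a hypothetical my_sort by checking that the output is ordered and is a permutation of the input.

# A testing oracle as a program that judges the outcome.
import random
from collections import Counter

def my_sort(values: list[int]) -> list[int]:
    # Hypothetical component under test.
    return sorted(values)

def sorting_oracle(inp: list[int], out: list[int]) -> bool:
    # Oracle: the output is correct iff it is ordered and a permutation of the input.
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = Counter(out) == Counter(inp)
    return ordered and permutation

for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert sorting_oracle(data, my_sort(data)), f"oracle rejected output for {data}"
print("oracle accepted all 100 outcomes")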


Outcome

• Outcome: what we expect to happen as a result of the test. In practice, outcome and output may not be the same.
– Example: if the screen did not change as a result of a test, that is a tangible outcome although there is no output.
– In testing we are concerned with outcomes, not just outputs.

• Question: When does a test “succeed”? When does a test “fail”?

→ A successful test is a test that discovers one or more software faults.
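A small sketch (my own example) of the outcome/output distinction: the expected outcome of this test is that the account state does not change at all, so there is no output value to compare.

# The expected *outcome* is the absence of change; there is no output.
class Account:
    def __init__(self, balance: int) -> None:
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount <= 0 or amount > self.balance:
            return                      # invalid request: rejected silently
        self.balance -= amount

def test_rejected_withdrawal_changes_nothing() -> None:
    account = Account(balance=100)
    account.withdraw(500)               # invalid: exceeds balance, produces no output
    # The outcome under test is that nothing changed, not an output value.
    assert account.balance == 100

test_rejected_withdrawal_changes_nothing()
print("outcome (no state change) verified")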


Expected outcome

• Sometimes, specifying the expected outcome for a given test case can be quite difficult:
– For some applications we might not know what the outcome should be.

– For other applications the developer might have a misconception.

– Or the program may produce too much output to be able to analyze it in a reasonable
amount of time.

• In general, this (i.e., the oracle) is a fragile part of the testing activity, and can be
very time consuming.
• When possible, automation should be considered as a way of specifying the
expected outcome and comparing it to the actual outcome.
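One common automation pattern (my own illustration, not necessarily what the slide has in mind) is a "golden file": the expected outcome is recorded once and every later run is compared against it automatically, which helps when the program produces too much output to analyze by hand.

# Automating the oracle with a stored "golden" (expected outcome) file.
from pathlib import Path

def generate_report(records: list[tuple[str, int]]) -> str:
    # Hypothetical component producing too much output to check by hand.
    lines = [f"{name}: {value}" for name, value in sorted(records)]
    return "\n".join(lines) + "\n"

def check_against_golden(actual: str, golden_path: Path) -> bool:
    if not golden_path.exists():
        golden_path.write_text(actual)   # first run: record the expected outcome
        return True
    return actual == golden_path.read_text()

actual = generate_report([("beta", 2), ("alpha", 1)])
assert check_against_golden(actual, Path("report.golden.txt"))
print("actual outcome matches the stored expected outcome")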

(Adapted from slides by Prof. Mancoridis and Prof. Vokolos, Drexel University, PA, USA)

Regression test and retest

• It is common to introduce problems when modifying existing code, either to correct an existing problem or to otherwise enhance the program.
• Retest: test to verify that a corrected bug was effectively corrected
• Regression test: retest to verify that other code (i.e., not the code that has been modified) still works after the modifications introduced to correct a bug or to add new functionality.
• Options:
– Retest-none (of the test cases)
– Retest-all
– Selective retesting
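A toy sketch (my own example) of the three retest options, using an invented mapping from modules to the tests that exercise them; real selective regression testing derives this mapping from dependencies or coverage data.

# Retest-none vs. retest-all vs. selective retesting (toy module-to-test mapping).
TESTS_BY_MODULE = {
    "parser":  ["test_parse_amount", "test_parse_errors"],
    "billing": ["test_apply_discount", "test_invoice_total"],
    "report":  ["test_report_layout"],
}

def select_tests(changed_modules: set[str], strategy: str) -> list[str]:
    all_tests = [t for tests in TESTS_BY_MODULE.values() for t in tests]
    if strategy == "retest-none":
        return []                                   # rerun nothing
    if strategy == "retest-all":
        return all_tests                            # rerun the whole suite
    if strategy == "selective":
        return [t for module, tests in TESTS_BY_MODULE.items()
                if module in changed_modules for t in tests]
    raise ValueError(f"unknown strategy: {strategy}")

print(select_tests({"billing"}, "selective"))
# -> ['test_apply_discount', 'test_invoice_total']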

(Adapted from slides by Prof. Mancoridis and Prof. Vokolos, Drexel University, PA, USA)

Limitations of testing
• Testing cannot occur until after the code is written.
• The problem is big!
• Exhaustive testing is not practical even for the simplest programs.
• Even if we “exhaustively” test all execution paths of a program, we
cannot guarantee its correctness. The best we can do is increase our
confidence!
• "Testing can show the presence of bugs, not their absence." (Dijkstra)
• Testers do not have immunity to bugs.
• Even the slightest modifications – after a program has been tested –
invalidate (some or even all of) our previous testing efforts.
• Lack of testing tools


Testing priorities

• Only exhaustive testing can show a program is free from defects.


However, exhaustive testing is impossible.
• The problem is to decide what should be tested first or more intensively.
• Some examples:
– System wide functionalities vs component functionalities
– Old functionalities vs new functionalities
– Typical situations vs corner cases.
– More complex code vs simpler code
– …
