Compiled by: Winda Larasati (1411483202)
PREFACE
Praise be to God Almighty, whose mercy is abundant, for enabling the compiler to complete this diktat to the best of her ability. This diktat, entitled "Testing and Implementation", was prepared to fulfill the assignments of the Testing and Implementation course.
In writing this diktat, the author would like to thank Padeli, M.Kom., the supervisor of the Testing and Implementation course, as well as everyone who has provided support in completing it.
The author realizes that the preparation of this diktat is far from perfect; suggestions and constructive criticism are therefore very welcome for future improvement. The author hopes this diktat will be useful both for the author and for the students of the STMIK and AMIK Raharja Colleges.
TABLE OF CONTENTS
PREFACE
TABLE OF CONTENTS
CHAPTER I  INTRODUCTION
CHAPTER II  BASICS OF TESTING
II.7.1 Operability
II.7.2 Observability
II.7.3 Controllability
II.7.4 Decomposability
II.7.5 Simplicity
II.7.6 Stability
II.7.7 Understandability
CHAPTER III  DESIGN TEST CASE
CHAPTER I
INTRODUCTION
I.1 Definition of Testing
According to Hetzel (1973):
Testing is the process of establishing confidence that a program or system performs as expected.
According to Myers (1979):
Testing is the process of executing a program or system with the intent of finding errors.
According to Hetzel (1983, revised):
Testing is any activity used to evaluate an attribute or capability of a program or system and to determine whether it meets its required results.
According to the ANSI/IEEE 1059 standard:
Testing is the process of analyzing a software entity to detect the differences between existing and required conditions (defects/errors/bugs) and to evaluate the features of the software entity.
Some practitioners' views of testing are as follows:
A. Checking the program against its specification.
B. Finding bugs in the program.
C. Determining user acceptance.
D. Ensuring a system is ready for use.
E. Increasing confidence in the performance of the program.
F. Showing that the program works correctly.
G. Proving that errors do not occur.
H. Recognizing the limitations of the system.
I. Learning what the system cannot do.
J. Evaluating the capabilities of the system.
K. Verifying the documents.
L. Ensuring that the work has been completed.
The following is the notion of testing in relation to the verification and validation of software:
Software testing is the process of operating the software under controlled conditions in order to (1) verify whether it behaves as specified, (2) detect errors, and (3) validate whether the established specification meets the actual desires or needs of the users.
Verification is the checking or testing of entities, including software, for compliance and consistency by evaluating the results against the requirements that have been set. ("Are we building the system right?")
Validation looks at the correctness of the system: whether the processes written in the specification are really what the user wants or needs. ("Are we building the right system?")
Error detection: testing must be oriented toward provoking errors intensively, to determine whether something happens when it should not, or whether something that should happen does not.
From the definitions above, we can see that practitioners hold many different views of testing. In general, however, testing should be seen as an activity that runs thoroughly and continuously throughout the development process. Testing is the activity of collecting the information needed to evaluate the effectiveness of the work.
So any activity used objectively to help us evaluate or measure an attribute of software can be termed a testing activity, including the reviews, walk-throughs, inspections, and assessments and analyses performed during the development process. The ultimate goal is to obtain, in the easiest and most effective way, information about the software that can be reproduced consistently (reliably), answering questions such as:
1. Is the software ready for use?
2. What are the risks?
3. What are its capabilities?
4. What are its limitations?
5. What are its problems?
I.2
I.3
shareholders, magazine reviewers, and others, where each type of customer has its own viewpoint on quality.
Testing makes quality visible objectively, because testing is a measurement of software quality. In other words, testing is a means of quality control (QC): QC measures the quality of the product, while quality assurance (QA) measures the quality of the process used to make a quality product.
However, testing cannot ensure the quality of the software; it can only provide confidence or assurance in the software to a certain degree, because testing is a proof, under controlled conditions, that the software functions as expected for the test cases used.
QA and product development are activities that run in parallel. QA includes reviews of the development methods and standards and reviews of all documentation (not only for standardization but also for verification and clarity). Overall, QA also includes validation of the code.
The task of QA is a superset of testing. Its mission is to help minimize the risk of project failure. Each QA individual must understand the causes of project failure and help the team prevent, detect, and fix problems. The testing team is therefore sometimes referred to as the QA team.
I.4
E. Integrity (Integrity)
F. Engineering (Quality In)
G. Efficiency (Efficiency)
H. Testability (Testability)
I. Documentation (Documentation)
J. Structure (Structure)
K. Adaptability (Quality Forward)
L. Flexibility (Flexibility)
M. Reusability (Reusability)
N. Maintainability (Maintainability)
A good test should therefore be able to measure all the relevant factors; of course, each factor will have a different level of importance from one application to another. For example, in a typical business system, the usability and maintainability factors are key, while for technical programs they might not be.
For testing to be fully effective, it must be run so that it measures each relevant factor, thereby making quality real and visible.
I.5
of the software are its cost and schedule; the root causes of the problem are rather the insufficient engineering capability of the software developers and the very poor (sometimes nonexistent) ability of the customers to provide a requirements specification for the system.
By being quality oriented, a software organization will be able to carry out continuous analysis, evaluation, and improvement to achieve a software development process that is increasingly effective, efficient, scalable, controllable, and consistently repeatable in producing a quality product (software) on time and on budget.
This gives customers/clients an assurance of obtaining the product they expect, which in turn increases their confidence in the developer's ability; this matters to the software organization because the client relationship and the development are long-term and continuous.
CHAPTER II
BASICS OF TESTING
If a designer must instill testability deeply in his mind, and programmers must be oriented toward a "zero defect" mindset, then the tester must have a fundamental desire to "prove that the code fails, and do whatever it takes to make it fail." If a tester merely wants to prove that the code behaves according to its business functions, then the tester has failed in his duty as a tester.
This is not to say that testing from a business standpoint is wrong, but it is important to point out, because most testing in organizations looks only at "proving that the code works as specified."
Complexity:
User interfaces and designs are too complex for complete testing. If a design error occurs, how can a tester declare a bug to be a bug when it is in the specification, and beyond that, how can a tester recognize that a particular system behavior is a bug? Validating the program purely on its logic is also not possible, because it would be too time-consuming, and there are time limits.
Program paths:
There are far too many paths that a complete test of a program would have to cover. Take, for example, the program code in Figure 2.1, and trace all the paths from the beginning (START) to the end (END). X can either go to END or loop back to A up to 19 times. There are five paths from A to X, namely: ABCX, ABDEGX, ABDEHX, ABDFIX, and ABDFJX. The total number of path combinations to be tested is therefore 5 + 5^2 + 5^3 + ... + 5^20 ≈ 10^14 (100 trillion).
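The arithmetic above can be checked directly with a short sketch (only the path counts from the text are used; the figure itself is not reproduced here):

```python
# Each pass through the loop can take one of the 5 paths from A to X
# (ABCX, ABDEGX, ABDEHX, ABDFIX, ABDFJX), and the loop body can run
# from 1 up to 20 times, so the number of path combinations is a
# geometric sum.
total = sum(5 ** k for k in range(1, 21))
print(total)  # 119209289550780, on the order of 10**14
```

Even at one test per microsecond, exercising every combination would take nearly four years, which is why complete path testing is impractical.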
II.4.4 Test design
Test specifications are developed as follows: the test plan is made after the required model has been created, and the detailed test case definitions are made after the design model has been approved. In other words, tests are planned and designed before the code is written.
Testing should proceed from the small to the large, incrementally. The first test plans and executions focus on individual components; later testing focuses on finding errors in clusters of related components and in the overall system.
How do we judge whether a particular test is right? By weighing the confidence it can produce against the cost charged for running it.
Test planning is very important because it drives the discovery of problems and defects.
Test hypotheses concern the types and quantity of defects in the program; experiments are then designed to verify or assess this estimate. This view is reflected in the testing motto stated by Myers in 1976:
A good test case is one that has a high probability of detecting previously undiscovered defects, not one that shows the program working correctly.
One of the most difficult problems in testing is knowing when to stop. It is impossible to test your own programs.
A necessary part of each test case is a description of the expected output. Avoid unproductive or "on the fly" testing. Write test cases for invalid conditions as well as valid ones. Inspect the results of each test.
As the number of defects detected in a part of the program increases, the probability of further undetected defects there also increases. Assign your best programmers to do the testing.
Make sure that testability is a key objective in the design of your software. The system should be designed so that each module is integrated into the system only once. Never change the program just to make testing easier (except by a permanent change). Testing, like most other activities, should start with objectives.
II.6.2
II.6.3
II.6.4
II.6.5
not aware of the qualities. Another cause is one of the testing myths as described above.
II.6.6
II.6.7
II.7 Testability
Ideally, a software engineer designs a computer program, system, or product with testability in mind. This allows testing to be designed more easily and test cases to be more effective.
Simply put, according to James Bach, software testability is how easily (a computer program) can be tested.
Since programmers are sometimes willing to help the testing process, a list of possible design points, features, and so on is very helpful when working with them. Here is a set of characteristics that lead to testable software.
II.7.1 Operability
"The better the software works, the more efficiently it can be tested."
1. The system has few bugs (bugs add analysis and reporting overhead to the testing process).
2. No bugs block the execution of tests.
3. The product evolves in functional stages (allowing simultaneous development and testing).
II.7.2 Observability
"What you see is what you test."
1. Each output should reveal the input that produced it.
2. System states and variables can be viewed or queried during execution.
3. Past system states and variables can also be viewed or queried.
4. All factors that influence the output are visible.
5. Incorrect output is easily identified.
6. Internal errors are automatically detected through self-testing mechanisms.
7. Internal errors are automatically reported.
8. The source code is accessible.
II.7.3 Controllability
"The better we can control the software, the more the testing can be automated and optimized."
II.7.4 Decomposability
"By controlling the scope of testing, we can more quickly isolate problems and perform better testing."
1. The software system is built from independent modules.
2. The software modules can be tested independently (on their own).
II.7.5 Simplicity
"The less there is to test, the more quickly we can test it."
1. Functional simplicity (the feature set is the minimum needed to meet the existing requirements).
2. Structural simplicity (the architecture is as simple as possible to avoid errors).
3. Code simplicity (a coding standard is adopted so the code is easily inspected and maintained).
II.7.6 Stability
"The fewer the changes, the fewer the disruptions to testing."
1. Changes to the software are infrequent.
2. Changes to the software are controlled.
3. Changes to the software do not invalidate existing tests.
4. The software recovers well (resumes running) after failures.
II.7.7 Understandability
"The more information we have, the better we can test."
1. The design is easy to understand and well understood.
2. The dependencies between internal, external, and shared components are well understood.
3. Changes to the design are communicated.
A good test should have the highest likelihood of finding errors [KAN93]. A test group working within the same limited intent, time, and resources will execute only a subset of the possible tests; in such cases, the tests with the highest probability of uncovering a whole class of errors should be used.
A good test should be neither too simple nor too complex. Although it is sometimes possible to combine a series of tests into a single test case, the possible side effect of this approach is that errors go undetected. In general, each test must be executed separately.
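As a minimal illustration of executing each test separately (the function under test, `math.isqrt`, is just an arbitrary stand-in):

```python
import math

def test_valid_input():
    # a valid condition: a perfect square
    assert math.isqrt(16) == 4

def test_invalid_input():
    # an invalid condition: negative input must be rejected
    try:
        math.isqrt(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Each test case is executed on its own, so one failure cannot mask
# or contaminate the result of another.
for test in (test_valid_input, test_invalid_input):
    test()
print("2 tests passed")
```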
A QA background
Standard tests
Integration testing
Functional testing
End-To-End Testing
GUI Testing
Banking / Finance
Internet Y2K
The capabilities of an application tester fall into three groups, namely:
1. Functional capability in the subject area concerned
2. Technology basics
3. Testing and QA techniques
For that reason, it is necessary to know the personality attributes expected of a tester, namely:
Positive attributes that should be developed:
A. Planned, systematic, and careful (not frivolous): a logical approach to testing.
B. A champion's mentality, such as applying high quality standards when working on a project.
C. Resolute: not a quitter.
D. Practical: aware of what can be achieved within a given time limit and budget.
E. Analytical: having intuition in choosing an approach to digging out errors.
F. Moral: striving for quality and success, understanding and being aware of the costs of low quality.
Negative attributes that should be avoided:
A. Little empathy for developers: being easily drawn into childish emotional conflict with the developers.
B. Poor diplomacy: creating conflict with the developers by showing a hostile face. A tester should remain friendly face to face with the developers while uncovering defects, and should report defects accompanied by statistics on the occurrence of defects and bugs.
C. Skepticism: treating information from the developers with suspicion (disbelief).
D. Stubbornness: being inflexible when discussing a proposal.
A tester also needs to know the obstacles faced when working with developers. Developers will generally tend to run away and hide from testers if they:
A. Believe that the tester interferes with their work and adds complexity to their problems.
B. Fear discussing matters relating to the development in progress, worrying that it will be used against them.
Analysis of needs
Code inspection
Glass-box testing
Fault-tolerant design
Black-box testing
Defensive programming
Tester training
Usability analysis
Beta testing
A clear specification
Test automation
Usability testing
Design review
Most estimates attribute about 25% of development costs to testing. Some projects may even spend approximately 80% of their development funds on it (for the reasons outlined below).
II.11.2 Costs of defects
For software developers in particular, the costs of defects include the following:
1. Keeping support technicians on standby.
2. Producing getting-started and FAQ guide books.
3. Investigating customer complaints.
4. Paying compensation and recalling the product.
5. Re-coding or re-testing to fix bugs.
6. Shipping the repaired product.
7. The added cost of supporting multiple versions of the product already released.
8. Public-relations work to explain reviews of the defects.
Figure 2.5 Graph of the relationship between testing effort and the cost of failure.
The graph above can be correlated with cost allocation, based on experience and estimation, or on internal measurement and data analysis.
The more critical the project, the higher the cost of defects becomes. This indicates that more resources can be allocated to achieve a higher proportion of defect removal, as shown below:
Figure 2.6 Graph of the relationship between testing effort and variations in the cost of failure.
Planning
Measurement
Execution of tests
Termination check
Aids
Not used.
Data
Usually not recorded.
II.15.2 System testing in general practice
Aim
Assembling the modules into a working system, and determining readiness for the acceptance test.
Doer
The team leader or the test group.
What is tested
The requirements and functions of the system.
The system interfaces.
When it is finished
Usually, when the majority of the requirements are met and no major errors have been found.
Aids
System test case libraries.
Test data generators, comparators, and simulators.
Data
The errors found.
The test cases.
II.15.3 Acceptance testing in general practice
Aim
Evaluating readiness for use.
Doer
The end user or dealer.
What is tested
The major functions.
The documentation.
The procedures.
When it is finished
Normally, when the user is satisfied or the tests run smoothly/successfully.
Aids
A comparator.
Data
The formal acceptance documents.
CHAPTER III
DESIGN TEST CASE
The execution of a test case causes the program to execute certain statements, corresponding to specific paths, as illustrated in the flow graph. Branch, statement, and line coverage are formed from the execution paths of the program, corresponding to the points (nodes), arrows (edges), and lines (paths) of the flow graph.
Statement coverage
Statement coverage is determined by assessing the proportion of statements exercised by a given set of test cases. 100% statement coverage means that every statement in the program is exercised at least once by a test.
Statement coverage corresponds to the points (nodes) of the flow graph: 100% coverage occurs when every node is visited by the paths traversed by the test cases.
Branch coverage
Branch coverage is determined by assessing the proportion of decision branches exercised by a given set of test cases. 100% branch coverage means that every branch of every decision in the program is exercised at least once by a test.
Branch coverage corresponds to the branch edges (arrows) of the flow graph: 100% coverage occurs when every branch edge is traversed by the paths taken by the test cases.
Figure 3.5 Example of 100% branch coverage without 100% path coverage.
It is possible to achieve 100% branch coverage by traversing all the branch edges without traversing every existing path (100% path coverage). Example program:
Figure 3.6 Example of 100% branch coverage without 100% path coverage.
From the example above, only two paths are needed to visit all the branch edges, out of the four paths that exist in the flow graph.
Thus 100% path coverage automatically implies 100% branch coverage; similarly, 100% branch coverage automatically implies 100% statement coverage.
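Statement coverage can be measured with a small sketch using Python's trace hook. The function `classify` and the test inputs below are hypothetical, chosen only to show one test suite leaving a statement unexercised:

```python
import sys

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

def run_with_coverage(inputs):
    """Return the set of line numbers of classify executed by inputs."""
    executed = set()
    def tracer(frame, event, arg):
        # record only 'line' events inside the function under test
        if event == "line" and frame.f_code is classify.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    for x in inputs:
        classify(x)
    sys.settrace(None)
    return executed

partial = run_with_coverage([5])       # never takes the negative branch
full = run_with_coverage([5, -3])      # visits every statement
print(len(partial) < len(full))        # True: a statement was missed
```

Production tools (e.g. `coverage.py`) work on the same principle but also report which lines remain unvisited.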
Designing coverage tests
To design coverage tests, note the following stages:
1. Analyze the source code to create a flow graph.
2. Identify the test paths needed to achieve the coverage target, based on the flow graph.
3. Evaluate the test conditions to be achieved by each test.
4. Provide input and output values based on those conditions.
III.2.2 Basis Path Testing
Basis path testing is a white-box testing technique introduced by Tom McCabe [MC76]. This method allows the test case designer to measure the complexity of the procedural design and use it as a guide in determining a basis set of execution paths, which guarantees that every statement in the program is executed at least once during testing.
Methods that identify paths based on the lines, structure, or connections of a system are commonly referred to as branch testing, because the branches of the code or of the logical functions are identified and tested; this is also known as control-flow testing. Basis paths come in two forms, namely:
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
[Regions/Complexity] V(G) = E (edges) - N (nodes) + 2
For the sample flow graph (Figure 3.10):
V(G) = 11 - 9 + 2 = 4
V(G) = P (predicate nodes) + 1
For the sample flow graph (Figure 3.10): V(G) = 3 + 1 = 4
Based on the flow sequence, a basis set for the flow graph (Figure 3.10) is found:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-7-9-10-1-11
Path 4: 1-2-3-6-8-9-10-1-11
The stages in creating test cases using cyclomatic complexity are:
1. Using the design or code as a basis, draw a flow graph.
2. Based on the flow graph, determine the cyclomatic complexity.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force the execution of each path in the basis set.
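The V(G) computation can be sketched for this graph. Note that the adjacency list below is reconstructed from the four listed paths (node labels assumed), so it has 13 edges and 11 nodes rather than the figure's 11 and 9 (sequential nodes are merged in the figure), but V(G) comes out the same:

```python
# Flow graph reconstructed from paths 1-11, 1-2-3-4-5-10-1-11,
# 1-2-3-6-7-9-10-1-11, and 1-2-3-6-8-9-10-1-11.
graph = {
    1: [2, 11], 2: [3], 3: [4, 6], 4: [5], 5: [10],
    6: [7, 8], 7: [9], 8: [9], 9: [10], 10: [1], 11: [],
}
E = sum(len(succ) for succ in graph.values())            # edges
N = len(graph)                                           # nodes
P = sum(1 for succ in graph.values() if len(succ) > 1)   # predicate nodes
print(E - N + 2, P + 1)  # V(G) by both formulas: 4 4
```

Both formulas agree, and V(G) = 4 matches the four basis paths listed above.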
Examples of test cases from Figure 3.10:
Test case for path 1:
Value (record.eof) = valid input, where record.eof = true.
Expected result: the system exits the loop and the subprogram.
Test case for path 2:
Value (field 1) = valid input, where field 1 = 0.
Value (record.eof) = valid input, where record.eof = false.
Value (counter) = value (counter) + 1.
Expected result: the system performs [process record], [store in buffer], and [increment counter].
Test case for path 3:
The processing time expected while the transfer process runs online.
The memory required while the transfer process runs online.
The resources required while the transfer process runs online.
A. Boolean operators
B. Boolean variables
C. Pairs of Boolean parentheses (as found in simple or complex conditions)
D. Relational operators
E. Arithmetic expressions
If a condition is wrong, then at least one component of the condition is wrong. The types of errors in a condition are as follows:
1. Boolean operator errors
2. Boolean variable errors
3. Boolean parenthesis errors
4. Relational operator errors
5. Arithmetic expression errors
The condition testing method focuses on testing each condition in the program. Condition testing strategies have two advantages, namely:
a. Measuring the coverage of a condition test is simple.
b. The condition coverage of the program provides guidance for preparing additional tests for the program.
The objective of condition testing is to detect not only errors in the conditions of the program but also other errors in the program. Some condition testing strategies:
Branch testing
This is the simplest condition testing strategy. For a compound condition C, the true and false branches of C, and every simple condition in C, should each be executed at least once [MYE79]. As an illustrative example, assume there is the piece of code below:
The third test, with E1 and E2 representing the values 1 and 2, is obtained from the input (X, Y, Z) = (0, 4, 2), so that E1 < E2, and the expected result of the condition is false.
For a Boolean expression with n variables, all 2^n possible tests are required (n > 0).
This strategy can detect errors in Boolean operators, Boolean variables, and Boolean parentheses, but it is practical only if n is small.
Example:
The domain testing then does not require 2^2 = 4 tests, but only two tests, namely the values (X, Y) = (t, t), to evaluate the condition as true, and (X, Y) = (f, t), as a representative of the remaining input possibilities, to evaluate the condition as false.
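A small sketch of this reduction (the condition X & Y is assumed from the example):

```python
from itertools import product

cond = lambda x, y: x and y   # the condition under test (assumed X & Y)

# Exhaustive testing needs 2**n = 4 cases for n = 2 variables...
exhaustive = list(product([True, False], repeat=2))

# ...but domain testing needs only two: (t, t) drives the condition
# true, and (f, t) represents the remaining inputs driving it false.
reduced = [(True, True), (False, True)]

print(len(exhaustive), [bool(cond(x, y)) for x, y in reduced])
# 4 [True, False]
```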
BRO (Branch and Relational Operator) Testing [TAI89]
This technique guarantees the detection of branch and relational operator errors in a condition, provided that all the Boolean variables and relational operators in the condition occur only once and no variables are shared.
The BRO testing strategy uses condition constraints for a condition C. A condition constraint for C with n simple conditions is defined as (D1, D2, ..., Dn), where Di (0 < i ≤ n) is a symbol specifying a constraint on the outcome of the i-th simple condition of C.
A condition constraint D for C is said to be covered by an execution of C if, during that execution, the outcome of each simple condition in C satisfies the corresponding constraint in D.
For a Boolean variable B, we specify a constraint on the outcome of B stating that B must be either true (t) or false (f). Similarly, for a relational expression, the symbols <, =, > are used to specify constraints on the outcome of the expression. As an illustration, the following examples are given:
Example 1:
Consider a condition C1: B1 & B2, where B1 and B2 are Boolean variables.
The condition constraint for C1 has the form (D1, D2), where each of D1 and D2 is t or f. The value (t, f) is a condition constraint for C1, covered by a test that makes the value of B1 true and the value of B2 false.
The BRO testing strategy requires the constraint set {(t, t), (f, t), (t, f)} to be covered by executions of C1. If C1 is incorrect due to one or more Boolean operator errors, at least one member of the constraint set will force C1 to fail.
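A sketch of how the BRO constraint set exposes a Boolean operator error (the seeded fault, & mistyped as |, is hypothetical):

```python
# BRO constraint set for C1: B1 & B2.
constraints = [(True, True), (False, True), (True, False)]

correct = lambda b1, b2: b1 and b2   # the intended condition
faulty  = lambda b1, b2: b1 or b2    # seeded Boolean operator error

# At least one constraint makes the faulty condition disagree with
# the correct one, so the error is detected.
detected = any(correct(b1, b2) != faulty(b1, b2) for b1, b2 in constraints)
print(detected)  # True
```

Here the constraints (f, t) and (t, f) both distinguish `and` from `or`, so any covering test set catches this fault.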
Example 2:
Consider a condition C2: B1 & (E3 = E4), where B1 is a Boolean expression and E3 and E4 are arithmetic expressions.
A simple data flow testing strategy requires that every DU chain be covered at least once; for that reason, this data flow testing strategy is also called the DU testing strategy.
DU testing does not always guarantee coverage of all the branches of a program. However, a branch left uncovered by DU testing is a rare situation, such as an IF-THEN-ELSE construct whose THEN part contains no variable definition and which has no ELSE part. In that situation, the ELSE branch of the IF statement does not need to be covered by DU testing.
Data flow testing strategies are very useful for determining test paths of a program that contains nested if statements and loops. As an illustration of applying DU testing, select test paths of the following PDL:
The B5, B6, and other DU chains can be covered by making the loop iterate through the five paths accordingly.
If the branch testing strategy is used to select test paths of the PDL above, no additional information is needed. To select test paths of the diagram for BRO testing, knowledge of the structure of each condition or block is required. (After selecting a program path, we must determine whether the path is feasible for the program, i.e. whether at least one input traverses the path.)
Since the statements in a program are related to each other according to the definitions and uses of variables, the data flow testing approach is effective for detecting errors. However, the problems of measuring test coverage and selecting test paths for data flow testing are more difficult than the corresponding problems for condition testing.
It is unrealistic to assume that data flow testing will be used extensively when testing a large system, but it can be targeted at areas of the software suspected of being the source of errors.
III.2.7 Loop Testing
The following set of tests can be applied to simple loops, where n is the maximum number of passes allowed through the loop:
1. Skip the loop entirely: no iterations/passes through the loop.
2. Make only one pass through the loop.
3. Make two passes through the loop.
4. Make m passes through the loop, where m < n.
5. Make n-1, n, and n+1 passes through the loop.
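The five rules translate directly into a list of iteration counts to test; a minimal sketch (the values of n and m here are arbitrary illustrative choices):

```python
def loop_test_counts(n, m):
    """Iteration counts for testing a simple loop with at most n
    passes; m is any interior value with 2 < m < n."""
    return [0, 1, 2, m, n - 1, n, n + 1]

print(loop_test_counts(n=100, m=50))
# [0, 1, 2, 50, 99, 100, 101]
```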
Nested loops
If the test approach for simple loops were extended to nested loops, the number of possible tests would grow geometrically with the level of nesting. Beizer [BEI90] provides an approach that helps to reduce the number of tests:
1. Start at the innermost loop. Set all other loops to their minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for values outside or excluded from the iteration parameter range.
3. Work outward, conducting tests for the next loop, while keeping all outer loops at their minimum values and the other nested loops at "typical" values.
4. Continue until all the loops have been tested.
Concatenated loops
Concatenated loops can be tested using the approach defined for simple loops, provided each loop is independent of the others. Two loops are not independent if they are concatenated and the value of the loop counter of loop 1 is used as the initial value of loop 2. When the loops are not independent, the approach applied to nested loops is recommended.
Unstructured Loops
mechanism to ensure that the existing test cases adequately cover all aspects of the system.
Test case design is done manually; there are no automated tools to specify the test cases required by a system, because every system is different, and test tools cannot know the rules of right and wrong for an operation. Test design requires the experience, reasoning, and intuition of a tester.
Specifications as testing guidance
The specification or system model is the starting point for test design. It can be a functional specification, a performance or safety specification, a specification of user scenarios, or a specification based on the risks of the system. The specification describes the criteria used to determine correct versus unacceptable operation, as the reference for executing the tests.
In many cases, usually involving old systems, there is little or no documented system specification. In such cases, the role of end users who know the system is much needed as input to the test design, in place of a system specification document. Nevertheless, the specification should still be documented; it can be made in a simple form that contains the set of top-level test objectives.
Decomposition of test objectives
Test design focuses on the specification of the component being tested.
Top-level test objectives are based on the component specification. Each of
these objectives is then decomposed into further objectives or test cases
using test design techniques.
There are many test design techniques that can be selected according to
the type of testing to be used [BCS97A], namely:
Equivalence Class Partitioning
Boundary Value Analysis
State Transitions Testing
Cause-Effect Graphing
When the nodes have been identified, relationships and connection
weights are assigned. Relationships should be named, although relationships
between objects that merely represent program control flow do not actually
need names.
In many cases the graph model may contain loops (i.e., paths in the graph
that pass through one or more nodes and are traversed more than once). Loop
testing can also be applied at the black-box level. The graph helps in
identifying the loops that need to be tested.
Here is an illustration of transitivity, with three objects X, Y,
and Z related as follows:
X is needed to calculate Y
Y is needed to calculate Z
By transitivity, the relationship between X and Z is:
X is needed to calculate Z
Based on this relationship, tests intended to find errors in the calculation
of Z must cover variations of both X and Y.
When test case design begins, the first objective is to achieve node
coverage. This means tests should be designed so that no node is missed and
every node weight (object attribute) is correct.
Next, relationships are covered according to their properties. For example,
a symmetric relationship is tested in both directions. A transitive
relationship is tested to prove transitivity. A reflexive relationship is
tested to ensure the existence of a null loop. When connection weights have
been specified, tests are developed to prove that the weights are valid. And
finally, loop testing is applied.
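The transitivity illustration above can be sketched as follows. The two calculation steps, calc_y and calc_z, are hypothetical stand-ins; the point is that a test of Z must vary X (which reaches Z only through Y) as well as Y directly.

```python
def calc_y(x):
    # hypothetical first step: Y is computed from X
    return 2 * x + 1

def calc_z(y):
    # hypothetical second step: Z is computed from Y
    return y * y

# By transitivity, errors in X propagate to Z through Y,
# so tests for Z vary X across several values...
for x in (-3, 0, 7):
    z = calc_z(calc_y(x))
    assert z == (2 * x + 1) ** 2

# ...and also vary Y directly at values of its own.
for y in (-1, 0, 1):
    assert calc_z(y) == y * y
```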
III.3.3 Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the
input domain of a program into classes of data from which test cases can be
derived [BCS97A].
It is based on the premise that the inputs and outputs of a component can
be partitioned into classes which, according to the component's specification,
will be treated identically (equivalently) by the component. It can also be
assumed that identical inputs produce identical responses.
A single value in an equivalence partition is assumed to represent all the
values in the partition. This reduces the otherwise impossible problem of
testing every input value (see the testing principle: complete testing is
impossible).
The guidelines for applying equivalence partitioning are as follows:
If an input has a certain range, define valid and invalid categories for
the range.
If an input requires a specific value, define valid and invalid categories.
If an input requires a particular set of values, define valid and invalid
categories.
If an input is boolean, define valid and invalid categories.
Possible combinations in equivalence partitioning include:
The input value is valid or invalid.
A numeric value is negative, positive, or zero.
A string is empty or non-empty.
A list is empty or non-empty.
A data file exists or not, and is readable/writable or not.
A date is after 2000 or before 2000, and in a leap year or not
(particularly 29 February 2000, which has its own processing).
A date is in a month of 28, 29, 30, or 31 days.
A day is a weekday or a weekend day.
A time is inside or outside office hours.
The file type, such as text, formatted data, graphics, video, or sound.
The source or destination of a file, such as hard drive, floppy drive,
CD-ROM, or network.
An illustrative example
Consider a function, generate_grading, with the following specification:
The function has two marks as input, an "Exam" mark (out of 75) and a
"Coursework" mark (out of 25). It grades the course in the range 'A' to 'D'.
The grade is computed from the two marks, summed as the total of the "Exam"
and "Coursework" values, as stated below:
Greater than or equal to 70 - 'A'
Greater than or equal to 50 but less than 70 - 'B'
Greater than or equal to 30 but less than 50 - 'C'
Less than 30 - 'D'
A value outside the expected range produces a fault message ('FM'). All
inputs are integers.
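The specification above is complete enough to sketch a reference implementation. This is one plausible reading (integer inputs, Exam in 0-75, Coursework in 0-25, anything else yielding 'FM'), not the book's own code.

```python
def generate_grading(exam, coursework):
    """Grade a course from an Exam mark (0-75) and a Coursework
    mark (0-25); out-of-range or non-integer input yields the
    fault message 'FM'."""
    if not isinstance(exam, int) or not isinstance(coursework, int):
        return 'FM'
    if not (0 <= exam <= 75) or not (0 <= coursework <= 25):
        return 'FM'
    total = exam + coursework
    if total >= 70:
        return 'A'
    if total >= 50:
        return 'B'
    if total >= 30:
        return 'C'
    return 'D'

print(generate_grading(44, 15))   # total 59, a 'B'
print(generate_grading(-10, 15))  # invalid Exam mark: 'FM'
```

The partitioning and boundary-value test cases in the rest of this section can be checked directly against such an implementation.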
Partition analysis
The tester builds a model of the component under test as partitions of
its input and output values. The inputs and outputs are taken from the
specification of the component's behaviour.
A partition is a set of values selected so that all values in the
partition are expected to be treated in the same way by the component (for
example, to undergo the same processing).
Partitions for both valid and invalid values should be determined. For the
generate_grading function there are two inputs:
"Exam"
2. A small set of test cases is created to cover all the partitions. The
same test case may be reused for other partitions.
Partitions one-to-one with test cases
Test cases for the "Exam" input partitions are as follows:

Test Case         | 1            | 2     | 3
Exam input        | 44           | -10   | 93
Coursework input  | 15           | 15    | 15
Total mark        | 59           | 5     | 108
Partition tested  | 0 <= e <= 75 | e < 0 | e > 75
Expected output   | B            | FM    | FM
Test cases for the "Coursework" input partitions:

Test Case         | 4            | 5     | 6
Exam input        | 44           | 40    | 40
Coursework input  | 4            | -15   | 47
Total mark        | 48           | 25    | 87
Partition tested  | 0 <= c <= 25 | c < 0 | c > 25
Expected output   | C            | FM    | FM
Test cases for the real-valued and alphabetic (non-integer) partitions:

Test Case         | 7         | 8            | 9               | 10
Exam input        | 48.7      | (alphabetic) | 40              | 40
Coursework input  | 15        | 15           | 12.76           | (alphabetic)
Total mark        | 63.7      | -            | 52.76           | -
Partition tested  | Exam real | Exam alpha   | Coursework real | Coursework alpha
Expected output   | FM        | FM           | FM              | FM
Test cases for the total-mark output partitions:

Test Case         | 11    | 12          | 13
Exam input        | -10   | 12          | 32
Coursework input  | -10   | 5           | 13
Total mark        | -20   | 17          | 45
Partition tested  | t < 0 | 0 <= t < 30 | 30 <= t < 50
Expected output   | FM    | D           | C
Test Case         | 14           | 15            | 16
Exam input        | 40           | 60            | 80
Coursework input  | 22           | 20            | 30
Total mark        | 62           | 80            | 110
Partition tested  | 50 <= t < 70 | 70 <= t <= 100 | t > 100
Expected output   | B            | A             | FM
The "Exam" and "Coursework" input values are chosen to produce the
required total mark. Finally, for the invalid output partitions (such as
"A+" and null):

Test Case         | 17   | 18   | 19
Exam input        | -10  | 100  | null
Coursework input  | 10   | null | -10
Expected output   | FM   | FM   | FM
Test Case         | 1   | 2
Exam input        | 60  | -10
Coursework input  | 20  | -15
Total mark        | 80  | -25
Expected output   | A   | FM
The value on each side of a boundary is selected so that it differs from
the boundary value by the smallest possible amount (e.g., a difference of 1
for integers).
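A minimal sketch of this selection rule for integer inputs; the helper name and the idea of filtering to the values one step either side of each boundary are mine, not the book's.

```python
def boundary_test_values(lower, upper):
    """For an integer input valid in [lower, upper], return the
    classic boundary-value candidates: each boundary plus the
    nearest value on either side (a difference of 1)."""
    return sorted({lower - 1, lower, lower + 1,
                   upper - 1, upper, upper + 1})

# The Exam mark is valid in [0, 75]:
print(boundary_test_values(0, 75))   # [-1, 0, 1, 74, 75, 76]
```

The same call with (0, 25) yields the Coursework boundary cases used below.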
In the earlier generate_grading function example (see equivalence
partitioning), the "Exam" input partition has the boundary values 0 and 75,
giving the following test cases:

Test Case         | 1   | 2  | 3  | 4  | 5  | 6
Exam input        | -1  | 0  | 1  | 74 | 75 | 76
Coursework input  | 15  | 15 | 15 | 15 | 15 | 15
Total mark        | 14  | 15 | 16 | 89 | 90 | 91
Boundary value    | 0   | 0  | 0  | 75 | 75 | 75
Expected output   | FM  | D  | D  | A  | A  | FM
The "Coursework" mark has the boundary values 0 and 25, which generate
the test cases below.
The equivalence partitioning example also considered partitions of the
Exam and Coursework marks for non-integer and non-numeric input, i.e., real
numbers and alphabetic values. Bear in mind that these are not boundaries,
and therefore no boundary values can be identified from them for use in test
cases.
Equivalence partitions of the resulting grade are also considered. The
boundary values of the grade-output partitions are 0, 30, 50, 70, and 100.
Test Case         | 7   | 8  | 9  | 10 | 11 | 12
Exam input        | 40  | 40 | 40 | 40 | 40 | 40
Coursework input  | -1  | 0  | 1  | 24 | 25 | 26
Total mark        | 39  | 40 | 41 | 64 | 65 | 66
Boundary value    | 0   | 0  | 0  | 25 | 25 | 25
Expected output   | FM  | C  | C  | B  | B  | FM
Test cases 13-18 exercise the total-mark boundaries 0 and 30 (totals -1,
0, 1, 29, 30, 31; expected outputs FM, D, D, D, C, C).
Test Case         | 19 | 20 | 21 | 22 | 23 | 24
Exam input        | 24 | 50 | 26 | 49 | 45 | 71
Coursework input  | 25 | 0  | 25 | 20 | 25 | 0
Total mark        | 49 | 50 | 51 | 69 | 70 | 71
Boundary value    | 50 | 50 | 50 | 70 | 70 | 70
Expected output   | C  | B  | B  | B  | A  | A
Test Case         | 25  | 26  | 27
Exam input        | 74  | 75  | 75
Coursework input  | 25  | 25  | 26
Total mark        | 99  | 100 | 101
Boundary value    | 100 | 100 | 100
Expected output   | A   | A   | FM
For the invalid grade-output partitions used in the equivalence
partitioning example (such as E, A+, and null), no boundary can be
identified, and therefore no boundary-value test cases can be derived from
them.
Note that there are also partitions bounded on one side only, such as:
Exam mark > 75
Exam mark < 0
Coursework mark > 25
Coursework mark < 0
Total mark (Exam mark + Coursework mark) > 100
Total mark (Exam mark + Coursework mark) < 0
Such partitions can be assumed to be bounded by the data type used for
the input or output. For example, a 16-bit integer has the limits -32768 and
32767.
Test cases can then be made to test the following values:
Test cases 28-33 exercise the 16-bit integer type limits: the values 32767
and -32768 are applied in turn to the Exam and Coursework inputs (with the
other input held at 15), and the expected output in every case is FM.
Test cases for the Coursework mark and for the grade output are created
in the same manner as the test cases above.
Figure 3.21 Logic diagram and constraints in the cause-effect graphing technique.
An illustrative example
As an illustration, consider the conditions and actions of a cheque debit
function, as follows:
Conditions:
C1 - the journal is a new credit transaction.
C2 - the journal is a new withdrawal transaction, within the specified
withdrawal limit.
C3 - the account has a journal posting.
Actions:
A1 - process the debit.
A2 - suspend the account's journal.
A3 - send a letter.
The specification of the cheque debit function is:
The feasible input combinations are then covered by the following test
cases (causes: account type, withdrawal limit, current account balance,
debit amount; effects: new account balance, action code, where D = process
debit, S = suspend, L = send letter):

Test case | Account type | Withdrawal limit | Current balance | Debit amount | New balance | Action code
1         | Credit       | 100              | -70             | 50           | -70         | -
2         | Suspended    | 1500             | 420             | 2000         | 420         | S&L
3         | Credit       | 250              | 650             | 800          | -150        | D&L
4         | Suspended    | 750              | -500            | 200          | -700        | D&L
5         | Credit       | 1000             | 2100            | 1200         | 900         | -
6         | Suspended    | 500              | 250             | 150          | 100         | D&L
The valid transitions are:

Start state | Input | End state
S1          | CM    | S2
S2          | TS    | S3
S3          | CM    | S1
S2          | DS    | S1
S2          | AT    | S4
S4          | AD    | S2

The same transitions can be arranged as a state table whose rows are the
states S1-S4 and whose columns are the inputs, each valid cell holding the
next state and the action triggered (for example S2/D).
Cells of the state table shown as "-" symbolize a null transition; if
such a transition is ever executed, it results in a failure. Tests for
invalid transitions are made just as they are for valid transitions. The
state table is ideal for identifying test cases: its cells yield 16 tests.
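A table-driven sketch of this idea follows. The transition set below is recovered from the (partly garbled) tables above and should be treated as an assumption, not as the book's definitive state machine.

```python
# Valid transitions recovered from the state table:
# (state, input) -> next state.  Any absent pair is a null
# transition and must be rejected.
TRANSITIONS = {
    ("S1", "CM"): "S2",
    ("S2", "TS"): "S3",
    ("S3", "CM"): "S1",
    ("S2", "DS"): "S1",
    ("S2", "AT"): "S4",
    ("S4", "AD"): "S2",
}

STATES = ["S1", "S2", "S3", "S4"]
INPUTS = ["CM", "TS", "DS", "AT", "AD"]

def step(state, event):
    """Return the next state, or raise on a null transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"null transition: {event} in {state}")

# One test per cell of the state table: valid cells must move to
# the expected state, null cells must be rejected.
for state in STATES:
    for event in INPUTS:
        if (state, event) in TRANSITIONS:
            assert step(state, event) == TRANSITIONS[(state, event)]
        else:
            try:
                step(state, event)
                assert False, "null transition was accepted"
            except ValueError:
                pass
```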
III.3.7 Orthogonal Array Testing
Many applications have a relatively limited input domain: a small number
of input parameters, each with a clearly bounded set of values. When the
number of input parameters is very small (say, three input parameters, each
with three discrete values), complete (exhaustive) testing of the input
domain is possible. However, as the number of input values grows and the
number of discrete values for each data item increases, complete testing
becomes impossible.
Exhaustive testing covers every possible input and initial condition. It
is an important concept as a reference for the ideal situation, but it is
never carried out in practice, because the volume of test cases is far too
great.
To illustrate the difference between the orthogonal array testing approach
and the more conventional "one input at a time" approach, assume a system
with three inputs X, Y, and Z, each with three discrete values. There would
then be 3^3 = 27 test cases. Phadke [PHA97] gives a geometric view of the
possible test cases associated with X, Y, and Z, as shown below.
"send". Each of these parameters takes three discrete values; for example
P1 has:
P1 = 1, send now.
P1 = 2, send one hour later.
P1 = 3, send after midnight.
P2, P3, and P4 also take the values 1, 2, 3, as for P1.
If the "one input at a time" testing strategy is chosen, the test
sequence (P1, P2, P3, P4) is specified as follows: (1, 1, 1, 1), (2, 1, 1, 1),
(3, 1, 1, 1), (1, 2, 1, 1), (1, 3, 1, 1), (1, 1, 2, 1), (1, 1, 3, 1),
(1, 1, 1, 2), and (1, 1, 1, 3).
Phadke [PHA97] assesses these test cases as follows:
"These test cases are useful only if the test parameters do not interact.
They can detect logic errors in which a single parameter value causes the
software to malfunction, commonly called a single fault mode. This method
cannot detect logic errors that cause a malfunction only when two or more
parameters simultaneously take certain values; that is, it cannot detect
interactions. Its error-detecting ability is therefore limited."
Given the relatively small number of input parameters and discrete values
mentioned above, complete testing is possible. The number of tests required,
3^4 = 81, is large but manageable. All errors associated with permutations of
the data items can be found, but the effort required is relatively high.
The orthogonal array testing approach makes it possible to design test
cases that provide good test coverage with far fewer test cases than the
complete testing strategy.
Table: test cases against the test parameters P1, P2, P3, and P4.
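For four parameters with three levels each, the standard L9 orthogonal array gives nine test cases in which every pair of parameters covers all nine value combinations exactly once. The sketch below states the array and verifies that pairwise property; the array layout is the standard L9(3^4), supplied here for illustration.

```python
from itertools import combinations, product

# Standard L9(3^4) orthogonal array: 9 test cases over
# parameters P1..P4, each with levels 1..3.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Orthogonality check: for every pair of parameter columns,
# the 9 rows contain each of the 9 level pairs exactly once.
for i, j in combinations(range(4), 2):
    pairs = {(row[i], row[j]) for row in L9}
    assert pairs == set(product((1, 2, 3), repeat=2))

print(len(L9))  # 9 test cases instead of 3**4 = 81
```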
The system menu.
The table of contents or the header section of the user guide.
The main menu provides the initial function groups for the operator functions:
Figure 3.28 Description of the purchase order entry function.
Test case
b. Function inputs.
Identify the inputs required to produce the output of each function.
No. | Input           | Requirement | Test case
1   | Supplier Number |             | PUR-POE-01-001, PUR-POE-03-001
Identify what happens when the necessary conditions are not satisfied.
defines the general behavior of the system's use case, and the alternative
branches, whose already-available parts can be shared among use cases.
Use cases may also have shared parts. Use cases and test cases work well
together in two ways:
If the system's use cases are complete, accurate, and clear, test cases
can be created from them directly.
If the use cases are not in good shape, creating test cases will help to
debug the use cases.
Here is an example of creating test cases from use cases.
An illustrative example
Creating test cases from use cases is relatively straightforward and
consists of selecting paths through the use case. The paths include not only
the normal path but also the alternative branches.
For each test case, the path to be checked is identified first, then the
expected inputs and outputs are defined. A single path can give rise to many
different test cases, and black-box test design techniques, such as
equivalence partitioning and boundary value analysis, should also be used
to obtain further test conditions.
A large number of test cases can be created with this approach, so
judgment is called for to prevent an explosion in the number of test cases.
As always, candidate test cases should be selected based on the associated
risks: the consequences of an error, past patterns of error discovery,
frequency of use, and the complexity of the use case.
An example of a test case based on the use case above is shown in
Figure 3.37.
Figure 3.37 Test case for the normal scenario of product selection on a purchase order.
Negative test cases
Negative test cases are used to handle invalid data, and also for
scenarios in which the preconditions are not satisfied. Several types of
negative test cases can be made, namely:
Alternative branches using invalid user actions, as shown in Figure 3.38.
Operations on an object that does not have the status "in progress", such
as adding new items to a purchase order that has already been closed.
Scope.
The model represents syntax using a set of rules that define how valid
language is formed from iteration, sequence, and selection of symbols or
other language elements.
Such models are often available for programming languages, and can be
found at the back of programming textbooks and manuals.
Test cases are built from the rules using a number of predefined criteria.
Four things must be considered in performing this testing; the first
three relate to syntax and the fourth to semantics:
The set of legitimate characters and symbols from which the basic blocks
of input are constructed, such as "\" and "a:".
The keywords and fields constructed from those characters and symbols.
Keywords are words with a specific purpose, e.g. <copy>, <help>.
The set of grammatical rules for combining keywords, symbols, and fields
into meaningful strings (commands), e.g. <copy c:\coba.txt a:>.
The set of rules for interpreting the commands; for example,
<copy c:\coba.txt a:> is interpreted as a command to duplicate the file
<coba.txt> from drive <c> to the floppy disk <a>.
III.4.5 Cross-Functional Testing
Cross-functional testing [COL97A] uses a matrix of interactions between
the features of a system. The X and Y axes of the matrix are the system's
features, and a cell indicates a component that is updated by one feature
and then used by another.
Tests are designed to examine the feature interactions defined in the
matrix. Interactions can occur through two types of dependency: directly, by
messages or transactions passing between the features, or indirectly,
through common data shared by the features. Either dependency can cause a
change in the status and behavior of the corresponding features.
     | F1 | F2 | F3
F1   |    | C1 | M2
F2   |    |    | M3
F3   | M1 |    |
The notation in the cells of the table above means that the features
interact as follows:
Feature F1 updates counter C1, which is used by feature F2.
Feature F2 does not update counter C1.
Feature F3 sends message M1 to F1.
Feature F1 sends message M2 to F3.
Feature F2 sends message M3 to F3.
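The matrix can also be represented directly in code and used to enumerate the feature pairs that need interaction tests. The dictionary form below is my own encoding of the example matrix, not a standard tool's format.

```python
# Interaction matrix: (producer feature, consumer feature) ->
# shared counter or message, as in the example above.
INTERACTIONS = {
    ("F1", "F2"): "C1",  # F1 updates counter C1, used by F2
    ("F1", "F3"): "M2",  # F1 sends message M2 to F3
    ("F2", "F3"): "M3",  # F2 sends message M3 to F3
    ("F3", "F1"): "M1",  # F3 sends message M1 to F1
}

def interaction_tests(matrix):
    """Each non-empty cell of the matrix yields one cross-
    functional test: exercise the producer feature, then check
    the consumer's behaviour against the shared item."""
    return [f"test {src}->{dst} via {item}"
            for (src, dst), item in sorted(matrix.items())]

for name in interaction_tests(INTERACTIONS):
    print(name)
```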
Stored data.
Statistical Sampling
Operational Profiling
Risk Assessment
Localized Test
Volume Test
Regression Test
CHAPTER IV
TESTING STRATEGY
One pragmatic response states: "You never finish testing; you simply
transfer the responsibility for testing from yourself (the software engineer)
to your customer." Every time a customer or user executes a computer program,
the program is being tested. This is why other SQA activities are needed as
well.
Musa and Ackerman [MUS89] developed a model of software errors (found
during testing) as a function of execution time, based on statistical
modeling and reliability theory. The model, called the logarithmic Poisson
execution-time model, takes the form:

f(t) = (1/p) ln(λ0 p t + 1)

where:
f(t) = the expected cumulative number of errors found when the software has
been tested for an execution time t,
λ0 = the initial software error intensity (errors per unit time) at the
beginning of testing,
p = the exponential reduction in error intensity as errors are corrected.

The instantaneous error intensity λ(t) is obtained by taking the derivative
of f(t):

λ(t) = λ0 / (λ0 p t + 1)
Using the equation for λ(t), a tester can predict how errors will drop
off as testing proceeds. The actual error intensity can be plotted against
the predicted curve (Figure 4.3). If, within reason, the actual data
collected during testing and the logarithmic Poisson execution-time model
agree at each data point, the model can be used to predict the total testing
time needed to reach an acceptably low error intensity. By collecting
measurements during software testing and modeling the software's
reliability, it becomes possible to develop a grounded answer to the
question: "When will we be done testing?" Although the point is still
debated, such an empirical approach is better than relying on rough
intuition alone.
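A small numerical sketch of the model; the values of λ0 and p below are invented example values, chosen only to show the shapes of f(t) and λ(t).

```python
import math

def cumulative_errors(t, lam0, p):
    """f(t) = (1/p) ln(lam0 * p * t + 1): the expected
    cumulative number of errors after execution time t."""
    return (1.0 / p) * math.log(lam0 * p * t + 1.0)

def error_intensity(t, lam0, p):
    """lam(t) = lam0 / (lam0 * p * t + 1): the derivative of f,
    i.e. the current error-discovery rate."""
    return lam0 / (lam0 * p * t + 1.0)

lam0, p = 10.0, 0.05   # assumed: 10 errors/unit time initially
for t in (0, 10, 100, 1000):
    print(t, round(cumulative_errors(t, lam0, p), 2),
             round(error_intensity(t, lam0, p), 3))
```

The intensity starts at λ0 and decays toward zero, which is exactly the curve a tester would compare actual error data against.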
Develop a testing plan based on "rapid cycle testing". Gilb [GIL95]
recommends that a software engineering team learn to test in rapid cycles
(2 percent of project effort) of customer-useful increments. The resulting
feedback can be used to control quality levels and the corresponding test
strategies.
Build robust software that is designed to test itself. Software should be
designed using antibugging techniques, so that it can diagnose certain
classes of errors. The design should also accommodate automated testing and
regression testing.
Use effective Formal Technical Reviews (FTR) as a filter prior to testing.
FTR can be as effective as testing in uncovering errors. For this reason,
reviews can reduce the amount of testing effort needed to produce
high-quality software.
Use Formal Technical Reviews to assess the test strategy and the test
cases themselves. FTR can uncover inconsistencies, omissions, and outright
errors in the testing approach. This saves time and improves product quality.
Develop a continuous improvement approach for the testing process. The
test strategy should be measured, and the metrics collected during testing
should be used as part of a statistical process control approach to software
testing.
The local data structure is examined to ensure that data stored
temporarily maintains its integrity during all steps of the algorithm's
execution.
Boundary conditions are tested to make sure the module operates properly
at the limits established for its processing.
All independent paths (basis paths) through the control structure are
exercised to ensure that every statement in the module has been executed at
least once.
All error-handling paths are tested. Tests of data flow across a module
interface are required before any other tests: if data does not enter and
exit properly, all other tests are moot. In addition, local data structures
should be examined and, where possible, their global effects determined
during unit testing.
Selective testing of execution paths is an essential task during unit
testing. Test cases should be designed to uncover errors due to erroneous
computation, incorrect comparisons, or improper control flow. Basis path
and loop testing are effective techniques for this.
Common computation errors:
Incorrect arithmetic precedence.
Mixed-mode operations.
Incorrect initialization.
Precision inaccuracy.
Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled: a change of control flow
usually occurs after a comparison.
Test cases should uncover errors such as:
Comparison of different data types.
Incorrect logical operators or precedence.
Expectation of equality where precision error makes equality unlikely.
Incorrect comparison of variables.
Improper or nonexistent loop termination.
Failure to exit when divergent iteration is encountered.
Improperly modified loop variables.
Good design anticipates error conditions and establishes error-handling
paths to reroute or cleanly terminate processing when an error occurs.
Yourdon [YOU75] calls this approach antibugging.
Potential errors that should be tested when error handling is evaluated:
The error description is unintelligible.
The error noted does not correspond to the error encountered.
The error condition causes system intervention prior to error handling.
Exception-condition processing is incorrect.
The error description does not provide enough information to locate the
cause of the error.
Boundary testing is the final task of unit testing. Software often fails
at its boundaries: errors tend to occur when the nth element of an
n-dimensional array is processed, when the ith repetition of a loop is
performed, or when the maximum or minimum allowable value is encountered.
IV.3.2 Unit test procedures
Unit testing is generally seen as a continuation of the coding stage, with
the following procedure:
After the code has been developed and verified against the corresponding
component-level design, the design of unit test cases begins.
A review of the design information provides guidance for defining test
cases that are likely to cover each category of error, as discussed
previously.
Each test case should be coupled with its expected results.
Because a component is not a stand-alone program, driver and/or stub
software must be developed for each unit test.
In most applications a driver is nothing more than a "main program" that
accepts test case data, passes that data to the component being tested, and
prints the relevant results.
Stubs serve to replace modules that are subordinate to (called by) the
component being tested. A stub or "dummy subprogram" uses the interface
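A minimal driver-and-stub sketch in Python; the component, its subordinate lookup module, and the test values are all invented for illustration.

```python
# Component under test: computes a discounted price, delegating
# the customer rate lookup to a subordinate module.
def discounted_price(price, customer_id, lookup_rate):
    rate = lookup_rate(customer_id)   # call to the subordinate
    return round(price * (1.0 - rate), 2)

# Stub: replaces the real customer-database module with a
# canned answer, so the unit can be tested in isolation.
def stub_lookup_rate(customer_id):
    return 0.10 if customer_id == "C1" else 0.0

# Driver: a "main program" that feeds test case data to the
# component and prints the relevant results.
if __name__ == "__main__":
    cases = [(100.0, "C1", 90.0), (100.0, "C2", 100.0)]
    for price, cid, expected in cases:
        actual = discounted_price(price, cid, stub_lookup_rate)
        print(cid, actual, "ok" if actual == expected else "FAIL")
```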
that occurred are correct and have not caused unintended behavior or
additional errors.
Regression testing can be done manually, by re-executing a subset of all
test cases, or by using automated capture/playback tools. Capture/playback
tools let the software engineer record test cases and results for later
playback and comparison, for particular sequences or for entire subsets. The
subset of tests executed consists of three different classes of test cases:
A representative sample of tests that exercise all software functions.
Additional tests that focus on software functions likely to be affected by
the change.
Tests that focus on the software components that were changed.
As integration testing proceeds, the number of regression tests can grow
quite large. Therefore, the regression test suite should be designed to
include only the tests that address one or more classes of errors in each
major program function. It is impractical and inefficient to re-execute
every test for every program function once a change has occurred.
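The three classes above can be sketched as a tagged test registry from which a regression subset is selected; the tag names and the tests themselves are illustrative, not the API of any standard tool.

```python
# Each test is tagged with the functions it exercises and
# whether it belongs to the representative core sample.
TESTS = [
    {"name": "t_login_ok",      "functions": {"login"},  "core": True},
    {"name": "t_report_totals", "functions": {"report"}, "core": True},
    {"name": "t_report_empty",  "functions": {"report"}, "core": False},
    {"name": "t_export_csv",    "functions": {"export"}, "core": False},
]

def regression_subset(tests, changed_functions):
    """Select the representative core sample plus every test
    touching a changed (or likely affected) function."""
    return [t["name"] for t in tests
            if t["core"] or t["functions"] & changed_functions]

print(regression_subset(TESTS, {"report"}))
```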
IV.4.4 Smoke testing
Smoke testing is an integration testing approach often used when a small,
time-critical software product is being built. It is designed as a pacing
mechanism for time-critical projects, allowing the software team to assess
the project on a frequent basis. Fundamentally, the smoke testing approach
consists of the following activities:
Software components that have been translated into code are integrated
into a "build", which includes all the data files, libraries, reusable
modules, and engineered components needed to implement one or more product
functions.
A series of tests is designed to expose errors that will keep the build
from properly performing its function. The intent should be to uncover
"show stopper" errors, which have the highest likelihood of throwing the
software project behind schedule.
The build is integrated with other builds, and the entire product is
smoke tested daily. The integration approach may be top-down or bottom-up.
The daily frequency of testing the entire product may surprise some
readers. In any case, frequent tests give both managers and practitioners a
realistic assessment of integration testing progress. According to McConnell
[MCO96]:
"The smoke test should exercise the entire system from end to end. It
does not have to be exhaustive, but it should be capable of exposing major
problems. The smoke test should be thorough enough that if the build passes,
you can assume that it is stable enough to be tested more thoroughly."
Smoke testing can be characterized as a rolling integration strategy: the
software is rebuilt (with new components added) and smoke tested every day.
Smoke testing provides a number of benefits when applied to complex,
time-critical software engineering projects:
Integration risk is minimized. Because smoke tests are run daily,
incompatibilities and other "show stopper" errors are uncovered early, so
schedule slips due to serious errors can be reduced.
The quality of the end product is improved. Because the approach is
integration oriented, smoke testing is likely to uncover both functional
errors and architectural and component-level design errors. Corrected early,
these errors lead to better product quality.
Error diagnosis and correction are simplified. As with all integration
testing approaches, errors found during smoke testing are likely associated
with "new software increments", i.e., the software just added to the build
is a probable cause of newly discovered errors.
Progress is easier to assess. With each passing day, more of the software
has been integrated and demonstrated to work. This improves team morale and
gives managers a good indication that progress is being made.
identifies the broad functional categories within the software and is
generally tied to the domain structure of the program.
A "build" of the program is then created for each phase. The following
criteria and corresponding test concerns are used in all phases:
Interface integrity. Internal and external interfaces are tested as each
module (or cluster) is incorporated into the structure.
Functional validity. Tests designed to uncover functional errors are
conducted.
Information content. Tests designed to uncover errors associated with
local or global data structures are conducted.
Performance. Tests designed to verify performance against the bounds
established during software design are conducted.
A schedule for integration, the development of overhead software (stubs
and drivers), and related topics are also discussed as part of the test
plan. Start and end dates for each phase are set, and "availability windows"
for unit-tested modules are defined. A brief description of the overhead
software concentrates on characteristics that may require special effort.
Finally, the test environment and resources are described. Unusual hardware
configurations, exotic simulators, and special test tools or techniques are
just a few of the many topics that may be discussed.
The detailed testing procedures required to accomplish the test plan are
described next. The order of integration and the corresponding tests at each
integration step are described. A listing of all test cases and their
expected results is also included.
A history of actual test results, problems, and peculiarities is recorded
in the test specification. The information contained in this section can be
vital during software maintenance. Appropriate references and appendices are
also included.
Like all other elements of a software configuration, the test
specification format may be tailored to the local needs of a software
engineering organization. It is important to note, however, that the
integration strategy (contained in the test plan) and the testing details
(described in the test procedures) are essential ingredients and must
appear.
If recovery is performed by the system itself (automatic),
reinitialization, checkpointing mechanisms, data recovery, and restart are
each evaluated for correctness. If recovery requires human intervention,
the mean-time-to-repair (MTTR), the average repair time, is evaluated to
determine whether it is within acceptable limits.
IV.6.2 Security testing
Each computer-based system that is managing sensitive information or
cause actions that could hurt the people as a target of penetration unhealthy or
illegal, requiring safety management system of the system.
Security testing attempts to verify that the protection mechanisms built into a
system will in fact protect it from unwanted penetration. The system should be
tested against both direct (frontal) attacks and indirect (back-door) attacks.
During security testing, the tester plays the role of an individual who desires to
penetrate the system.
Given enough time and resources, good security testing will ultimately penetrate
any system. The role of the system designer is therefore to make the cost of
penetration greater than the value of the information that would be obtained.
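As an illustration of the tester playing the attacker's role, the Python sketch below mounts a direct (frontal) attack, a run of password guesses, against a toy login object and checks that a lockout defense engages. The AccountGuard class and its lockout policy are invented for this example, not part of any real system described in the text.

```python
# Sketch of a frontal security test: hammer a login mechanism with
# guesses and verify that a lockout defense engages. AccountGuard is
# a toy stand-in invented for illustration.

class AccountGuard:
    def __init__(self, password, max_attempts=3):
        self._password = password
        self._failures = 0
        self._max = max_attempts
        self.locked = False

    def login(self, guess):
        if self.locked:
            return False            # defense holds: no attempts accepted
        if guess == self._password:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self._max:
            self.locked = True      # defense: lock after repeated failures
        return False

# The tester plays the attacker: try a small dictionary of guesses.
guard = AccountGuard("s3cret!")
for guess in ["123456", "password", "qwerty", "s3cret!"]:
    guard.login(guess)

print(guard.locked)  # True: even the correct fourth guess was refused
```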
IV.6.3 Stress testing
During earlier testing steps, white-box and black-box techniques result in a
thorough evaluation of normal program functions and performance. Stress tests,
by contrast, are designed to confront the program with abnormal situations.
Stress testing executes the system in a manner that demands resources in
abnormal quantity, frequency, or volume. For example, special tests may be
designed that generate ten interrupts per second when one or two is the average
rate, or test cases that require maximum memory or other resources may be
executed. In essence, the tester attempts to break the system.
A variation of stress testing is sensitivity testing. In some situations (most
commonly in mathematical algorithms), a very small interval of input data can
cause extreme, even erroneous, processing or a severe degradation in
performance.
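A minimal sketch of sensitivity testing in Python: sweep a routine over an input interval and flag the narrow region where the output explodes or the computation fails. The routine under test and the probe parameters are contrived for illustration and are not from the text.

```python
# Sketch of sensitivity testing: probe a routine over an input interval
# and flag inputs that trigger extreme output or an outright failure.
# under_test() is a contrived example with a singularity at x = 1.0.

def under_test(x):
    # Well behaved everywhere except near x = 1.0, where a tiny input
    # change causes an extreme result.
    return 1.0 / (x - 1.0)

def sensitivity_probe(fn, lo, hi, steps=1000, limit=1e6):
    """Return inputs in [lo, hi] whose output exceeds |limit| or errors."""
    suspects = []
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        try:
            if abs(fn(x)) > limit:
                suspects.append(x)
        except ZeroDivisionError:
            suspects.append(x)
    return suspects

# A coarse sweep over [0, 2] finds the trouble spot at exactly 1.0.
bad = sensitivity_probe(under_test, 0.0, 2.0)
print(bad)  # [1.0]
```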
the external manifestation of an error and the internal cause of that error may
have no obvious relationship to one another.
IV.7.1 Debugging process
Debugging is not testing, but it always occurs as a consequence of testing. As
shown in Figure 4.6, the debugging process begins with the execution of a test
case. Results are assessed, and a lack of correspondence between expected and
actual performance is found. In many cases, the noncorresponding data are a
symptom of an underlying, hidden cause. Debugging is the process of matching
symptom with cause, thereby leading to error correction.
successful, it far more often squanders effort and time. Thought must be applied
first.
Backtracking is a fairly common approach that can be used successfully in small
programs. Beginning at the site where a symptom has been uncovered, the source
code is traced backward manually until the location of the cause is found.
Unfortunately, as the number of source lines increases, the number of potential
backward paths becomes unmanageably large.
Cause elimination is a manifestation of induction or deduction and introduces the
concept of binary partitioning. Data related to the error occurrence are organized
to isolate potential causes. A hypothesis about the cause is devised, and the data
are used to prove or disprove it.
Alternatively, a list of all possible causes is developed, and tests are conducted to
eliminate each one. If initial tests indicate that a particular cause hypothesis
shows promise, the data are refined in an attempt to isolate the bug.
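The binary-partitioning idea can be sketched directly: when a whole batch of inputs reproduces an error, repeatedly discard the half that does not fail until a single culprit remains. Everything below (the toy process routine and the sample data) is an invented illustration, not from the text.

```python
# Sketch of cause elimination by binary partitioning: a batch of inputs
# makes a routine fail, so the suspect set is repeatedly halved until a
# single offending input is isolated. process() is a toy stand-in.

def process(batch):
    # Toy routine: crashes if the batch contains a None record.
    for record in batch:
        if record is None:
            raise ValueError("bad record")

def fails(batch):
    try:
        process(batch)
        return False
    except ValueError:
        return True

def isolate(batch):
    """Binary-partition a failing batch down to one culprit input."""
    assert fails(batch), "whole batch must reproduce the error"
    while len(batch) > 1:
        mid = len(batch) // 2
        left, right = batch[:mid], batch[mid:]
        batch = left if fails(left) else right  # keep the failing half
    return batch[0]

data = [1, 2, 3, None, 5, 6, 7, 8]
print(isolate(data))  # prints None: the bad record has been isolated
```

This is the same halving discipline that tools such as `git bisect` apply to revision histories.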
Once a bug is found, it must be corrected. Van Vleck [VAN89] suggests three
simple questions that every software engineer should ask before making the
correction that removes the cause of a bug:
1. Is the cause of the bug reproduced in another part of the program? In many
situations, a program defect is caused by an erroneous pattern of logic that
may be reproduced elsewhere. Explicit consideration of that logical pattern
may result in the discovery of other errors.
2. What "next bug" might be introduced by the fix that is about to be made?
Before the correction is made, the source code (or the design) should be
evaluated to assess the coupling of logic and data structures. If the correction
is to be made in a highly coupled section of the program, special care must be
taken when any change is attempted.
3. What could have been done to prevent this bug in the first place?
REFERENCES
[AMB95] Ambler, S., Using Use Cases, Software Development, July 1995, pp. 53-61.
[BCS97A] British Computer Society Specialist Interest Group in Software Testing (BCS
SIGIST), Standard for Software Component Testing, Working Draft 3.2, 6 Jan 1997.
[BEI84] Beizer, B., Software System Testing and Quality Assurance, Van Nostrand-Reinhold,
1984.
[BEI90] Beizer, B., Software Testing Techniques, 2nd ed., Van Nostrand-Reinhold, 1990.
[BEI95] Beizer, B., Black-Box Testing, Wiley, 1995.
[BER92] Berson A., Client Server Architecture, McGraw-Hill, 1992.
[BER93] Berard, E.V., Essays on Object-Oriented Software Engineering, Vol. 1, Addison-Wesley, 1993.
[BIN92] Binder, R., Case Based Systems Engineering Approach to Client-Server
Development, CASE Trends, 1992.
[BIN94] Binder, R.V., Object-Oriented Software Testing, Communications of the ACM, Vol.
37, No. 9, September 1994, p. 29.
[BIN94A] Binder, R.V., Testing Object-Oriented Systems: A Status Report, American
Programmer, vol. 7, no. 4, April 1994, pp. 23-28.
[BIN95] Binder, R., Scenario-Based Testing for Client Server Systems, Software
Development, vol. 3, no. 8, August 1995, pp. 43-49.
[BOE76A] Boehm, B.W., Software Engineering, IEEE Transactions on Computers, December 1976.
[BOE81] Boehm, B., Software Engineering Economics, Prentice-Hall, 1981, p.37.
[BRI87] Brilliant, S.S., J.C. Knight, and N.G. Leveson, The Consistent Comparison Problem
in N-Version Software, ACM Software Engineering Notes, vol. 12, no. 1, January 1987, pp. 29-34.
[COL97A] Collard, R., System Testing and Quality Assurance Techniques, Course Notes, 1997.
[COL99A] Collard, R., Developing Test Cases from Use Cases, Software Testing and Quality
Engineering, vol. 1, no. 4, July/August 1999.
[CUR86] Currit, P.A., M. Dyer, and H.D. Mills, Certifying the Reliability of Software, IEEE
Trans. Software Engineering, vol. SE-12, no. 1, January 1986.
[CUR93] Curtis, B., et al., Capability Maturity Model, Version 1.1, Technical Report, Software
Engineering Institute, Carnegie Mellon University, 1993.
[DYE92] Dyer, M., The Cleanroom Approach to Quality Software Development, Wiley, 1992.
[FAR93] Farley, K.J., Software Testing For Windows Developers, Data Based Advisor,
November 1993, pp. 45-46, 50-52.
[HOW82] Howden, W.E., Weak Mutation Testing and the Completeness of Test Cases, IEEE
Trans. Software Engineering, vol. SE-8, no. 4, July 1982, pp. 371-379.
[HUM94] Humphrey, Watt S. Managing the Software Process. Addison-Wesley: Reading. MA.
1994.
[IEEE83A] IEEE Standard, Standard for Software Test Documentation, ANSI/IEEE Std 829-1983, August 1983.
[JON81] Jones, T.C., Programming Productivity: Issues for the 80s, IEEE Computer Press,
1981.
[KAN93] Kaner, C., J. Falk, and H.Q. Nguyen, Testing Computer Software, 2nd ed., Van
Nostrand-Reinhold, 1993.
[KIR94] Kirani, S. and W.T. Tsai, Specification and Verification of Object-Oriented
Programs, Technical Report TR 94-64, Computer Science Department, University of
Minnesota, December 1994.
[LIN94A] Linger, R., Cleanroom Process Model, IEEE Software, vol. 11, no. 2, March 1994,
pp. 50-58.
[LIP82A] Lipow, M., Number of Faults per Line of Code, IEEE Transactions on Software
Engineering, vol. 8, pp. 437-439, June 1982.
[MCG94] McGregor, J.D., and T.D. Korson, Integrated Object-Oriented Testing and
Development Processes, Communications of the ACM, vol. 37, no. 9, September 1994, pp. 59-77.
[MCO96] McConnell, S., Best Practices: Daily Build and Smoke Test, IEEE Software, vol. 13,
no. 4, July 1996, pp. 143-144.
[PHA97] Phadke, M.S., Planning Efficient Software Tests, Crosstalk, vol. 10, no. 10, October
1997, pp. 11-15.