
Software Testing Standard


CHANGE HISTORY

Version No.   Date of Issue   Change Description
1.0                           Initial Version
1.1                           After review corrections


TABLE OF CONTENTS

1.0 Objective
1.1 Purpose
1.2 Scope
2.0 Reference(s)
3.0 Abbreviations and Acronyms
4.0 Definitions
5.0 Procedure
5.1 Objective
5.2 Testing
5.2.1 Testing Approach
5.3 Testing Overview
5.3.1 Design of Test cases
5.3.2 Order of Verification
5.4 Test Coverage Analysis
5.4.1 Requirement Coverage Analysis
5.4.2 Structural Coverage Analysis
5.4.3 Structural Coverage Analysis Resolution
5.5 Object oriented testing
5.6 Regression Testing
5.7 Problem Capturing and Reporting Mechanism
5.8 Checklists
5.9 Software Verification Matrix
5.10 Test Results Summary
5.11 Naming Convention
5.12 Transition Criteria
5.13 DO-178B Compatibility


1.0 Objective

The objective of this document is to define the conventions for Software Testing. The
purpose of this document is to ensure that the testing documents are produced to the
defined level of presentation, both in format and in content, to support common
interpretation.

1.1 Purpose

1.2 Scope

2.0 Reference(s)

Sl. No.   Document No.    Document Title
1         RTCA/DO-178B    Software Considerations in Airborne Systems and Equipment Certification

3.0 Abbreviations and Acronyms

Sl.No. Abbreviations & Acronyms Definitions


1.
2.
3.

4.0 Definitions

Compiler: Program that translates source code statements of a high-level language into object code.
Complete: All attributes fully defined to ensure full implementation.
Deactivated Code: Executable object code (or data) which, by design, is not intended to be executed (code) or used (data); for example, code that is enabled by a hardware pin selection or software-programmed options.
Dead Code: Executable object code (or data) which, as a result of a design error, cannot be executed (code) or used (data) in an operational configuration of the target computer environment and is not traceable to a system or software requirement.
Derived requirements: Additional requirements resulting from the software development processes, which may not be directly traceable to higher-level requirements.


May: Allowed action.
Object Code: A low-level representation of the computer program, not usually in a form directly usable by the target computer but in a form which includes relocation information in addition to the processor instruction information.
Shall: Mandatory requirement.
Should: Recommended action.
Software Integration: The process of combining code components.
Hardware-Software Integration: The process of combining the software into the target computer environment.
Source Code: Code written in source languages, such as assembly language and/or high-level language, in a machine-readable form for input to an assembler or a compiler.
Stub: Stubs serve to replace modules that are subordinate to (called by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
System Test: The verification and validation effort to test against the System Requirements.
Test case: A set of test inputs, execution conditions and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test procedure: Detailed instructions for the set-up and execution of a given set of test cases, and instructions for the evaluation of the results of executing the test cases.
Testing: The process of exercising a system or system component to verify that it satisfies specified requirements and to detect errors.
Traceability Matrix: A reference system whereby any project requirement can be traced forward or backward through the various phases of software testing.
Unit Test: The verification effort to test against Low Level Requirements.
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and review.
Verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.
White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.


5.0 Procedure

This document lays down the Testing standards for Airborne Safety Critical Software.

5.1 Objective

a. The purpose of this document is to provide guidelines for the preparation of a Software Testing Document. These guidelines have been prepared with DO-178B as the focus and are imposed so that any errors introduced during the development processes are detected and reported.

b. The Test philosophy adopted throughout the Testing Process addresses the following:

 Independence of the Software Verification Team
 Requirement-based Testing
 All verification activities shall be carried out on Configured items

5.2 Testing

a) Testing activities aid in error prevention. They are intended to meet the stringent
requirements of DO-178B such that the testing is complete, traceable and correct.

b) The principal aims of the testing shall be to verify that the unit under test has met its design specification and that errors do not lead to unacceptable failure conditions.

c) Test cases shall be selected to check for incorrect loop operations, incorrect logic decisions, incorrect implementation of algorithms and incorrect computation sequence.

d) Tests shall be carried out either by running a test harness that calls the unit under test and reports calls to external functions and values of global data to a results file, or by manually changing values of variables and recording test results on a formal result sheet.
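
For illustration only, the following is a minimal sketch of such a test harness in C. The unit scale_input(), its expected values and the results file name are hypothetical examples chosen for this sketch, not items defined by this standard.

    /* Illustrative sketch only: a minimal C test harness for a hypothetical
       unit scale_input(), logging inputs, expected and actual outputs to a
       results file for later review. */
    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical unit under test (normally linked in from the configured source). */
    static uint16_t scale_input(uint16_t raw)
    {
        return (uint16_t)(raw / 4u);
    }

    int main(void)
    {
        struct { uint16_t input; uint16_t expected; } tc[3] = {
            { 0u, 0u }, { 4u, 1u }, { 1023u, 255u }
        };
        FILE *results = fopen("UT_scale_input_results.txt", "w");
        if (results == NULL) {
            return 1;
        }
        for (unsigned i = 0u; i < 3u; i++) {
            uint16_t actual = scale_input(tc[i].input);
            fprintf(results, "TC%u: input=%u expected=%u actual=%u %s\n",
                    i + 1u, (unsigned)tc[i].input, (unsigned)tc[i].expected,
                    (unsigned)actual,
                    (actual == tc[i].expected) ? "PASS" : "FAIL");
        }
        fclose(results);
        return 0;
    }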

e) The level of software determines the intensity of testing.


5.2.1 Testing Approach

a) Bottom up Approach: The lowest level of subprograms shall be tested first using Test points, then the user of those subprograms, and so on until the unit interface is exercised. All external interfaces shall be replaced with stubs to provide Testing in isolation. Further Testing shall be carried out to ensure the unit contains no errors.

Figure-1: Test lowest-level sub-units first (a driver exercises the unit under test; its subordinate units have already been tested).

b) Top Down Approach: The highest level of subprogram shall be tested first. Subprograms are integrated by moving downward through the control hierarchy. All the internal interfaces shall be replaced with stubs to provide Testing in isolation. Further Testing shall be carried out to ensure the unit contains no errors.

Figure-2: Test highest-level sub-units first (higher-level units have already been tested; lower-level units are replaced with stubs).

7
Software Testing Standard

c) Isolation Approach: Isolation testing tests each unit in isolation from the units
which call it and the units it calls. Units can be tested in any sequence, because
no unit test requires any other unit to have been tested. Each unit test requires
a test driver and all called units are replaced by stubs.

Figure-3: Test units in any sequence (each unit is exercised through a driver, with all called units replaced by stubs).
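
For illustration only, the following C sketch shows a driver and a stub arranged for isolation testing. The unit compute_command() and its subordinate read_sensor() are hypothetical names chosen for this sketch, not defined by this standard.

    /* Illustrative sketch only: isolation testing of a hypothetical unit using
       a driver and a stub. */
    #include <stdio.h>
    #include <stdint.h>

    int16_t read_sensor(void);   /* subordinate interface; the real unit is replaced by a stub */

    /* Unit under test: depends only on the subordinate interface declared above. */
    int16_t compute_command(void)
    {
        int16_t s = read_sensor();
        return (s > 100) ? 100 : s;   /* clamp to an upper limit */
    }

    /* Stub: honours the subordinate interface, prints verification of entry and
       returns a value controlled by the driver. */
    static int16_t stub_sensor_value = 0;
    int16_t read_sensor(void)
    {
        printf("stub read_sensor() entered\n");
        return stub_sensor_value;
    }

    /* Driver: sets up stub data, calls the unit under test and reports the result. */
    int main(void)
    {
        stub_sensor_value = 250;   /* drives the clamping branch */
        printf("compute_command() = %d (expected 100)\n", compute_command());

        stub_sensor_value = 42;    /* drives the pass-through branch */
        printf("compute_command() = %d (expected 42)\n", compute_command());
        return 0;
    }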

5.3 Testing Overview

a) The Software Verification Plan decides the method of testing to be followed for a project to satisfy its objectives. The Software Verification Plan shall be developed as defined in the Software Verification Procedure document for the specific project. The Test environment, i.e. the tool(s) for testing and the Integrated Development Environment (IDE), shall be specified in the Unit Test Procedure document.

b) This document is applicable for the following verification activities:

• Unit Test
• Software Integration Test
• Hardware-Software Integration Test
• System Test

5.3.1 Design of Test cases

a) Testing has two complementary objectives:

 To demonstrate that the Software performs its intended function, and
 To demonstrate that errors which could lead to unacceptable failure conditions have been removed.

b) To implement the Software Testing objectives, different categories of Test Cases are to be designed as follows (a small worked sketch follows these lists):

 Normal Range Test Cases - To ensure that the Software responds under normal conditions, including, as required:


 Exercise real and integer variables using equivalence classes and boundary values
 Exercise inputs set with at least two different values
 Exercise inputs set explicitly with zero value
 Exercise inputs set to test the software at its boundary limits and +/- 1 bit difference from the boundary limits
 All Outputs are measured for at least two different values
 Exercise the inputs to the stubs and outputs from the stubs correctly
 Exercise the inputs to the stubs and outputs from the stubs for Boundary Values
 Multiple iterations of time-related functions
 State Transitions during normal operation
 Variable usage and Boolean operators

 Robustness Test Cases - To ensure that the Software responds under abnormal conditions, including, as required:

 Exercise real and integer variables using equivalence class selection of invalid values
 Drive inputs in the invalid equivalence class such that the relationship between the input and the expected result has a one-to-one correspondence; thus, it should be possible to establish that if a test case fails it is only due to the value that is being driven
 Initialization
 Failure of data
 Out-of-range loop counts
 Check that exceeded frame times respond correctly
 Check that arithmetic overflow responds correctly
 Check that non-defined State Transitions respond correctly
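
For illustration only, the sketch below shows how normal range and robustness test cases might be written for a hypothetical unit limit_rate() whose requirement accepts commanded rates of 0 to 1000. The unit, the requirement and the values are assumptions made for this sketch, not part of this standard.

    /* Illustrative sketch only: normal-range and robustness test vectors for a
       hypothetical unit limit_rate() with an assumed valid input range 0..1000. */
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical unit: saturate the commanded rate to the required range. */
    static int32_t limit_rate(int32_t cmd)
    {
        if (cmd < 0)    { return 0; }
        if (cmd > 1000) { return 1000; }
        return cmd;
    }

    int main(void)
    {
        /* Normal range: boundary limits, +/-1 from the boundaries, zero and
           two distinct in-range values. */
        assert(limit_rate(0)    == 0);
        assert(limit_rate(1)    == 1);
        assert(limit_rate(500)  == 500);
        assert(limit_rate(999)  == 999);
        assert(limit_rate(1000) == 1000);

        /* Robustness: invalid equivalence classes, one invalid value per test
           case so that a failure points at exactly one driven value. */
        assert(limit_rate(-1)        == 0);
        assert(limit_rate(1001)      == 1000);
        assert(limit_rate(INT32_MIN) == 0);
        assert(limit_rate(INT32_MAX) == 1000);
        return 0;
    }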

5.3.2 Order of Verification

a) Verification activities follow the order as mentioned below:

• Code Scrutiny
• Development and review of Unit Test procedures, cases, and generation
of results.
• Development and review of Software Integration Test procedures,
cases, and generation of results.
• Development and review of Hardware-Software Integration Test
procedures, cases, and generation of results.
• Development and review of System Test cases and generation of results

b) Configured templates for the Test Script, Module Verification Cases and
Procedures and Test Script Checklist may be used during development as
defined in the respective Planning document.


c) The templates for the following documents are available and should be used if detailed customer checklists are not available. If needed, these templates can be customized as per project needs and the same shall be documented in the project planning documents of the specific project.

• Unit Test Script Template
• Software Verification Matrix Template
• Software Verification Result Template
• Module Verification Cases and Procedures

d) Review of Unit, Software Integration, Hardware-Software Integration and System Test cases, procedures and results shall be carried out as stated in the respective Planning document.

e) Code scrutiny and the Test activities are described below:

 Code Scrutiny: Code Scrutiny shall verify correct, complete and accurate implementation of the Design requirements.

 Unit Test Procedures, Cases, and Results: Unit testing demonstrates that each unit complies with the Software Design Requirement.

 Unit Test may cover the following:

 Range Checks
 Boundary Values Checks
 Independent Path Coverage
 Robustness Checks

 Errors revealed by this Testing may include:

 Failure of an algorithm to satisfy a Software requirement (associated only with the unit).
 Incorrect loop operations.
 Incorrect logic operations.
 Incorrect State Transition.
 Failure to satisfy time related requirements.
 Failure to process legitimate combinations of input conditions.
 Incorrect responses due to corrupted input data.
 Incorrect handling of exceptions, such as arithmetic faults or violations of array limits.
 Incorrect computation sequence.
 Inadequate algorithm precision, accuracy or performance.
 Unreachable code & Dead code.
 Software Integration Test Procedures, Cases and Results: Software Integration Testing concentrates on the inter-relationships between the software requirements and on the implementation of those requirements by the software architecture. Software integration test demonstrates the testing of the integration between the various modules.


The objective of this Test Phase is to ensure that the software components interact correctly with each other and satisfy the software requirements and software architecture.

 Software Integration Test may cover the following:

• Functionality of the Requirement


• Performance of the Software
• Interface between Software Units

 Errors revealed by this Testing include:

• Incorrect initialization of variables and constants.


• Parameter passing errors.
• Data corruption, especially global data.
• Inadequate end-to-end numerical resolution.
• Incorrect sequencing of events and operations.
• Incorrect Integration Dependencies.

 Hardware-Software Integration Test Procedures, Cases and Results: Hardware-Software Integration testing demonstrates the testing of the software in the target environment. This testing method concentrates on error sources associated with the software operating within the target computer environment and on the high-level functionality.

The objective of this Test Phase is to verify that the software satisfies the high-level requirements in the target environment.

 Hardware-Software Integration Test may cover the following:

• Functionality of the High Level Requirements


• Performance of the Hardware/Software
• Compatibility of Executable object code with target hardware
• Operation using nominal, range limits and erroneous input
values
• Error detection and proper error recovery, including
appropriate error messages and warnings where applicable
• Built-in-Tests
• Performance testing
• Systematic exercising of functionality
• Timing
• Initialization

 Errors revealed by this Testing may include:

• Incorrect interrupt handling.


• Failure to satisfy execution time Requirements.
• Incorrect Software response to hardware transients or hardware
failures.


• Inability of built-in Test to detect failures.


• Errors in hardware / Software interfaces.
• Incorrect behavior of feedback loops.
• Incorrect control of memory management hardware or other
hardware devices under Software control.
• Stack overflow.
• Incorrect operation of mechanism(s) used to confirm the
correctness and compatibility of field-loadable Software.
• Violations of Software partitioning.

 System Test Cases and Results: System Testing covers the full
functional and performance requirements of the product. System Test
shall be performed on the actual hardware.

 System Test may cover the following:

• Functionality of the System Requirements


• Performance of the Software
• Interface between Software and External Interfaces

 Test data and Test Cases are produced to verify the following:

• Compliance with the System Specification.


• System initialization
• Attempt start-up with HW errors present
• BITE
• System reset
• Run several Tests together to confirm that there is no
adverse interaction
• Verify the correct operation of the remote units and the
interface to these units.
• System Loading Tests.
• System timing and throughput Tests.
• Input data to cover equivalence classes.
• Output data to cover equivalence classes
• Data beyond boundary conditions for equivalence
classes
• Handling of missing or incomplete data.
• System Overload Tests to gauge expansion capability.

5.4 Test Coverage Analysis

Test coverage analysis is the result of Requirement coverage analysis (Black Box) and Structural coverage analysis (White Box). If the test coverage achieved is less than 100%, reasons shall be cited for the same.


5.4.1 Requirement Coverage Analysis

a. Requirement coverage analysis determines that Test Cases exist for each software requirement and satisfy the criteria of normal and robustness testing.

b. The types of Requirement coverage are:

• Equivalence Class
• Boundary Value Analysis

 Equivalence Class:

A technique used in black box testing is equivalence partitioning. Equivalence partitioning is designed to minimize the number of test cases by dividing tests in such a way that the system is expected to act the same way for all tests of each equivalence partition. Test inputs are selected from each partition.

Any one of the following shall be used for deriving Test Cases (a small sketch follows these rules):

 If the input condition specifies a Range, one valid class (within the range) and two invalid classes (below the lower limit and above the upper limit of the range) are defined.

 If the input condition specifies a Value, one valid class (the value) and two invalid classes (below and above the value) are defined.

 If the input condition specifies a member of a set, one valid class (within the set) and one invalid class (outside the set) are defined.

 If the input condition is Boolean, a valid class is defined.
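
For illustration only, the following sketch derives equivalence classes for a hypothetical requirement "the input temperature shall be accepted in the range -40 to 85". The unit temp_in_range() and the representative values are assumptions made for this sketch.

    /* Illustrative sketch only: equivalence classes for an assumed -40..85 range. */
    #include <stdbool.h>

    /* Hypothetical unit: report whether a temperature lies within the valid range. */
    static bool temp_in_range(int t)
    {
        return (t >= -40) && (t <= 85);
    }

    int main(void)
    {
        /* One valid class (within the range) and two invalid classes (below the
           lower limit, above the upper limit), one representative value each. */
        int within_range = 20;    /* valid class   */
        int below_range  = -55;   /* invalid class */
        int above_range  = 120;   /* invalid class */

        return (temp_in_range(within_range)
                && !temp_in_range(below_range)
                && !temp_in_range(above_range)) ? 0 : 1;
    }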

 Boundary Value Analysis:

Boundary value analysis is a form of black box testing in which input values at the boundaries of the input domain are tested. It has been widely recognized that input values at the extreme ends of, and just outside of, input domains tend to cause errors in system functionality.

Any one of the following shall be used for deriving Test Cases (a boundary-value sketch follows these rules):

 If the input condition specifies a Range, Test Case(s) should be designed with the range-bound values and the values just above and below the range.

 If the input condition specifies a number of values, Test Case(s) should be designed with the minimum and maximum numbers; values just above the maximum and just below the minimum are also tested.

 Test Cases should be designed to create outputs that produce the maximum and minimum allowable values.


 If an input or output specifies an unordered set (e.g. an array), Test Case(s) should be designed to exercise the first and last elements.
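
For illustration only, the sketch below applies boundary value analysis to the same hypothetical -40 to 85 range requirement used in the equivalence class sketch above; the boundary values and the +/-1 offsets are assumptions made for this sketch.

    /* Illustrative sketch only: boundary-value test vectors for an assumed
       -40..85 range requirement. */
    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical unit, as in the equivalence class sketch above. */
    static bool temp_in_range(int t)
    {
        return (t >= -40) && (t <= 85);
    }

    int main(void)
    {
        assert(temp_in_range(-40) == true);    /* lower bound              */
        assert(temp_in_range(-41) == false);   /* just below lower bound   */
        assert(temp_in_range(-39) == true);    /* just above lower bound   */
        assert(temp_in_range(85)  == true);    /* upper bound              */
        assert(temp_in_range(84)  == true);    /* just below upper bound   */
        assert(temp_in_range(86)  == false);   /* just above upper bound   */
        return 0;
    }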

5.4.2 Structural Coverage Analysis

Structural coverage analysis confirms that Testing exercises the code structure, control flow and data flow. The coverage to be measured is project specific. The coverage required while testing shall be indicated in the Project Planning Document. The coverage required for each level of software, as per DO-178B, is as follows:

Level A: MCDC - Level B + 100% Modified Condition/Decision Coverage
Level B: DC - Level C + 100% Decision Coverage
Level C: SC - Level D + 100% Statement (or Line) Coverage
Level D: 100% Requirements Coverage
Level E: No Coverage Requirements

a. Types of Structural Coverage are as follows:

• Statement coverage
• Decision coverage
• Condition coverage
• Condition / Decision coverage
• MCDC
• Multiple condition coverage
• Assertion Coverage
• Call-pair Coverage
• Instruction Coverage
• Object Code Verification

b. Statement Coverage:

Statement coverage verifies that every executable code statement has been exercised at least once.

c. Decision Coverage:

Decision coverage verifies the complete Testing of Control Constructs: it verifies that both conditional and unconditional decisions are exercised.

Example:

 For the decision (A or B), Test Cases (TF) and (FF) will toggle the decision outcome between true and false.
 Unconditional Decision Coverage: JMP Label. For such a case, the JMP Label needs to be exercised once.
 Conditional Decision Coverage: JNZ Label. For such a case, the JNZ Label needs to be exercised twice: once for a ZERO result and once for a NON-ZERO result.


d. Condition Coverage: Condition coverage verifies that each condition in a decision takes on all possible outcomes at least once.

Example: For the condition (A or B), Test Cases (TF) and (FT) meet the coverage criteria.

e. Condition/Decision Coverage: Condition/Decision coverage combines the requirements for Decision Coverage with those for Condition Coverage.

Example: For condition/decision coverage of (A or B), Test Cases (TT) and (FF) would meet the requirement.

f. MCDC (Modified Condition/Decision Coverage): The MC/DC criterion enhances the condition/decision coverage criterion by requiring that each condition be shown to independently affect the outcome of the decision.

Example: For the decision (A or B), Test Cases (TF), (FT), and (FF) provide MC/DC.

Example of MCDC:

X = ((A or B) and C)

With 3 inputs, the number of possible input combinations = 2^3 = 8:

A B C   X
T T T   T
T T F   F
T F T   T
F T T   T
F F T   F
F T F   F
T F F   F
F F F   F

For MC/DC, the minimum number of test cases is the number of conditions + 1, i.e. 3 + 1 = 4. The following four test cases provide MC/DC (a code sketch follows the table):


A B C   X
T F T   T
F F T   F
F T T   T
F T F   F

where T = TRUE and F = FALSE.
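
For illustration only, the following C sketch exercises the decision X = ((A or B) and C) with the four MC/DC test vectors from the table above; the function name decision() is an example chosen for this sketch, not part of this standard.

    /* Illustrative sketch only: the four MC/DC vectors from the table above
       applied to the decision X = ((A or B) and C). */
    #include <assert.h>
    #include <stdbool.h>

    static bool decision(bool a, bool b, bool c)
    {
        return (a || b) && c;
    }

    int main(void)
    {
        assert(decision(true,  false, true)  == true);   /* T F T -> T */
        assert(decision(false, false, true)  == false);  /* F F T -> F: with the case above, shows A's independent effect */
        assert(decision(false, true,  true)  == true);   /* F T T -> T: with the case above, shows B's independent effect */
        assert(decision(false, true,  false) == false);  /* F T F -> F: with the case above, shows C's independent effect */
        return 0;
    }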

g. Multiple Condition Coverage:

Multiple condition coverage requires Test Cases that ensure each possible combination of inputs to a decision is executed at least once; that is, multiple condition coverage requires exhaustive Testing of the input combinations to a decision.

Example: For a decision with n inputs, multiple condition coverage requires 2^n Tests (e.g. 2^3 = 8 Tests for the three-input decision above).

h. Assertion Coverage:

Assertion coverage requires Test Cases to measure the proportion of data coverage assertions that have been executed correctly. Assertion coverage gives a measure of the number of correctly executed data coverage assertions expressed as a percentage of the total number of data coverage assertions.

A data coverage assertion shall be considered correctly executed if the expression has been executed with an overall value of TRUE at least once during the test run.

Example:

IF (A < 10 || A >= 100)
{
    ...
}

An assertion coverage value of 100% may only be obtained if the variable A holds each of the values 9, 100 and 101 at least once during a set of tests.
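
For illustration only, the sketch below drives the variable A through the values named above so that each side of the assertion expression evaluates TRUE; the surrounding function is an example chosen for this sketch, not part of this standard.

    /* Illustrative sketch only: driving A through 9, 100 and 101 so that each
       sub-condition of the assertion expression evaluates TRUE. */
    #include <stdio.h>

    static void assertion_example(int a)
    {
        if (a < 10 || a >= 100) {
            printf("assertion TRUE for A = %d\n", a);
        }
    }

    int main(void)
    {
        assertion_example(9);     /* A < 10 is TRUE                        */
        assertion_example(100);   /* A >= 100 is TRUE (at the boundary)    */
        assertion_example(101);   /* A >= 100 is TRUE (above the boundary) */
        return 0;
    }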
i. Call-pair Coverage:

Call-pair coverage requires Test Cases to measure the proportion of possible function calls that have been executed.

j. Instruction Coverage:

Instruction coverage verifies that every executable instruction has been exercised at least once.

k. Object Code Verification:


The objective of object code verification is to detect and identify any additional object code in the airborne software application generated by the compiler, the linker, libraries, the run-time system, the operating system, or any other means, that is not directly traceable to the source code statements.

If the compiler generates object code that is not directly traceable to the source code, then identify the untraceable, compiler-generated object code and verify it.

The guidelines for performing source code to object code traceability and verification are as follows:

 Ensure that the functionality of any additional code is summarized and documented.

 While verifying object code, ensure that the analysis has been performed on representative source code and object code that is applicable to the target environment of the intended airborne system.

 Additional considerations are required for the programming language, as well as for hardware- and architecture-specific features of the system. Any representative code used for the identification and behavior analysis of added, untraceable object code should be evaluated to ensure that it was produced in an identical development environment, using identical procedures, configurations, and build instructions, to those intended for the software application and target environment of the intended airborne system.

 There are two potential approaches for detecting, identifying, and establishing the acceptability of the behavior of any code added by the compiler, linker, libraries, run-time system, and operating system:

 Manual review/analysis of the complete program - The source code and the associated assembly code listings of the object code may be manually examined (reviewed) to detect any code added by the compiler beyond what is required for execution of the source code statements. The analysis and review results shall be prepared to gain confidence that the analysis/review was done correctly.

 Analysis of a complete set of used/implemented programming constructs - In this approach, evidence is first produced that the operational program and library functions fully comply with the coding standards and are developed within the rules of the language used to develop programs. Once all the existing code is shown to be in compliance with the coding standard, one or more tests combining all the constructs specified in the coding standards can be produced and compiled. The results of these tests are then examined and analyzed to verify that any additional, untraceable code correctly implements its identified functionality. The results of this test and analysis, and some representative code from the actual operational program, should be documented to establish the validity of the test programs.

5.4.3 Structural Coverage Analysis Resolution

Structural coverage analysis reveals unexecuted code structure. The causes may be due to the following:

a) Shortcomings in requirement-based Test Cases – The test cases shall be supplemented or test procedures changed to provide the missing coverage.

b) Inadequacies in Software Requirements – The software requirement shall be modified and additional test cases developed and test procedures executed.

c) Dead code – The dead code shall be identified and reported to the design team so that it can be removed.

d) Deactivated code – For deactivated code which is not intended to be executed in any configuration used, testing should show that the means by which such code could be inadvertently executed are prevented, isolated or eliminated. For deactivated code which is only executed in certain configurations of the target computer environment, the operational configuration needed for normal execution of this code should be established and additional test cases and test procedures developed to satisfy the required coverage objectives (a small sketch of deactivated code follows).
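
For illustration only, the following C sketch shows deactivated code selected by a hypothetical configuration option; the option name, the activation mechanism and the functions are assumptions made for this sketch.

    /* Illustrative sketch only: deactivated code enabled by a hypothetical
       configuration option (e.g. a hardware pin read at start-up). */
    #include <stdbool.h>

    static bool option_b_fitted = false;   /* hypothetical configuration discrete */

    static void option_b_processing(void)
    {
        /* Deactivated code: only reachable in configurations where the
           option-B hardware is fitted. */
    }

    void periodic_task(void)
    {
        /* The only activation path; for the delivered configuration, testing
           should show this path cannot be taken inadvertently. */
        if (option_b_fitted) {
            option_b_processing();
        }
    }

    int main(void)
    {
        periodic_task();
        return 0;
    }
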
5.5 Object oriented testing

Object Oriented Testing is similar to Non-Object Oriented (traditional) software testing. The following approach should be followed to do Object Oriented Testing at the different levels.

a. Unit Testing: Unit testing of OO software is generally performed in the same manner as unit testing of non-OO software. In OO software, a Unit can be defined as either a METHOD or a CLASS itself, depending on complexity.

b. Software Integration Testing: Integration testing is done in a bottom-up approach and is based on the behavior of the software rather than its structure. Methods and classes are unit tested and then composed with other classes and their methods to perform integration testing.

c. System Testing: System testing for Object Oriented software is the same as the non-OO testing process. The end product is verified against the requirements specification using inputs to the system, and the outputs are analyzed.

5.6 Regression Testing

a. Regression Test shall be carried out for all baselined test cases for which there is a change in requirements and/or source code. If there is any variation between the baselined and current results, then the corresponding Test Case is scoped for rework. Regression Test is carried out for Unit, Integration as well as System Tests. When Software failures are rectified, not only should the Test which identified the failure be repeated, but also a selection of other Tests should be repeated to verify that the change does not have an adverse effect elsewhere in the Software.

b. Unit Test: The changed module and all others using it during Unit Test must be retested.

c. Integration Test: The Integration Test of the asynchronous unit using the changed module must be re-executed.

d. System Test: The modified system must be selectively re-tested to verify that the system still complies with the current requirements.

5.7 Problem Capturing and Reporting Mechanism

a) Typical problems encountered during test are as follows:

 A Software component, or a part of it, that does not conform to requirements
 An error in a document or Source Code module, or in the behavior of executable code
 An input requirement that has not been met

b) The problems captured during the verification phase shall be reported through a Problem Report, or as defined in the Planning document, and shall be tracked to closure.

5.8 Checklists

The following Checklist templates are available and should be used if detailed customer checklists for reviews are not available.

i) Unit Test Scripts Review Checklist Template.
ii) Unit Test Procedure Review Checklist Template.


iii) Unit Test Plan Review Checklist Template.


iv) Software Integration Test Scripts Review Checklist Template.
v) Software Integration Test Procedure Review Checklist Template.
vi) Software Integration Test Plan Review Checklist Template.
vii) Hardware-Software Integration Test Scripts Review Checklist Template.
viii) Hardware-Software Integration Test Procedure Review Checklist Template.
ix) Hardware-Software Integration Test Plan Review Checklist Template.
x) System Test Scripts Review Checklist Template.
xi) Software Verification Matrix Review Checklist Template.
xii) Software Verification Results Review Checklist Template.

• The above checklists can be customized for specific project requirements if required.

• All the checklists and documents generated during the process of testing shall follow a proper naming convention. Refer to Section 5.11, Naming Convention, for details.

5.9 Software Verification Matrix

A Software Verification Matrix (SVM) shall be produced and updated for all levels of
testing.

The SVM will address the following:

• Traceability of test case reference to SRD tags
• Traceability of test schedule, test case reference to SDD tags (design decisions)

5.10 Test Results Summary

The Test Results Summary will be captured as defined in the Software Verification Plan or Project Planning document.

The Test Results Summary document will address the following:

• Results of Static Analysis carried out on the Source Code
• Results of Test Cases
• Coverage Details
• Object Code Verification details (if applicable)

5.11 Naming Convention


All the documents like Test Script, Test Plan, Test Procedure, MVCP, SVM, SVR and Checklists that are generated during the process of testing shall follow a standard naming convention. This naming convention will either be mentioned in the planning document (according to the customer's specifications if required) or will be as mentioned below:

Checklists: checklist_<itemname>.doc
Review Report: RR_<itemname>.doc

where <itemname> can be any one of the following:

• SUT item under review for unit test
• SIT item under review for software-integration test
• HSIT item under review for hardware-software integration test
• System test item under review for System testing
• SVM
• SVR

MVCP: MVCP_<functionname>.doc
Where <functionname> can be any sub-system name

SVM: SVM_<projectname>.doc
SVR: SVR_<projectname>.doc
Where <projectname> is the name of the project

Test Script: SUT_<unitname>.xxx


Where <unitname> is the unit under test
xxx is the extension like “tcd”, “adt”, “ctt”, “tst”

Test Plan: TP_<functionname>.doc

Test Procedure: TProc_<functionname>.doc


Where <functionname> can be any sub-system name

5.12 Transition Criteria

The Testing activity shall commence once all the following documents are
available:

• Testing Standard(s) (if any)


• Baselined Requirements document(s)
• Baselined Design document(s)
• Baselined Source Code(s)

5.13 DO-178B Compatibility

The Software Testing Standard (this document) is in full compliance with sections 6.3, 11.13 and 11.14 of RTCA/DO-178B. The section-wise compatibility matrix is as follows:


DO-178B Reference       Description                                                  Sections in this document
6.3.4, 11.14            Scrutiny of Source Code                                      5.3.2
6.3.5, 11.14            Scrutiny of Outputs of Integration Process                   5.3.2
6.3.6, 11.13, 11.14     Scrutiny of Test Cases, Procedures and Results               5.3.2
6.4.1                   Test Environment                                             5.3
6.4.2, 11.13            Requirement-based Test Case Selection                        5.3.1
6.4.3a, 11.13, 11.14    Requirement-based Hardware/Software Integration Testing      5.3.2
6.4.3b, 11.13, 11.14    Requirement-based Software Integration Testing               5.3.2
6.4.3c, 11.13, 11.14    Requirement-based Low Level Testing                          5.3.2
6.4.4, 11.14            Test Coverage Analysis                                       5.4
6.4.4.1, 11.14          Requirement-based Test Coverage Analysis                     5.4.1
6.4.4.2, 11.14          Structural Coverage Analysis                                 5.4.2
6.4.4.3, 11.14          Structural Coverage Analysis Resolution                      5.4.3
