
ESA UNCLASSIFIED – For Official Use

ESA guidelines for software product service history

Prepared by ESSB-HB-Q-002 Working Group


Reference ESSB-HB-Q-002
Issue 1
Revision 0
Date of Issue 30 January 2013
Status Approved
Document Type Handbook
Distribution
ESA UNCLASSIFIED – For Official Use

Title ESA guidelines for software product service history


Issue 1 Revision 0
Author ESSB-HB-Q-002 Working Group Date 30 January 2013
Approved by QSB on behalf of ESSB Date 30 January 2013

Change log
Reason for change | Ref/Issue | Revision | Date
First issue | ESSB-HB-Q-002 Issue 1 | 0 | 30 January 2013

Change record (Issue 1 Revision 0)
Reason for change | Date | Pages | Paragraph(s)


Table of contents

1 Purpose and scope ......................................................................................................... 5


1.1 Purpose............................................................................................................................. 5
1.2 Scope ................................................................................................................................ 5

2 References ...................................................................................................................... 6

3 Terms, definitions and abbreviated terms ...................................................... 7


3.1 Terms and definitions from other documents ..................................................................... 7
3.2 Terms and definitions specific to this document ................................................................ 7
3.3 Abbreviated terms ............................................................................................................. 7

4 Background ..................................................................................................................... 8
4.1 ECSS requirements related to Product Service History ..................................................... 8
4.2 Product Service History approach in other domains .......................................................... 8
4.2.1 Aviation................................................................................................................ 8
4.2.2 Nuclear Power Plants Systems ............................................................................ 9
4.2.3 Other domains ................................................................................................... 11
4.3 Current practices in ESA projects .................................................................................... 11
4.3.1 Overview ........................................................................................................... 11
4.3.2 Galileo project.................................................................................................... 11
4.3.3 RTEMS operating system .................................................................................. 13

5 Guidelines ..................................................................................................................... 15
5.1 Context of application ...................................................................................................... 15
5.2 PSH data collection and validation .................................................................................. 16
5.2.1 Overview ........................................................................................................... 16
5.2.2 Configuration and change management ............................................................ 17
5.2.3 Operations similarity .......................................................................................... 17
5.2.4 Platform similarity .............................................................................................. 17
5.2.5 Error detection, recording and reporting............................................................. 18
5.2.6 PSH observation time ........................................................................................ 18
5.2.7 Additional information from development processes .......................................... 19
5.3 PSH acceptability criteria................................................................................................. 19
5.4 PSH report ...................................................................................................................... 20


Annex A Product service history data validation checklist ......................................... 22

Tables
Table 5-1 Proposed PSH acceptability criteria .......................................................................... 20


1 Purpose and scope

1.1 Purpose
The purpose of this handbook is to define guidelines for the use of Product Service History as one of the
means to assess the suitability of a given existing software item for its reuse in a specific project, in
accordance with ECSS requirements.
PSH rests on a basic concept: in a given project, the supplier is required to provide software which is
compliant with a specified set of engineering and product assurance requirements, normally corresponding
to a tailoring of the [ECSS-E-40] and [ECSS-Q-80] Standards, as well as with non-functional requirements
(concerning, for instance, dependability and safety) from project specific documents. These requirements are
meant to ensure that the software achieves certain objectives, in terms of, for example, functionality,
reliability, and maintainability. In the case of reused software, the information necessary to claim compliance
with the above-mentioned requirements could be unavailable or insufficient, but the software could have
operated well in previous projects, and therefore it could be a good candidate for the current application. In
such a case, PSH could be used as a means to support the claim that the proposed software is capable of
meeting the project's objectives (e.g. functionality and reliability), even in the absence of exhaustive development
process information.
It should be noted that this handbook, like other similar documents (e.g. [FAA-HB]), does not
define any hard criteria for the acceptability of Product Service History data. It rather addresses specific
aspects to be considered when evaluating the relevance and validity of PSH information, supporting the
supplier in the collection of the required evidence and documentation, and the customer in his final decision
about the reuse of the proposed existing software.

1.2 Scope
This handbook does not specify any new requirement. It is meant to help understand and implement the
ECSS requirements relevant to software reuse and Product Service History.
The guidelines defined in this handbook are addressed to ESA space and ground segment projects which
intend to reuse existing software whose development process details are not fully available. As
specified in the Scope section of [ECSS-Q-80], these projects can include manned and unmanned spacecraft,
launchers, payloads, experiments and their associated ground equipment and facilities, as well as the
software component of firmware.
The intended target users of this handbook are software development and maintenance engineers, software
product assurance engineers and software/system operators. The guidelines defined in this handbook can
assist the mentioned ESA personnel when supporting project managers in decisions concerning software
reuse.


2 References
The documents listed below are called by this document. For each document listed, the mnemonic used to
refer to that source throughout the present document is shown in the left column, and then the complete
reference is provided in the middle column. The complete name of the document is provided in the right
column.

[ECSS-Q-80] ECSS-Q-ST-80C Space product assurance – Software product assurance


[ECSS-E-40] ECSS-E-ST-40C Space engineering – Software
[DO-178B] DO-178B/ED-12B Software Considerations in Airborne Systems and
Equipment Certification
[DO-278] DO-278/ED-109 Guidelines for Communication, Navigation, Surveillance
and Air Traffic Management (CNS/ATM) Systems
Software Integrity Assurance
[FAA-HB] DOT/FAA/AR-01/116 Federal Aviation Administration - Software Service
History Handbook, Final Report, January 2002
[IEC-60880] IEC-60880 ed. 2.0 (2006) Nuclear power plants – Instrumentation and control
systems important to safety – Software aspects for
computer-based systems performing category A functions
[FAA-REP] DOT/FAA/AR-01/26 Commercial Off-The-Shelf (COTS) Avionics Software
Study, Final Report, May 2001
GSWS GSWS Galileo software standard
NOTE: The GSWS is the tailored version of ECSS-E-ST-
40C and ECSS-Q-ST-80C for the Galileo project.


3 Terms, definitions and abbreviated terms

3.1 Terms and definitions from other documents


For the purpose of this document, the terms and definitions from ECSS-ST-00-01C, [ECSS-Q-80] and
[ECSS-E-40] apply, in particular the following terms:
certification
configuration management
COTS
critical software
software

3.2 Terms and definitions specific to this document


3.2.1 product service history
contiguous period of time during which the software is operated within a known environment, and during
which successive failures are recorded
[DO-178B]

3.2.2 middleware
software layer located between the application software and the hardware

3.3 Abbreviated terms


For the purpose of this document, the following abbreviated terms apply:
Abbreviation Meaning
DRD document requirements definition
FAA Federal Aviation Administration
MTBF mean time between failures

OS operating system
PDS pre-developed software
PSH product service history


4 Background

4.1 ECSS requirements related to Product Service History


Requirements for the reuse of existing software are defined in ECSS-E-ST-40C and ECSS-Q-ST-80C.
ECSS-E-ST-40C, clause 5.4.3.7, requires that an analysis of the potential reusability of existing software be
carried out, assessing the existing software with respect to the requirement baseline. For this assessment,
reference is made to ECSS-Q-ST-80C, clause 6.2.7, where the majority of the ECSS requirements on software
reuse are listed.
Clause 6.2.7.2 requires the supplier to evaluate and document the potential advantages that can be achieved
through the reuse of existing software.
Clauses 6.2.7.3 to 6.2.7.6 deal with the evaluation of the software proposed for reuse with respect to general
and project specific requirements. The Standard requires that the compliance of the existing software with
those requirements be assessed and documented in the software reuse file, together with the expected
amount of the existing software that can be actually reused, and the method applied to calculate this amount.
Clauses 6.2.7.7 and 6.2.7.8 are related to the cases where the software proposed for reuse does not meet the
applicable requirements. The supplier is required to identify and implement corrective actions, with the
objective of demonstrating the suitability of the existing software for its reuse in the project.
Although not explicitly stated in the Standard, clause 6.2.7.8 is an expansion of clause 6.2.7.7, and it is meant
to specify a set of measures that can be used complementarily to support the claim of reusability of the
existing software, in case of non-compliance with project requirements.
In particular (see 6.2.7.8, bullet b.2), if life cycle data from previous development are not available, the
Standard specifies that Product Service History can be used to provide evidence of the reused software’s
suitability for the current application, as long as specific information is provided and documented.
This handbook provides guidelines to comply with ECSS-Q-ST-80C requirement 6.2.7.8, bullet b.2.
The outcomes of the activities described in this handbook are documented in the Software Reuse File (see
section 5.4).
Data collected during software maintenance and operations can represent inputs to Product Service History.

4.2 Product Service History approach in other domains

4.2.1 Aviation
The aviation community appears to be one of the most active in the field of Product Service History. [DO-
178B] is the outcome of a joint effort of RTCA and EUROCAE, respectively the American and European technical
commissions for aeronautics. It is used as a guideline by the Federal Aviation Administration (FAA) to assess
the reliability of airborne software, and it explicitly mentions the use of Product Service History as an
alternative method to demonstrate compliance with one or more of the [DO-178B] objectives.


The ECSS-Q-ST-80C approach to software reuse is in many respects similar to that of [DO-178B]. In
particular, [DO-178B] section 12.1.4 concerns software whose life cycle data from a previous application is
inadequate or does not satisfy the objectives of the Standard. In this case, the applicant for the software
certification is required to perform a set of activities meant to increase the confidence in the software
proposed for reuse. These activities may include reverse engineering and/or Product Service History.
Guidelines and requirements for the application of PSH are provided in section 12.3.5 of [DO-178B], where it
is specified that “some certification credit may be granted” based on Product Service History, provided that
specific requirements are met. These requirements span from configuration management to actual error rates
collection, and define firm conditions for PSH data acceptability.
In January 2002, the FAA issued a ‘Software Service History Handbook’ (DOT/FAA/AR-01/116) with the
purpose of providing more details and guidance on the application of Product Service History.
Besides an analysis and discussion of the various aspects of Product Service History and their relation to the
[DO-178B] objectives, the FAA handbook gives example worksheets to guide the evaluation in the domains
‘problem reporting’, ‘time’, ‘environment’ and ‘operation’. The aim of the sheets is to create uniformity and
structure in the assessments.
It is not known to what extent the FAA handbook and Product Service History are being used today in
certification of avionics software. An earlier report ([FAA-REP]) on the use of COTS in avionics software
suggests that this is only done on a very limited scale and only for category D software.
It is important to note that the FAA handbook makes an explicit statement about its status: “This handbook
is the output of a research effort. It does not, by itself, constitute policy or guidance. The FAA may use this
handbook in the creation of future policy or guidance.”
While the conclusions of the handbook express the expectation that the subject of Product Service History
will be revisited when [DO-178B] is revised, this is not listed as one of the focus points for DO-178C (the next
version of the document), which are: formal methods, software modelling, tool qualification, and object-
oriented software.
Another document from the aviation domain relevant for product service history is [DO-278], whose
objective is "to be an interpretive guide for the application of DO-178B/ED-12B guidance to non-airborne
CNS/ATM systems". In this document, the objectives of [DO-178B] for the assurance that airborne software
has the integrity needed for use in a safety-related application have been reviewed and, in some cases,
modified for application to non-airborne CNS/ATM systems.
For the purpose of this handbook, the aviation documents and handbooks were analysed taking into
consideration the fact that they are mainly concerned with safety aspects related to the certification of
airborne and ground systems, and therefore the corresponding objectives may be more stringent than the
ones underlying ESA requirements and standards.

4.2.2 Nuclear Power Plants Systems


Utilisation of Pre-Developed Software (PDS) is an accepted practice in the domain of instrumentation and
control systems for nuclear power plants. In this domain, PDS is often identified as a mechanism to
implement part or the whole of a new system. It is considered that, provided PDS items offer a suitable
quality, they can be beneficial for the productivity and reliability of the new system.
In this domain, the [IEC-60880] Standard specifies engineering, product assurance, configuration
management and security requirements, addressing both newly developed software and pre-developed
software (PDS).


Requirements for PDS are contained in clauses 5.7.2 and 5.7.4 (security), 7.1.4 (design and implementation),
and 8.2.3.3 (verification of configuration data).
Clause 14.3.4 on translators/compilers requires that the libraries contained in such tools and used in the
target system be considered as sets of pre-developed software components.
Key preconditions are prescribed for the integration of PDS items in nuclear power plants systems. In
particular the evaluation of the PDS comprises:
• Evaluation of functional and performance features
• Evaluation of required changes
• Evaluation of the quality of the software development process
• Evaluation of the operational experience of the PDS
• Documentation of the evidence gathered during the assessment process.
Hence, the evaluation of the operational experience of the PDS represents the link to the concept of Product
Service History addressed in this document.
In those cases where the PDS has been used in many applications similar to the intended use, operational
experience can be claimed during its evaluation in order to increase the confidence in the reliability of the
system.
It is important to note that all requirements applicable to the software components performing “Category A”
functions are still applicable to the PDS. Clause 15 of [IEC-60880], “Qualification of pre-developed software”,
requires that any candidate PDS software comply with all the requirements defined in the
Standard itself.
Nevertheless, Product Service History is accepted as complementary information to compensate for weaknesses
in gathering evidence during the evaluation process.
Clause 15 defines an evaluation and assessment process for PDS reutilisation in Nuclear Power Plants
Systems encompassing a systematic analysis of four key areas:
• Operating Time: this evaluation area aims at ensuring that the accounting of time includes
representative operational experience.
• History of Defects and Errors: this evaluation area aims at identifying the relevance of the problems
detected during the execution of the PDS.
• Operating Environment: this evaluation area aims at ensuring that the operating environment of the
PDS is similar to the one expected during operations of the new system.
• History of changes: this evaluation area aims at identifying the relevance of the changes introduced in
different releases of the PDS.
Data gathered for these key areas includes:
• Operating time: including elapsed time since first start-up, last release, last severe error, last error
report.
• History of defects and errors: including severity, affected subsystem, PDS version where the problem
is detected, PDS version where the problem is fixed, detection date, detection phase, source of the
problem.
• Operating environment: including configuration data, features of the system being executed,
hardware conditions

• History of changes: including software versions, list of changes per software version, affected
subsystems, pending changes.
This standard requires that gathered data follow a verification process to determine the relevance of the
Product Service History in the evaluation of the PDS:
• Operating time: the verification aims to demonstrate that a minimum pre-established time has been
achieved.
• History of defects and errors: the verification aims to demonstrate that all problems detected were
recorded and analysed.
• Operating environment: the verification aims to demonstrate that configuration data, features of the
system being executed and hardware conditions are comparable to the ones defined by the target
system.
• History of changes: the verification aims to demonstrate that all changes were recorded and analysed.

4.2.3 Other domains


A survey performed on other domains (e.g. NASA) has shown that either no specific guidance is defined for
the use of Product Service History, or the approach taken by the aviation domain with [DO-178B] and [FAA-
HB] is used as a reference.

4.3 Current practices in ESA projects

4.3.1 Overview
Product Service History (PSH) has not historically been widely used across ESA projects. Only a few
examples exist. Sections 4.3.2 and 4.3.3 describe the approach taken in the Galileo project and in the reuse of
the RTEMS operating system.

4.3.2 Galileo project


The Galileo programme presents an atypical example of an ESA project in which PSH-like data is intended to
be used. The data is needed to support qualification of COTS and re-used software in preparation for a
future safety-of-life certification of the Galileo system. The project implements the Galileo Software Standard
(GSWS) - a tailoring of [ECSS-E-40] and [ECSS-Q-80] additionally incorporating objectives (converting them
into requirements) from [DO-178B] in order to facilitate the safety-of-life certification of the Galileo system
(or services).
The Galileo project timeline is characterised by the following distinct phases:
1. In-Orbit Validation: development, launch and operation of 4 satellites plus reduced ground
segment infrastructure.
2. Full Operational Capability: transition to the full satellite constellation and complete ground
systems culminating in a publicly available open service.
3. Future safety certification enabling safety-of-life services to be offered.
The GSWS calls for qualification of all re-used or procured software to the same verification objectives as
newly developed software, making the usage of common third-party or pre-existing software (e.g. COTS,

operating systems, and databases) significantly difficult, particularly on the ground where their usage is
commonplace. Industry has struggled to meet these requirements for Category D software and higher, and
the ground control segment has instead sought an alternative approach for widely-used ‘industry-standard’
COTS software at assurance level D (e.g. Windows, or open-source software such as Linux operating
systems, MySQL databases, and XML parsers). With this kind of software there are limited (if any) lifecycle
and development data and artefacts available, and the full qualification of community-developed open-
source software to level D would require significant cost and schedule effort that cannot be accepted by the
project.
A “late PSH” approach is proposed whereby the PSH data does not exist a priori at the time of subsystem
qualification or acceptance. The timescales and schedule characteristics of the Galileo programme, coupled
with the shifting timeline for safety-of-life certification, permit a non-standard approach to the use of PSH
data in terms of its scheduling and availability. With the major driver for qualification of COTS being the
eventual safety-of-life certification, the project is able to temporarily waive the normal COTS qualification
requirements at software component level and address them at a future point.
The approach therefore is to collect a posteriori PSH data for particular COTS in situ in the integrated ground
control segment during segment and system validation campaigns, operations preparation, validation and
simulation campaigns as permitted by [DO-278], and then into initial operations and onwards during the
transition to the final configuration and deployment of the full complement of satellites and ground systems.
By the time the system is ready to begin offering a publicly available open service it is expected that there
will be sufficient PSH-like data to support qualification of the COTS products concerned to level D.
This approach to PSH data collection has certain advantages:
1. The operational usage scenario to which the PSH pertains corresponds precisely to the
operational usage scenario of the software being qualified by PSH data, i.e. the PSH is entirely
relevant since it was gathered on the system itself and therefore corresponds precisely to the usage of the
software in its intended manner. Nominal and non-nominal scenarios and
functionality should be exercised adequately as part of system validation, operations
preparations, and actual early operation of the system, during formal test campaigns and in ad-
hoc system usage by the operators.
2. The hardware environment to which the PSH data pertains corresponds precisely to the
hardware environment in which the software to be qualified is running.
3. Configuration Management considerations for PSH data are fully addressed – the ground
segment software is under strict configuration control, anomalies and NCRs are monitored,
raised and tracked as in any other ESA project, and according to ECSS and GSWS. Software
anomaly reporting tools include fields to flag anomalies that are found to be caused by a
software product for which PSH data is being gathered. The anomaly process feeds directly into
the PSH data gathering process. Anomaly Review Boards ensure that PSH concerns are not
overlooked and that the appropriate flagging of anomalies as PSH-impacting is performed.
4. PSH data is gathered on a mature and stable system having already undergone lower level
software/subsystem/element-level qualification and acceptance campaigns, as well as
integration at ground segment level.
In this way, Galileo overcomes some of the main problems associated with the use of PSH data (e.g.
relevancy to the current project, dissimilarity of environments and usage scenarios, inadequacy of
configuration management and anomaly reporting process).
At the time of publication of the present document, some aspects of the process remain to be agreed with
industry:

5. The measurement of total software run-time or in-service hours, and how actual “usage” of the software is
defined - the PSH should characterise valid usage time supported by, for example, analysis of
machine and process logs, operator logs, test records, and monitoring data, in order to have
granular run-time data reflecting the real usage period of the software. Industry prefers to
collect coarser data based simply on machine start/stop times independent of actual usage (i.e.
including idle times within the total runtime), arguing that idle times are part of the normal
usage scenario of the ground segment, and that some software (e.g. XML parsers) may only run
a discrete number of times per day for which it will be difficult to amass sufficient numbers of
hours if measurement is based on the few milliseconds each usage actually incurs. The issue of
how run-time combines together across multiple instances of software also remains to be agreed
along with the criteria for determining when software defects cause the fault-free runtime
period counter to be reset.
6. Software maintenance issues (e.g. patching, and upgrades) - careful consideration needs to be given
to the impact on the PSH data collection by any potential upgrade to the software COTS in
question (e.g. vendor patches, service packs, and security updates).
7. Constraints created by the PSH data collection process on the system’s future hardware evolution - the
need to keep the hardware environment stable for the purpose of PSH data collection is clear,
but the long duration of the project necessitates a strategy for dealing with hardware
obsolescence and migration. The obvious impact of hardware migration is to invalidate any
PSH gathered up to that point. An approach to mitigate risks is currently being introduced in
Galileo by using hardware virtualization such that PSH data is collected on virtualized instances
that maintain a constant configuration of qualified software (OS, COTS and application
software) on top of a thin virtualization layer, across hardware environments that evolve over
time.
The risk exists that Galileo’s PSH process ultimately fails to collect data that qualifies the products in
question to level D, though the programme’s duration should provide sufficient time to address this in
another way, particularly with safety certification deferred until some years after the achievement of full
operational capability. ESA’s involvement in shaping the PSH process and in the analysis of periodic PSH
data reports from industry will help provide early detection of potential problems. Sufficient time is
available to refine the process further if necessary. Recovery strategies might then be to replace the offending
COTS with qualifiable COTS or to undertake dedicated COTS qualification activities in timescales
compatible with the overall programme’s schedule for safety certification.
It should be noted that the Galileo example does not reflect the PSH data usage as intended in [ECSS-Q-80].
The ECSS Standard specifies that PSH can be used to support the decision on whether or not to reuse existing
software in a project; for that purpose the PSH data need to be available at the time when the decision is
taken. The approach of Galileo to decide upfront to reuse a software product, and then collect service history
data to verify whether that software fulfils the project requirements, is not in line with the [ECSS-Q-80]
provisions.

4.3.3 RTEMS operating system


It is typical for ESA space projects to develop their on-board software. For those developments, the ECSS
standards [ECSS-E-40] and [ECSS-Q-80] are applied. If components are reused, it is usually software or
software modules that have been developed previously within the project consortium. Those re-used
components are often developed under the ECSS regimes. Re-use based on Product Service History for
platform flight software is rare or even non-existent.
One possible exception could be the Real-Time Executive for Multiprocessor Systems (RTEMS), developed
by OAR Corporation for the United States army. The original goal of RTEMS was “to provide a portable,

standards-based real-time executive for which source code was available and no royalties were paid.” To date,
RTEMS is free software under a "modified" GNU General Public License (any software linked to RTEMS
maintains its own licensing scheme).
RTEMS has not been developed following ECSS requirements, or apparently any similar prescriptive and
structured standards. While the source code and a comprehensive error database are publicly available, there
are important documents missing, among which software requirement documents, design documents and
test reports. There is a test suite, but its scope is limited to software system level testing. There are no test
suites for unit level testing or for integration level testing.
This operating system is being integrated in an increasing number of ESA spacecraft projects. As such,
RTEMS would be a good candidate for the establishment of a well-documented Product Service History.
However, the majority of ESA projects using RTEMS have considered the available documentation
insufficient and have embarked on additional validation campaigns to facilitate qualification, based
on a reverse engineering approach.
It should be noted that a major European space company has instantiated a specific version of RTEMS, called
"RTEMS product", which is being used in different projects and for which some in-service data are being
collected. These data, if complemented in line with the recommendations contained in this handbook, could
represent a valid starting point for a potential Product Service History approach.


5 Guidelines

5.1 Context of application


In accordance with the definition provided in section 3, the PSH corresponds to the information relevant to a
period of time in which a certain software product has operated. This information can be used by a supplier
to support the claim of suitability of existing software for an intended application, e.g. in the frame of the
certification of an airborne system (see [DO-178B]), or as a justification for software reuse in space projects
(see [ECSS-Q-80]).
For the purpose of this handbook, the context of application of PSH is the one defined in [ECSS-Q-80], clause
6.2.7 (see section 4.1): Product Service History can be used as part of the process of assessing the suitability of
existing software for its reuse in a specific project.
The sequence of the requirements specified in clause 6.2.7 of ECSS-Q-ST-80C reflects the priority order of the
methods to be used to support the claim of suitability of existing software for reuse in a certain application.
The field of application of Product Service History is the situation where some life cycle data from previous development
are not available and reverse engineering techniques are not fully applicable, possibly because of technical
constraints and/or legal restrictions. As such, PSH is not a primary means to demonstrate the suitability of
existing software for reuse. ECSS-Q-ST-80C requires that priority be given to the collection of evidence
through other means, such as analyses of software life cycle data and reverse engineering. PSH should only
be used to complement this evidence (see section 5.3). It should be noted that the use of PSH to justify the
reuse of software might imply that certain verification and validation activities on that software cannot be
fully performed. In addition, the PSH approach described in this handbook does not assume that the source
code of the existing software is available.
The criticality of the function to be implemented by the software proposed for reuse should be considered
(for software criticality classification, see ECSS-Q-ST-80C, clause 6.2.2 and Annex D). If the service history of
the existing software is only related to functions whose criticality was lower than the one of the current
application, then the PSH data has limited value as part of the reuse assessment.
Due consideration should also be given to the segment of the space application in which the software is
intended to be reused. For existing software that has been selected based on, among other criteria, PSH data,
the ground segment environment could make it possible to overcome or mitigate the consequences of a fault of that
software, and possibly accommodate investigation of fault causes and development of
corrections/workarounds in a reasonable amount of time. On the contrary, similar faults in the flight segment
might result in critical system failures for which the necessary reaction time, in case of SW reuse based on
PSH, could be insufficient. In any case, this is ultimately related to the consequences of software-caused
failures, and thereby to the criticality of the software.
When documenting the PSH data in the Software Reuse File, the supplier should take into account the
different conditions and limitations that apply to these data, as detailed in section 5.2. These constraints are
meant to ensure that the collected information is meaningful for the current application.
It is worth mentioning that [ECSS-E-40] requires that the results of the reused software analysis, including
PSH, are documented in the Software Reuse File (see [ECSS-E-40], Annex N), which is delivered to the
customer. The relevance and validity of the information provided in the Software Reuse File is subject to
customer’s approval.
When Product Service History represents a major constituent of a Software Reuse File, the customer should
consider the risks associated with the reuse of the proposed existing software. The above considerations
about the criticality of the function to be implemented and the validity limitations of PSH data play a
significant role in the reduction of such risk.

5.2 PSH data collection and validation

5.2.1 Overview
As mentioned in [FAA-HB], although Product Service History would seem to be a fairly simple concept,
problems may arise in its practical application, concerning both the collection of PSH data and the relevance of
these data to the reuse justification in the current application.
This handbook provides recommendations for the process of collecting Product Service History data and for
the conditions for these data to be valid and acceptable.
The following data are relevant to the Product Service History of a software product:
• Identification of the software (including identification of SW generation tools, such as compiler and
linker)
• Observation time
• Operations performed
• Operational environment
• Problems observed and error rates/trends
• Timing for problem resolution
• Software modifications (both corrections of software faults and modifications driven by other reasons,
such as customer change requests and product upgrades)
The supplier should provide this information as support to the claim of software suitability for reuse through
PSH.
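
For illustration only, the sketch below shows one possible way of capturing the data elements listed above in a structured form; Python is used here purely as a notation, and the field names and the problem-rate helper are hypothetical, not prescribed by this handbook or by the ECSS Standards.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ProblemReport:
    """A single in-service problem observed during the PSH period (hypothetical fields)."""
    report_id: str
    software_version: str      # version in which the problem was detected
    detection_date: date
    severity: str              # e.g. "minor", "major", "critical"
    status: str                # "open" or "closed"
    resolution_days: int = 0   # time taken to resolve, if closed

@dataclass
class PshRecord:
    """Hypothetical container for the PSH data elements listed in section 5.2.1."""
    software_id: str                 # product name and version
    generation_tools: List[str]      # identification of compiler, linker, etc.
    observation_start: date
    observation_end: date
    operations_performed: List[str]  # functions and modes exercised
    operational_environment: str     # hardware, OS and middleware description
    problems: List[ProblemReport] = field(default_factory=list)
    modifications: List[str] = field(default_factory=list)  # corrections and other changes

    def problems_per_year(self) -> float:
        """Observed problems per year of contiguous observation time."""
        years = (self.observation_end - self.observation_start).days / 365.25
        return len(self.problems) / years if years > 0 else float("nan")
```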
In order for these data to be valid and usable, the processes behind the generation and collection of the data
should be clearly identifiable, traceable and reliable. The following aspects related to the validity of the
Product Service History data are addressed in sections 5.2.2 to 5.2.7:
• Configuration management of the software
• Correctness and traceability of data collection timing
• Relevance of previous software operations w.r.t. the current application
• Similarity of the previous operational environment(s) to the current one
• Methods of software problems detection and logging
• Management and documentation of software changes
Annex A contains a checklist that can be used to assess the validity of the Product Service History data of a
certain software product with respect to the topics discussed in sections 5.2.2 to 5.2.7.


5.2.2 Configuration and change management


The version(s) of the software to which the Product Service History data is(are) related should be clearly
identified. Failure statistics and trends are only relevant if it is possible to trace them to the different software
versions.
It should be possible to trace all changes in the software and all reasons driving these changes. In particular,
all detected problems should be traced to the correct software version, and records should exist to identify all
changes made to the software, specifically of the executable code, regardless of the origin of the change.

5.2.3 Operations similarity


For the PSH to be a useful instrument, the target operational profile should be compared with the previous
operation domain(s). If similarity of operations cannot be demonstrated, the validity of PSH can be
substantially undermined. According to [FAA-HB], for instance, “Service history data that reflect dissimilar
operations cannot be used for computing service history duration”.
In general, it is expected that the functions to be employed in the target domain are a subset of those from the
Product Service History domain.
The features of the software exercised in the observation periods covered by PSH should be compared with
the ones that are expected to be used in the current application. If some of them (e.g. operational modes) are
not covered by the PSH data, then PSH cannot be considered exhaustive and other verification means are
certainly necessary. This is particularly important if safety/dependability features exist in the software that
are expected to be employed in the current application (e.g. software monitors, safety wrappers, and other
fault tolerance mechanisms) and that were not exercised during Product Service History.
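
Purely as an illustrative sketch of such a comparison (the function names below are invented), the gap between the functions needed by the target application and those exercised during the PSH period can be expressed as a simple set difference:

```python
# Invented example inventories of software functions.
target_functions = {"telemetry_encoding", "mode_management", "safety_wrapper"}
psh_exercised_functions = {"telemetry_encoding", "mode_management", "file_transfer"}

# Functions needed by the target application but not covered by the PSH data:
uncovered = target_functions - psh_exercised_functions
print(sorted(uncovered))  # ['safety_wrapper'] -> other verification means are necessary
```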
Conversely, software features that are not going to be reused in the target application could represent a risk
for the correct functioning of the system. An analysis should be carried out in order to show that the software
functions not necessary for the target application cannot harm the system operations.
As mentioned in section 5.1, the criticality of the previous and current applications also plays a role in the
credit that can be given to a software product based on Product Service History. If PSH data were collected
when the software was being used at a lower criticality than the target application, then particular attention
should be paid to the types and severity of problems logged, both resolved and still open. The lower
criticality of the previous applications could have affected the rigour applied in tracing and solving software
problems.

5.2.4 Platform similarity


The operational computer environment (hardware, software and middleware) may have a significant impact
on software behaviour. A software product with a good record of operations on a certain platform could
show unexpected performance when operated in a different computer environment.
In general, Product Service History should only be accounted for if the original platform matches the target
one or is substantially similar to it.
An analysis of the PSH and target computer environments should be carried out, to identify potential
dissimilarities. This should include, as applicable, type and version of hardware (e.g. processor, memory,
peripherals, communication devices), availability of resources (e.g. memory, CPU speed and available time),
middleware (e.g. operating system and distributed environment characteristics), interfacing software.
The analysis should address built-in test and fault tolerance features of the reused software which could not
have been exercised in the PSH computer environment, as well as any software problems due to interface
with the platform and subsequent modifications.


5.2.5 Error detection, recording and reporting


The history of problems (or absence of problems) in an existing software product can only be trusted if the
process of error detection, recording and reporting applied during software operations is robust and reliable.
For existing software, this process is in general not directly visible to the users.
Detailed information about how problems are identified, documented and solved should be provided
together with Product Service History data.
It should be verified whether problem reporting is regularly performed for all the fielded versions and
instances of the software. There could be several reasons why not all software problems are logged. In some
cases, for instance, good communication channels between software suppliers and users could be missing, or
some users could choose to implement workarounds without informing the software producers. It is also
possible that software producers issue an update correcting both problems detected by
the users and problems detected internally, but only the former are well documented in the
update.
A clear link between problems detected and software versions should be established in all cases (see section
5.2.2).
The procedures for classifying software problems and prioritising their solution should also be analyzed, in
order to verify that problems of major relevance for the target application have not been given
lower priority in the software in-service history. Results of impact analysis of software faults should
always be available. This is particularly important for problems that are left open.
The time employed in problem identification, correction, validation and update installation should also be
the object of careful analysis, because it could reveal major shortcomings in the software maintainability, up to
the point that the software might not be suitable for certain applications where the possibility of fast
maintenance is a key issue. An increase in error rates after software problem correction could also indicate poor
software maintainability.
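
For illustration only, the sketch below compares the error rate observed before and after a given correction date, starting from a list of problem detection dates; the function names are hypothetical and any acceptability threshold remains project specific.

```python
from datetime import date
from typing import List

def error_rate(detections: List[date], start: date, end: date) -> float:
    """Problems detected per year within the interval [start, end]."""
    years = (end - start).days / 365.25
    count = sum(1 for d in detections if start <= d <= end)
    return count / years if years > 0 else float("nan")

def rate_change_after_fix(detections: List[date], fix_date: date,
                          window_start: date, window_end: date) -> float:
    """Ratio of the error rate after a correction to the rate before it.

    A ratio well above 1.0 may indicate that the correction introduced
    regressions, i.e. a possible maintainability concern (see above).
    """
    before = error_rate(detections, window_start, fix_date)
    after = error_rate(detections, fix_date, window_end)
    return after / before if before > 0 else float("inf")
```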

5.2.6 PSH observation time


The use of Product Service History is based on the assumption that PSH can give evidence that the software
functionalities have been (thoroughly) exercised. This is not strictly related to time. The fact that the software
has been operational for a long time does not mean that all its features have been active for that period of
time. Time plays a role in the sense that, from the statistical point of view, the probability of exercising all the
software functions increases as time passes.
There can be different ways of defining PSH observation time. The timing of Product Service History data
collection should be clearly documented and should represent a key element in the assessment of the PSH
data.
First of all, based on the PSH definition given in section 3, the PSH observation time should be "contiguous". This
ensures that gaps in the observation periods do not have the effect of hiding software malfunctions which
could invalidate the PSH data.
When documenting the PSH observation time, consideration should be given to the relevance of the reported
time periods with respect to software operations. An assessment should be made of the proportion of the stated
run-time for which the software can be considered to have been exercised and actually in use. Data
may exist (such as machine logs, process logs, and operator or activity logs) that help to characterise the run-
time fault-free period proposed for the software with respect to its different operational modes. Idle times
and periods of low usage might, for example, be discounted or given a lower weighting
factor in the PSH data. If a software function is only active for a small fraction of the system operation time, it
is not reasonable to define as software in-service hours the overall operation time of the system. This is
applicable, for instance, to software maintenance functions (patch and dump), which are only executed when
on-board software maintenance is necessary: for this kind of SW, the number of maintenance operations
performed would represent a better reference for PSH observation than the overall on-board software
operational time.
Consideration should also be given as to how to combine observation time obtained from multiple instances
of the same software. For example, ground software PSH data may exist for multiple instances of the same
software product in use across multiple hardware chains or environments, such as integration, validation
and operational chains. Each source of PSH data might be weighted accordingly and combined together to
provide a greater total observation time for the software.
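
The sketch below illustrates, for example purposes only, one possible way of combining run-time from several instances of the same software while giving idle or low-usage periods a lower weight; the mode names, weighting factors and figures are invented assumptions to be agreed with the customer, and this handbook does not prescribe any particular weighting scheme.

```python
from typing import Dict, List

# Hypothetical per-mode weighting factors (to be agreed with the customer):
# fully exercised time counts in full, idle time is heavily discounted.
MODE_WEIGHTS: Dict[str, float] = {"operational": 1.0, "low_usage": 0.5, "idle": 0.1}

def effective_hours(instance_log: Dict[str, float]) -> float:
    """Weighted in-service hours for one instance, given hours spent per mode."""
    return sum(MODE_WEIGHTS.get(mode, 0.0) * hours for mode, hours in instance_log.items())

def combined_observation_hours(instances: List[Dict[str, float]]) -> float:
    """Total effective observation time over all tracked instances."""
    return sum(effective_hours(log) for log in instances)

# Invented example: three instances across integration, validation and operational chains.
chains = [
    {"operational": 4000.0, "idle": 2000.0},
    {"operational": 1500.0, "low_usage": 800.0},
    {"operational": 6000.0, "idle": 500.0},
]
print(f"Effective PSH observation time: {combined_observation_hours(chains):.0f} hours")
```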
Major software changes should be considered when setting the starting time for PSH observation. The
problem is twofold. On the one hand, it can be argued that, after a major change, the software is no longer the
same as it was before the change, and thus the clock should be restarted. On the other hand, if the major
change was due to a severe software fault, restarting the clock with no mention of this major fault would
alter the PSH error statistics, and therefore the clock should not be restarted. In this respect, decisions
should be taken on a case-by-case basis, analyzing the reasons and effects of software changes.

5.2.7 Additional information from development processes


As described in section 5.1, according to [ECSS-Q-80], Product Service History should be used when insufficient
information is available about the processes applied when developing the software intended for reuse. This
does not mean that any existing data from development processes cannot be used in support of PSH (see
[FAA-HB], section 4).
A good record of software in-service history can be used as an argument to claim that the software is suitable
to meet project's objectives. While it is reasonable to assert that a software product with a long operational
life and good problem statistics/trends is likely to meet the project's requirements for verification and
validation, PSH does not provide any information about the compliance of the software with the ECSS
requirements relevant to the other engineering and product assurance processes.
Therefore, any records generated by the processes applied during the development of the software, that can
support the claim of compliance with project's requirements, should be provided along with Product Service
History data. This includes, for instance, software requirement and design documents, process and product
metrics, results of tests, analyses and inspections, as well as any available plans and procedures (e.g. for
configuration management and error logging).

5.3 PSH acceptability criteria


No hard criteria can be defined for the acceptability of Product Service History data as a means to support
the claim of suitability of existing software for its reuse in an ESA space application.
Based on the information provided by the supplier, including the answers to the checklist defined in Annex
A of this handbook, engineering judgement and experience should be applied by the customer to decide
whether the in-service operational history of the software provides sufficient confidence for its use in the
current project.
The following acceptability guidelines are proposed (see [DO-278], section 4.1.6.3):
• The supplier should provide exhaustive answers to the checklist defined in Annex A of this handbook,
demonstrating the relevance and sufficiency of PSH data with respect to the points discussed in
section 5.2 of this handbook.

• The Product Service History data should be compatible with any existing MTBF requirement
applicable to the system/subsystem/equipment in which the reused software is expected to run. In
other words, the Product Service History data should show that the software proposed for reuse has
been free from failure for a period which is equal to or greater than the applicable
system/subsystem/equipment MTBF.
• The acceptability criteria defined in Table 5-1 are proposed for the different software criticality
categories, distinguishing between flight and ground software.

Table 5-1: Proposed PSH acceptability criteria


SW criticality category | Minimum duration of service experience with no failure (to be agreed with the customer)
 | Flight software | Ground software
A | Product Service History should not be used as sole means to justify the suitability of the existing software for reuse in the current application. Other engineering and product assurance means, including delta verification and validation, should be applied.
B | More than one year (in consideration of [DO-278], section 4.1.6.3). | More than one year.
C | One year.
D | Six months.
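
As a worked illustration of the MTBF-related criterion in the second bullet above (the figures are invented for the example), the check amounts to comparing the failure-free period shown by the PSH data with the applicable MTBF requirement:

```python
def psh_compatible_with_mtbf(failure_free_hours: float, required_mtbf_hours: float) -> bool:
    """True if the PSH failure-free period is at least the applicable MTBF."""
    return failure_free_hours >= required_mtbf_hours

# Invented figures: 20 000 h of failure-free service against a 15 000 h equipment MTBF.
print(psh_compatible_with_mtbf(20_000, 15_000))  # True: the PSH data is compatible
```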

5.4 PSH report


Product Service History is addressed in [ECSS-Q-80], clause 6.2.7: "Reuse of existing software". Therefore,
PSH information should be documented in the Software Reuse File (SRF), whose DRD is defined in [ECSS-E-
40], Annex N. The SRF DRD defined in [ECSS-E-40] does not contain any placeholder for Product Service
History.
In case PSH is used as the main argument to support the claim of suitability of existing software for its reuse
in the current project, this should be recorded as follows:
a. It should be stated in section <4> of the software reuse file.
b. Bullets b and c of section <5> should provide any available information from the software
development processes, as described in section 5.2.7 of this handbook.
c. The following section <6> should be added.

<6> Product service history


a. The SRF should provide the relevant Product Service History data, including:
1. Observation time
2. Operations performed
3. Operational environment
4. Problems observed

i. Categories of anomalies:
▪ Anomalies leading to a specification/requirements evolution.
▪ Anomalies related to the development process, design,
coding and verification.
▪ Anomalies related to hardware problems.
ii. For the first two anomaly categories (excluding hardware
anomalies), list the anomalies reported during the product service
history and specify:
▪ Anomalies list and identification.
▪ Origin and cause of each anomaly (and the link to the
concerned modules and/or requirements for traceability).
▪ Correction of each anomaly (and the link to the concerned
modules for traceability), which can be either a product
correction or a process correction.
▪ The decision taken, such as using the previous version or
performing additional tests/verification.
▪ List of non-regression tests performed.
5. Error rates/trends
6. Software modifications (modifications driven by reasons other than
anomalies, such as customer change requests and product upgrades).
b. The SRF should describe the status and relevance of PSH data with respect to
the topics described in section 5.2 of this handbook, preferably through detailed
answers to the check-list questions provided in Annex A of this handbook.


Annex A Product service history data validation checklist
The following checklist contains a set of questions aimed at clarifying the relevance of Product Service
History data with respect to the topics discussed in sections 5.2.2 to 5.2.6 of this handbook.
The answers to these questions should be exhaustive and should be supported by evidence.
These questions are adapted from [FAA-HB].

Configuration and change management


1 Are the software versions tracked during the
service history duration?
2 Is revision/change history maintained for different
versions of the software?
3 Are problem reports tracked with respect to
particular versions of software?
4 Does the change history show that the software is
currently stable and mature?

Operations similarity
5 Is the intended software operation similar to the
usage during the service history? (Its interface with
the external world, operators, and procedures)
6 Are only some of the functions of the proposed
application used in service usage?
7 Is there a gap analysis of functions that are needed
in the proposed application but have not been used
in the service duration?
8 Have the differences between service usage and
proposed usage been analyzed?
9 If the input/output domains differ between service
history duration and intended use, has there been
an analysis of what functions are covered by the
service history?
10 Are there differences in the operating modes in the
new usage?

Platform similarity
11 Are the hardware environment of service history
and the target environment similar?
12 Is the product compatible with the target computer
without making modifications to the product
software?
13 If the computer environments are different, are the
differences verified (through analysis and/or
testing)?
14 Is the data needed to analyse similarity of
environment available? If not, have the software
requirements and design data been reviewed to
support the service history claim?

Error detection, recording and reporting


15 Were in-service problems reported?
16 Were all reported problems recorded?
17 Were these problem reports stored in a repository
from which they can be retrieved?
18 Were in-service problems thoroughly analyzed and
are those analyses included or appropriately
referenced in the problem reports?
19 Have change impact analyses been performed for
changes?
20 Is each problem report tracked with its status of
whether it is fixed or open?
21 If a problem was fixed, is there a record of how the
problem was fixed?
22 Are problem reports associated with the
solutions/patches and an analysis of change
impact?
23 Is there a record of a new version of software with
the new release after the problem was fixed?
24 Are there problems with no corresponding record
of change in software version?
25 Are all problems within the problem report
repository classified?
26 Are dependability and safety-related problems
identified as such?
27 Can dependability and safety-related problems be
retrieved?
28 Is there a record of which problems are fixed and
which problems are left open?

29 Is there enough data after the last fix of
dependability and safety-related problems to
assess that the problem is solved and that no new
dependability and safety-related problems have
surfaced?
30 Do open problem reports have any dependability
and safety impact?
31 Are the problem reports and their solutions
classified to indicate how a fix was implemented?
32 Is it possible to separate those problems that caused
a design or code correction?
33 Is it possible to separate the problem reports that
were fixed in the hardware or change of
requirements?
34 Was there a procedure used to log the problem
reports as errors?
35 Is there evidence that use of this procedure was
enforced and used consistently throughout the
service history period?

PSH observation time


36 What is the definition of service period?
37 Is the service period defined appropriate to the
nature of software in question?
38 How many copies of the software are in use and
being tracked for problems?
39 What was the criterion for evaluating service
period duration?
40 Does service period include normal and abnormal
operating conditions?
41 How are error rates calculated?
42 Is the error rate computation appropriate to the
application in question?
