ESSB-HB-Q-002 Issue 1 Rev 0
Date 30 January 2013
ESA UNCLASSIFIED – For Official Use
Table of contents
2 References ...................................................................................................................... 6
4 Background ..................................................................................................................... 8
4.1 ECSS requirements related to Product Service History ..................................................... 8
4.2 Product Service History approach in other domains .......................................................... 8
4.2.1 Aviation................................................................................................................ 8
4.2.2 Nuclear Power Plants Systems ............................................................................ 9
4.2.3 Other domains ................................................................................................... 11
4.3 Current practices in ESA projects .................................................................................... 11
4.3.1 Overview ........................................................................................................... 11
4.3.2 Galileo project.................................................................................................... 11
4.3.3 RTEMS operating system .................................................................................. 13
5 Guidelines ..................................................................................................................... 15
5.1 Context of application ...................................................................................................... 15
5.2 PSH data collection and validation .................................................................................. 16
5.2.1 Overview ........................................................................................................... 16
5.2.2 Configuration and change management ............................................................ 17
5.2.3 Operations similarity .......................................................................................... 17
5.2.4 Platform similarity .............................................................................................. 17
5.2.5 Error detection, recording and reporting............................................................. 18
5.2.6 PSH observation time ........................................................................................ 18
5.2.7 Additional information from development processes .......................................... 19
5.3 PSH acceptability criteria................................................................................................. 19
5.4 PSH report ...................................................................................................................... 20
Tables
Table 5-1 Proposed PSH acceptability criteria .......................................................................... 20
1 Purpose and scope
1.1 Purpose
The purpose of this handbook is to define guidelines for the use of Product Service History as one of the
means to assess the suitability of a given existing software item for its reuse in a specific project, in
accordance with ECSS requirements.
PSH responds to a basic concept: in a given project, the supplier is required to provide software which is
compliant with a specified set of engineering and product assurance requirements, normally corresponding
to a tailoring of the [ECSS-E-40] and [ECSS-Q-80] Standards, as well as with non-functional requirements
(concerning, for instance, dependability and safety) from project specific documents. These requirements are
meant to ensure that the software achieves certain objectives, in terms of, for example, functionality,
reliability, and maintainability. In the case of reused software, the information necessary to claim compliance with the above-mentioned requirements could be unavailable or insufficient, yet the software could have operated well in previous projects and therefore be a good candidate for the current application. In such a case, PSH can be used to support the claim that the proposed software is capable of meeting the project's objectives (e.g. functionality and reliability), even in the absence of exhaustive development process information.
It should be noted that this handbook, as is the case for other similar documents (e.g. [FAA-HB]), does not define any hard criteria for the acceptability of Product Service History data. Rather, it addresses specific aspects to be considered when evaluating the relevance and validity of PSH information, supporting the supplier in the collection of the required evidence and documentation, and the customer in the final decision about the reuse of the proposed existing software.
1.2 Scope
This handbook does not specify any new requirement. It is meant to help understand and implement the
ECSS requirements relevant to software reuse and Product Service History.
The guidelines defined in this handbook are addressed to ESA space and ground segment projects which intend to reuse existing software whose development process details are not fully available. As
specified in the Scope section of [ECSS-Q-80], these projects can include manned and unmanned spacecraft,
launchers, payloads, experiments and their associated ground equipment and facilities, as well as the
software component of firmware.
The intended target users of this handbook are software development and maintenance engineers, software product assurance engineers, and software/system operators. The guidelines defined in this handbook can assist these ESA personnel when supporting project managers in decisions concerning software reuse.
2 References
The documents listed below are cited by this document. For each document listed, the left column shows the mnemonic used to refer to that source throughout the present document, the middle column provides the complete reference, and the right column gives the complete name of the document.
3 Terms, definitions and abbreviated terms
3.2.2 middleware
software layer located between the application software and the hardware
OS operating system
PDS pre-developed software
PSH product service history
4 Background
4.2.1 Aviation
The aviation community appears to be one of the most active in the field of Product Service History. [DO-178B] is the outcome of a joint effort of RTCA and EUROCAE, the American and European technical commissions for aeronautics, respectively. It is used as a guideline by the Federal Aviation Administration (FAA) to assess the reliability of airborne software, and it explicitly mentions the use of Product Service History as an alternative method to demonstrate compliance with one or more of the [DO-178B] objectives.
The ECSS-Q-ST-80C approach to software reuse is in many aspects similar to that of [DO-178B]. In
particular, [DO-178B] section 12.1.4 concerns software whose life cycle data from a previous application is
inadequate or does not satisfy the objectives of the Standard. In this case, the applicant for the software
certification is required to perform a set of activities meant to increase the confidence in the software
proposed for reuse. These activities may include reverse engineering and/or Product Service History.
Guidelines and requirements for the application of PSH are provided in section 12.3.5 of [DO-178B], where it
is specified that “some certification credit may be granted” based on Product Service History, provided that
specific requirements are met. These requirements span from configuration management to the collection of actual error rates, and define firm conditions for PSH data acceptability.
In January 2002, the FAA issued a ‘Software Service History Handbook’ (DOT/FAA/AR-01/116) with the
purpose of providing more details and guidance on the application of Product Service History.
Besides an analysis and discussion of the various aspects of Product Service History and their relation to the
[DO-178B] objectives, the FAA handbook gives example worksheets to guide the evaluation in the domains
‘problem reporting’, ‘time’, ‘environment’ and ‘operation’. The aim of the sheets is to create uniformity and
structure in the assessments.
It is not known to what extent the FAA handbook and Product Service History are being used today in
certification of avionics software. An earlier report ([FAA-REP]) on the use of COTS in avionics software
suggests that this is only done on a very limited scale and only for category D software.
It is important to note that the FAA handbook makes an explicit statement about its status: “This handbook
is the output of a research effort. It does not, by itself, constitute policy or guidance. The FAA may use this
handbook in the creation of future policy or guidance.”
While the conclusions of the handbook express the expectation that the subject of Product Service History
will be revisited when [DO-178B] is revised, this is not listed as one of the focus points for DO-178C (the next
version of the document), which are: formal methods, software modelling, tool qualification, and object-oriented software.
Another document from the aviation domain relevant to Product Service History is [DO-278], whose
objective is "to be an interpretive guide for the application of DO-178B/ED-12B guidance to non-airborne
CNS/ATM systems". In this document, the objectives of [DO-178B] for the assurance that airborne software
has the integrity needed for use in a safety-related application have been reviewed and, in some cases,
modified for application to non-airborne CNS/ATM systems.
For the purpose of this handbook, the aviation documents and handbooks were analysed taking into consideration the fact that they are mainly concerned with safety aspects related to the certification of airborne and ground systems, and therefore the corresponding objectives may be more stringent than those underlying ESA requirements and standards.
4.2.2 Nuclear Power Plants Systems
Requirements for PDS are contained in clauses 5.7.2 and 5.7.4 (security), 7.1.4 (design and implementation), and 8.2.3.3 (verification of configuration data) of [IEC-60880].
Clause 14.3.4 on translators/compilers requires that the libraries contained in such tools and used in the
target system be considered as sets of pre-developed software components.
Key preconditions are prescribed for the integration of PDS items in nuclear power plants systems. In
particular the evaluation of the PDS comprises:
• Evaluation of functional and performance features
• Evaluation of required changes
• Evaluation of the quality of the software development process
• Evaluation of the operational experience of the PDS
• Documentation of the evidence gathered during the assessment process.
Hence, the evaluation of the operational experience of the PDS represents the link to the concept of Product
Service History addressed in this document.
In those cases where the PDS has been used in many applications similar to the intended use, operational
experience can be claimed during its evaluation in order to increase the confidence in the reliability of the
system.
It is important to note that all requirements applicable to the software components performing “Category A”
functions are still applicable to the PDS. Clause 15 of [IEC-60880], “Qualification of pre-developed software”,
requires that any candidate PDS comply with all the requirements defined in the Standard itself.
Nevertheless, Product Service History is accepted as complementary information to compensate for weaknesses in the evidence gathered during the evaluation process.
Clause 15 defines an evaluation and assessment process for PDS reutilisation in Nuclear Power Plants
Systems encompassing a systematic analysis of four key areas:
• Operating Time: this evaluation area aims at ensuring that the accounting of time includes
representative operational experience.
• History of Defects and Errors: this evaluation area aims at identifying the relevance of the problems
detected during the execution of the PDS.
• Operating Environment: this evaluation area aims at ensuring that the operating environment of the
PDS is similar to the one expected during operations of the new system.
• History of changes: this evaluation area aims at identifying the relevance of the changes introduced in
different releases of the PDS.
Data gathered for these key areas includes:
• Operating time: including elapsed time since first start-up, last release, last severe error, last error
report.
• History of defects and errors: including severity, affected subsystem, PDS version where the problem
is detected, PDS version where the problem is fixed, detection date, detection phase, source of the
problem.
• Operating environment: including configuration data, features of the system being executed, hardware conditions.
• History of changes: including software versions, list of changes per software version, affected
subsystems, pending changes.
This standard requires that gathered data follow a verification process to determine the relevance of the
Product Service History in the evaluation of the PDS:
• Operating time: the verification aims to demonstrate that a minimum pre-established time has been
achieved.
• History of defects and errors: the verification aims to demonstrate that all problems detected were
recorded and analysed.
• Operating environment: the verification aims to demonstrate that configuration data, features of the
system being executed and hardware conditions are comparable to the ones defined by the target
system.
• History of changes: the verification aims to demonstrate that all changes were recorded and analysed.
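As an illustration of the verification process above, the following sketch shows how the four key areas might be checked programmatically. The record structure, field names, and operating-time threshold are all illustrative assumptions for this handbook, not values prescribed by [IEC-60880].

```python
from dataclasses import dataclass

# Hypothetical record mirroring the four evaluation areas; all field
# names and the threshold below are illustrative assumptions.
@dataclass
class PdsHistory:
    operating_hours: float    # accumulated operating time
    defects: list             # (severity, detected_in, fixed_in, date) tuples
    environment_matches_target: bool
    changes: list             # (version, description) tuples

MIN_OPERATING_HOURS = 10_000.0  # assumed pre-established minimum time

def verify_history(h: PdsHistory) -> list:
    """Return verification findings; an empty list means all checks pass."""
    findings = []
    # Operating time: a minimum pre-established time has been achieved
    if h.operating_hours < MIN_OPERATING_HOURS:
        findings.append("operating time below the pre-established minimum")
    # History of defects and errors: every problem recorded with its analysis
    if any(len(d) != 4 for d in h.defects):
        findings.append("defect recorded without full analysis data")
    # Operating environment: comparable to the target system
    if not h.environment_matches_target:
        findings.append("operating environment not comparable to target")
    # History of changes: every change recorded and analysed per version
    if any(len(c) != 2 for c in h.changes):
        findings.append("change recorded without version or description")
    return findings
```

An empty findings list corresponds to PSH data passing all four verification steps; a real evaluation would of course rest on documented evidence rather than boolean flags.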
4.3.1 Overview
Product Service History (PSH) has not historically been widely used across ESA projects. Only a few
examples exist. Sections 4.3.2 and 4.3.3 describe the approach taken in the Galileo project and in the reuse of
the RTEMS operating system.
4.3.2 Galileo project
operating systems, and databases) significantly difficult, particularly on the ground where their usage is
commonplace. Industry has struggled to meet these requirements for Category D software and higher, and
the ground control segment has instead sought an alternative approach for widely-used ‘industry-standard’
COTS software at assurance level D (e.g. Windows, or open-source software such as Linux operating
systems, MySQL databases, and XML parsers). With this kind of software there are limited (if any) lifecycle
and development data and artefacts available, and the full qualification of community-developed open-
source software to level D would require significant cost and schedule effort that cannot be accepted by the
project.
A “late PSH” approach is proposed whereby the PSH data does not exist a-priori at the time of subsystem
qualification or acceptance. The timescales and schedule characteristics of the Galileo programme, coupled
with the shifting timeline for safety-of-life certification, permit a non-standard approach to the use of PSH
data in terms of its scheduling and availability. With the major driver for qualification of COTS being the
eventual safety-of-life certification, the project is able to temporarily waive the normal COTS qualification
requirements at software component level and address them at a future point.
The approach therefore is to collect a-posteriori PSH data for particular COTS in-situ in the integrated ground
control segment during segment and system validation campaigns, operations preparation, validation and
simulation campaigns as permitted by [DO-278], and then into initial operations and onwards during the
transition to the final configuration and deployment of the full complement of satellites and ground systems.
By the time the system is ready to begin offering a publicly available open service, it is expected that there will be sufficient PSH-like data to claim qualification of the COTS products to level D.
This approach to PSH data collection has certain advantages:
1. The operational usage scenario to which the PSH data pertains corresponds precisely to the operational usage scenario of the software being qualified, i.e. the PSH is entirely relevant since it was gathered on the software used in its intended manner in the system. Nominal and non-nominal scenarios and functionality should be exercised adequately as part of system validation, operations preparations, and actual early operation of the system, during formal test campaigns and in ad-hoc system usage by the operators.
2. The hardware environment to which the PSH data pertains corresponds precisely to the
hardware environment in which the software to be qualified is running.
3. Configuration Management considerations for PSH data are fully addressed – the ground
segment software is under strict configuration control, anomalies and NCRs are monitored,
raised and tracked as in any other ESA project, and according to ECSS and GSWS. Software
anomaly reporting tools include fields to flag anomalies that are found to be caused by a
software product for which PSH data is being gathered. The anomaly process feeds directly into
the PSH data gathering process. Anomaly Review Boards ensure that PSH concerns are not
overlooked and that the appropriate flagging of anomalies as PSH-impacting is performed.
4. PSH data is gathered on a mature and stable system having already undergone lower level
software/subsystem/element-level qualification and acceptance campaigns, as well as
integration at ground segment level.
In this way, Galileo overcomes some of the main problems associated with the use of PSH data (e.g.
relevancy to the current project, dissimilarity of environments and usage scenarios, inadequacy of
configuration management and anomaly reporting process).
At the time of publication of the present document, some aspects of the process remain to be agreed with
industry:
5. The measurement of total software run-time or in-service hours, and how actual “usage” of the software is
defined - the PSH should characterise valid usage time supported by, for example, analysis of
machine and process logs, operator logs, test records, and monitoring data, in order to have
granular run-time data reflecting the real usage period of the software. Industry prefers to
collect coarser data based simply on machine start/stop times independent of actual usage (i.e.
including idle times within the total runtime), arguing that idle times are part of the normal
usage scenario of the ground segment, and that some software (e.g. XML parsers) may only run
a discrete number of times per day for which it will be difficult to amass sufficient numbers of
hours if measurement is based on the few milliseconds each usage actually incurs. The issue of how run-time combines across multiple instances of the software also remains to be agreed, along with the criteria for determining when software defects cause the fault-free runtime period counter to be reset.
6. Software maintenance issues (e.g. patching, and upgrades) - careful consideration needs to be given
to the impact on the PSH data collection by any potential upgrade to the software COTS in
question (e.g. vendor patches, service packs, and security updates).
7. Constraints created by the PSH data collection process on the system’s future hardware evolution - the
need to keep the hardware environment stable for the purpose of PSH data collection is clear,
but the long duration of the project necessitates a strategy for dealing with hardware
obsolescence and migration. The obvious impact of hardware migration is to invalidate any
PSH gathered up to that point. An approach to mitigate risks is currently being introduced in
Galileo by using hardware virtualization such that PSH data is collected on virtualized instances
that maintain a constant configuration of qualified software (OS, COTS and application
software) on top of a thin virtualization layer, across hardware environments that evolve over
time.
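The run-time measurement question raised in point 5 can be illustrated with a small sketch contrasting the two measures: coarse machine uptime versus granular usage time reconstructed from process logs. The log format (start/stop times in hours) is an assumption for illustration only.

```python
# Illustrative reconstruction of "valid usage time" from process log
# intervals, contrasted with the coarser machine start/stop measure.

def usage_hours(process_intervals):
    """Sum active run intervals, merging overlaps across software instances."""
    merged = []
    for start, stop in sorted(process_intervals):
        if merged and start <= merged[-1][1]:
            # interval overlaps the previous one: extend it
            merged[-1][1] = max(merged[-1][1], stop)
        else:
            merged.append([start, stop])
    return sum(stop - start for start, stop in merged)

machine_uptime_hours = 24.0                     # coarse measure: a full day up
runs = [(0.0, 0.5), (0.25, 1.0), (6.0, 6.1)]    # granular process activity log
granular = usage_hours(runs)                    # about 1.1 hours of real usage
```

The gap between the two figures (24 hours of uptime versus roughly one hour of actual execution) is exactly the point of disagreement described above for software that runs only a discrete number of times per day.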
The risk exists that Galileo’s PSH process ultimately fails to collect data that qualifies the products in
question to level D, though the programme’s duration should provide sufficient time to address this in
another way, particularly with safety certification deferred until some years after the achievement of full
operational capability. ESA’s involvement in shaping the PSH process and in the analysis of periodic PSH
data reports from industry will help provide early detection of potential problems. Sufficient time is
available to refine the process further if necessary. Recovery strategies might then be to replace the offending
COTS with qualifiable COTS or to undertake dedicated COTS qualification activities in timescales
compatible with the overall programme’s schedule for safety certification.
It should be noted that the Galileo example does not reflect the PSH data usage as intended in [ECSS-Q-80].
The ECSS Standard specifies that PSH can be used to support the decision whether or not to reuse existing software in a project; for that purpose the PSH data need to be available at the time when the decision is
taken. The approach of Galileo to decide upfront to reuse a software product, and then collect service history
data to verify whether that software fulfils the project requirements, is not in line with the [ECSS-Q-80]
provisions.
4.3.3 RTEMS operating system
standards-based real-time executive for which source code was available and royalties were paid.” Today, RTEMS is free software under a modified GNU General Public License (any software linked to RTEMS maintains its own licensing scheme).
RTEMS has not been developed following ECSS requirements, or apparently any similarly prescriptive and structured standards. While the source code and a comprehensive error database are publicly available, important documents are missing, among which are software requirements documents, design documents and test reports. There is a test suite, but its scope is limited to software system level testing. There are no test suites for unit level testing or for integration level testing.
This operating system is being integrated in an increasing number of ESA spacecraft projects. As such, RTEMS would be a good candidate for the establishment of a well-documented Product Service History. However, the majority of ESA projects using RTEMS have considered the available documentation insufficient and have embarked on additional validation campaigns to facilitate qualification, based on a reverse engineering approach.
It should be noted that a major European space company has instantiated a specific version of RTEMS, called
"RTEMS product", which is being used in different projects and for which some in-service data are being
collected. These data, if complemented in line with the recommendations contained in this handbook, could
represent a valid starting point for a potential Product Service History approach.
5 Guidelines
customer. The relevance and validity of the information provided in the Software Reuse File is subject to the customer's approval.
When Product Service History represents a major constituent of a Software Reuse File, the customer should
consider the risks associated with the reuse of the proposed existing software. The above considerations about the criticality of the function to be implemented and the validity limitations of PSH data play a significant role in the reduction of such risks.
5.2 PSH data collection and validation
5.2.1 Overview
As mentioned in [FAA-HB], although Product Service History would seem to be a fairly simple concept, problems may arise in its practical application concerning the collection of PSH data and the relevance of these data to the reuse justification in the current application.
This handbook provides recommendations for the process of collecting Product Service History data and for
the conditions for these data to be valid and acceptable.
The following data are relevant to the Product Service History of a software product:
• Identification of the software (including identification of SW generation tools, such as compiler and
linker)
• Observation time
• Operations performed
• Operational environment
• Problems observed and error rates/trends
• Timing for problem resolution
• Software modifications (both corrections of software faults and modifications driven by other reasons,
such as customer change requests and product upgrades)
The supplier should provide this information in support of the claim of software suitability for reuse through PSH.
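As a minimal sketch, the data items listed above could be collected in a record such as the following; the field names are illustrative assumptions, not identifiers prescribed by ECSS or this handbook.

```python
from dataclasses import dataclass

# Illustrative PSH data record covering the items listed above;
# all field names are assumptions made for this sketch.
@dataclass
class PshRecord:
    software_id: str         # incl. generation tools (compiler, linker)
    observation_hours: float # observation time
    operations: list         # operations performed
    environment: str         # operational environment
    problems: list           # problems observed, error rates/trends
    resolution_times: list   # timing for problem resolution (e.g. in days)
    modifications: list      # fault corrections and other changes

def missing_items(rec: PshRecord) -> list:
    """Names of PSH data items left empty, i.e. gaps the supplier should justify."""
    return [name for name, value in vars(rec).items() if not value]
```

A record with gaps would prompt the supplier to complete the data or explain the omission before submitting the claim to the customer.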
In order for these data to be valid and usable, the processes behind the generation and collection of the data
should be clearly identifiable, traceable and reliable. The following aspects related to the validity of the
Product Service History data are addressed in sections 5.2.2 to 5.2.7:
• Configuration management of the software
• Correctness and traceability of data collection timing
• Relevance of previous software operations with respect to the current application
• Similarity of the previous operational environment(s) to the current one
• Methods of software problems detection and logging
• Management and documentation of software changes
Annex A contains a checklist that can be used to assess the validity of the Product Service History data of a
certain software product with respect to the topics discussed in sections 5.2.2 to 5.2.7.
applicable, for instance, to software maintenance functions (patch and dump), which are only executed when on-board software maintenance is necessary: for this kind of software, the number of maintenance operations performed would represent a better reference for PSH observation than the overall on-board software operational time.
Consideration should also be given as to how to combine observation time obtained from multiple instances
of the same software. For example, ground software PSH data may exist for multiple instances of the same
software product in use across multiple hardware chains or environments, such as integration, validation
and operational chains. Each source of PSH data might be weighted accordingly and combined to provide a greater total observation time for the software.
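The combination described above can be sketched as a simple weighted sum; the per-chain weights shown are illustrative assumptions to be agreed with the customer, not values prescribed by this handbook.

```python
# Illustrative weighted combination of observation time across multiple
# instances of the same software; the weights per chain are assumptions.

def combined_observation_hours(instances):
    """instances: iterable of (hours, weight) pairs, with weight in [0, 1]."""
    return sum(hours * weight for hours, weight in instances)

chains = [
    (5000.0, 1.0),    # operational chain: full credit
    (3000.0, 0.5),    # validation chain: partial credit (assumed weight)
    (2000.0, 0.25),   # integration chain: partial credit (assumed weight)
]
total = combined_observation_hours(chains)  # 7000.0 weighted hours
```

Weights below 1.0 reflect the lower relevance of non-operational chains, so that a greater total observation time can be claimed without treating all usage environments as equivalent.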
Major software changes should be considered when setting the starting time for PSH observation. The
problem is twofold. On one hand, it can be argued that, after a major change, the software is no longer the
same as it was before the change, and thus the clock should be restarted. On the other hand, if the major change was due to a severe software fault, restarting the clock without mention of this major fault would distort the PSH error statistics, and therefore the clock should not be restarted. In this respect, decisions should be taken on a case-by-case basis, analysing the reasons and effects of software changes.
5.3 PSH acceptability criteria
• The Product Service History data should be compatible with any existing MTBF requirement
applicable to the system/subsystem/equipment in which the reused software is expected to run. In
other words, the Product Service History data should show that the software proposed for reuse has
been free from failure for a period which is equal to or greater than the applicable
system/subsystem/equipment MTBF.
• The acceptability criteria defined in Table 5-1 are proposed for the different software criticality
categories, distinguishing between flight and ground software.
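The MTBF compatibility condition in the first bullet above reduces to a simple comparison; the function name and hour-based units below are illustrative assumptions.

```python
# Minimal sketch of the MTBF compatibility check: the failure-free period
# shown by the PSH data should equal or exceed the MTBF requirement
# applicable to the hosting system/subsystem/equipment.

def psh_meets_mtbf(failure_free_hours: float, required_mtbf_hours: float) -> bool:
    """True if the observed failure-free period covers the required MTBF."""
    return failure_free_hours >= required_mtbf_hours
```

For example, 20 000 failure-free hours of service history would satisfy a 15 000-hour MTBF requirement, whereas 8 000 hours would not; in the latter case the PSH data alone cannot support the reuse claim.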
i. Categories of anomalies:
▪ Anomalies leading to a specification/requirements evolution.
▪ Anomalies related to the development process, design,
coding and verification.
▪ Anomalies related to hardware problems.
ii. For the first two anomaly categories (excluding hardware anomalies), provide the anomaly reports produced during the product service history and specify:
▪ Anomalies list and identification.
▪ Origin and cause of each anomaly (and the link to the
concerned modules and/or requirements for traceability).
▪ Correction of each anomaly (and the link to the concerned modules for traceability), which can be either a product correction or a process correction.
▪ The decision taken, such as using the previous version or performing additional tests/verification.
▪ List of non-regression tests performed.
5. Error rates/trends
6. Software modifications (modifications driven by reasons other than anomalies, such as customer change requests and product upgrades).
b. The SRF should describe the status and relevance of PSH data with respect to
the topics described in section 5.2 of this handbook, preferably through detailed
answers to the check-list questions provided in Annex A of this handbook.
Annex A
Product service history data validation checklist
The following checklist contains a set of questions aimed at clarifying the relevance of Product Service
History data with respect to the topics discussed in sections 5.2.2 to 5.2.6 of this handbook.
The answers to these questions should be exhaustive and should be supported by evidence.
These questions are adapted from [FAA-HB].
Operations similarity
5 Is the intended software operation similar to the
usage during the service history? (Its interface with
the external world, operators, and procedures)
6 Are only some of the functions of the proposed
application used in service usage?
7 Is there a gap analysis of functions that are needed
in the proposed application but have not been used
in the service duration?
8 Have the differences between service usage and
proposed usage been analyzed?
9 If the input/output domains differ between service
history duration and intended use, has there been
an analysis of what functions are covered by the
service history?
10 Are there differences in the operating modes in the
new usage?
Platform similarity
11 Are the hardware environment of service history
and the target environment similar?
12 Is the product compatible with the target computer
without making modifications to the product
software?
13 If the computer environments are different, are the
differences verified (through analysis and/or
testing)?
14 Is the data needed to analyse similarity of
environment available? If not, have the software
requirements and design data been reviewed to
support the service history claim?