
ARP 4754

Systems Integration Requirements Task Group


AS-1C, ASD
SAE

April 10, 1996


Table of Contents

Introduction

1.0 Scope
1.1 Purpose
1.2 Document Organization
1.3 Document Conventions
1.4 Document Background

2.0 References
2.1 Applicable Documents
2.2 Definitions
2.3 Abbreviations and Acronyms

3.0 System Development
3.1 A Conceptual System Development Process
3.2 Development Assurance
3.2.1 Development Assurance Activities
3.2.2 Development Assurance Substantiation
3.3 Safety Directed Development Concept

4.0 Certification Process and Coordination
4.1 Certification Planning
4.2 Agreement on the Proposed Means of Compliance
4.3 Compliance Substantiation
4.4 Certification Data
4.4.1 Certification Plan
4.4.2 Configuration Index
4.4.3 Development Plan
4.4.4 Architecture and Design

5.0 Requirements Determination and Assignment of Development Assurance Level
5.1 Requirements Capture
5.2 Types of Requirements
5.2.1 Safety Requirements
5.2.2 Functional Requirements
5.2.3 Additional Certification Requirements
5.3 Derived Requirements
5.4 Assignment of Development Assurance Levels
5.4.1 Architecture Considerations
5.4.2 Implementation Error Management
5.4.3 Software Level Assignment
5.4.4 Hardware Level Assignment
5.5 Failure Condition Risk Assignment

6.0 Safety Assessment Process
6.1 Functional Hazard Assessment
6.2 Preliminary System Safety Assessment
6.3 System Safety Assessment
6.4 Common Cause Analysis
6.5 Safety-Related Flight Operations or Maintenance Tasks

7.0 Validation of Requirements
7.1 Validation Process Objectives
7.2 Validation Process Model
7.3 Completeness Checks
7.4 Correctness Checks
7.5 Validation of Assumptions
7.5.1 Operational and Environmental Assumptions
7.5.2 Design Related Assumptions
7.5.3 Manufacturing & Producibility Assumptions
7.5.4 Serviceability Assumptions
7.5.5 Installation Assumptions
7.6 Validation Rigor
7.6.1 Validation Methods
7.6.2 Recommended Methods
7.7 Validation Data
7.7.1 Validation Plan
7.7.2 Supporting Data and Records
7.7.3 Validation Tracking
7.7.4 Validation Summary

8.0 Implementation Verification
8.1 Verification Process Objectives
8.2 Verification Process Model
8.3 Verification Planning
8.4 Verification Methods
8.4.1 Inspection and Review
8.4.2 Analysis
8.4.3 Testing
8.4.4 Similarity / Service Experience
8.4.5 Recommended Verification Activities
8.5 Verification Data
8.5.1 Verification Plan
8.5.2 Verification Procedures and Results
8.5.3 Verification Matrix
8.5.4 Verification Summary

9.0 Configuration Management
9.1 Configuration Management Process Objectives
9.2 Configuration Management Process Activities
9.2.1 Configuration Identification
9.2.2 Problem Reporting and Change Control
9.2.3 Archive and Retrieval

10.0 Process Assurance
10.1 Objectives of Process Assurance
10.2 Process Assurance Plan
10.3 Project Plan Reviews
10.4 Evidence of Process Assurance

11.0 Modified Aircraft
11.1 Certification Basis
11.2 Means of Compliance
11.3 Considerations for Modification
11.3.1 Introducing a New Aircraft-Level Function
11.3.2 Replacing One System With Another on an Existing Aircraft
11.3.3 Adapting an Existing System to a Different Aircraft Type
11.3.4 Altering a System on an Existing Aircraft
11.4 Additional Considerations
11.4.1 Supplementing Existing Certification Data
11.4.2 Use of Service History

Appendix A - An Overview of a Generic Approach to Aircraft Systems Development
A.1 Generic Systems Development Processes
A.1.1 Identification of Aircraft-Level Functions, Functional Requirements and Functional Interfaces
A.1.2 Determination of Functional Failure Consequences and Implications
A.1.3 Allocation of Functions To Systems and People
A.1.4 Design of System Architecture and Allocation of Requirements to Items
A.1.5 Allocation of Item Requirements to Hardware and Software
A.1.6 Hardware and Software Design/Build
A.1.7 Hardware/Software Integration
A.1.8 System Integration
A.2 Information Flow Between System and Hardware/Software Life-Cycle Processes
A.2.1 Information Flow From System Process to Hardware/Software Processes
A.2.2 Information Flow From Hardware/Software Processes to System Process

Appendix B - Definitions, Abbreviations & Acronyms
B.1 Definitions
B.2 Abbreviations and Acronyms

Appendix C - Concept, Task and Function Index

Appendix D - Considerations Associated with the Allocation of Risk in Systems Development
D.1 Link Between Systems and Software Methodology
D.2 Interpretation of Systems Interface with DO-178B
D.3 Architectural Isolation and Independence
D.4 Systems Performing ‘Essential’ Functions
D.5 Service Experience
D.6 Finite Resources
D.7 Conclusions

List of Figures

Figure 1-1 Certification Guidance Documents Covering System, Safety, Software and Hardware Processes
Figure 3-1 Aircraft Function Implementation Process
Figure 3-2 System Development Process Model
Figure 6-1 Safety Assessment Process Model
Figure 7-1 Validation Process Model
Figure 8-1 Verification Process Model

List of Tables

Table 1-1 Content and Purpose of the Document Sections
Table 4-1 Certification Data
Table 5-1 System Development Assurance Level Assignment
Table 5-2 Examples of Architecturally Derived System Development Assurance Levels and Constraints
Table 6-1 Failure Classification and Safety Implications
Table 7-1 Requirements Validation Methods and Data
Table 8-1 Verification Methods and Data

Introduction
Established industry practices and associated regulatory requirements have
developed over many years to ensure that safety standards are maintained
in civil aircraft. The increasing integration and complexity of aircraft
electronic systems has led to a need to review existing procedures and
provide additional guidance to ensure that the proper operation and safety
of future systems and system changes can be assured.

1.0 Scope
This document discusses the certification aspects of highly-integrated or
complex systems installed on aircraft, taking into account the overall aircraft
operating environment and functions. The term “highly-integrated” refers
to systems that perform or contribute to multiple aircraft-level functions.
The term “complex” refers to systems whose safety cannot be shown solely
by test and whose logic is difficult to comprehend without the aid of
analytical tools.

The guidance material in this Aerospace Recommended Practice (ARP) was
developed in the context of Federal Aviation Regulations (FAR) and Joint
Aviation Requirements (JAR) Part 25. It may be applicable to other
regulations, such as Parts 23, 27, 29 and 33. In general, this material is also
applicable to engine systems and related equipment. Final regulatory
approval of all systems is assumed to be accomplished in conjunction with
an aircraft certification.

This document has been prepared primarily for electronic systems which, by
their nature, may be complex and are readily adaptable to high levels of
integration. However, the guidance provided in this document may be
considered for other aircraft systems.

This document addresses the total life cycle for systems that implement
aircraft-level functions. It excludes specific coverage of detailed systems,
software and hardware design processes beyond those of significance in
establishing the safety of the implemented system. More detailed coverage
of the software aspects of design are dealt with in RTCA document
DO-178B, “Software Considerations in Airborne Systems and Equipment
Certification” and its EUROCAE counterpart, ED-12B. Coverage of complex
hardware aspects of design are dealt with in RTCA document DO-xxx,
(working title: “Design Assurance Guidance for Airborne Electronic
Hardware,”) currently under development by RTCA special committee
SC-180. Methodologies for safety assessment processes are outlined in SAE
document ARP 4761, “Guidelines and Methods for Conducting the Safety
Assessment Process on Civil Airborne Systems and Equipment”. Figure 1-1
outlines the relationships between the various documents which provide
guidance for system development, safety assessment, and the hardware and
software life-cycle processes.

This ARP is intended to be a guide for both the certification authorities and
applicants for certification of highly-integrated or complex systems,
particularly those with significant software elements. As such, the focus is
toward ensuring that safety is adequately assured through the development
process and substantiating the safety of the implemented system. Specific
guidance on how to do the substantiation work is beyond the scope of this
document, though references are provided where applicable.

Figure 1-1 Certification Guidance Documents Covering System, Safety, Software and Hardware Processes
This document is intended to cover the needs of current technology and, as
far as possible, emerging technology. It is anticipated that this document
will be revised periodically to incorporate future technological changes and
engineering process improvements.

1.1 Purpose
These guidelines are intended to provide designers, manufacturers,
installers and certification authorities a common international basis for
demonstrating compliance with airworthiness requirements applicable to
highly-integrated or complex systems. The guidelines are primarily directed
toward systems that integrate multiple aircraft-level functions and have
failure modes with the potential to result in unsafe aircraft operating
conditions. Typically, these systems are software based and involve
significant interactions with other systems in a larger integrated
environment. Frequently, significant elements of these systems are developed
by separate individuals, groups or organizations. Highly-integrated or
complex systems require added design discipline and development
structure to ensure that safety and operational requirements can be fully
realized and substantiated. While these guidelines could be applied to the
development of simpler systems, the formality of the development
structure, processes, and documentation should be reduced substantially.

Since this document is intended to provide a common basis for certification,
the guidelines concentrate primarily on safety requirements associated with
JAR/FAR 25.1309. Other requirements that determine the basis for
satisfactory functionality, such as JAR/FAR 25.1301, can be addressed using
this same guidance or simple extensions.

Much of the material covered in this document is not new and, where
relevant, references are included to related documents. Many of the processes
referred to in this document are undergoing rapid evolutionary
development. Moreover, the extent to which different systems can be
classified as complex or highly-integrated is subject to wide variation. By
providing a single publication that addresses the generic, high-level aspects
of certification of highly-integrated or complex systems, it is believed that a
broader understanding of the fundamental issues will develop. This, in turn,
should aid both the applicant and the certification authorities in reaching
agreement on the detailed system certification process for a particular
aircraft model.

This document does not provide guidelines concerning the structure of the
applicant’s organization nor how the responsibilities for certification
activities are divided. Neither should any such guidance be inferred from
the descriptions provided.

1.2 Document Organization


Table 1-1 outlines the content and purpose of each section of this document.

Section      Content and Purpose

2            List of references.

3            Outlines the significant characteristics of highly-integrated
             and complex systems and introduces the concept of system
             development assurance as a means of certification.

4            Summarizes guidelines for the selection of certification data
             along with the planning and coordination of the certification
             program.

5            Describes means used to determine and allocate system
             requirements, including those requirements derived from system
             architecture, with specific emphasis given to the
             identification of development assurance levels.

6            Describes safety assessment activities that support the design
             process and lead to development of the recommended
             certification substantiation information.

7            Describes validation activities that substantiate the
             correctness and completeness of requirements.

8            Describes verification activities that substantiate correct
             design implementation.

9            Outlines elements of configuration management which support
             substantiation of development assurance requirements.

10           Outlines elements of process assurance which support
             substantiation of development assurance.

11           Explains the applicability of this ARP to modifications of
             existing aircraft or existing systems.

Appendix A   Describes a generic approach to systems development as an aid
             to understanding the systems concepts and process terminology
             used in this document.

Appendix B   A list of definitions, abbreviations and acronyms used in
             this ARP.

Appendix C   Index to descriptions and applications of concepts, tasks,
             and functions particularly significant to this ARP.

Table 1-1 Content and Purpose of the Document Sections

1.3 Document Conventions
This document contains concepts and guidelines collected from
representatives of the civil avionics, airframe, engine and regulatory
communities. The contents are recommendations and are not mandated by
law. For these reasons, the use of words such as “shall” and “must” is
avoided. It is recognized that alternative methods to the processes described
or referenced in this document may be available to an organization desiring
to obtain certification of a highly-integrated or complex aircraft system.

The terms function and system can be applied at many different levels. Since
the terms system and function are used at all levels of the development
process, they create many opportunities for miscommunication. Any
development program should identify the intended scope of these terms
when they are used.

The term item is used in this document to describe any equipment, line
replaceable unit, or line replaceable module. All items are characterized by a
hardware definition and, where appropriate, a software definition.
Components and software that are included in equipment, LRUs, or LRMs,
and are not required to be controlled by part number at the aircraft level, are
not items for the purpose of this document.

In this document, system generally means a combination of interrelated
items arranged to implement a specific aircraft-level function or group of
functions. A typical system will include such items as: power sources,
sensors, control, processing, indications and functional outputs. This is a
broader meaning than the typical ATA 100 system designation.

The term partition is used in this document to describe the mechanism used
to separate portions of a system or an item with sufficient independence
such that a specific development assurance level can be substantiated within
the partitioned portion.

1.4 Document Background


During development of Revision B to RTCA document DO-178, it became
apparent that system-level information would be required as input to the
software development process. Since many system-level decisions are
fundamental to the safety and functional aspects of aircraft systems,
regulatory involvement in the processes and results relating to such
decisions is both necessary and appropriate.

This document was developed in response to a request from the FAA to
SAE. The FAA requested that SAE define the appropriate nature and scope
of system-level information for demonstrating regulatory compliance for
highly-integrated or complex avionic systems. The Systems Integration
Requirements Task Group (SIRT) was formed to develop an ARP that would
address this need.

The initial members of SIRT recognized that harmonization of international
understanding in this undertaking was highly desirable and encouraged
participation by both Federal Aviation Administration (FAA) and Joint
Aviation Authorities (JAA) representatives. A companion working group
was formed under EUROCAE, WG-42, to coordinate European input to the
SIRT group. The task group included people with direct experience in
design and support of large commercial aircraft, commuter aircraft,
commercial and general aviation avionics, jet engines, and engine controls.
Regulatory personnel with a variety of backgrounds and interests
participated in the work of the task group. Both formal and informal links
with RTCA special committees (SC-167 and SC-180) and SAE committee
(S-18) were established and maintained. Communication with the
harmonization working group addressing FAR/JAR 25.1309 was
maintained throughout development of this document.

Throughout development of this document, discussion returned repeatedly to
the issue of guidance specificity. Strong arguments were presented in favor
of providing a list of very specific certification steps (a checklist). Equally
strong arguments were made that the guidance should focus on
fundamental issues, allowing the applicant and the certification authority to
tailor details to the specific system. It was recognized that in either case
certification of all but the most idealized systems will require significant
engineering judgment by both parties. The quality of those judgments is
served best by a common understanding of, and attention to, fundamental
principles. The decision to follow this course was supported by several other
factors: the variety of potential systems applications, the rapid development
of systems engineering, and industry experience with the evolving guidance
contained in DO-178, DO-178A, and DO-178B being particularly significant.

The generic systems development road map presented in Appendix A,
together with the detailed definitions contained in Appendix B.1, provides
additional insight into systems development that can be of assistance to
those not directly familiar with this type of work. The concept, task, and
function index in Appendix C is intended to help users of the document
quickly find specific types of information.

2.0 References

2.1 Applicable Documents


AC 25.1309-1A, “System Design and Analysis, Advisory Circular”, FAA

AMJ 25.1309, “System Design and Analysis, Advisory Material Joint”, JAA

ARP 4761, “Guidelines and Methods for Conducting the Safety Assessment
Process on Civil Airborne Systems and Equipment”, SAE

ATA-100, “ATA Specification for Manufacturer’s Technical Data”, Rev. 30,
Oct. 31, 1991; Air Transport Association of America, 1709 New York
Ave. NW, Washington D. C. 20006

DO-178B, “Software Considerations in Airborne Systems and Equipment
Certification”, RTCA Inc., 1140 Connecticut Avenue, N. W., Suite 1020,
Washington, D. C. 20036

DO-xxx, document under development by SC-180, working title: “Design
Assurance Guidance for Airborne Electronic Hardware”, RTCA Inc., 1140
Connecticut Avenue, N. W., Suite 1020, Washington, D. C. 20036

FAR/JAR Part 25

2.2 Definitions
See Appendix B.1

2.3 Abbreviations and Acronyms


See Appendix B.2

3.0 System Development


This section outlines an overall system development process model that
provides a structure for the remaining sections in this document. The
concept of development assurance is introduced.

[Figure 3-1 depicts the nested scope of development activities: each
aircraft-level function is implemented by one or more systems, each system
by one or more items, and each item through hardware and software
life-cycle processes. Supporting processes (certification coordination,
safety assessment, requirements validation, implementation verification,
configuration management, and process assurance) apply across all of these
activities.]

Figure 3-1 Aircraft Function Implementation Process

3.1 A Conceptual System Development Process
Figure 3-1 shows an aircraft function implementation process model. The
model includes multiple system development processes. Each system
development process can consist of multiple item development processes.
Nested within each item development process there can be multiple
hardware life cycles as well as multiple software life cycles. There are certain
supporting processes that take place repetitively during each of the
development activities. Figure 3-2 illustrates a generic systems development
process showing the scope of this ARP. A description of a generic system
development process is provided in Appendix A.1. This description is not
intended as a preferred method, nor does it imply a specific organizational
structure, but is intended to facilitate understanding. The interface between
the system development process and the hardware and software life-cycles
is described in Appendix A.2.
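
By way of illustration only (this is not part of the ARP guidance), the
nesting shown in figure 3-1 can be expressed as a simple containment
hierarchy. All names in the following sketch are hypothetical:

    # Illustrative containment hierarchy from figure 3-1 (names hypothetical).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Item:                  # equipment, LRU, or LRM (see section 1.3)
        name: str
        hardware: List[str] = field(default_factory=list)  # hardware life cycles
        software: List[str] = field(default_factory=list)  # software life cycles

    @dataclass
    class System:
        name: str
        items: List[Item] = field(default_factory=list)

    @dataclass
    class AircraftFunction:
        name: str
        systems: List[System] = field(default_factory=list)

    # One aircraft-level function implemented by a system containing two items.
    braking = AircraftFunction(
        "Decelerate aircraft on ground",
        systems=[System("Brake system", items=[
            Item("Brake control unit", hardware=["BCU board"],
                 software=["BCU operational program"]),
            Item("Antiskid valve", hardware=["valve assembly"]),
        ])],
    )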

Most actual systems development processes involve many iterative cycles,
making the overall process appear more cyclic than sequential.
for aircraft function implementation may occur at any point in the cycle. For
a new aircraft-level function, the process begins with the top-level definition
of requirements. For adding functions to an aircraft, the entry point may
occur in the context of changes to a particular piece of equipment. However,
regardless of the entry point, an assessment of the impact of the new or
modified function on other aircraft-level functions and their supporting
requirements is necessary. In practice many of the development activities
shown in figure 3-2 are concurrent and may involve interactive
dependencies that lead to alteration of previously established requirements.

Highly-integrated and complex systems present greater opportunities for
development error (requirements determination and design errors) and
undesirable, unintended effects. At the same time it is generally not practical
(and may not even be possible) to develop a finite test suite for highly-
integrated and complex systems which conclusively demonstrates that there
is no development error residue. Since these errors are generally not
deterministic and suitable numerical methods for characterizing them are
not available, other qualitative means should be used to establish that the
system can satisfy safety objectives. Development assurance (as defined
below and in appendix B.1) establishes confidence that system development
has been accomplished in a sufficiently disciplined manner to limit the
likelihood of development errors that could impact aircraft safety.
[Figure 3-2 shows the system development process flowing from
identification of aircraft-level functions and functional requirements,
through allocation of aircraft functions to systems, development of the
system architecture, allocation of item requirements to hardware and
software, and system implementation, to certification of the resulting
physical system. The safety assessment process (see figure 6-1) runs
alongside, supplying failure conditions, effects, classifications, and
safety objectives at each step.]

Figure 3-2 System Development Process Model

3.2 Development Assurance
Development assurance is a process involving specific planned and
systematic actions that together provide confidence that errors or omissions
in requirements or design have been identified and corrected to the degree
that the system, as implemented, satisfies applicable certification
requirements.

3.2.1 Development Assurance Activities


The development assurance activities are the supporting processes shown in
figure 3-1. In section 5 of this document, systems and items are assigned
“development assurance levels” based on failure condition classifications
associated with aircraft-level functions implemented in the systems and
items. The rigor and discipline needed in performing the supporting
processes will vary corresponding to the assigned development assurance
level.

Sections 4 through 10 give insight into the correspondence between
development assurance level and the recommended activities contained
within the supporting processes. The development assurance level
determines the necessary software and hardware design levels of DO-178B
and DO-xxx.

3.2.2 Development Assurance Substantiation


There are two key elements in the process of substantiating development
assurance. One involves timely visibility of the activities that make up the
supporting processes along with applicable results. The other involves
the manner in which the safety assessment process interacts with the other
supporting processes at key points in the development program. For most
highly integrated or complex systems, development assurance
substantiation extends throughout the majority of the development period.
This makes it convenient to view the certification process itself as a
supporting process.

3.3 Safety Directed Development Concept

An alternative to relying solely on development assurance is a safety
directed development concept that focuses design effort on identifying
potential development errors. Design features are specifically employed to
preclude or contain the safety consequences of these errors. The safety
directed development concept employs safety, validation and verification
processes that are similar in many respects to the methods outlined in this
ARP.

4.0 Certification Process and Coordination
The objective of the certification process is to substantiate that the aircraft
and its systems comply with applicable airworthiness requirements. In most
situations the aircraft certification is accomplished through compliance with
a series of system certification plans. Planning and coordination are vital to
establish effective communications between the applicant and the
certification authority and to reach agreement on the intended means of
showing that the aircraft and its systems comply with airworthiness
requirements.

One of the characteristics of highly-integrated or complex systems is the
need to use development assurance methods as part of the evidence
supporting system certification.

Careful attention to system architecture, item architecture, and component
selection may simplify the development assurance methods and may limit
the range of systems or items to which this certification strategy applies.

4.1 Certification Planning


Aircraft tend to have a high degree of variance in both system complexity
and integration. The certification process, therefore, should be flexible.
Certification planning provides applicants with that flexibility, while at the
same time providing both the applicant and certification authority a high
level of confidence in the certification approach.

Certification planning separates the regulatory aspects of the overall
development process into manageable tasks that can be accomplished in a
logical and sequential manner. A top-level plan describes the certification
basis (applicable regulations and special conditions) and outlines the means
by which the applicant expects to demonstrate compliance. Early
certification planning can minimize the effects of misinterpretations of the
regulations and advisory material.

The certification plan for a complex or highly-integrated system defines the
product and installation to be certified, outlines the product development
processes to be used for development assurance, and identifies the proposed
means of compliance with the regulations. Since many of the development
assurance activities occur well before an implementation is available, early
coordination with the certification authorities is recommended. An early
certification plan may be missing significant detail, necessitating subsequent
updates. Even so, early coordination of the plan is strongly encouraged.

4.2 Agreement on the Proposed Means of Compliance


The applicant proposes a means of compliance that defines how the
development of the system or equipment will satisfy the applicable
certification basis. The applicant should:
a. Submit plans to the certification authority for review before relevant
development activities occur. If additional substantiating data
concerning the plans or means of compliance is requested, it should be
submitted in time to allow meaningful review.
b. Resolve issues identified by the certification authority concerning the
means by which the system will be shown to comply with
airworthiness requirements.
c. Obtain agreement with the certification authority on the plans.

4.3 Compliance Substantiation


The certification data is the evidence that the system satisfies airworthiness
requirements. This includes both the data that is submitted and the data that
is required to be retained. The certification authority determines the
adequacy of the data for showing regulatory compliance. The applicant
should develop a certification summary to describe how it was determined
that the system, as installed on the aircraft, complies with the agreed
certification plan.

The certification summary should provide an outline of the results of the
activities established in the certification plan. Any deviation from the agreed
plan should be described together with rationale to substantiate the
deviation. In addition to addressing each of the content items in the
certification plan, the certification summary should include:
a. A statement of compliance to the airworthiness requirements.
b. An outline of any open problem reports that impact functionality or
safety.

4.4 Certification Data


The range of possible certification data listed in table 4-1 includes a
cross-reference to sections of this document where related information,
including information about the development assurance level, can be found.

System Certification Data                  Described in Section
Certification Plan *                       4.4.1
Development Plan                           4.4.3
Architecture and Design                    4.4.4
Requirements                               5.2 and 5.3
Validation Plan                            7.7.1
Verification Plan                          8.5.1
Configuration Management Plan              9.0
Process Assurance Plan                     10.2
Configuration Index *                      4.4.2
Functional Hazard Assessment               6.1
Preliminary System Safety Assessment       6.2
System Safety Assessment                   6.3
Common Cause Analysis                      6.4
Validation Data                            7.7
Verification Data                          8.5
Evidence of Configuration Management       9.2
Evidence of Process Assurance              10.4
Certification Summary *                    4.3

An asterisk (*) indicates minimum certification data.

Table 4-1 Certification Data

There is no need to submit certification data beyond that specifically
negotiated with the certification authority. The minimum certification data
to be submitted to the certification authority includes the certification plan,
certification summary, and configuration index.

The certification data described in this document does not imply a
requirement for specific arrangement or grouping of data or delivery format
(such as paper, computer files, or remote terminal displays). Whatever form
is selected by the applicant should provide for efficient retrieval and review
by the certification authority as required by regulations and laws in effect
governing in-service aircraft.

4.4.1 Certification Plan


The certification plan for a highly-integrated or complex system should
address both the system and the aircraft environment within which the
system will be used. The amount of detail contained in the plan should vary
depending on the classification of the associated aircraft hazard(s). Each
plan should include:
a. A functional and operational description of the system and the aircraft
on which the system will be installed. A description of the system
elements including hardware and software. This description should
establish the functional, physical, and information relationship between
the system and other aircraft systems and functions.
b. A statement of the relationship of this certification plan to any other
relevant system certification plan(s).
c. A summary of the functional hazard assessment (aircraft hazards,
failure conditions, and classification).
d. A summary of the preliminary system safety assessment (system safety
objectives and preliminary system development assurance levels).
e. A description of any novel or unique design features that are planned
to be used in meeting the safety objectives.
f. A description of the new technologies or new technology applications
to be implemented.
g. The system certification basis including any special conditions.
h. The proposed methods of showing compliance with the certification
basis, including an outline of the anticipated development assurance
processes (safety assessment, validation, verification, configuration
management, and process assurance).
i. A list of the data to be submitted and the data to be retained under
configuration control, along with a description or sample of data
formats.
j. The approximate sequence and schedule for certification events.
k. Identification of the personnel or specific organization responsible for
certification coordination.

4.4.2 Configuration Index


The system configuration index identifies all of the physical elements that,
together, comprise the system. In addition the configuration index identifies
procedures and limitations that are integral to system safety. Any system
design features or capabilities provided in excess of those required to
establish system safety under the applicable regulations should be
identified.

A typical system configuration index will include the following information:


· Configuration identification of each system item
· Associated item software
· Interconnection of items
· Required interfaces with other systems
· Safety-related operational or maintenance procedures and limitations

When applicable, information describing permissible interchangeability of
alternate items within the system should be included.
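
As a purely illustrative sketch, a configuration index can be captured as
structured data along the following lines; the field names and part numbers
are hypothetical and not prescribed by this document:

    # Illustrative configuration index entry (all fields and values hypothetical).
    brake_system_index = {
        "system": "Brake control system",
        "items": [
            {"item": "Brake control unit", "part_no": "BCU-100-3",
             "software": [{"name": "BCU operational program", "version": "2.1"}]},
            {"item": "Antiskid valve", "part_no": "ASV-20-1", "software": []},
        ],
        "interconnections": ["BCU-100-3 <-> ASV-20-1 (dual analog command lines)"],
        "external_interfaces": ["ARINC 429 bus to air data system"],
        "safety_procedures_and_limitations": [
            "Antiskid must be verified operative before dispatch on wet runways",
        ],
        "interchangeability": ["BCU-100-3 may replace BCU-100-2"],
    }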

4.4.3 Development Plan


While there is no specific recommended process for systems development, a
generic development model is described in Appendix A to assist in
establishing common terminology and understanding. The specific
development process selected should be described in sufficient detail to
achieve mutual understanding of the key elements and their relationships.

The development plan should identify the top-level processes planned for
use, the key events that mark the planned development cycle, and the
organizational structure and key individual responsibilities supporting the
development. The processes and events should be described in sufficient
detail to establish their relative significance to the system development, their
relative timing and interdependencies, and the nature of the results expected
at event or process completion.

4.4.4 Architecture and Design


The architecture and design description should be based on a common
understanding of: the intended aircraft-level functionality provided or
supported by the system, the anticipated system operating environment,
and the specific capabilities of the system as installed on the aircraft.
Sufficient system architectural and design detail should be provided to
establish how the system will achieve the intended functionality. The
description should also identify primary fault or failure containment means.
New or novel design elements should be identified, along with specific
architectural features and design elements that perform a specific role in
establishing or maintaining system safety.

5.0 Requirements Determination and Assignment of Development Assurance Level
This section discusses the determination of requirements, from aircraft
function identification, through derived requirements emanating from
hardware and software development. The relationships among functions,
related failure condition classifications, system and item requirements, and
the corresponding assignment of development assurance levels are shown.
Architectural alternatives are examined and examples provided that
illustrate the effect of architecture on the assignment of item development
assurance level.

The assignment of numerical risk requirements for deterministic hardware
implementations and the potential for mitigating some failure conditions
through human interaction are discussed.

5.1 Requirements Capture


Requirements, together with related hazards, provide the common basis for
the supporting processes. Because the hazards may have different levels of
importance, the allocation of requirements, through system architecture, has
significant impact on the ease of substantiating system certification.

The top level process in the aircraft development cycle includes the
identification of aircraft functions and the requirements associated with
these functions. The aircraft functions, including functional interfaces and
corresponding safety requirements, form the basis for establishing the
system architecture. Selection of the architecture establishes additional
requirements necessary to implement that architecture. At each phase of the
requirements identification and allocation process (i.e. system, item and
hardware/software) both additional detail for existing requirements and
new derived requirements are identified. Choices made and problems
encountered during implementation are a primary source for derived
requirements and may lead to identification of new system safety
requirements.

5.2 Types of Requirements


The requirements associated with a given function define the way the
function acts in its environment and include the definition of the
user/machine interface. The types of requirements detailed below should be
considered at various phases of the development activities (i.e. function,
system, item and hardware/software). There may be requirements that
address strictly business or economic issues and do not impact safety or
certification requirements.

5.2.1 Safety Requirements


The safety requirements for aircraft and system-level functions include
minimum performance constraints for both availability (continuity of
function) and integrity (correctness of behavior) of the function. These
safety requirements should be determined by conducting a functional
hazard assessment consistent with the processes in paragraph 6.1.

Safety requirements for aircraft and system functions are determined by
identifying and classifying associated functional failure conditions. All
functions have associated failure modes and associated aircraft effects, even
if the classification is “No safety effect.” Safety related functional failure
modes may have either contributory or direct effects upon aircraft safety.

Requirements that are defined to prevent failure conditions or to provide
safety related functions should be traceable through the levels of
development at least to the point of allocation to hardware and software.
This will ensure visibility of the safety requirements at the software and
hardware design level.
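
For illustration, AC 25.1309-1A and AMJ 25.1309 pair these classifications
with quantitative probability objectives, on the order of 1E-9 per flight
hour for catastrophic, 1E-7 for hazardous/severe major, and 1E-5 for major
failure conditions. The sketch below shows the kind of budget check such
objectives imply; the contributor values are invented for the example:

    # Illustrative probability-budget check; objective magnitudes follow
    # AMJ 25.1309, contributor values are invented for the example.
    OBJECTIVE_PER_FH = {
        "Catastrophic": 1e-9,
        "Hazardous/Severe Major": 1e-7,
        "Major": 1e-5,
    }

    def budget_met(classification, contributor_rates_per_fh):
        """True if the summed contributor failure rates meet the objective."""
        return sum(contributor_rates_per_fh) <= OBJECTIVE_PER_FH[classification]

    # Loss of a hazardous function fed by two independent contributors.
    print(budget_met("Hazardous/Severe Major", [4e-8, 3e-8]))  # True (7e-8 <= 1e-7)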

5.2.2 Functional Requirements


Functional requirements are those necessary to obtain the desired
performance of the system under the conditions specified. They are a
combination of customer desires, operational constraints, regulatory
restrictions, and implementation realities. These requirements define all
significant aspects of the system under consideration. Regardless of the
original source, all functions should be evaluated for their safety related
attributes.

5.2.2.1 Customer Requirements

Customer requirements will vary with the type of aircraft, the specific
function or the type of system under consideration. Requirements may
include those associated with the operator's intended payload, route system,
operating practices, maintenance concepts, and desired features.

5.2.2.2 Operational Requirements

Operational requirements define the interfaces between the flight crew and
each functional system, the maintenance crew and each aircraft system, and
various other aircraft support people and related functions or equipment.
Actions, decisions, information requirements and timing constitute the bulk
of the operational requirements. Both normal and nonnormal circumstances
need to be considered when defining operational requirements.

5.2.2.3 Performance Requirements

Performance requirements define those attributes of the function or system
that make it useful to the aircraft and customer. In addition to defining the
type of performance expected, performance requirements include function
specifics such as: accuracy, fidelity, range, resolution, speed, and response
times.

5.2.2.4 Physical and Installation Requirements

Physical and installation requirements relate the physical attributes of the
system to the aircraft environment. They may include: size, mounting
provisions, power, cooling, environmental restrictions, visibility, access,
adjustment, handling, and storage. Production constraints may also play a
role in establishing these requirements.

5.2.2.5 Maintainability Requirements

Maintainability requirements include scheduled and unscheduled
maintenance requirements and any links to specific safety-related functions.
Factors such as the percent of failure detection or the percent of fault
isolation may also be important. Provisions for external test equipment
signals and connections should be defined in these requirements.

5.2.2.6 Interface Requirements

Interface requirements include the physical system and item
interconnections along with the relevant characteristics of the specific
information communicated. The interfaces should be defined with all inputs
having a source and all output destinations defined.
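
The closure property just described, that every input has a source and
every output a destination, lends itself to a mechanical completeness
check. A minimal sketch with hypothetical signal records:

    # Minimal closure check over an interface definition: every input must name
    # a source and every output at least one destination (records hypothetical).
    signals = [
        {"name": "wheel_speed", "kind": "input", "source": "Wheel speed sensor"},
        {"name": "brake_cmd", "kind": "output", "destinations": ["Antiskid valve"]},
        {"name": "built_in_test", "kind": "output", "destinations": []},  # incomplete
    ]

    def undefined_interfaces(sigs):
        problems = []
        for s in sigs:
            if s["kind"] == "input" and not s.get("source"):
                problems.append(f"input {s['name']} has no source")
            if s["kind"] == "output" and not s.get("destinations"):
                problems.append(f"output {s['name']} has no destination")
        return problems

    print(undefined_interfaces(signals))  # ['output built_in_test has no destination']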

5.2.3 Additional Certification Requirements


Additional functions, functional attributes, or implementations may be
required by airworthiness regulations or may be necessary to show
compliance with airworthiness regulations. Requirements of this type
should be defined and agreed upon with the appropriate certification
authorities.

5.3 Derived Requirements


At each phase of the development activity, decisions are made as to how
particular requirements or groups of requirements are to be met. The
consequences of these design choices become requirements for the next
phase of the development. Since these requirements result from the design
process itself, they may not be uniquely related to a higher-level
requirement and are referred to as derived requirements.

Derived requirements should be examined to determine which aircraft-level
function (or functions) they support so that the appropriate failure condition
classification can be assigned and the requirement validated. While most
such requirements will not impact the higher-level requirements, some may
have implications at higher levels. Derived requirements should be
reviewed at progressively higher system levels until it is determined that no
further impact is propagated.

For example, derived requirements may result from the decision to select a
separate power supply for equipment performing a specific function. The
requirements for the power supply, including the safety requirements, are
derived requirements. The hazard resulting from the fault or failure of the
function supported by the power supply determines the necessary
development assurance level.

Derived requirements may also result from architecture choices. For
example, selecting a triplex architecture for achieving a high integrity
functional objective would have different consequences and different
derived requirements from selection of a dual monitored architecture for
achievement of the same objective.

Derived requirements may result from a design decision to isolate function
implementations having more severe failure condition classifications from
the malfunction or failure effects of systems having less severe failure
condition classifications.

Derived requirements also include those defining the hardware-software
interface. Some of these requirements may be significant at the system level.
The remainder, dealing with detailed aspects of the hardware-software
interface, may be handled under the guidance of DO-178B and DO-xxx.

Derived requirements should be captured and treated in a manner
consistent with other requirements applicable at that development phase.
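
To make the traceability concrete, a derived requirement record might link
both to the design decision that produced it and to the aircraft-level
function whose failure condition classification it inherits. The sketch
below is illustrative only; none of its fields are prescribed by this ARP:

    # Illustrative derived-requirement record (fields are hypothetical).
    derived_req = {
        "id": "SYS-PWR-042",
        "text": "The dedicated power supply shall provide 28 VDC +/- 10%.",
        "source": "Design decision: separate power supply for braking function",
        "supports_function": "Decelerate aircraft on ground",
        "failure_condition_classification": "Hazardous/Severe Major",  # inherited
        "validated": False,  # derived requirements still require validation (section 7)
    }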

5.4 Assignment of Development Assurance Levels


The system development assurance level is assigned based on the most
severe failure condition classification associated with the applicable
aircraft-level function(s) (see table 5-1). This table departs slightly
from AC 25.1309-1A and AMJ 25.1309 by establishing level E as “no safety
effect”.

Failure Condition Classification      System Development Assurance Level
Catastrophic                          A
Hazardous/Severe Major                B
Major                                 C
Minor                                 D
No safety effect                      E

Table 5-1 System Development Assurance Level Assignment

If the Preliminary System Safety Assessment (PSSA - see paragraph 6.2)
shows that the system architecture provides containment for the effects of
design errors, so that the aircraft-level effects of such errors are
sufficiently benign, then development assurance activities can be conducted
at a reduced level of process rigor for the system items wholly within the
architectural containment boundary.

If a system has multiple categories of failure conditions associated with its
different functions, architectural means may be used to limit the interaction
between items. This may allow the separate items to be developed at
different assurance levels.

Item development assurance level is based on the overall system
architecture through allocation of risk determined using the PSSA. For items
that support multiple aircraft functions, the applicable safety requirement
should be based on the most severe of the effects resulting from failure or
malfunction of any supported aircraft function or any combination of
supported aircraft functions.
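
By way of illustration, this “most severe” rule, before any architectural
credit under 5.4.1 is taken, amounts to a lookup over table 5-1; per
paragraph 3.2.1, the resulting letter also sets the corresponding DO-178B
software level. The sketch is illustrative only:

    # Illustrative "most severe wins" level assignment per table 5-1.
    LEVEL = {"Catastrophic": "A", "Hazardous/Severe Major": "B",
             "Major": "C", "Minor": "D", "No safety effect": "E"}

    def assigned_level(classifications):
        """Level for a system or item supporting several functions: the most
        severe (alphabetically lowest) level among all of them."""
        return min(LEVEL[c] for c in classifications)

    # An item supporting one Major and one Hazardous/Severe Major function.
    print(assigned_level(["Major", "Hazardous/Severe Major"]))  # "B"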

5.4.1 Architecture Considerations

System architectural features, such as redundancy, monitoring, or
partitioning, may be used to eliminate or contain the degree to which an
item contributes to a specific failure condition. System architecture may
reduce the complexity of the various items and their interfaces and thereby
allow simplification or reduction of the necessary assurance activity. If
architectural means are employed in a manner that permits a lower
assurance level for an item within the architecture, substantiation of that
architecture design should be carried out at the assurance level appropriate
to the top-level hazard. This does not preclude assignment of item levels
that are lower than the level associated with the top-level hazard; but
assurance that the item level assignments and their independence are
acceptable should be validated at the higher level and verified by the System
Safety Assessment (SSA - paragraph 6.3).

Redundancy is a technique for providing multiple implementations of a
function — either as multiple items, or multiple lanes within an item. It is a
design technique based on the assumption that a given set of faults with the
same system effect will not occur simultaneously in two or more
independent elements. Redundancy is required to provide fail-safe design
protection from catastrophic failure conditions and may be necessary to
meet the requirements associated with the more severe failure condition
classifications. The redundant elements may be parallel or backup, and their
designs may be similar or dissimilar.
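The independence assumption underlying redundancy has a direct quantitative
consequence: the probability that both lanes of an independent dual-redundant
pair fail is the product of the individual lane probabilities. The figures in
this Python sketch are assumed for illustration and are not taken from this
document:

    # Under the independence assumption, loss of a dual-redundant function
    # requires both lanes to fail; the probabilities therefore multiply.
    lane_failure_prob = 1.0e-4               # assumed per-lane failure probability
    dual_loss_prob = lane_failure_prob ** 2  # both independent lanes failed
    print(f"approximate dual-lane loss probability: {dual_loss_prob:.1e}")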

For all but the simplest systems, it is practically impossible to guarantee the
correctness and completeness of requirements or the correctness of all
necessary assumptions. An architectural strategy incorporating dissimilarity
can be a powerful means of reducing the potential for errors in requirements
or in design implementation to cause serious effects. The nature and
probability of such errors are, in general, not accurately or reliably
predictable. Hence the effectiveness of particular architectural strategies,
introduced to allow the allocation of a lower item risk level, generally
cannot be quantified. In consequence, the justification to support such allocation
necessarily involves some degree of engineering judgment by the applicant
and the certification authorities. Timely discussions, adequately
documented, can reduce the likelihood of late design changes resulting from
unresolved disagreements about such judgments.

When dissimilarity is used as a means of design error containment, the
degree of credit should be related to the type and scope of design errors
shown to be covered by the dissimilarity method used. For example,
dissimilar designs implementing the same function may provide
containment coverage for some types of implementation errors but not for
function requirements errors. Dissimilar independent designs implementing
dissimilar function may provide containment coverage for both function
and implementation requirements errors. It should be noted that
architectural dissimilarity impacts both integrity and availability. Since an
increase in integrity may be associated with a reduction in availability, and
vice-versa, the specific application should be analyzed from both
perspectives to ensure its suitability.

The principles guiding systems development assurance are, in some ways,
similar to those developed to cope with software development assurance.
However, there are significant differences between software and systems
development assurance. In many ways the fundamental need for this
document arises from the difference between software and systems
development assurance.

Note: The guidance material on this issue in ARP 4754 was developed following extensive discussion
and debate within the SIRT group. The process of reaching consensus took full account of the
extensive comment on this issue from outside the SIRT group. Appendix D outlines the various
arguments used by the SIRT group in reaching agreement on the principles and examples presented
in this section.

Table 5-2 and paragraphs 5.4.1.1 through 5.4.1.5 present a series of example
architectures and illustrate the effect those architectures may have on item
development assurance level. Actual aircraft systems may involve
combinations of these architectures or alternative architectures that do not
conform to any one of these examples. If architecture is used as a means of
coping with generic design error, it will be necessary to complement
quantitative assessments of the architecture with qualitative assessments.
Issues of design information source commonality, design methodology
commonality, technology maturity, and application understanding, among
others, are often dealt with qualitatively. The table 5-2 examples, together
with experience gained from successful systems, can augment the process of
translating a specific system architecture into development assurance level
requirements for each system item.

5.4.1.1 Partitioned Design

Partitioning is a design technique for providing isolation to contain and/or
isolate faults and to potentially reduce the effort necessary for the system
verification processes. Where an item contributes to aircraft functions of
different criticality, a partition can be used to limit the cross-functional effect
of system design errors in separate parts of the item.

The design that provides the partition should be developed at the
development assurance level corresponding to the highest applicable
function failure condition classification.

Architecture (note 1)                        Failure Condition Classification
                                             Catastrophic            Severe-Major / Hazardous

1  Partitioned Design (Multiple Failure Categories)
   System, including establishment           Level A                 Level B
   of the partition
   Within each partitioned portion           Level corresponding to the most severe failure
                                             condition classification within that partitioned
                                             portion

2  Dissimilar, Independent Designs Implementing an Aircraft-Level Function (notes 2 & 3)
   System, including establishment of        Level A                 Level B
   dissimilarity and independence
   Item(s) (paragraph 5.4.1.2)               Level B (note 4)        Level C (note 4)

3  Dissimilar Designs Implementing an Aircraft-Level Function (note 2)
   System, including establishment of        Level A                 Level B
   the partition between the portions
   Primary portion                           Level A                 Level B
   Secondary portion (paragraph 5.4.1.3)     Level B (note 4)        Level C (note 4)

4  Active/Monitor Parallel Design (note 2)
   System                                    Level A                 Level B
   Active and monitor portions               At least one portion    At least one portion
                                             to Level A; the other   to Level B; the other
                                             portion to at least     portion to at least
                                             Level C (notes 4 & 5)   Level C (notes 4 & 5)

5  Backup Parallel Design (note 2)
   System                                    Level A                 Level B
   Primary portion                           Level A                 Level B
   Backup portion                            Level C (note 4)        Level D (note 4)

System-level entries apply to the system as a whole; portion-level entries
apply to portions of the system, where the system architecture permits.

Note 1: These architectures illustrate specific development assurance
        situations; practical systems may employ a wide range of
        alternative architectures.
Note 2: The logic to determine switching/voting/fault detection between
        elements should be developed to the highest level applicable.
Note 3: It is especially important to obtain the agreement of the
        certification authority as outlined in paragraph 4.1 before
        adopting this method.
Note 4: Providing availability requirements are met.
Note 5: The development assurance level is dependent on the classification
        of any failure condition not constrained by the monitor.

      Table 5-2, Examples of Architecturally Derived System Development
                      Assurance Levels and Constraints
The design within a partitioned portion can be developed to the
development assurance level corresponding to the most severe failure
condition classification for that portion (see table 5-2, architecture 1).

5.4.1.2 Dissimilar, Independent Designs Implementing An Aircraft-Level Function

A parallel, dissimilar, multiple channel architecture may provide
protection from both random physical failures and anomalies
due to design errors. In these cases, it may be possible for the
development assurance levels for the individual channels to be
selected below that associated with the top level failure condition
classification (see table 5-2, architecture #2). An analysis should
substantiate the dissimilarity and independence of
implementation requirements, algorithms, implementations,
data, environment, and other potential sources of design error.

To be considered within this category, there must be substantial
differences between the designs in terms of the means of
preventing the top level failure condition, the methodology by
which the design is created, the technology through which the
design is implemented, and the operations through which the
function is used. Validation of any assumptions of independence
is of particular importance in demonstrating compliance using
this design strategy.
Note:
RTCA DO-178B describes an architecture where at least one software component must have the
software level associated with the most severe failure condition classification for the system function. Such an
architecture would fall into the category described in 5.4.1.3. However the SIRT committee took the
view that there were some system architectures where the dissimilarity and independence of channels
may be sufficiently clear as to warrant a reduction in level for each channel. One such example may
be an autopilot function which is monitored by two dissimilar, independent techniques; e.g. one using
control surface motion, and the other using airplane motion. In this example, the requirement
specification for the two monitors will be essentially dissimilar and independent, although deriving
the appropriate monitor parameter threshold settings may involve some common assumptions and
the use of common models. It is necessary that any such assumptions are straightforward to validate
and that they are validated at the development assurance level associated with the top level failure
condition classification.

Two other examples further illustrate this case. Wheel brakes and thrust reversers, separately or
together, could be used to decelerate the aircraft on the ground. Communications radios could be used
as an alternate source of navigation information in a radar-based air traffic control environment if the
VOR/DME systems were to fail. Properly executed, these combinations might be accepted by the
regulatory authorities to satisfy aircraft-level functional integrity requirements one level greater
than the integrity of either element alone. In taking credit for such combinations, it is necessary to
show the dissimilarity and independence of the system designs at the development assurance level
associated with the aircraft-level function (ground deceleration and navigation, respectively in these
cases.)

5.4.1.3 Dissimilar Designs Implementing an Aircraft Level Function

If multiple portions cannot be shown to be dissimilar under 5.4.1.2, then the
primary portion should be developed to the development assurance level
associated with the most severe failure condition classification of the
function. In this case, the secondary portion(s) provides the function after a
random hardware failure of the primary portion. Both primary and
secondary portions can execute full-time, or the secondary portion can be a
“hot spare” that is reverted to after failure of the primary portion.

The secondary portion(s) can be assigned a development assurance level one
below that of the primary portion, but not less than Level C, if:
a. The primary portion has a random hardware failure rate less than
1.0×10⁻⁵ for catastrophic failure conditions, or less than 1.0×10⁻⁴ for
hazardous failure conditions; and
b. The primary portion is always used unless it has failed; and
c. The secondary portion does not contribute to fault detection, and can
not cause the primary portion to fail.
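Conditions a through c above can be read as a simple decision rule. The
following Python sketch is illustrative only; the function and parameter
names are assumptions of the example, and the thresholds are those stated
in the text:

    def secondary_portion_level(primary_level, primary_failure_rate,
                                classification, primary_always_used,
                                secondary_is_passive):
        """One level below the primary, but not less than Level C, when
        conditions a-c hold; otherwise no reduction is taken."""
        threshold = {"Catastrophic": 1.0e-5, "Hazardous": 1.0e-4}[classification]
        if not (primary_failure_rate < threshold    # condition a
                and primary_always_used             # condition b
                and secondary_is_passive):          # condition c
            return primary_level
        levels = "ABCDE"
        reduced = levels[levels.index(primary_level) + 1]
        return min(reduced, "C")  # floor at Level C ("C" sorts after "A", "B")

    # A Level A primary with a 1.0e-6 failure rate allows a Level B secondary.
    assert secondary_portion_level("A", 1.0e-6, "Catastrophic", True, True) == "B"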

5.4.1.4 Active-Monitor Parallel Design

The active-monitor parallel architecture represents the situation where both
the active and monitor portions are necessary to meet the integrity
requirements. This architecture provides detection of random physical
failures and, with sufficient independence, may detect anomalies due to
design error. The most severe failure condition classification establishes the
development assurance level necessary for at least one channel and the
channel independence. The other channel may have a lower level of
development assurance, as necessary to meet availability requirements.

5.4.1.5 Backup Parallel Design

There may be items in a system that function as a backup to other items.
That is, they are required to operate only after the other system items have
failed. If the primary system satisfies the integrity requirements without the
backup, and the random failure rate of the primary items is very low
(1.0×10⁻⁷, if the failure condition is catastrophic, or 1.0×10⁻⁵, if the
failure condition is hazardous), then the development assurance level of the
backup may be assigned up to two levels below that of the top-level hazard,
but not less than level D. This assignment is subject to the agreement of the
certification authority. Substantiation of the system architecture leading to
the determination of the availability requirement of the backup should be
assured at the level of the most critical system failure condition.
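The backup-parallel rule can be sketched the same way. This is illustrative
only; the names are assumptions of the example, and the rule remains subject
to the certification authority agreement noted above, which no code can
capture:

    def backup_portion_level(top_level, classification, primary_failure_rate):
        """Up to two levels below the top-level hazard, but not less than
        Level D, when the primary failure rate is very low."""
        threshold = {"Catastrophic": 1.0e-7, "Hazardous": 1.0e-5}[classification]
        if primary_failure_rate >= threshold:
            return top_level   # no reduction without a very low primary rate
        levels = "ABCDE"
        reduced = levels[min(levels.index(top_level) + 2, len(levels) - 1)]
        return min(reduced, "D")   # floor at Level D

    # Catastrophic top-level hazard (Level A): backup may be Level C.
    assert backup_portion_level("A", "Catastrophic", 1.0e-8) == "C"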

5.4.2 Implementation Error Management


The methods of 5.4.1 apply to both top-level functional requirements and
requirements resulting from implementation choices. The chosen
implementation architecture can play a significant role in determining the
system effects of design errors and failures. The use of certain techniques
specified by the PSSA, such as proper architecture selection, may be able to
mitigate or eliminate the effects of certain types of design errors and failures.
These architectural features and other types of development techniques may
reduce or eliminate the need to satisfy certain objectives associated with a
given development assurance level. Any reduction or
elimination of system development assurance objectives should be
supported by analysis including the PSSA. These techniques may be applied
whether the implementation involves hardware alone or both hardware and
software.

5.4.3 Software Level Assignment


The software levels and processes for compliance as defined in
RTCA/DO-178B are related to the failure condition classifications and may
be assigned taking account of the item development assurance level defined
in paragraph 5.4 and tables 5-1 and 5-2.

5.4.4 Hardware Level Assignment


The substantiation of hardware item designs can be accomplished through
rigorous testing and/or analysis unless a specific hardware component(s) is
too complex to be analyzed by deterministic techniques. The necessary
degree of rigor in the testing or analysis is determined by the failure
condition classification.

When deterministic techniques are not used or are determined to be
insufficient, the development assurance assignment methods of this section
can be applied. The five levels of system development assurance listed in
table 5-1 correspond to five specific hardware levels. (The methods of
substantiating hardware development assurance for each level are to be
defined by RTCA SC-180 and documented in DO-xxx.) The requirements
validation and design verification processes at system and item level should
ensure that the appropriate hardware design requirements, and
development assurance expectations, are available for the hardware portion
of the design.

5.5 Failure Condition Risk Assignment


Risks that are attributable to hardware failures may be assigned to items
based on the PSSA. The assigned numerical risk becomes an item hardware
requirement or can be further allocated to item hardware architecture
requirements. Item hardware requirements should also contain applicable
qualitative requirements, such as “no single failure”.

Some failure conditions can be mitigated through human interaction.
Recognizing that incorrect human interactions could exacerbate, rather than
mitigate, the situation, it is essential to examine the type and independence
of the support provided to ensure the correctness of such interaction.
Support should be provided for both human recognition of the system or
item failure condition and human action to mitigate the failure effects.
Where such support substantiates the potential for appropriate human
action, the system or item may be assigned a lower risk. The substantiation
should consider, at least:
a. the failure recognition provided,
b. the type and timeliness of required failure response,
c. the priority relative to other crew tasks,
d. the total crew workload,
e. the probability of occurrence of the failure,
f. the independence of the human recognition and action support
provided.

The development process should assign risk to items in a rational manner
which results in compatible levels of development assurance and reliability
for each item.
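One common way to make such an assignment rational is to treat the top-level
failure condition probability as a budget that the independent item
contributions must not exceed. The numbers in this Python sketch are assumed
for illustration only:

    # PSSA-style budget: independent item failures that each lead directly
    # to the top-level failure condition (an OR relationship) contribute
    # additively, so the item allocations must sum to the target or less.
    top_level_target = 1.0e-9      # e.g., a catastrophic failure condition
    item_budgets = {
        "sensor": 3.0e-10,
        "computer": 3.0e-10,
        "actuator": 2.0e-10,
    }
    assert sum(item_budgets.values()) <= top_level_target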
6.0 Safety Assessment Process


The safety assessment process provides analytic evidence showing
compliance with airworthiness requirements. The process includes specific
assessments conducted and updated during system development and
interacts with the other system development supporting processes. The
primary safety assessment processes are listed below.
- Functional Hazard Assessment (FHA) - examines aircraft and system
  functions to identify potential functional failures and classifies the
  hazards associated with specific failure conditions. The FHA is
  developed early in the development process and is updated as new
  functions or fault conditions are identified.
- Preliminary System Safety Assessment (PSSA) - establishes specific
  system and item safety requirements and provides preliminary
  indication that the anticipated system architecture can meet those safety
  requirements. The PSSA is updated throughout the system development
  process.
- System Safety Assessment (SSA) - collects, analyzes, and documents
  verification that the system, as implemented, meets the system safety
  requirements established by the FHA and the PSSA.
- Common Cause Analysis (CCA) - establishes and validates physical and
  functional separation and isolation requirements between systems and
  verifies that these requirements have been met.

A summary of the four primary safety assessments is included in this
section. The process for determining safety-related maintenance tasks and
task intervals for support of all four assessments is explained.

Figure 6-1 shows the fundamental relationships between these four specific
assessments and the system development processes. In reality, there are
many feedback loops within and among these relationships, though they
have been omitted from the figure for clarity.

This section of the document provides guidance and recommendations
covering:
- What safety analyses are most appropriate for each failure condition
  classification.
- How to apply the results of the various safety assessment processes at
  each stage of system development. This includes identifying functional
  safety requirements and applicable derived safety requirements.

[Figure 6-1 is a flow diagram and is not reproduced here. In it, the
aircraft-level FHA (6.1) receives the aircraft functions and aircraft-level
requirements and passes failure conditions, effects, classifications, and
safety requirements to the system-level FHA sections, which follow the
allocation of aircraft functions to systems. Failure conditions, effects,
classifications, and safety objectives feed the PSSAs and CCAs (6.2), which
return architectural and separation requirements to the development of the
system architecture and pass item requirements, safety objectives, and
required analyses to the allocation of requirements to hardware and
software. The SSAs (6.3, 6.4) receive separation and verification results
from the system implementation (see figure 3-2) and support certification
of the physical system. Paragraph references are shown in the lower right
corner of the process boxes.]

                Figure 6-1 Safety Assessment Process Model

The level of detail needed for the various safety assessment activities is
dependent on the aircraft-level failure condition classification, the degree of
integration, and the complexity of the system implementation. In particular,
the safety assessment process should take into account any
interdependencies that arise due to common architecture or use of common
complex components across systems or within system integration. The
safety assessment process should be planned and managed so as to provide
the necessary assurance that all relevant failure conditions have been
identified, and that all significant combinations of failures that could cause
those failure conditions have been considered. The safety assessment
process is of fundamental importance in establishing appropriate safety
objectives for the system and determining that the implementation satisfies
these objectives.

The safety assessment activities and their objectives for each Failure
Condition Classification are described in the subsequent sections of this
document as referenced in table 6-1. (Note: Safety-related requirements at
the item level are derived during the PSSA process.) Detailed guidelines and
methods for conducting the various assessments are described in ARP 4761.

Failure Condition   Development   Safety Objectives             Safety Analyses
Classification      Assurance     Fail-Safe   Quantitative      PSSA        SSA         Common
(derived from       Level         Design      Requirement                               Cause
FHA, see 6.1)                                 (Note 1)

Catastrophic        A {5.4}       Required    P < 10⁻⁹          {6.2, 6.5}  {6.3, 6.5}  {6.4}
Hazardous /         B {5.4}       May be      P < 10⁻⁷          {6.2, 6.5}  {6.3, 6.5}  {6.4}
Severe-Major                      needed
Major               C {5.4}       May be      P < 10⁻⁵          {6.2}       {6.3}       May be
                                  needed                                                needed {6.4}
Minor               D             No          None              (Note 2)    (Note 2)    None
No Safety Effect    E             No          None              (Note 3)    None        None

Paragraph references shown in { }.
Note 1: According to AC/AMJ 25.1309; rate shown per flight hour.
Note 2: According to AC/AMJ 25.1309 par 8.2, an analysis of physical and functional
        isolation from other functions and systems may be required.
Note 3: Required to the level necessary to establish that no safety effect exists.

           Table 6-1 Failure Classification and Safety Implications

6.1 Functional Hazard Assessment


The functional hazard assessment (FHA) should provide the following
information relative to each aircraft-level function and combination of
aircraft-level functions:
- Identification of related failure condition(s).
- Identification of the effects of the failure condition(s).
- Classification of each failure condition based on the identified effects
  (i.e., Catastrophic, Hazardous/Severe-Major, Major, Minor, or No
  Safety Effect) and assignment of the necessary safety objectives, as
  defined in AC 25.1309-1A and AMJ 25.1309 extended to include the No
  Safety Effect classification.
- Identification of the required system development assurance level.
- A statement outlining what was considered and what assumptions
  were made when evaluating each failure condition (e.g., adverse
  operational or environmental conditions and phase of flight).
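A record carrying the FHA outputs listed above might be organized as in the
following Python sketch; the field names and example values are assumptions
for illustration, not a format defined by this document:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FhaEntry:
        function: str                     # aircraft-level function (or combination)
        failure_condition: str
        effects: str
        classification: str               # Catastrophic ... No Safety Effect
        development_assurance_level: str  # per table 5-1
        assumptions: List[str] = field(default_factory=list)

    entry = FhaEntry(
        function="Decelerate aircraft on the ground",
        failure_condition="Total loss of deceleration capability",
        effects="Inability to stop within the available runway length",
        classification="Catastrophic",
        development_assurance_level="A",
        assumptions=["All phases of flight considered",
                     "Adverse runway conditions considered"],
    )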

The goal in conducting this step is to clearly identify the circumstances and
severity of each failure condition along with the rationale for its classification.
Since it is possible that use of common architectures or complex components
in separate systems could introduce additional aircraft-level failure
conditions involving multiple functions, the FHA should identify and
classify these new failure conditions. When aircraft-level functions are
integrated by a system or combination of systems, the FHA should be re-
evaluated to identify and classify failure conditions involving multiple
functions. If the FHA is constructed in system-oriented sections, traceability
of hazards and failure conditions between the aircraft-level and system-level
is necessary.

Implementation choices made during development may introduce common
causes for multiple aircraft-level failure conditions or interactions between
systems resulting in malfunction. These common causes could cross system
or function boundaries. A review of the implementations of systems should
be performed to determine if there are such conditions and if they should be
added to the aircraft-level FHA (see paragraph 6.4).

6.2 Preliminary System Safety Assessment


A PSSA is used to ensure completeness of the failure conditions list from the
FHA and complete the safety requirements. It is also used to demonstrate
how the system will meet the qualitative and quantitative requirements for
the various hazards identified. The PSSA processes identify derived system
safety requirements and may identify the need for alternative protective
strategies (e.g., partitioning, built-in-test, dissimilarity, monitoring, and
safety maintenance task intervals, etc.). The PSSA outputs should be used as
an input to the SSA and other documents, including, but not limited to,
system requirements, software requirements, and hardware requirements.
The PSSA is an iterative analysis embedded within the overall development.
This is an ongoing process starting in the early phases of the development
with the allocation of aircraft functions and their requirements to the system
level. System-level requirements are then allocated to items, and, finally,
item requirements are allocated to hardware and software (see Appendix A).
Common Cause Analysis should determine the applicable separation and
isolation requirements to be included in the PSSA.

The PSSA should identify failures and combinations of failures contributing
to the failure conditions identified in the FHA. Possible contributing factors
leading to failure conditions can be identified by using Fault Tree Analyses
(FTA), Dependence Diagrams, Markov analysis, or other analysis methods.
Hardware faults and possible software/hardware errors, as well as faults
arising from common causes, should be included in the analysis to show
their contribution and to derive what system, item, hardware, and software
safety requirements are needed. This process is the basis for the assignment
of numerical risk budgets to random hardware failures of items (paragraph
5.5).

Calculating the event probability of a failure condition should include the
time during which a latent failure can persist. The time exposure during
which backups and/or protective mechanisms could fail and/or remain
failed prior to repair should be considered. In many cases, the failures are
detected by normal flight crew observation, or during periodic power-up or
self test routines. In some cases, the detection of latent failures is associated
with the interval between equipment shop tests or specific aircraft
maintenance tests. These tasks and time intervals are identified during the
PSSA and verified in the SSA by the use of FTA, Dependence Diagrams,
Markov Analyses, or other similar analyses.
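The effect of latent exposure time on such a calculation can be shown with a
short worked example. The rates and interval below are assumed for
illustration and are not taken from this document:

    # For an active failure combined with a latent failure of its protective
    # mechanism, the failure condition probability is approximately the
    # active failure rate times the probability that the latent failure is
    # already present, which grows with the latent failure's exposure
    # (detection) interval.
    active_rate = 1.0e-6      # assumed active failure rate, per flight hour
    latent_rate = 1.0e-5      # assumed latent failure rate, per flight hour
    exposure_hours = 200.0    # assumed interval between checks that detect the latent failure

    prob_latent_present = latent_rate * exposure_hours   # ~2.0e-3
    combined = active_rate * prob_latent_present         # ~2.0e-9 per flight hour
    print(f"combined failure condition probability: {combined:.1e} per flight hour")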

The inclusion of hardware and software errors, plus operational errors, in a
qualitative manner in this analysis shows their contribution and can provide
valuable information on deriving the needed development assurance levels
(see paragraph 5.4). Also the PSSA can be used to identify specific safety-
related requirements for hardware and software, such as containment
boundary definitions, partitioning strategies, and specific verification
strategies. The existence of such requirements should be captured as part of
the system requirements.

All safety requirements should be traceable from the PSSA to the system
requirements.

6.3 System Safety Assessment
A system safety assessment (SSA) is a systematic, comprehensive evaluation
of the implemented system functions to show that relevant safety
requirements are met. The analysis process is similar to the activities of the
PSSA, but different in intent. The PSSA is used to derive system and item
safety requirements, whereas the SSA is used to verify that the implemented
design meets those safety requirements.

The SSA combines the results of the various analyses to verify the safety of
the overall system and to cover all of the specific safety considerations
identified in the PSSA. The SSA process data includes results of the relevant
analyses and substantiation. A typical SSA might include:
- The list of agreed-to external event probabilities
- A system description including contained functions and interfaces
- The list of failure conditions (FHA, PSSA)
- The classification of each failure condition (FHA, PSSA)
- The results of qualitative analyses for failure conditions (FTA, FMES,
  Markov analysis, dependence diagrams)
- Confirmation that any hazards resulting from the implementation of
  this system in combination with other systems implementations have
  been addressed
- The results of quantitative analyses for failure conditions (FTA, FMES,
  Markov analysis, dependence diagrams)
- The results obtained from Common Cause Analyses
- A list of safety maintenance tasks and intervals (FTA, FMES, Markov
  analysis, dependence diagrams)

6.4 Common Cause Analysis


Advisory material relating to systems establishes the need to address
common cause faults (AC/AMJ 25.1309). The potential for this type of fault
exists in any system architecture that relies on redundancy or uses
components or software that are also used by other systems. The need to
provide a fail-safe design will serve to separate a function from its
applicable backups and/or protective mechanisms, or may separate
redundant backups and/or protective mechanisms from each other. Once
the applicable separation and isolation requirements have been identified,
the common cause analysis should proceed to address the common cause
fault potential across each boundary, and should identify the fault
containment strategies to be used, along with the rationale supporting the
fault coverage provided.

This type of fault can also be caused by common development errors.

Common-cause fault sources often fall into one of the following categories:
- Software design error           - Hardware design error
- Software coding error           - Compiler error
- Requirements error              - Production process error
- Repair process error            - Installation error
- Environmental factors           - Operational error
- Hardware failure                - Cascading failures

Common Cause Analysis is sub-divided into the following areas of study to
aid in the assessment:
a. Zonal Safety Analysis
b. Particular Risks Assessment
c. Common Mode Analysis

These analyses may be performed at any stage of the design process.
Obviously, the most cost-effective time is early in the design process because
of the potential influence on system architecture. However, confirmation
may not always be feasible until implementation is complete.

A Zonal Safety Analysis should examine each physical zone of the aircraft to
ensure that equipment installation and potential physical interference with
adjacent systems do not violate the independence requirements of the
systems.

A Particular Risk Assessment should examine those common events or
influences that are outside the system(s) concerned but which may violate
independence requirements. These particular risks may also influence
several zones at the same time, whereas zonal safety analysis is restricted to
each specific zone. Some of these risks may also be the subject of specific
airworthiness requirements.

The Common Mode Analysis provides evidence that the failures assumed to
be independent are truly independent. The analysis also covers the effects of
design, manufacturing, and maintenance errors and the effects of common
component failures.

6.5 Safety-Related Flight Operations or Maintenance Tasks
The functions allocated to aircraft operations and maintenance personnel
result in tasks and procedures that may have an associated safety
requirement. Safety effects of identified failure conditions may be resolved
through assignment of specific tasks or identification of specific limitations
to these personnel. Where safety-related tasks or limitations form part of the
certification substantiation, they should be identified and recorded in the
certification data (see paragraph 4.1.2.) For certification maintenance
requirements, see AC 25.XX.
7.0 Validation of Requirements


Validation of requirements and specific assumptions is the process of
ensuring that the specified requirements are sufficiently correct and
complete so that the product will meet applicable airworthiness
requirements. Validation is a combination of objective and subjective
processes.

In showing compliance with JAR/FAR 25.1301 and JAR/FAR 25.1309, the
validation process supports the development of requirements from
functional needs and safety considerations. This development should
generate a complete set of requirements. The validation process addresses
each of these requirements. While the format is left to the developer's
definition, a structured process should be defined in the validation plan (see
paragraph 7.7.1).

Ideally, to facilitate a smooth development process, requirements
should be validated before design implementation
commences. However, in practice, particularly for complex and integrated
systems, the necessary visibility of the whole set of consequences that flow
from the requirements may not be obtainable until the system
implementation itself is available and can be tested in its operational
context. In consequence, validation is normally a staged process continuing
through the development cycle. At each stage the validation activity
provides increasing confidence in the correctness and completeness of the
requirements.

The validation process at each level of the requirements hierarchy should
involve all relevant technical disciplines, including the safety assessment
process. Experience indicates that careful attention to requirements
development and validation can identify subtle errors or omissions early in
the development cycle and reduce exposure to subsequent redesign or
inadequate system performance.

Individual tests may simultaneously serve the purposes of verification as
well as validation when the system implementation is used as part of the
requirements validation process. One purpose of this activity is to check that
the requirements are met by the implemented system, while a separate
purpose is checking that the requirements are appropriate to the context in
which the system is operating. Such dual purposes should be reflected by
coordination of the verification and validation plans.

7.1 Validation Process Objectives
The objectives of the validation process are to ensure the correctness and
completeness of requirements. Errors in the definition of system requirements can
arise from three primary causes: (1) ambiguity, (2) incorrect statements, or
(3) incomplete statements (i.e., omissions). The validation process should
adequately cover all of these potential deficiencies. Examination of
requirements to ensure they are both necessary and sufficient is a key aspect
of validation. A further objective of the validation process is to limit the
potential for unintended functions in the system or for unintended functions
to be induced in interfacing systems.

For the purpose of this document, correctness and completeness are defined
as follows:
l Correctness of a requirement statement means the absence of
ambiguity or error in its attributes.
l Completeness of a requirement statement means that no attributes
have been omitted and that those stated are essential.

7.2 Validation Process Model


Requirements and assumptions should be validated at each hierarchical
level of requirements definition. This includes validation of requirements at
the aircraft function, system and item levels as well as validation of the
FHA. Generally, validation of requirements and assumptions at higher
levels serves as a basis for validation at lower levels.

The relationship of validation to system development is shown in figure 3-1.
An expanded process model is shown in figure 7-1. Inputs to the validation
process may include a description of the system (including the operating
environment), the system requirements, a definition of system architecture,
and the development assurance level.

An overview of the requirements and assumptions validation process is
outlined below. These processes may be used for validation at the various
hierarchical levels. These processes, or suitable alternatives, may be used to
support certification.
a. Validation plan:
The validation plan should define the specific methods to be used for
validation of system requirements and assumptions. (Additional
information on validation planning is provided in paragraph 7.7.1.)
[Figure 7-1 is a process-flow diagram and is not reproduced here. In it,
development process data (requirements, the development assurance level,
development plans, and the design process requirements and description) feed
the validation plan (7.7.1). Requirements and assumptions enter an initial
validation matrix (7.7), pass through completeness and correctness checks
(7.3 and 7.4), and result in a final validation matrix and a validation
summary report (7.7). Experience-based knowledge supports validation level
determination (7.6), and customer and certification requirements and
industry standards support validation of assumptions (7.5).]

                   Figure 7-1 Validation Process Model


b. Determination of the level of validation:
The necessary level of validation is determined by the development
assurance level of the function addressed by the requirement (see
paragraph 7.6).
c. Completeness and correctness checks:
The checks for requirements completeness and correctness may require
engineering judgment, as well as analysis or test. The validity of such
judgments will be established more easily if they and their supporting
rationale are recorded at the time that the related requirement is
developed. (Additional information on these checks is provided in
paragraphs 7.3 and 7.4.)
d. Validation of assumptions:
The process of validation of assumptions focuses on ensuring that
assumptions are explicitly stated, appropriately disseminated, and
justified by supporting data (see paragraph 7.5).

e. Validation matrix:
This process includes preparation of a validation matrix (see paragraph
7.7.3) that references requirements and validation results, including, as
appropriate, those for hardware/software performance, derived
requirements, environmental and operational considerations,
assumptions and supporting data. The source of each requirement
should be identifiable. This matrix should be updated regularly during
the development and included in the validation summary.
f. Validation summary:
Data describing the process, as well as the results, can be an effective
means of ensuring that communication is based on a consistent and
balanced understanding of the issues significant to the system design
(see paragraph 7.7.4).

7.3 Completeness Checks


The following is an example set of questions for assessing completeness at
each hierarchical level of requirements. This list should be tailored for the
specific application.
a. Do requirements trace to identified sources?
– Intended functions — aircraft-level, system-level
– All functions, hazards, and failure condition classifications identified
in FHA
– All failure conditions incorporated in PSSA
– Derived requirements — design decisions or assumptions
– Applicable regulatory standards and guidelines
– Anticipated operating environment
– Established flight operations or maintenance procedures
b. Are constraints and assumptions adequately defined, substantiated and
addressed?
– Market considerations
– Safety considerations (e.g., FHA, FMEAs, PSSAs)
– Environmental constraints
– Industry and company standards
c. Has the system implementation been adequately specified?
– All aircraft and system functions fully allocated
– All interfaces defined — internal, external, physical, functional,
human
– System architecture defined and requirements allocated to hardware
and software
d. Are prohibited behavior characteristics explicitly stated?

7.4 Correctness Checks


During the validation process both the correctness of failure condition
classification and the correctness of the stated requirements content should
be reviewed and justified. Correctness checks should be carried out at each
level of the requirements hierarchy. The following questions may help
assess correctness of requirements. This list should be tailored and expanded
for the specific application.
a. Are all requirements correctly stated?
– What is required (as opposed to how it should be designed)
– Unambiguous
– Statements leading to appropriate design
– Realizable and verifiable to the level of rigor appropriate to the
system development assurance level
– Stated for all required environmental conditions
– Stated for degraded and normal modes
– Derived requirements are correct and supported by analysis
– Source(s) of each requirement identified
b. Are assumptions correct?
– Significant to/inherent in the requirements
– Documented
– Traced
– FHA failure condition classification assumptions confirmed
c. Do requirements correctly reflect the safety analyses?
– Appropriate safety analyses completed correctly
– All system hazards identified and classified correctly
– Impact of unsafe design or design errors
– Reliability, availability, and fault tolerance requirements

7.5 Validation of Assumptions


In the majority of system development programs, a number of assumptions
(or judgments) are made that are not directly provable at the time the
information is needed. This is specifically true of the initial version of the
FHA. The existence of such assumptions is not, by itself, a certification or a
safety concern. However, the possibilities for miscommunication about the
basis and scope of such assumptions are numerous and the related
consequences can jeopardize satisfactory implementation of safety
requirements. Thus, assumptions (either explicit or implicit) should be
identified and their reasonableness and rationale established based on the
specific system and its development assurance level.

Assumptions may be used early in the development process as a substitute
for more explicit knowledge that will be available later. In these cases,
validation consists of showing that the explicit knowledge or acceptable
rationale was indeed obtained and that any inconsistencies between the
explicit knowledge and the related assumption were resolved.

The process of validation of assumptions focuses on ensuring that
assumptions are:
a. Explicitly stated
b. Appropriately disseminated
c. Justified by supporting data

The processes used to validate assumptions may include: reviews, analyses,
and tests. Where the consequences of an erroneous assumption appear to
have significant potential to reduce safety, one possible validation strategy
consists of showing how the system design, in fact, limits or bounds the
achievable consequences of an assumption error.

The remainder of this section provides guidance for identifying and judging
the reasonableness of assumptions. To facilitate this purpose, assumptions
are categorized as:
a. Operational / environmental
b. Design
c. Manufacturing (or producibility)
d. Serviceability
e. Installation

Within each category, examples of specific assumptions and related
considerations are provided.
7.5.1 Operational and Environmental Assumptions


The operational assumptions include those associated with:
- air traffic                     - air traffic control systems
- maintenance                     - aircraft or engine performance
- cargo                           - operational procedures
- personnel                       - passengers
- flight dynamics                 - policies and goals of the operator and
                                    applicable governmental organizations

Environmental assumptions involve the physical conditions in and around
the aircraft or through which it is expected to operate. As a minimum these
include: atmospheric, electromagnetic and lightning conditions; and
hazardous objects and materials.

Some examples of operational assumptions include:
- exposure times                  - maintenance intervals
- traffic densities               - performance limitations
Accepting assumptions in these categories may be accomplished by review
of related engineering data, airline/operator experience, or similarity to
other conditions or circumstances that are explicitly known.

7.5.2 Design Related Assumptions


Design related assumptions are grouped into crew interface, system
interface, and reliability. Accepting assumptions in this area may be
accomplished by review against existing industry experience and practice.
a. Crew Interface Assumptions:
The crew interface assumptions may include the interaction of the crew
with the equipment and the operational environment under normal
and emergency conditions, crew member performance characteristics
(e.g. response times, display interpretation, physical limitations), and
crew interaction. Some examples of assumptions about the crew
interface are: crew response times to various types of messages, event
recognition times (e.g., recognition of hardovers), decision making
strategies, perceptual error rates, the discrimination accuracy on the
basis of physical shape, visual form, color, or dynamic performance.
b. System Interface Assumptions:
System interface assumptions may address issues associated with the
meaning or logical interpretation of the data exchanged (e.g., format,
integrity, latency, resolution) or they may focus on the physical
characteristics of the data signal (e.g., voltage levels, impedance,
signal-to-noise ratio).
Some examples of assumptions about the system interface would be the
probability of misreads of data bus information, correct processing of
fault data by all related interfacing systems, fault containment, and
immunity to external faults.
c. Reliability Assumptions:
Reliability topics for which assumptions are often made may include:
- the adequacy of failure rate modeling over the life cycle
- consideration of potential failure latency and exposure periods
- dispatch inoperative considerations
- the completeness of the failure modes analysis
- the adequacy of scheduled maintenance tasks and their frequency
- the adequacy of test data to establish or demonstrate MTBF predictions
- the adequacy of parts derating
- the applicability of in-service proven parts

7.5.3 Manufacturing & Producibility Assumptions


Manufacturing and producibility assumptions include the effectiveness of
inspection and production test. Acceptance of these assumptions may be
accomplished by review against company standards and practices.
a. Inspection assumptions:
The engineering analyses usually assume that the inspection system
adheres to company and relevant standards (i.e., FAA/JAA).
b. Production test assumptions:
Production test may be assumed to adequately verify that the
equipment manufacturing process will maintain the equipment
compliance with the specification (operation, environment and safety
wise) during the equipment production life. Production test may also
address the performance defects that cannot be easily detected by
normal functional testing (e.g., functionality of protection devices).
Typical assumptions may include:
- factory test tolerances ensure serviceability
- test tolerances do not degrade safety
- special manufacturing tests are defined to detect errors that would
otherwise remain undiscovered
7.5.4 Serviceability Assumptions


It is usually assumed that provisions for service and repair do not degrade
safety. This assumption may be validated by review of service and
maintenance procedures, and associated equipment.

7.5.5 Installation Assumptions


Typical assumptions about installations may involve separation, isolation,
cable binding, wire sizing, environment, power hook-up, circuit breaker
sizing, ventilation, drainage, sources of contamination, mount integrity,
grounding and shielding. Validating assumptions in this area may be
accomplished by review against industry standards and practice, selective
testing and/or inspections of mockup, prototype, or production
drawings/hardware.

7.6 Validation Rigor


The level of rigor of validation is determined by the development assurance
level(s) of the system, item, or partitioned portion addressed by the
requirement (see paragraph 5.4). Each development assurance level and the
basis for it should be validated. Validation methods are identified in
paragraph 7.6.1 and their acceptable use is described in paragraph 7.6.2.

7.6.1 Validation Methods


Several methods may be needed to support validation. These methods
include: traceability, analysis, modeling, test, similarity, and engineering
judgment. Validation should consider both intended and unintended
functions. Intended function requirements validation involves evaluation
against objective pass/fail criteria. Vigilance during all analysis and testing
can be used to identify unintended system/item operations or side-effects.
While the absence of unintended functions cannot be validated directly, ad
hoc testing and targeted analyses can be used to reduce the probability of
their presence.
a. Traceability
Traceability is an essential component of all validation. The
requirement should be traceable either to a parent requirement or to the
specific design decision or data from which the
requirement was derived. An assumption should be traceable to a
standard, practice, analysis, or test.

b. Analysis
A wide range of analysis methods and techniques may be used to
determine requirements acceptability. Several specific safety-related
analysis methods are described in ARP 4761. Early discussion with
regulatory authorities on the acceptability of the FHA and PSSAs will
assist in the validation of the safety-related requirements.
c. Modeling
Modeling of complex systems may be used to validate the
requirements.
d. Test
Special tests, simulations, or demonstrations may be used to validate
requirements. These activities may occur at any time during
development based on availability of mock-ups, prototypes,
simulations or end-item equipment. Care should be exercised to ensure
any simulation is sufficiently representative of the actual system, its
interfaces, and the installation environment.
e. Similarity (Service Experience)
This method allows validation of a requirement by comparison to the
requirements of similar in-service certified systems. The similarity
argument gains strength as the period of experience with the in-service
system increases. Arguments of similarity should not be used until
there is adequate confidence that the period of experience is
satisfactory and any safety-related problems of the in-service system
are understood and resolved. Service experience can be classified into
the following two types:
1. Directly Applicable Similarity - Can be claimed if the two
systems/items have the same function and failure condition
classification, and operate in the same environment with similar
usage.
2. Applicable Similarity - Can be claimed if the two systems/items
perform similar functions in equivalent environments.
f. Engineering judgment
Application of personal experience through reviews, inspections and
demonstrations can support determination of completeness and
correctness. Properly justified engineering judgment (i.e., the
rationale/logic used) should be traceable.

7.6.2 Recommended Methods


Table 7-1 identifies validation methods and data as a function of the
allocated development assurance level A-E. For example, to validate
requirements to level A or B, analysis, tests of intended function, and
directly applicable similarity may be used to establish correctness and
completeness. Validation of some requirements may use one method for
correctness checks and another method for completeness checks.

Methods and Data                            Development Assurance Level
(see paragraphs 7.6.1.a - f and 7.7)        A & B     C         D     E

PSSA (paragraph 6.2)                        R         R         A     A
Validation Plan                             R         R         A     N
Validation Matrix                           R         R         A     N
Validation Summary                          R         R         A     N
Requirements Traceability                   R         A         A     N
Analysis, Modeling, or Test                 R         (Note)    A     N
Similarity (Service Experience)             A         (Note)    A     N
Engineering Judgment                        A         (Note)    A     N
Cross System Implementation Effects         R         A         A     N

R - Recommended for certification.
A - As negotiated for certification.
N - Not required for certification.
Note: At level C, one of these three methods is recommended.

           Table 7-1 Requirements Validation Methods and Data

For each requirement, a combination of the recommended and allowable
methods, necessary to establish the required confidence in the validation of
that requirement, should be identified and then applied.

7.7 Validation Data

7.7.1 Validation Plan


A requirements validation plan should be in place throughout the
development process. This plan should outline how the requirements and
assumptions will be shown to be complete and correct. The plan should
include descriptions of:
- The methods to be used.
- The data to be gathered or generated.
- What should be recorded (such as: summaries, reviews, or
  investigations).
- The means for timely access to requirements validation information.
- How the status of validation will be maintained, or managed, when
  changes are made to requirements.
- Roles and responsibilities associated with the validation.
- A schedule of key validation activities.

Aspects of the validation process that may also serve as part of verification
should be coordinated with the verification plan.

7.7.2 Supporting Data and Records


Data and records need to meet the following criteria if they are to be used to
support certification:
- The data and records should be retrievable for later reference.
- The source of the data generated, such as by analysis or test, and the
  methods used, should be sufficiently controlled so as to allow
  regeneration of the same data.

This provides archived evidence for future enhancements, problem
resolution, and review by certification authorities.

7.7.3 Validation Tracking


A validation matrix or other adequate approach is desirable to track the
status of the requirements validation process. The level of detail should
depend upon the development assurance level of the function addressed by
the requirement and should be described in the validation plan. It is
recommended that a preliminary tracking process be described in the
certification plan, and that it be updated as required. The final data should
be included in the validation summary. The specific format is up to the
applicant, but it should at least contain:
- Requirement or Assumption
- Source of the Requirement or Basis for the Assumption
- Associated Function(s)
- Development Assurance Level
- Validation Method(s) Applied
- Validation Conclusion (Valid/Not valid)
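A row of such a matrix could be represented as in the following Python
sketch; the field names and example values are assumptions for illustration,
not a format prescribed by this document:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ValidationMatrixRow:
        requirement: str       # requirement or assumption text
        source: str            # source of the requirement / basis for the assumption
        functions: List[str]   # associated function(s)
        assurance_level: str   # development assurance level
        methods: List[str]     # validation method(s) applied (paragraph 7.6.1)
        conclusion: str        # "Valid" or "Not valid"

    row = ValidationMatrixRow(
        requirement="Loss of the function shall be annunciated to the flight crew",
        source="PSSA-derived requirement",
        functions=["Example function"],
        assurance_level="B",
        methods=["Traceability", "Analysis"],
        conclusion="Valid",
    )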
7.7.4 Validation Summary


The validation summary should provide assurance that the requirements
were properly validated. The summary should include:
- A reference to the validation plan and a description of any significant
  deviations from the plan.
- The validation matrix, as described in paragraph 7.7.3.
- Identification of supporting data or data sources (see paragraph 7.7.2).

8.0 Implementation Verification
The purpose of verification is to ascertain that each level of the
implementation meets its specified requirements.

The verification process ensures that the system implementation satisfies the
validated requirements. Verification consists of inspections, reviews,
analyses, tests, and service experience applied in accordance with a
verification plan. These activities are described in the paragraphs that
follow.

8.1 Verification Process Objectives


The verification process:
- Confirms that the intended functions have been correctly implemented.
- Confirms that the requirements have been satisfied.
- Ensures that the safety analysis remains valid for the system as
  implemented.

8.2 Verification Process Model


Figure 8-1 shows an overview of a generic process model for verification at
each level of system implementation.

The verification process is composed of three distinct activities described as
follows:

a. Planning — includes planning for the resources required, the sequence of
activities, the data to be produced, collation of required information,
selection of specific activities and assessment criteria, and generation of
verification-specific hardware or software (see paragraph 8.3.)

b. Activities — includes the activity in which the verification methods are


employed (see paragraph 8.4).

c. Data — includes evidence of the results developed in the process (see


paragraph 8.5).

Level of verification is determined by the development assurance level of


the system or item (see paragraph 8.4.5).
ARP 4754

The inputs to the verification process include the set of documented


requirements for the implemented system or item and a complete
description of the system or item to be verified.

More than one verification method may be necessary to substantiate


compliance with the requirements. For example, an analysis may be
required in conjunction with a physical test to assure that worst case issues
have been covered.

During the process of verifying intended functions any anomalies


recognized (such as an unintended function or incorrect performance)
should be reported so that they can be reviewed.

[Figure 8-1, Verification Process Model, is a block diagram not reproduced in this text version. It shows the design process supplying the implementation, the design requirements, and the development assurance level data to the verification process, whose elements are the verification plan (8.5.1); the verification matrix, initial and final (8.5.3); the inspection, review, analysis, test, and service experience activities (8.4); and the verification summary (8.5.4).]

Figure 8-1 Verification Process Model

8.3 Verification Planning
The purpose of this phase is to define the processes and criteria to be applied
to the verification of each requirement to achieve the verification objectives.
The following activities should be performed during the planning phase:
• Identification of the system or item configuration, including the definition of any special test equipment, facilities, and any special hardware or software features to be verified.
• Collation of all requirements appropriate to the level under consideration, including derived requirements and their traceability.
• Definition of the specific verification methods to be employed to show compliance with each requirement, based on the development assurance level.
• Definition of the criteria to be used to assess the evidence resulting from each verification method applied.
• Identification of system verification credit taken for hardware or software verification activities.

8.4 Verification Methods


The purpose of these activities is to verify that the implemented system
satisfies its functional and design requirements including the intended
operating environment. Four basic methods may be employed in the
verification of any system or item:
• Inspection and Review
• Analysis
• Test
• Service Experience

Each of these methods is discussed in the paragraphs that follow.



8.4.1 Inspection and Review


Inspection and review are performed to establish adequate consensus that the product complies with its requirements. Generally, a checklist or similar aid is used. The typical types of reviews performed are as follows:
• Inspection that the system or item meets established physical implementation and workmanship requirements.
• Design reviews showing how the system or item is expected to perform in normal and non-normal conditions.
• Test reviews establishing the applicability of test cases to system or item requirements.

8.4.2 Analysis
An analysis provides evidence of compliance through a detailed examination of a system or item (e.g., of its functionality and performance).
Analysis methods include, but are not limited to, those described in the
following paragraphs.

8.4.2.1 Modeling

Modeling of complex, deterministic systems may be entirely computational or a combination of computation and test. Modeling may be used for system parameter evaluation, to provide early system information, or other purposes.

8.4.2.2 Coverage Analysis

Coverage analysis is performed to determine the degree to which the requirements are addressed throughout the development and verification activities. This is typically implemented using some form of traceability.
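
As an illustration of such a traceability check, the sketch below computes the fraction of requirements addressed by at least one verification activity; the data layout and identifiers are hypothetical assumptions, not prescribed by this document.

    # Illustrative coverage analysis over a requirements-to-activities trace.
    # Identifiers and data are hypothetical.
    def coverage(requirements, trace):
        """trace maps requirement IDs to the verification activities addressing them."""
        covered = {r for r in requirements if trace.get(r)}
        uncovered = sorted(set(requirements) - covered)
        return len(covered) / len(requirements), uncovered

    reqs = ["REQ-1", "REQ-2", "REQ-3"]
    trace = {"REQ-1": ["TEST-07"], "REQ-2": ["ANALYSIS-02", "TEST-11"]}

    ratio, missing = coverage(reqs, trace)
    print(f"coverage: {ratio:.0%}, unaddressed: {missing}")
    # coverage: 67%, unaddressed: ['REQ-3']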

8.4.3 Testing
Testing provides repeatable evidence of correctness by exercising a system
or item to verify that the requirements are satisfied. Testing has the
following two objectives:

a. To demonstrate that the system or item implementation performs its intended functions. Testing an intended function involves evaluation against objective pass/fail criteria established by the safety requirements.

b. To provide confidence that the implemented system does not perform
unintended functions (i.e., not consciously part of the design) that impact
safety. Ad hoc testing, and special vigilance during normal testing, may
be used to identify unintended system or item operation or side-effects.
It should be noted that complete absence of unintended function can never be
established by test.

Tests are performed on all or part of the physical system or item, or an appropriate validated model, using procedures documented in sufficient detail so that a second party could reproduce the test results. Problems uncovered during testing should be reported, corrective action tracked, and the modified item retested.

For each test, the following should be specified:
• Required inputs - variability should be considered in setting the test criteria.
• Actions required.
• Expected results and the tolerances associated with those results.

Test result data should contain, as a minimum, the following:
• The version of the test specification used.
• The version of the system or item being tested.
• The version or reference standard for tools and equipment used, together with applicable calibration data.
• The results of each test, including a PASS or FAIL declaration.
• Any discrepancy between expected and actual results.
• A statement of success or failure of the testing process, including its relationship to the verification program.
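
One way to keep such test result data together is sketched below; the field names and example values are illustrative assumptions only.

    # Sketch of a single test result record holding the minimum data listed above.
    from dataclasses import dataclass

    @dataclass
    class TestResult:
        test_spec_version: str          # Version of the test specification used
        item_version: str               # Version of the system or item being tested
        tool_standards: dict[str, str]  # Tool/equipment reference standards and calibration data
        verdict: str                    # "PASS" or "FAIL"
        discrepancy: str                # Discrepancy between expected and actual results
        process_statement: str          # Success/failure of the testing process

    result = TestResult(
        test_spec_version="TS-42 rev C",
        item_version="P/N 123-456 mod 2, SW v1.4",
        tool_standards={"oscilloscope": "calibration due 1996-09-01"},
        verdict="PASS",
        discrepancy="none",
        process_statement="completed per the verification plan",
    )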

8.4.4 Similarity / Service Experience


Verification credit may be derived from design and installation appraisals and evidence of satisfactory service experience on other aircraft using the same or other systems that are similar in their relevant attributes. This method should use documented experience along with engineering and operational judgment to demonstrate that no significant failures remain unresolved in these installations. See paragraph 11.4.2 for more detail.

8.4.5 Recommended Verification Activities

Table 8-1 lists a variety of recommended and allowable verification methods and data as a function of the development assurance level. The necessary scope and coverage associated with these methods and data also depend on the development assurance level and, if known, may be further influenced by the specific related fault condition.

For example, an implementation being verified to level A or level B may involve inspection or review and analysis, and should involve some form of test. The extent to which each method needs to be applied or data developed will be the result of agreement with the certification authorities, based on the specific system to be certified.

Methods and Data                      Development Assurance Level
(see paragraphs 8.4 and 8.5)       A & B           C           D         E
---------------------------------------------------------------------------
Verification Matrix                  R             R           A         N
Verification Plan                    R             R           A         N
Verification Procedures              R             A           A         N
Verification Summary                 R             R           A         N
SSA (paragraph 6.3)                  R             R           N         N
Inspection, Review, Analysis,    Test and       One or    A (note 2)    N
  or Test (note 1)               one or more     more
                                 of the others
Test, unintended function            R             A           A         N
Service Experience                   A             A           A         A

R - Recommended.
A - As negotiated.
N - Not required for certification.

Note 1: These activities provide similar degrees of verification. The selection of which activities will be most useful may depend on the specific system architecture or the specific function(s) implemented.
Note 2: Installation and environmental compatibility should be shown.

Table 8-1 Verification Methods and Data
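
For planning purposes, the table can be encoded as a simple lookup, as in the sketch below. The encoding is an illustration only (the non-categorical inspection/review/analysis/test row is omitted); Table 8-1 itself remains the reference.

    # Compact encoding of Table 8-1 (R = recommended, A = as negotiated,
    # N = not required for certification). Sketch only.
    TABLE_8_1 = {
        "Verification Matrix":       {"A": "R", "B": "R", "C": "R", "D": "A", "E": "N"},
        "Verification Plan":         {"A": "R", "B": "R", "C": "R", "D": "A", "E": "N"},
        "Verification Procedures":   {"A": "R", "B": "R", "C": "A", "D": "A", "E": "N"},
        "Verification Summary":      {"A": "R", "B": "R", "C": "R", "D": "A", "E": "N"},
        "SSA":                       {"A": "R", "B": "R", "C": "R", "D": "N", "E": "N"},
        "Test, unintended function": {"A": "R", "B": "R", "C": "A", "D": "A", "E": "N"},
        "Service Experience":        {"A": "A", "B": "A", "C": "A", "D": "A", "E": "A"},
    }

    def recommended_data(level):
        """Return the methods/data recommended (R) at the given assurance level."""
        return [m for m, row in TABLE_8_1.items() if row[level] == "R"]

    print(recommended_data("C"))
    # ['Verification Matrix', 'Verification Plan', 'Verification Summary', 'SSA']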

8.5 Verification Data
The purpose of verification data is to provide evidence that the verification
process was conducted as planned. This evidence may be required for
compliance substantiation and to support certification data requirements
described in paragraphs 4.3 and 4.4. A reasonable approach is to maintain a
verification matrix during development and to produce a verification
summary report.

Requirements for software verification are included in DO-178B, and requirements for hardware verification in DO-xxx. A summary of the software and hardware verification should be included in the verification data of the system in which the software or hardware is embedded.

8.5.1 Verification Plan


The verification plan establishes the strategies to show how the
implementation satisfies its requirements. A typical verification plan might
include:
• Roles and responsibilities associated with conducting the verification activities.
• A description of the degree of independence of the design and verification activities.
• Application of verification method(s).
• Data to be produced.
• Sequence of dependent activities.
• A schedule of key verification activities.

Some aspects of the verification process may also support validation of specific requirements and should be coordinated with the validation plan.

8.5.2 Verification Procedures and Results


Data describing the verification procedures together with the results
achieved provides the evidence necessary to establish the appropriateness of
the verification effort.

8.5.3 Verification Matrix


A verification matrix or an equivalent tracking document should be
produced to track the status of the verification process. The level of detail of
this matrix should depend on the development assurance level of the system
or item being verified. While the specific format may be determined by the
applicant, it should contain, at least:
• Requirement
• Associated Function
• Development Assurance Level
• Verification Method(s) Applied
• Verification Conclusion (Pass or Fail)
• Verification Coverage Summary (relating procedures and results to system or item requirements)
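
The conclusion and coverage summary entries can be rolled up mechanically from per-procedure results, as in the sketch below; the data layout and identifiers are hypothetical assumptions.

    # Roll per-procedure outcomes up into a pass/fail conclusion per requirement.
    def conclusions(results):
        """results: iterable of (requirement_id, procedure_id, passed) tuples."""
        by_req = {}
        for req, proc, passed in results:
            by_req.setdefault(req, []).append((proc, passed))
        return {req: ("Pass" if all(ok for _, ok in runs) else "Fail", runs)
                for req, runs in by_req.items()}

    runs = [("REQ-1", "PROC-3", True), ("REQ-1", "PROC-4", True),
            ("REQ-2", "PROC-9", False)]
    for req, (verdict, procs) in conclusions(runs).items():
        print(req, verdict, [p for p, _ in procs])
    # REQ-1 Pass ['PROC-3', 'PROC-4']
    # REQ-2 Fail ['PROC-9']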

8.5.4 Verification Summary


The verification summary provides visibility for the evidence used to show
that the system or item implementation satisfies its requirements. The
summary should include:
• A reference to the verification plan and a description of any significant deviations from the plan.
• The verification matrix as described in paragraph 8.5.3.
• A reference to the problem report system*.
• A description of any open problem reports* and an assessment of the related impact on safety.
• Identification of supporting data or data sources (see paragraph 7.7.2 for supporting data criteria)*.
* As appropriate for the development assurance level.

9.0 Configuration Management
This section discusses the objectives and activities of the system
configuration management process. It is applicable to the system, item(s)
that make up the system, certain facilities and tools, and the required
certification data. The Configuration Management Plan should include the
methods used to show the objectives of the configuration management
process are satisfied.

The title and content of this section are not intended to imply that an independent entity or organization must perform the configuration management activities.

9.1 Configuration Management Process Objectives


The objectives of the configuration management process are to provide:
a. Technical and administrative control of the configuration of:
1. System requirements
2. Items that implement the system
3. Applicable certification data (Table 4-1)
4. Facilities and tools, where configuration is essential to
establishing development assurance certification compliance
b. Control of changes to the system configuration or certification data by:
1. Providing a known point for review - identifying modification
status and change control by control of system configuration in
relation to a configuration baseline.
2. Providing controls that ensure that identified problems and their
resolution are recorded, approved, and implemented.
c. Assurance that physical archiving, recovery, and control are
maintained for relevant system data.
d. Traceability of system compliance with requirements.

Configuration management is both a system development and a certification activity. In terms of certification schedule, a configuration baseline, along with appropriate change control procedures, should be established by the point in the system development where certification credit is first desired. The traceability of the final proposed configuration to that configuration baseline is a necessary element of demonstrating development assurance.

9.2 Configuration Management Process Activities


The configuration management process includes configuration
identification, problem reporting/change control, and archiving/retrieval
activities. Continuity of these activities significantly enhances their
effectiveness and the credibility of the overall configuration management
process. For certification purposes, evidence of a continuous configuration
management process may include, but is not limited to, historical records or
successive reports from these activities.

9.2.1 Configuration Identification


The objective of the configuration identification activity is to label
unambiguously each item and certification data (and their successive
versions) so that a basis is established for their control and reference. If
applicable, each separately controlled component or software version of an
item should be labeled. The format for the labeling is the responsibility of
the applicant.
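
As an illustration only, one hypothetical labeling scheme is sketched below; the format is the applicant's choice and is not prescribed by this document.

    # Hypothetical configuration identifier builder: item name plus hardware
    # revision, with an optional separately controlled software version.
    def config_label(item, hw_rev, sw_version=""):
        label = f"{item}-HW{hw_rev}"
        return label + (f"-SW{sw_version}" if sw_version else "")

    print(config_label("FCC-01", "C"))         # FCC-01-HWC
    print(config_label("FCC-01", "C", "2.1"))  # FCC-01-HWC-SW2.1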

9.2.2 Problem Reporting and Change Control


The objective of certification-related problem reporting is to record
certification-related problems and their resolutions. Change control
provides a means for the evaluation and approval of certification-related
changes. The following guidelines highlight the aspects of problem
reporting and change control that are significant in demonstrating
development assurance:

a. Means should be established to document the nature and resolution of problems and changes to an item, its data, or certification data. The data should include any changes resulting from derived requirements.

b. Change control should preserve the integrity of the items and certification data by providing protection against unauthorized change.

c. Change control should ensure that any change to an item or certification data is appropriately identified by a change to its configuration identification.

d. Change control should ensure that changes to an item require applicable changes to the data associated with the item.

9.2.3 Archive and Retrieval
The objective of the archive and retrieval activity is to ensure that the
certification data associated with the system and related items can be
retrieved. Data retention procedures should be established to satisfy
airworthiness requirements. The following guidance is provided:

a. Data associated with the system or item should be retrievable from a controlled source (for example, the organization or company that developed the system).

b. Procedures should be established to ensure the integrity of the stored data for as long as may be required by the certification authority. These procedures should include:

   1. Ensuring that no unauthorized changes can be made.

   2. Selecting storage media that minimize regeneration errors or deterioration.

   3. Exercising and/or refreshing archived data at a frequency compatible with the storage life of the medium.

   4. Storing duplicate copies in physically separate archives that minimize the risk of loss in the event of a disaster.
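
One possible means of detecting regeneration errors or media deterioration (items 2 and 3 above) is to record a cryptographic digest when data is archived and to re-verify it whenever the archive is exercised or refreshed. The sketch below illustrates the technique; it is not a requirement of this document.

    # Record a SHA-256 digest at archive time; re-verify on each refresh cycle.
    import hashlib

    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(path, recorded_digest):
        """True if the archived file still matches the digest recorded at archive time."""
        return digest(path) == recorded_digest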

10.0 Process Assurance


This section describes the activities that ensure the system development and
supporting processes are appropriate, maintained, and followed. The
assurance activities described are not intended to imply or impose specific
organizational structures or responsibilities.

10.1 Objectives of Process Assurance


The objectives of the process assurance activities are:
a. To ensure the necessary plans are in place or developed, and then
maintained for all aspects of system development.
b. To ensure development activities and processes are conducted in
accordance with those plans.
c. To ensure evidence is available to show the implementation of the
plans.

10.2 Process Assurance Plan


The Process Assurance Plan describes the means to assure that the practices
and procedures to be applied during system development are followed.
Particular emphasis should be placed on the certification-related activities.
The following issues should be considered when producing the Process
Assurance Plan:
a. The scope and content of the other project plans (development,
certification, validation, verification, and configuration management)
are consistent with the development assurance level of the system or
item.
b. Project communications, coordination and sequencing, and progress
monitoring mechanisms are defined.
c. Change control, operational, and maintenance procedures are defined.
d. Sufficient project review opportunities are defined to best achieve the
timely detection of development errors.
e. Sufficient coordination with the certification authorities is planned.

10.3 Project Plan Reviews
The following issues should be considered when assessing the project plans:
a. Applicable procedures and practices are documented.
b. Defined communication practices ensure the timely exchange of
information between the applicable processes and affected personnel.
c. Procedures for plan updates due to process, schedule, or technical
changes are defined.
d. Plan updates are appropriately tracked and controlled.

10.4 Evidence of Process Assurance


Evidence of conformance with the project plans can include:
a. Dated and approved project plans.
b. Reports, metrics, and summaries of reviews, as required by the plans.
c. Actual data developed from design, verification, validation,
configuration management, and certification activities.
d. Confirmation (e.g., completed checklists and meeting minutes) of
timely process assurance reviews.

11.0 Modified Aircraft


Many of the processes in this document rely on product and process
information obtained during aircraft and system development. Such data
may not always be available for existing aircraft. In such cases, alternative
means may be necessary to satisfy the certification requirements. The
objective of this section is to describe how the material in this document
could be applied when modifying an aircraft or system that was not
developed in accordance with these guidelines.

11.1 Certification Basis


The certification basis defines the applicable regulations to be met by the
applicant and includes any special conditions required to supplement the
published regulations. When introducing a system or aircraft modification,
the certification authority considers the impact the modification has on the
existing aircraft certification. In some cases, a supplement to the existing
certification basis may need to be added (for example, when a radically
different technology is employed).

11.2 Means of Compliance


The applicant proposes a means of compliance that defines how it will be
demonstrated that the system satisfies the certification basis. Whether or not
the certification basis is changed, it will be necessary to assess the
anticipated means of showing compliance to ensure compatibility with the
agreed certification basis.

11.3 Considerations for Modification


Modifications to existing aircraft may take various forms, including:
l Introducing a new aircraft-level function.
l Replacing one system with another on an existing aircraft.
l Adapting an existing system to a different aircraft type.
l Altering an existing system on an existing aircraft.

Since most highly-integrated or complex systems implement multiple functions, it is likely that a specific modification for such a system would involve more than one of these forms.

11.3.1 Introducing a New Aircraft-Level Function
A new aircraft-level function can be introduced into an aircraft type by
modification of an existing system previously installed on the aircraft or by
installing a new system. Considerations for addressing the introduction of a
new aircraft-level function include:
a. The applicant should develop the new aircraft function in accordance
with Sections 3 through 10 of this document. Emphasis should be given
to the following:
– The Functional Hazard Assessment should address the failure
conditions and associated hazards for the new function and
identify the safety objectives for the systems to be modified.
– The FHA should also identify and substantiate the manner in
which other functions and systems are affected by the
introduction of the new aircraft function. This may be achieved
by conducting analysis on functional interactions and
interdependencies, and by determining the degree to which the
aircraft function is integrated with other aircraft functions.
b. If credit is sought for development assurance activities performed on a
previously certificated “baseline” aircraft or system, the proposed
system or item and its certification data should be traceable to that
baseline.
c. If certification data does not exist or is not available to the applicant for
unchanged areas of the aircraft on which the aircraft function is reliant,
the applicant should identify and substantiate the assumptions that
were made about those areas to support the results of the safety
assessment process.

11.3.2 Replacing One System With Another on an Existing Aircraft


Installing a replacement system in a previously certified aircraft type may
change the implementation of an aircraft function or functions without
adding a new aircraft-level function. If a new aircraft-level function is
added, see paragraph 11.3.1. The replacement system may be installed for a
number of reasons including: replacement of obsolescent equipment,
improvement of reliability or integrity, or in compliance with a regulatory
change. Considerations for addressing the installation of replacement
systems in a previously certified aircraft type include:
a. The applicant should review the results of the existing safety
   assessment process, considering the following (if these results are
   not available, see item c):
   – Installation of the replacement system.
   – Areas on which the replacement system is reliant.

   – Availability of certification data for the system being replaced.
   – Certification basis for the aircraft.
It may be necessary to develop safety assessment data to ensure that
the safety objectives for the replacement system are correct and
complete and to ensure that the aircraft-level safety objectives have
been satisfied.
b. If credit is sought for development assurance activities performed on a
previously certified “baseline” aircraft or system, the proposed system
or item and its certification data should be traceable to that baseline.
c. If certification data is not available for unchanged areas of the aircraft
on which the system is reliant, the applicant should identify and
substantiate the assumptions that were made about those areas to
support the results of the safety assessment process.
d. The applicant should develop the replacement system in accordance
with Sections 3 through 10 of this document. Emphasis should be given
to the following:
– The Functional Hazard Assessment should address the failure
conditions and associated hazards for the replacement system
and identify the safety objectives for the systems to be modified.
– The FHA should also identify and substantiate the manner in
which other functions and systems are affected by the
introduction of the replacement system. This may be achieved by
conducting analysis on functional interactions and
interdependencies, and by determining the degree to which the
aircraft function(s) performed by the replacement system is
integrated with other aircraft functions.

11.3.3 Adapting an Existing System to a Different Aircraft Type


Systems previously approved for operation in one aircraft type must be
reexamined for use in a different aircraft. When the applicant chooses to
seek credit from the previous certification, the certification authority will
require evidence that the design, installation, and application are similar.
When this evidence is not available, the relevant parts of Sections 3 through
10 may be applied as necessary to provide evidence that the system to be
installed satisfies the safety objectives. Considerations for addressing
installation in a different aircraft include the following:
a. The applicant should review the results of the existing safety
assessment process considering:
– Similarity of system installation and operation on both the existing
and proposed aircraft.
– Similarity of functions on which the transferred system is reliant.

– Impact of the transferred system on other systems in the
proposed aircraft.
– Certification basis.
– Adequacy of the certification data available from the previous
installation.
If the safety objectives are the same for the proposed installation as
they were in the previous installation and provided that an appropriate
level of aircraft similarity is established, no additional effort will be
required.
Otherwise, the safety assessment process should identify and
substantiate the old functions affected by the new installation. The
assessment should address unchanged functionality on which the new
installation relies. This may be achieved by conducting analyses of the
interaction and interdependencies between the new system and other
aircraft systems.
b. If credit is sought for development assurance activities performed on a
previously certified “baseline” aircraft or system, the proposed system
or item and its certification data should be traceable to that baseline.
c. If certification data is not available for unchanged functions of the
aircraft on which the system is reliant, the applicant should identify
and substantiate the assumptions that were made about those functions
to support the results of the safety assessment process.
d. The applicant should supplement the existing certification data in
accordance with the guidelines of paragraph 11.4.1 under the following
conditions:
1. The certification data from the previous installation is not
available to substantiate that the new installation satisfies the
safety objectives of the proposed installation.
2. The certification data from the previous installation is inadequate
to define and substantiate the affected area.
e. Any requirements impacted by the new installation should be
validated and the new installation verified in accordance with the
guidance provided in Sections 7 and 8.

11.3.4 Altering a System on an Existing Aircraft


An alteration to a previously approved system may change the
implementation of an aircraft function without adding a new aircraft-level
function. If a new aircraft-level function is added, see paragraph 11.3.1. An
alteration may result from a change in requirements or desired performance,
correction of an implementation error, or an enhancement to equipment
reliability. Considerations for addressing a system alteration on a
previously certificated aircraft include:
a. The applicant should review the results of the safety assessment
process considering the impact of the alteration on:
– Areas on which the system is reliant.
– Availability of certification data.
– Certification basis for the aircraft.
– Existing requirements (including the impact on unchanged
requirements).
– System architecture.
The safety assessment process should identify and substantiate the area
affected by the alteration. This may be achieved by conducting, in
accordance with the relevant parts of Section 5, an analysis of the
interaction of the altered system with other aircraft systems.
b. If credit is sought for development assurance activities performed on a
previously certified “baseline” aircraft or system, the proposed system
or item alteration and its certification data should be traceable to that
baseline.
c. If certification data is not available for unchanged areas of the aircraft
on which the system alteration is reliant, the applicant should identify
and substantiate the assumptions that were made about those areas to
support the results of the safety assessment.
d. The applicant should supplement the existing certification data in
accordance with the guidelines of paragraph 11.4.1 under the following
conditions:
1. The certification data from the previous installation is not
available to substantiate that the altered system satisfies the
safety objectives of the proposed installation.
2. The certification data from the previous development is
inadequate to define and substantiate area(s) affected by the
system alteration.
e. The requirements impacted by the system alteration should be
validated and the altered system implementation verified in
accordance with the guidance provided in Sections 7 and 8.

11.4 Additional Considerations


The guidelines contained in this document presume that evidence of an
appropriately structured development and safety assessment process will be
used to substantiate any development assurance requirements. If credit is
sought for development assurance activities performed on a previously
certified “baseline” aircraft or system, the proposed system or item and its
certification data should be traceable to that baseline. In some cases, such
evidence may not be adequate. The following paragraphs provide guidance
for:
• Supplementing the existing certification data to support certification of modifications to existing systems or new installations of previously certified systems.
• Using system service history obtained from an installation on one aircraft type to support the certification of that system on a different aircraft type or a similar system on the same aircraft type.

11.4.1 Supplementing Existing Certification Data


To supplement existing certification data, the applicant may:
a. Evaluate the data available from the previous certification effort to
determine which objectives of this document are satisfied for the new
application and which objectives require additional consideration.
b. Use reverse engineering to develop certification data necessary to
satisfy the objectives of this document.
c. Use service history in accordance with the guidelines of paragraph
11.4.2 to satisfy the objectives of this document.
d. Specify the strategy for accomplishing compliance with this document
in the Certification Plan.

11.4.2 Use of Service History


Service history may be used to support certification of a new or modified
system if an analysis shows the history to be applicable and changes to the
referenced system configuration have been appropriately controlled and
documented. This method allows validation of a requirement by comparison
to the requirements of similar in-service certified systems. The similarity
argument gains strength as the applicable period of system service
experience increases. Arguments of similarity should not be used until any
significant system problems experienced in service have been understood
and resolved.

Considerations for the use of service history include:


a. The applicant should propose, in the Certification Plan, how service
history will be used (e.g., the amount of service experience available
and a description of how the service data will be analyzed).

b. The applicant should conduct an analysis to determine the extent to
   which the service history is applicable. Such an analysis should show
   that:
1. Problem reporting procedures during the period of applicable
service history were sufficient to provide an appropriate
cross-section of in-service problems.
2. Changes to the referenced system during the service history
period did not materially alter the safety or performance of the
system.
3. Actual usage of the referenced system during the service history
period was consistent with the intended usage for the new or
modified system. If the operational environments of the existing
and proposed applications differ, additional validation and
verification activities related to the differences should be
conducted in accordance with Sections 7 and 8.
c. The applicant should analyze any reported safety-related problems,
together with their causes and corrective actions, and establish whether
or not they are relevant to the proposed system, system modification,
or system application.

Appendix A - An Overview of a Generic Approach
to Aircraft Systems Development
This section provides an overview of a generic approach for developing
aircraft systems from conceptual definition of the desired functionality to
certification. The purpose of this appendix is to establish common
terminology and expectations associated with common processes and their
interrelationships in order to understand the intent and applicability of the
substantiating material. The generic process outlined in this section is
intended to assist in establishing a framework for that understanding. This
section does not imply a preferred method or process; nor does it imply a
specific organizational structure.

A top-down sequence for developing a specific system implementation from
knowledge of an intended aircraft function provides a convenient
conceptual model for the system development process. Typical system
development progresses in an iterative and concurrent fashion using both
top-down and bottom-up strategies. In this document, primary attention is
focused on the top-down aspect since it provides the necessary links
between system development and aircraft safety for the purpose of
substantiating system airworthiness compliance.

The following is a list of generic systems development process activities:
• Identification of Aircraft-Level Functions, Functional Requirements, and Functional Interfaces
• Determination of Functional Failure Consequences and Implications
• Allocation of Functions to Systems and People
• Design of System Architecture and Allocation of Requirements to Items
• Allocation of Item Requirements to Hardware and Software
• Hardware and Software Design/Build
• Hardware/Software Integration
• System Integration

Because of the iterative nature of all but the simplest development
programs, the systems development process should be thought of as cyclic
rather than sequential. Furthermore, the entry point for any given function
may occur at any point in the cycle. For a new aircraft-level function, the
process begins with the top level definition of requirements. For functional
additions to an aircraft, the entry point may occur in the context of changes
to a particular item. However, regardless of the entry point, an assessment


of the impact of the new or modified function on other aircraft-level
functions and their supporting requirements is necessary.

In practice, many of the development activities are concurrent and may
involve interactive dependencies which lead to alteration of previously
established requirements. Derived requirements can arise at any level and
may alter or constrain design decisions associated with higher-level
requirements (see paragraph 5.3).

A.1 Generic Systems Development Processes

A.1.1 Identification of Aircraft-Level Functions, Functional Requirements, and Functional Interfaces
This activity begins with the establishment of basic aircraft performance and
operational requirements. From these basic requirements, the functional
requirements can be established and the functional interfaces with the
external physical and operational environment identified.

Aircraft-level functions are high-level activities and are not necessarily
associated with a single, physical system implementation. Typical aircraft
functions include:
Flight Control Engine Control Communication
Ground Steering Collision Avoidance Guidance
Aircraft Aspects of ATC Ground Deceleration Navigation
Automatic Flight Control Environmental Control Passenger Safety
Cargo Handling Passenger Comfort

The output of this activity is a list of aircraft-level functions and the
associated functional requirements and interfaces.

A.1.2 Determination of Functional Failure Consequences and Implications


A Functional Hazard Assessment at the aircraft level is performed to
determine the classification of the failure conditions associated with each
function. Guidelines for assigning failure condition classifications are
contained in paragraph 6.1.

The output of this activity is the association of each aircraft function (see
paragraph A.1.1) with specific failure conditions and the associated failure
condition classifications based on hazard severity.

A.1.3 Allocation of Functions To Systems and People
The next level of activity consists of establishing the appropriate grouping of
aircraft functions and the allocation of the related requirements to people or
systems. The process for selecting the appropriate functional grouping out
of the range of possible groupings is often complex and controversial. No
specific recommendations for accomplishing the grouping activity are
provided in this ARP. However, careful attention to the basis for the
selection decisions including related assumptions is fundamental to the
success of subsequent processes. The functional groupings interact with the
aircraft architecture and are the basis for system architecture development.
While it is not necessary to know in detail how a system will be
implemented to accomplish the necessary functional groupings,
implementation constraints, failure effects, and life-cycle support may all
play a significant role in selecting the most appropriate groupings.
Assumptions that are made in the course of this process become a vital part
of the overall system requirements package and are subject to the same
validation activity as are other requirements.

The specific nature of each functional grouping plays a significant role in
influencing which elements of the functions will be allocated to people and
which will be allocated to machines. In either case the allocation should
define inputs, processes performed, and outputs. Both operational and
support aspects should be considered.

From the function allocations and the associated failure consequences,
further specific system requirements necessary to achieve the safety
objectives are determined. Derived requirements and additional
assumptions will emerge during this phase as the consequences of the
various combinations of functions, allocations to systems and to people are
considered. These, in turn, may alter the aircraft-level function
requirements.

The output of this activity is a set of requirements for each human activity
and aircraft system together with associated interfaces. The interfaces
should be defined with all inputs having a source and all outputs having
destination(s), either human or another system.

A.1.4 Design of System Architecture and Allocation of Requirements to Items
The system architecture establishes the structure and boundaries within
which specific item designs are implemented to meet all established safety
and business requirements. More than one candidate system architecture


may be considered for implementation. A Common Cause Analysis (CCA)
supports the selection of the system architecture through determination that
appropriate independence can be achieved. Practical considerations
associated with technology readiness, implementation schedules,
producibility, economics, and industry acceptance place additional
constraints on the system architecture. Guidelines for performing the CCA
are summarized in paragraph 6.4 and detailed in ARP 4761.

A Preliminary System Safety Assessment (PSSA) is performed to evaluate
the candidate architecture(s), to establish a budget for numerical item safety
requirements, to establish system and item development assurance levels,
and to confirm reasonableness of the item-level safety objectives. This gives
confidence that the top-level system safety objectives can be met. Guidelines
for performing the PSSA are summarized in paragraph 6.2 and detailed in
ARP 4761.

Derived requirements stemming from technology, architecture, system and
item interfaces, or implementation choices become more clearly visible as
work on this activity progresses. These derived requirements may represent
significant constraints or alterations to higher-level requirements.

The outputs of this activity are the definition of the system architecture to
the item level and the allocation of system functional and associated safety
requirements to the appropriate items. Requirements governing item
interfaces and integration should be included.

A.1.5 Allocation of Item Requirements to Hardware and Software


In practice, system architecture development and the allocation of item
requirements to hardware and software are tightly-coupled, iterative
processes. With each cycle the identification and understanding of derived
requirements increases and the rationale for the allocation of system-level
requirements to hardware or software at the item level becomes clearer. The
process is complete when all requirements can be accommodated within the
final architecture.

As part of the function requirements allocation process, the associated
failure condition classification information should be stated explicitly so
that it is traceable to substantiate any advantage of further architectural
partitioning. Where traceability is impractical, the worst case failure
condition classification should be used to establish the safety requirement
for all possible item failure modes. Derived requirements again emerge from
this portion of the allocation process, and should be validated and their
impact on safety determined.

Derived requirements arising from allocation may be system related or
software-hardware related. Similarly, the verification of implementation
against allocated requirements may be significant at system level or at
hardware-software level.

Outputs of this allocation effort are requirements allocated to hardware,
with appropriate reliability objectives and development assurance levels,
and requirements allocated to software, including software development
assurance levels, for each item. Requirements governing hardware-software
integration should be included, where necessary. The results of this activity
are also used to update the preliminary system safety assessment.

A.1.6 Hardware and Software Design/Build


The hardware and software design/build processes should provide
traceability to the requirements allocated to hardware and software for each
item. If hardware and software implementation proceeds in parallel with the
requirements allocation and architecture definition phases, then sufficient
discipline is needed to ensure that derived requirements are captured and
that all functional requirements are achieved in the implementation.

The outputs of this phase include hardware-software integration procedures; released hardware drawings; software source code together with related documentation; applicable development assurance data; breadboard or prototype hardware, if applicable; and lab/flight test items.
The RTCA documents referenced in paragraph 2.1 provide additional
information about hardware and software development assurance activities.

A.1.7 Hardware/Software Integration


Depending on the nature of the system and the development process used,
initial item hardware and software integration may occur on breadboards,
prototypes, computer emulations, or lab- or flight-worthy items. The output is a
physical item under configuration control together with development
assurance data. Detailed procedures developed during the design-build
process should be used to verify that all hardware and software integration
requirements are met.

A.1.8 System Integration


Normally, systems integration begins with item-by-item integration and
progresses to complete system integration. The difficulty of fully
anticipating or modeling the aircraft environment may dictate that some


integration activities be performed on the aircraft. While the validity of
on-the-aircraft integration is generally assumed to be high, more meaningful
or cost-effective results often can be achieved in laboratory or simulation
environments. Specific procedures for system integration vary widely across
the industry.

During the integration process, identified deficiencies should be referred
back to the appropriate development or support activity (requirements
identification, allocation or validation; implementation; verification; etc.) for
resolution and the process iterated. When all iterations are concluded, the
output of this activity is a verified integrated system, along with the data
demonstrating that the system satisfies all functional and safety
requirements.

System integration is the task of ensuring all the aircraft systems operate
correctly individually and together as installed on the aircraft. This provides
the means to show that intersystem requirements, taken as a group, have
been satisfied. It also provides an opportunity to discover and eliminate
undesired unintended functions.

A.2 Information Flow Between System and Hardware/Software Life-Cycle Processes

A.2.1 Information Flow From System Process to Hardware/Software Processes
The point where requirements are allocated to hardware and software is
also the point where the guidelines of this document transition to the
guidelines of DO-178B (for software), DO-xxx (for complex hardware), and
other existing industry guidelines. The following data is passed to the
software and hardware processes as part of the requirements allocation:
a. Requirements allocated to hardware.
b. Requirements allocated to software.
c. Development assurance level for each requirement and a description of
associated failure condition(s), if applicable.
d. Allocated failure rates and exposure interval(s) for hardware failures of
significance.
e. Hardware/software interface description (system design).

f. Design constraints, including functional isolation, separation, and
partitioning requirements.
g. System validation activities to be performed at the software or hardware
development level, if any.
h. System verification activities to be performed at the software or
hardware development level.
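
This handoff can be thought of as a structured data package. The sketch below mirrors items a through h; the field names and the Python representation are illustrative assumptions, not defined by this document.

    # Illustrative sketch of the data passed from the system process to the
    # hardware and software processes (items a through h above).
    from dataclasses import dataclass, field

    @dataclass
    class AllocationPackage:
        hw_requirements: list[str]        # a. Requirements allocated to hardware
        sw_requirements: list[str]        # b. Requirements allocated to software
        assurance_levels: dict[str, str]  # c. Level and failure condition(s) per requirement
        failure_budgets: dict[str, float] # d. Allocated failure rates / exposure intervals
        interface_description: str        # e. Hardware/software interface (system design)
        design_constraints: list[str]     # f. Isolation, separation, partitioning
        validation_activities: list[str] = field(default_factory=list)   # g. If any
        verification_activities: list[str] = field(default_factory=list) # h.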

A.2.2 Information Flow From Hardware/Software Processes to System Process
The information listed below needs to be included in the data passed to the
system process in support of system level development activities and
supporting processes:
a. Description of the implementation of the allocated and derived
requirements
b. Software requirements, including derived requirements (both system
and software), to be evaluated against the system or item requirements
and the PSSA
c. Hardware requirements, including derived requirements, to be
evaluated against the system or item requirements and the PSSA
d. Description of the implementation architecture, including achieved
independence and fault containment capabilities
e. Evidence of the proper level(s) of development assurance, including any
assurance achieved through tools
f. Evidence of system level requirements validation activities performed at
the software or hardware development level, if any
g. Evidence of system/item verification activities performed at the software
or hardware development level
h. Hardware FMES, FMEA, failure rates, fault detectability coverage, and
fault latency intervals, for incorporation in the SSA
i. Problem or change reports that may impact system or item requirements,
to be evaluated against the system or item requirements and the PSSA

Appendix B - Definitions, Abbreviations & Acronyms

B.1 Definitions
Acceptance Acknowledgment by the certification authority that a
submission of data, argument, or claim of equivalence
satisfies applicable requirements. (derived from CAST
discussions)
Agreement Acknowledgment by the certification authority that a plan or
proposal relating to, or supporting, an application for
approval of a system or equipment, is an acceptable
statement of intent with respect to applicable requirements.
(derived from CAST discussions)
Airworthiness The condition of an item (aircraft, aircraft system, or part) in
which that item operates in a safe manner to accomplish its
intended function.
Analysis An evaluation based on decomposition into simple elements.
Applicable Requirements The comprehensive and detailed airworthiness codes established by an ICAO Contracting State for the class of aircraft under consideration (see ICAO Annex 8 Part II, 2.2). (Derived from ICAO)
Approval The act of formal sanction of an implementation by a
certification authority.
Approved Accepted by the certification authority as suitable for a
particular purpose. (ICAO)
Assembly A number of parts, subassemblies, or any combination
thereof, joined together to perform a specific function and
which can be disassembled without destruction of
designed use.
Assessment An evaluation based upon engineering judgment.
Assumptions Statements, principles, and/or premises offered without
proof.
Assurance The planned and systematic actions necessary to provide
adequate confidence that a product or process satisfies
given requirements. (DO-178B)


Authority The organization or person responsible within the State (Country) concerned with the certification of compliance with applicable requirements.
Note: A matter concerned with aircraft, engine, or
propeller type certification or equipment approval
would normally be dealt with by the certification
authority; matters concerned with continuing
airworthiness might be dealt with (by what would
be referred to as) the airworthiness authority.
(JAR 1)
Availability Probability that an item is in a functioning state at a given
point in time.
Cascading Failure A failure for which the probability of occurrence is
substantially increased by the existence of a previous
failure.
Certification “Certification” means the legal recognition that a product,
service, organization or person complies with the
applicable requirements. Such certification comprises the
activity of technically checking the product, service,
organization or person, and the formal recognition of
compliance with the applicable requirements by issue of a
certificate, license, approval or other document as required
by national laws and procedures.
In particular, certification of a product involves:
a. The process of assessing the design of a product to
ensure that it complies with a set of standards
applicable to that type of product so as to demonstrate
an acceptable level of safety.
b. The process of assessing an individual product to
ensure that it conforms with the certified type design.
c. The issue of any certificate required by national laws to
declare that compliance or conformity has been found
with applicable standards in accordance with
paragraph (a) or (b) above. (E.C. Directive, CAST
modified)
Certification Authority Organization or person responsible for granting approval on behalf of the nation of manufacture.

Change Control / Configuration Control 1. The process of recording, evaluating, approving or disapproving, and coordinating changes to configuration items after formal establishment of their configuration identification or to baselines after their establishment.
2. The systematic evaluation, coordination, approval, or disapproval and implementation of approved changes in the configuration of a configuration item after formal establishment of its configuration identification or to baselines after their establishment. (DO-178B)
Common Cause Analysis Generic term encompassing zonal analysis, particular risk analysis, and common mode analysis.
Common Mode Failure An event which simultaneously affects a number of elements otherwise considered to be independent.
Complexity Complexity is an attribute of systems or items which makes
their operation difficult to comprehend. Increased system
complexity is often caused by such items as sophisticated
components and multiple interrelationships.
Compliance Successful performance of all mandatory activities;
agreement between the expected or specified result and the
actual result.
Component Any self-contained part, combination of parts, subassemblies, or units that performs a distinctive function necessary to the operation of the system.
Configuration Baseline A system configuration first established no later than that point in the development where credit for development assurance activities is first desired. Documented change control processes should be followed subsequently.
Configuration Item One or more hardware or software elements treated as a unit.
(modified IEEE, SC-167, WG-5)
Configuration Management 1. The process of identifying and defining the configuration items in a system, controlling the release and change of these items throughout the system development, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items.
2. A discipline applying technical and administrative direction and surveillance to:
   a. Identify and document the functional and physical characteristics of a configuration item
   b. Control changes to those characteristics
   c. Record and report change processing and implementation status. (modified IEEE, SC-167 WG-5)


Conformance Established as correct with reference to a standard, specification, or drawing. (derived from CAST discussions)
Criticality Indication of the hazard level associated with a function,
hardware, software, etc., considering abnormal behavior
(of this function, hardware, software, etc.) alone, or in
combination with external events.
Defect State of an item consisting of the non-performance of
specified requirements by a characteristic of the item. A
defect may, but need not, lead to a failure.
Demonstration A method of proof of performance by observation.
Derived Requirements Additional requirements resulting from design or implementation decisions during the development process. Derived requirements are not directly traceable to higher-level requirements, though derived requirements can influence higher-level requirements.
Design Error A mistake in the design process resulting from incorrect
methods or incorrect application of methods or knowledge.
Design Process The process of creating a system or an item from a set of
requirements.
Development Assurance All of those planned and systematic actions used to substantiate, at an adequate level of confidence, that development errors have been identified and corrected such that the system satisfies the applicable certification basis.
Development Error A mistake in requirements determination or design.
Error 1. An occurrence arising as a result of an incorrect action or
decision by personnel operating or maintaining a system.
(JAR AMJ 25.1309)
2. A mistake in requirements, design, or implementation.
Failure The inability of an item to perform its intended function.
Failure Condition A condition with an effect on the aircraft and its occupants,
both direct and consequential, caused or contributed to by
one or more failures, considering relevant adverse
operational or environmental conditions.
In terms of commercial airplane airworthiness, a Failure
Condition is classified in accordance with the severity of its
effects as defined in FAA AC 25.1309-1A or
JAR AMJ 25.1309.
Failure Effect A description of the operation of an item as the result of a
failure.
Failure Mode The way in which the failure of an item occurs.

Failure Rate The performance figures within a hardware item population,
calculated by dividing the number of failures by the total
unit operating hours.
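   Note: As an illustration (with hypothetical figures), 4 failures observed over 200,000 unit operating hours give a failure rate of 4 / 200,000 = 2 x 10^-5 failures per hour.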
Functional Hazard Assessment A systematic, comprehensive examination of aircraft functions to identify and classify Failure Conditions of those functions according to their severity.
Guidelines Recommended procedures for complying with regulations.
Hardware An item that has physical being. Generally refers to such
items as line replaceable units or modules, circuit cards,
and power supplies.
Hazard A potentially unsafe condition resulting from failures,
malfunctions, external events, errors, or combinations
thereof.
Implementation The act of creating a physical reality from a specification.
Independence 1. A design concept that ensures that the failure of one item
does not cause a failure of another item. (derived from JAR
AMJ 25.1309)
2. Separation of responsibilities that assures the
accomplishment of objective evaluation.
Inspection An examination of an item against a specific standard.
Integration 1. The act of causing elements of an item to function together.
2. The act of gathering a number of separate functions within
a single implementation.
Integrity Attribute of a system or an item indicating that it can be
relied upon to work correctly on demand.
Interchangeability The ability to substitute one item for another within a system
and have the system perform to its specification.
Malfunction The occurrence of a condition whereby the operation is
outside specified limits.
Mean Time Between Mathematical expectation of the time interval between two
Failures (MTBF) consecutive failures of a hardware item.
Note: The definition of this statistic has meaning only for
repairable items. For non-repairable items, the term
“mean life” is used.
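Illustrative example (the figures here are assumed purely for
illustration and are not part of the definitions above): a repairable
hardware item population accumulating 100,000 unit operating hours
with 4 recorded failures has a failure rate of 4 / 100,000 = 4 x 10^-5
failures per hour; the corresponding MTBF is the reciprocal,
1 / (4 x 10^-5) = 25,000 hours.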
Particular Risk Risk associated with those events or influences that are
outside the system(s) and item(s) concerned, but which
may violate failure independence claims.

Preliminary System A systematic evaluation of a proposed system architecture
Safety Assessment and its implementation, based on the Functional Hazard
Assessment and failure condition classification, to
determine safety requirements for all items in the
architecture.
Product Hardware, software, item or system generated in response to
a defined set of requirements.
Redundant Multiple independent means incorporated to accomplish a
given function.
Notes:
1. Distinction is made between the following
redundant architecture principles:
- similar redundancy (the multiple means are
identical)
- dissimilar redundancy (the multiple means are of
different types)
- temporal redundancy (redundancy given by
repetition of the operation)
2. The operation of redundant architectures may be
classified as follows:
- active redundancy (multiple means are routinely
in operation and participating in carrying out the
task)
- passive redundancy (the additional means
participate in carrying out the task only in case of
malfunction or failure)
- warm passive redundancy (the additional means
are always switched on)
- cold passive redundancy (the additional means
are switched on only in case of malfunction or
failure)
Reliability The probability that an item will perform a required function
under specified conditions, without failure, for a specified
period of time.
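Illustrative example (assuming, purely for illustration, a constant
failure rate, an assumption this definition does not impose): with a
constant failure rate lambda per hour, reliability over a period t is
commonly modeled as R(t) = e^(-lambda*t); for lambda = 4 x 10^-5 per
hour, R(1,000 hours) = e^(-0.04), approximately 0.96.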
Requirement An identifiable element of a function specification that can be
validated and against which an implementation can be
verified.
Risk The frequency (probability) of an occurrence and the
associated level of hazard.
Safety The state in which risk is lower than the boundary risk.
The boundary risk is the upper limit of the acceptable risk.
It is specific to a technical process or state.

Similarity Applicable to systems similar in characteristics and usage to
(as a certification systems used on previously certified airplanes. In
strategy) principle, no parts of the subject system are more at
risk (due to environment or installation), and the
operational stresses are no more severe, than on the
previously certified system.
If no identified failure condition of the subject system results
in a more severe effect on the subject aircraft than the
similar failure condition of the similar system on the
reference aircraft, the proven reliability record of the
similar system can be used to show compliance with the
intent of JAR 25.1309(b) / FAR 25.1309.
Software Computer programs, procedures, rules, and any associated
documentation pertaining to the operation of a computer
system (derived from SC-167, WG-4 and ISO 2382/1).
Specification A collection of requirements which, when taken together,
constitute the criteria that define the functions and
attributes of a system, or an item.
System A combination of inter-related items arranged to perform a
specific function. (WATOG).
System Safety A systematic, comprehensive evaluation of the implemented
Assessment system to show that the relevant safety requirements are
met.
Test A quantitative procedure to prove performance using stated
objective criteria with pass or fail results.
Time Limited A means of allowing aircraft dispatch for a limited time with
Dispatch certain faults present.
Traceability The characteristic by which requirements at one level of a
design may be related to requirements at another level.
Unintended A function that is visible at the airplane level and was neither
Function intended nor a predicted fault condition in the PSSA. Only
when an unintended function leads to an airplane-level
hazard or a degradation of an intended function is it
considered significant relative to certification.
Validation The determination that the requirements for a product are
sufficiently correct and complete.
Verification The evaluation of an implementation of requirements to
determine that they have been met.
Zonal Safety The safety standard with respect to installation, interference
between systems, and potential maintenance errors.


B.2 Abbreviations and Acronyms

AC Advisory Circular (FAA)
ADF Automatic Direction Finding
AMJ Advisory Material Joint (JAA)
ATA Air Transport Association
ATC Air Traffic Control
ARAC Aviation Rulemaking Advisory Committee (FAA)
ARP Aerospace Recommended Practice (SAE)
ASD Avionic Systems Division (SAE)
CAST Certification Authority Software Team (FAA-JAA)
CCA Common Cause Analysis
DD Dependence Diagram
DME Distance Measuring Equipment
EUROCAE European Organization for Civil Aviation Equipment
FAA Federal Aviation Administration
FAR Federal Aviation Regulations
FHA Functional Hazard Assessment
FMEA Failure Mode and Effect Analysis
FMES Failure Mode and Effect Summary
FTA Fault Tree Analysis
GPS Global Positioning System
HW or H/W hardware
ICAO International Civil Aviation Organization
IEEE The Institute of Electrical and Electronics Engineers,
Inc.
ILS Instrument Landing System
JAA Joint Aviation Authorities
JAR Joint Aviation Requirements
MLS Microwave Landing System
MRB Maintenance Review Board
MTBF Mean Time Between Failures

PSSA Preliminary System Safety Assessment
RTCA RTCA, Inc.
SAE Society of Automotive Engineers, Inc.
SC-167 RTCA special committee responsible for DO-178.
SC-180 RTCA special committee responsible for preparing a
document (DO-xxx) covering complex hardware.
SIRT Systems Integration Requirements Task Group
SSA System Safety Assessment
SW or S/W software
VOR Very high frequency Omnidirectional Range
WG-# A numbered working group (EUROCAE).
WATOG World Airlines Technical Operations Glossary

Appendix C - Concept, Task, and Function Index
Concept, Task, or Function        Development Phase        Primary Reference Paragraph

Analysis x x 8.4.2
Architecture,
considerations x 5.4.1
selection x 5.1
Archival & retrieval x x x x x 9.2.3
Assumptions,
design related x x 7.5.2
initial x 7.5.5
manufacturing x x 7.5.3
producibility x x 7.5.3
operational environment x x 7.5.1
serviceability x x 7.5.4
validation x x 7.5
Certification,
coordination x x x x 4.
data x x x x x 4.4
plan x 4.1.1
planning x x x 4.1
process x x x x 4.
Change control x x x x x 9.2.2
Checks,
completeness x x 7.3
correctness x x 7.4
Common cause analysis x 6.4
Compliance,
agreement on means x 4.2
substantiation x x 4.3
Configuration management,
identification x x x x x 9.2.1
process activities x x x x x 9.2
process objectives x x x x 9.1
Development assurance,
activities x x x x x 3.2.1
level assignment x 5.4
meaning x x x x 3.2
substantiation x x x x 3.2.2
Failure condition risk assignment x 5.5

Flight operations, safety-related x 6.5
Hardware level assignment x 5.4.3
Inspection & review x x 8.4.1
Maintenance tasks, safety-related x 6.5
Operational limits x 4.1.2
Preliminary system safety assessment x x 6.2
Problem reporting x x x x x 9.2.2
Process assurance,
evidence x x x x 10.4
objectives x x x x x 10.1
plan x 10.2
Project plan reviews x x 10.3
Requirements,
additional certification x x 5.2.3
capture x x x x 5.1
derived x x 5.3
functional x x 5.2.2
safety x 5.2.1
types x x x 5.2
validation x x 7.
Safety assessment process
general x x x x 6.
system x x 6.3
Service experience x 8.4.4
Similarity x 8.4.4
Software level assignment x 5.4.2
System development process x x x x x x 3.1
Testing x x 8.4.3
Validation,
methods x x x 7.6.1 & 7.6.2
process model x x 7.2
process objectives x x 7.1
recommended activities x 7.6.2
rigor x x 7.6
Verification,
methods x x 8.4
planning x 8.3
process model x x 8.2
process objectives x x 8.1
recommended activities x x 8.4.5

Appendix D - Considerations Associated with the Allocation of Risk
in Systems Development
This appendix provides background intended to make the principles that
govern the allocation of risk in systems development easier to understand
and apply in real-world situations. The material presented in this
appendix has been derived from discussions within the SIRT membership
and from a series of interactions with other regulatory and industry groups
with the aim of understanding the areas of common rationale between
software and systems development assurance and making precise the areas
in which they differ.
The understanding of the nature of the system problem was strongly
influenced by the efforts of the software community leading to DO-178B.
There are many similarities between systems development and software
development; at the same time the systems problem is not identical to the
software problem.

D.1 Link Between Systems and Software Methodology


The SIRT group recognized that system level concepts should apply to
systems implemented with or without software. There are established
concepts in both software and systems architecture development contexts.
The SIRT group believes it is important to link these concepts for the
following reasons:
1. To help bridge the understanding gap for people making the
transition from simple discrete systems to complex or highly
integrated systems.
2. To encourage broader thinking about top level system architecture in
the initial phase of system development.
3. To provide a means for retaining the gains made by successful past
systems and ensure that the lessons learned from previous systems
are not ignored solely because of process differences.
4. To provide a basis for dealing with additions to, and modifications of,
existing systems that were not originally developed using the
specific methodology of ARP 4754, but whose operation is
otherwise recognized as safe and effective.
Moreover, if these two methodologies are not linked, then there is a very
real probability that all but the most critical systems will be developed in
ways lacking any documented framework for systems development
oversight that retains utility over the life of the system.

D.2 Interpretation of Systems Interface with DO-178B


In discussion with many of the people who worked on DO-178B, it was
apparent that the issue of systems methodology vs. software methodology
was at the core of the final decision to leave the systems level work to
another document (and another committee). The SIRT group understands
the dilemma SC-167 faced and supports their decision. It is important that
the references to the system and safety processes in DO-178B are interpreted
in the light of the guidance in ARP 4754 for those parts of the development
process. For example, in the definitions of software levels [DO-178B 2.2.2 a) -
e)] the wording ‘…cause or contribute to a failure…’ needs to be interpreted
by the safety assessment process which takes due account of system
architecture in assessing the contribution software can make to the aircraft-
level function effects. ARP 4754 also offers more specific guidance for
architectural means, such as dissimilarity, which can protect against
anomalous software behavior. As a consequence, the SIRT group determined
that, where separate channels, which are demonstrably independent and
dissimilar in their design, technology and implementation, are used to
provide redundant means of achieving a particular aircraft-level function, it
can be acceptable to assign lower development assurance levels to each of
those separate channels than would have been appropriate if redundant
identical channels had been used to provide the function. DO-178B assumes
that there is a common functional specification for the channels and
therefore requires that one of the channels should be developed to the level
appropriate to the most severe failure condition for the total system.
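This contrast can be sketched informally. The fragment below is
illustrative only: the one-level reduction, the level ordering, and all
names are assumptions made for this sketch rather than rules stated by
ARP 4754 or DO-178B, and any actual level assignment would be agreed
through the safety assessment and certification processes.

    # Illustrative sketch of the two channel-level assignment policies
    # discussed above; "A" is the most stringent level.
    LEVELS = ["A", "B", "C", "D", "E"]

    def channel_levels(function_level, channels, dissimilar_and_independent):
        # function_level: level matching the most severe failure condition
        # of the total system function the channels implement.
        if dissimilar_and_independent:
            # Demonstrably independent, dissimilar channels: each channel
            # may be assigned a lower level (a one-level reduction is
            # assumed here purely for illustration).
            reduced = LEVELS[min(LEVELS.index(function_level) + 1,
                                 len(LEVELS) - 1)]
            return {channel: reduced for channel in channels}
        # Identical channels sharing one functional specification: developed
        # to the level of the most severe failure condition of the total
        # system (conservatively applied to every channel in this sketch).
        return {channel: function_level for channel in channels}

    print(channel_levels("A", ["channel_1", "channel_2"], True))
    # {'channel_1': 'B', 'channel_2': 'B'}
    print(channel_levels("A", ["channel_1", "channel_2"], False))
    # {'channel_1': 'A', 'channel_2': 'A'}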

D.3 Architectural Isolation and Independence


If the architecture of a system adequately isolates one portion from another,
and the functions implemented by each portion are sufficiently independent,
then the development assurance level for each portion within the
architecture boundary is determined by the failure effects contributed by
that portion. The development assurance level for the architecture, along
with any software used to establish, maintain, or monitor that architecture,
must be consistent with the highest-risk system fault or failure condition.
Any portion within the architecture boundary which can cause a failure or
incorrect operation of another otherwise independent portion must be
assigned a development assurance level consistent with the highest-risk
fault or failure condition of the combined portions.

The key words in the preceding paragraph are adequately and sufficiently.
Application of this concept would be much easier if it were possible to create
an objective and quantified series of tests that would determine adequacy
and sufficiency in all situations. Such is not the case; nor is it reasonable to
expect to have that capability for any but the narrowest range of systems in
the foreseeable future. Therefore, those people charged with developing and
certifying complex systems must exercise engineering judgment in defining
“adequately and sufficiently” for each system.
For some systems, establishing sufficient architectural isolation and
adequate independence may be undesirable or impractical. In these cases,
the entire system must be considered to be involved with the highest level
of risk.
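The composition rule in this section can also be sketched informally. In
the fragment below, the severity-to-level mapping and all portion names
are assumptions made for the sketch, not requirements of this document;
determining what a portion can actually defeat remains a matter of
engineering judgment, as noted above.

    # Illustrative sketch of the D.3 assignment rule. Each portion is
    # characterized by the worst failure condition classification it can
    # contribute to; a portion that can defeat the independence of another
    # portion takes the worst classification of the combined portions.
    SEVERITY_ORDER = ["catastrophic", "hazardous", "major", "minor",
                      "no effect"]
    LEVEL_FOR_SEVERITY = {"catastrophic": "A", "hazardous": "B",
                          "major": "C", "minor": "D", "no effect": "E"}

    def worst(severities):
        # Most severe classification in the collection.
        return min(severities, key=SEVERITY_ORDER.index)

    def assign_levels(portion_severity, can_defeat):
        # portion_severity: {portion: worst failure condition it contributes}
        # can_defeat: {portion: portions whose independence it can violate}
        levels = {}
        for portion, severity in portion_severity.items():
            combined = [severity] + [portion_severity[p]
                                     for p in can_defeat.get(portion, ())]
            levels[portion] = LEVEL_FOR_SEVERITY[worst(combined)]
        return levels

    print(assign_levels(
        {"display": "major", "autopilot": "hazardous",
         "monitor": "catastrophic"},
        {"display": {"autopilot"}}))
    # {'display': 'B', 'autopilot': 'B', 'monitor': 'A'}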

D.4 Systems Performing “Essential” Functions


At first glance it may seem that systems performing flight-critical functions
are the areas of primary concern that this ARP is meant to address. For all
systems there are a number of factors that tend to increase system size and
complexity while other factors tend to reduce size and complexity. In
general the reduction factors have the greatest impact for systems
performing “critical” functions. This keeps both the number and the
complexity of “critical” systems down. The reduction factors are least for
non-essential systems, but then so too are the opportunities for major or
hazardous failure conditions.
The mid ground is the broad array of systems performing what are
popularly recognized as “essential” functions. Some of these may be related
to specific hazardous failure conditions, though the vast majority of failure
conditions are no riskier than “major.” As a group, these systems encompass
the greatest opportunities for complexity and high levels of integration.
These systems are frequently characterized by large sets of less well
defined top-level requirements. Experience suggests that requirements are
missed or established in error more frequently in this class of system than in
systems with more “critical” functionality, which generally have a more
focused requirement set. Additionally, the cost and complexity of these
systems encourage their application to more than one aircraft type, further
stressing the importance of clarity and validity of requirements and the
adequacy of implementation verification. Finally, it is these systems that
benefit the most if the top level system architecture establishes a framework
that makes understanding, operating and adapting the system simpler and
easier over the life of the system.
Research into software development programs cites results showing that far
more than 50% of so-called software errors actually result from missed or
incorrect requirements or the incorrect interpretation of requirements. This
fact becomes important when considering what alternatives might exist for
coping with such errors.
Fundamentally, the alternatives are quite limited:
· Minimize the span of influence for individual system requirements
(e.g., parallel systems implementing dissimilar functions).
· Provide overriding supervisory control of the system output while
ensuring that the supervisory control is not dependent on the proper
functioning of the system (e.g., pilot override of the system or the use
of certain control limiters).
· Maximize the number of top-level requirements that are obtained from
successfully operating systems (e.g., evolutionary systems design).
The effectiveness of various systems architectures in limiting the impact of a
requirements error is a significant factor in determining which architectures
are suitable for use in complex or highly integrated systems.

D.5 Service Experience


Beginning with the introduction of the Airbus A-310 and Boeing 757 and 767
class of aircraft in the early 1980s, the industry has accumulated significant
experience with complex airborne systems containing significant software
functionality. Most of this software was developed to level 2 (RTCA/DO-
178A) or less stringent requirements. While some individual functions have
failed or exhibited anomalous behavior due to a requirements error, there is
little or no evidence of separate requirements errors having nearly
simultaneous impacts on multiple dissimilar functions. For example,
information obtained through the navigation radios or the communications
radios can be used to navigate an aircraft in many areas of the world.
Complete loss of either type of radio would generally be classified as a
major failure condition. In aircraft that rely on radio navigation, complete
loss of both radio types would likely be classified as a hazardous failure
condition. Even where these radios operate in similar ways using similar
components and operate in adjacent frequency bands, there is little or no
evidence of a common requirements error affecting both types of radio
systems at the same time.
Where simultaneous failures of more than one LRU with different designs
have occurred, they have been the result of common physical causes such as
power failure or loss of cooling. In most situations the risk and impact of
physical failures can be assessed directly from the equipment design and the
added burden of highly structured development process controls is
unwarranted.

D.6 Finite Resources
The SIRT group is mindful that the regulatory authorities are charged with
assessing the safety of an aircraft in terms of compliance with regulations
not as an economic endeavor. In actual practice, the resources available to
the applicant and to the regulator are limited. Available knowledge, skill,
methodology, time, and facilities constrain what can be accomplished.
Whatever the resources, system safety will be higher if those resources are
applied to the right tasks. For complex or highly integrated systems,
selecting the “right” tasks is, itself, a complex decision, one that involves
understanding and judgment by both the applicant and the regulatory
authorities. The experience of the SIRT group members strongly suggests
that the “right” tasks do vary significantly from one system to another and
from one application to another. This being the case, it follows that reasoned
variability in design and in the means of showing compliance is both
necessary and appropriate. Indeed such variability in methods and designs
is essential for the safe and successful evolution of systems.

D.7 Conclusions
The issues summarized in this appendix were discussed and debated in
many forms during the deliberations of the SIRT group. Though the starting
points were highly varied and the path to conclusion often tortuous, the
conclusions consistently converged on support for the use of architecture
and a structured development assurance process to establish independence
and limit risk within a complex or highly integrated system.
