
National Aeronautics and Space Administration

NASA
SYSTEMS ENGINEERING
HANDBOOK

www.nasa.gov
NASA SP-2016-6105 Rev2 supersedes SP-2007-6105 Rev 1 dated December 2007.

Cover photos: Top left: In this photo, engineers led by researcher Greg Gatlin have sprayed fluorescent oil on a 5.8 percent scale model of a futuristic hybrid wing body during tests in the 14- by 22-Foot Subsonic Wind Tunnel at NASA's Langley Research Center in Hampton, VA. The oil helps researchers "see" the flow patterns when air passes over and around the model. (NASA Langley/Preston Martin) Top right: A water impact test of a test version of the Orion spacecraft took place on August 24, 2016, at NASA Langley Research Center. (NASA) Bottom left: Two test mirror segments are placed onto the support structure that will hold them. (NASA/Chris Gunn) Bottom right: This self-portrait of NASA's Curiosity Mars rover shows the vehicle at the "Mojave" site, where its drill collected the mission's second taste of Mount Sharp. (NASA/JPL-Caltech/MSSS)

Comments, questions, and suggestions regarding this document can be sent to:

Steven R. Hirshorn
Chief Engineer, Aeronautics Research Mission Directorate (ARMD)
Office of the Chief Engineer
NASA Headquarters, Room 6D37
300 E St SW
Washington, DC 20546-0001
202-358-0775
steven.r.hirshorn@nasa.gov
Table of Contents

Preface
Acknowledgments

1.0 Introduction
    1.1 Purpose
    1.2 Scope and Depth

2.0 Fundamentals of Systems Engineering
    2.1 The Common Technical Processes and the SE Engine
    2.2 An Overview of the SE Engine by Project Phase
    2.3 Example of Using the SE Engine
    2.4 Distinctions between Product Verification and Product Validation
    2.5 Cost Effectiveness Considerations
    2.6 Human Systems Integration (HSI) in the SE Process
    2.7 Competency Model for Systems Engineers

3.0 NASA Program/Project Life Cycle
    3.1 Program Formulation
    3.2 Program Implementation
    3.3 Project Pre-Phase A: Concept Studies
    3.4 Project Phase A: Concept and Technology Development
    3.5 Project Phase B: Preliminary Design and Technology Completion
    3.6 Project Phase C: Final Design and Fabrication
    3.7 Project Phase D: System Assembly, Integration and Test, Launch
    3.8 Project Phase E: Operations and Sustainment
    3.9 Project Phase F: Closeout
    3.10 Funding: The Budget Cycle
    3.11 Tailoring and Customization of NPR 7123.1 Requirements
        3.11.1 Introduction
        3.11.2 Criteria for Tailoring
        3.11.3 Tailoring SE NPR Requirements Using the Compliance Matrix
        3.11.4 Ways to Tailor a SE Requirement
        3.11.5 Examples of Tailoring and Customization
        3.11.6 Approvals for Tailoring

4.0 System Design Processes
    4.1 Stakeholder Expectations Definition
        4.1.1 Process Description
        4.1.2 Stakeholder Expectations Definition Guidance
    4.2 Technical Requirements Definition
        4.2.1 Process Description
        4.2.2 Technical Requirements Definition Guidance
    4.3 Logical Decomposition
        4.3.1 Process Description
        4.3.2 Logical Decomposition Guidance
    4.4 Design Solution Definition
        4.4.1 Process Description
        4.4.2 Design Solution Definition Guidance

5.0 Product Realization
    5.1 Product Implementation
        5.1.1 Process Description
        5.1.2 Product Implementation Guidance
    5.2 Product Integration
        5.2.1 Process Description
        5.2.2 Product Integration Guidance
    5.3 Product Verification
NASA SYSTEMS ENGINEERING HANDBOOK iii
        5.3.1 Process Description
        5.3.2 Product Verification Guidance
    5.4 Product Validation
        5.4.1 Process Description
        5.4.2 Product Validation Guidance
    5.5 Product Transition
        5.5.1 Process Description
        5.5.2 Product Transition Guidance

6.0 Crosscutting Technical Management
    6.1 Technical Planning
        6.1.1 Process Description
        6.1.2 Technical Planning Guidance
    6.2 Requirements Management
        6.2.1 Process Description
        6.2.2 Requirements Management Guidance
    6.3 Interface Management
        6.3.1 Process Description
        6.3.2 Interface Management Guidance
    6.4 Technical Risk Management
        6.4.1 Risk Management Process Description
        6.4.2 Risk Management Process Guidance
    6.5 Configuration Management
        6.5.1 Process Description
        6.5.2 CM Guidance
    6.6 Technical Data Management
        6.6.1 Process Description
        6.6.2 Technical Data Management Guidance
    6.7 Technical Assessment
        6.7.1 Process Description
        6.7.2 Technical Assessment Guidance
    6.8 Decision Analysis
        6.8.1 Process Description
        6.8.2 Decision Analysis Guidance

Appendix A Acronyms
Appendix B Glossary
Appendix C How to Write a Good Requirement—Checklist
Appendix D Requirements Verification Matrix
Appendix E Creating the Validation Plan with a Validation Requirements Matrix
Appendix F Functional, Timing, and State Analysis
Appendix G Technology Assessment/Insertion
Appendix H Integration Plan Outline
Appendix I Verification and Validation Plan Outline
Appendix J SEMP Content Outline
Appendix K Technical Plans
Appendix L Interface Requirements Document Outline
Appendix M CM Plan Outline
Appendix N Guidance on Technical Peer Reviews/Inspections
Appendix O Reserved
Appendix P SOW Review Checklist
Appendix Q Reserved
Appendix R HSI Plan Content Outline
Appendix S Concept of Operations Annotated Outline
Appendix T Systems Engineering in Phase E

References Cited
Bibliography

Table of Figures

Figure 2.0-1 SE in Context of Overall Project Management
Figure 2.1-1 The Systems Engineering Engine (NPR 7123.1)
Figure 2.2-1 Miniature Version of the Poster-Size NASA Project Life Cycle Process Flow for Flight and Ground Systems Accompanying this Handbook
Figure 2.5-1 Life-Cycle Cost Impacts from Early Phase Decision-Making
Figure 3.0-1 NASA Space Flight Project Life Cycle from NPR 7120.5E
Figure 3.11-1 Notional Space Flight Products Tailoring Process
Figure 4.0-1 Interrelationships among the System Design Processes
Figure 4.1-1 Stakeholder Expectations Definition Process
Figure 4.1-2 Information Flow for Stakeholder Expectations
Figure 4.1-3 Example of a Lunar Sortie DRM Early in the Life Cycle
Figure 4.2-1 Technical Requirements Definition Process
Figure 4.2-2 Flow, Type and Ownership of Requirements
Figure 4.2-3 The Flowdown of Requirements
Figure 4.3-1 Logical Decomposition Process
Figure 4.4-1 Design Solution Definition Process
Figure 4.4-2 The Doctrine of Successive Refinement
Figure 4.4-3 A Quantitative Objective Function, Dependent on Life Cycle Cost and All Aspects of Effectiveness
Figure 5.0-1 Product Realization
Figure 5.1-1 Product Implementation Process
Figure 5.2-1 Product Integration Process
Figure 5.3-1 Product Verification Process
Figure 5.3-2 Example of End-to-End Data Flow for a Scientific Satellite Mission
Figure 5.4-1 Product Validation Process
Figure 5.5-1 Product Transition Process
Figure 6.1-1 Technical Planning Process
Figure 6.2-1 Requirements Management Process
Figure 6.3-1 Interface Management Process
Figure 6.4-1 Risk Scenario Development
Figure 6.4-2 Risk as an Aggregate Set of Risk Triplets
Figure 6.4-3 Risk Management Process
Figure 6.4-4 Risk Management as the Interaction of Risk-Informed Decision Making and Continuous Risk Management
Figure 6.5-1 Configuration Management Process
Figure 6.5-2 Five Elements of Configuration Management
Figure 6.5-3 Evolution of Technical Baseline
Figure 6.5-4 Typical Change Control Process
Figure 6.6-1 Technical Data Management Process
Figure 6.7-1 Technical Assessment Process
Figure 6.7-2 Planning and Status Reporting Feedback Loop
Figure 6.8-1 Decision Analysis Process
Figure 6.8-2 Risk Analysis of Decision Alternatives
Figure G.1-1 PBS Example
Figure G.3-1 Technology Assessment Process
Figure G.3-2 Architectural Studies and Technology Development
Figure G.4-1 Technology Readiness Levels
Figure G.4-2 TMA Thought Process
Figure G.4-3 TRL Assessment Matrix

Table of Tables

Table 2.1-1 Alignment of the 17 SE Processes to AS9100
Table 2.2-1 Project Life Cycle Phases
Table 2.7-1 NASA System Engineering Competency Model
Table 3.0-1 SE Product Maturity from NPR 7123.1
Table 3.11-1 Example of Program/Project Types
Table 3.11-2 Example of Tailoring NPR 7120.5 Required Project Products
Table 3.11-3 Example Use of a Compliance Matrix
Table 4.1-1 Stakeholder Identification throughout the Life Cycle
Table 4.2-1 Benefits of Well-Written Requirements
Table 4.2-2 Requirements Metadata
Table 5.3-1 Example Information in Verification Procedures and Reports
Table 6.1-1 Example Engineering Team Disciplines in Pre-Phase A for Robotic Infrared Observatory
Table 6.1-2 Examples of Types of Facilities to Consider during Planning
Table 6.6-1 Technical Data Tasks
Table 6.7-1 Purpose and Results for Life-Cycle Reviews for Spaceflight Projects
Table 6.8-1 Typical Information to Capture in a Decision Report
Table D-1 Requirements Verification Matrix
Table E-1 Validation Requirements Matrix
Table G.1-1 Products Provided by the TA as a Function of Program/Project Phase
Table J-1 Guidance on SEMP Content per Life-Cycle Phase
Table K-1 Example of Expected Maturity of Key Technical Plans
Table R.2-1 HSI Activity, Product, or Risk Mitigation by Program/Project Phase

Table of Boxes

The Systems Engineer's Dilemma
Space Flight Program Formulation
Space Flight Program Implementation
Space Flight Pre-Phase A: Concept Studies
Space Flight Phase A: Concept and Technology Development
Space Flight Phase B: Preliminary Design and Technology Completion
Space Flight Phase C: Final Design and Fabrication
Space Flight Phase D: System Assembly, Integration and Test, Launch
Space Flight Phase E: Operations and Sustainment
Phase F: Closeout
System Design Keys
Concept of Operations vs. Operations Concept
Example of Functional and Performance Requirements
Rationale
Product Realization Keys
Differences between Verification and Validation Testing
Differences between Verification, Qualification, Acceptance and Certification
Methods of Verification
Methods of Validation
Crosscutting Technical Management Keys
Types of Hardware
Environments
Definitions
Types of Configuration Management Changes
Data Collection Checklist
HSI Relevance
HSI Strategy
HSI Domains
HSI Requirements
HSI Implementation
HSI Plan Updates

Preface

Since the initial writing of NASA/SP-6105 in 1995 and the following revision (Rev 1) in 2007, systems engineering as a discipline at the National Aeronautics and Space Administration (NASA) has undergone rapid and continued evolution. Changes include using Model-Based Systems Engineering to improve development and delivery of products, and accommodating updates to NASA Procedural Requirements (NPR) 7123.1. Lessons learned on systems engineering were documented in reports such as those by the NASA Integrated Action Team (NIAT), the Columbia Accident Investigation Board (CAIB), and the follow-on Diaz Report. Other lessons learned were garnered from robotic missions such as Genesis and the Mars Reconnaissance Orbiter as well as from mishaps in ground operations and the commercial space flight industry. Out of these reports came the NASA Office of the Chief Engineer (OCE) initiative to improve the overall Agency systems engineering infrastructure and capability for the efficient and effective engineering of NASA systems, to produce quality products, and to achieve mission success. This handbook update is a part of that OCE-sponsored Agency-wide systems engineering initiative.

In 1995, SP-6105 was initially published to bring the fundamental concepts and techniques of systems engineering to NASA personnel in a way that recognized the nature of NASA systems and the NASA environment. This revision (Rev 2) of SP-6105 maintains that original philosophy while updating the Agency's systems engineering body of knowledge, providing guidance for insight into current best Agency practices, and maintaining the alignment of the handbook with the Agency's systems engineering policy.

The update of this handbook continues the methodology of the previous revision: a top-down compatibility with higher level Agency policy and a bottom-up infusion of guidance from the NASA practitioners in the field. This approach provides the opportunity to obtain best practices from across NASA and bridge the information to the established NASA systems engineering processes and to communicate principles of good practice as well as alternative approaches rather than specify a particular way to accomplish a task. The result embodied in this handbook is a top-level implementation approach on the practice of systems engineering unique to NASA. Material used for updating this handbook has been drawn from many sources, including NPRs, Center systems engineering handbooks and processes, other Agency best practices, and external systems engineering textbooks and guides.

This handbook consists of six chapters: (1) an introduction, (2) a systems engineering fundamentals discussion, (3) the NASA program/project life cycles, (4) systems engineering processes to get from a concept to a design, (5) systems engineering processes to get from a design to a final product, and (6) crosscutting management processes in systems engineering. The chapters are supplemented by appendices that provide outlines, examples, and further information to illustrate topics in the chapters. The handbook makes extensive use of boxes and figures to define, refine, illustrate, and extend concepts in the chapters.

Finally, it should be noted that this handbook provides top-level guidance for good systems engineering practices; it is not intended in any way to be a directive.

NASA/SP-2016-6105 Rev2 supersedes SP-2007-6105 Rev 1 dated December 2007.

Acknowledgments

The following individuals are recognized as contributing practitioners to the content of this expanded guidance:

Alexander, Michael, NASA/Langley Research Center
Allen, Martha, NASA/Marshall Space Flight Center
Baumann, Ethan, NASA/Armstrong Flight Research Center
Bixby, CJ, NASA/Armstrong Flight Research Center
Boland, Brian, NASA/Langley Research Center
Brady, Timothy, NASA/NASA Engineering and Safety Center
Bromley, Linda, NASA/Headquarters/Bromley SE Consulting
Brown, Mark, NASA/Jet Propulsion Laboratory
Brumfield, Mark, NASA/Goddard Space Flight Center
Campbell, Paul, NASA/Johnson Space Center
Carek, David, NASA/Glenn Research Center
Cox, Renee, NASA/Marshall Space Flight Center
Crable, Vicki, NASA/Glenn Research Center
Crocker, Alan, NASA/Ames Research Center
DeLoof, Richard, NASA/Glenn Research Center
Demo, Andrew/Ames Research Center
Dezfuli, Homayoon, NASA/HQ
Diehl, Roger, NASA/Jet Propulsion Laboratory
DiPietro, David, NASA/Goddard Space Flight Center
Doehne, Thomas, NASA/Glenn Research Center
Duarte, Alberto, NASA/Marshall Space Flight Center
Durham, David, NASA/Jet Propulsion Laboratory
Epps, Amy, NASA/Marshall Space Flight Center
Fashimpaur, Karen, Vantage Partners
Feikema, Douglas, NASA/Glenn Research Center
Fitts, David, NASA/Johnson Space Center
Foster, Michele, NASA/Marshall Space Flight Center
Fuller, David, NASA/Glenn Research Center
Gati, Frank, NASA/Glenn Research Center
Gefert, Leon, NASA/Glenn Research Center
Ghassemieh, Shakib, NASA/Ames Research Center
Grantier, Julie, NASA/Glenn Research Center
Hack, Kurt, NASA/Glenn Research Center
Hall, Kelly, NASA/Glenn Research Center
Hamaker, Franci, NASA/Kennedy Space Center
Hange, Craig, NASA/Ames Research Center
Henry, Thad, NASA/Marshall Space Flight Center
Hill, Nancy, NASA/Marshall Space Flight Center
Hirshorn, Steven, NASA/Headquarters
Holladay, Jon, NASA/NASA Engineering and Safety Center
Hyatt, Mark, NASA/Glenn Research Center
Jannette, Tony, NASA/Glenn Research Center
Jenks, Kenneth, NASA/Johnson Space Center
Jones, Melissa, NASA/Jet Propulsion Laboratory
Jones, Ross, NASA/Jet Propulsion Laboratory
Killebrew, Jana, NASA/Ames Research Center
Leitner, Jesse, NASA/Goddard Space Flight Center
Lin, Chi, NASA/Jet Propulsion Laboratory
Mascia, Anne Marie, Graphic Artist
McKay, Terri, NASA/Marshall Space Flight Center
McNelis, Nancy, NASA/Glenn Research Center
Mendoza, Donald, NASA/Ames Research Center
Miller, Scott, NASA/Ames Research Center
Montgomery, Patty, NASA/Marshall Space Flight Center
Mosier, Gary, NASA/Goddard Space Flight Center
Noble, Lee, NASA/Langley Research Center
Oleson, Steven, NASA/Glenn Research Center
Parrott, Edith, NASA/Glenn Research Center
Powell, Christine, NASA/Stennis Space Center
Powell, Joseph, NASA/Glenn Research Center
Price, James, NASA/Langley Research Center
Rawlin, Adam, NASA/Johnson Space Center
Rochlis-Zumbado, Jennifer, NASA/Johnson Space Center
Rohn, Dennis, NASA/Glenn Research Center
Rosenbaum, Nancy, NASA/Goddard Space Flight Center
Ryan, Victoria, NASA/Jet Propulsion Laboratory
Sadler, Gerald, NASA/Glenn Research Center
Salazar, George, NASA/Johnson Space Center
Sanchez, Hugo, NASA/Ames Research Center
Schuyler, Joseph, NASA/Stennis Space Center
Sheehe, Charles, NASA/Glenn Research Center
Shepherd, Christena, NASA/Marshall Space Flight Center
Shull, Thomas, NASA/Langley Research Center
Singer, Bart, NASA/Langley Research Center
Slywczak, Richard, NASA/Glenn Research Center
Smith, Scott, NASA/Goddard Space Flight Center
Smith, Joseph, NASA/Headquarters
Sprague, George, NASA/Jet Propulsion Laboratory
Trase, Kathryn, NASA/Glenn Research Center
Trenkle, Timothy, NASA/Goddard Space Flight Center
Vipavetz, Kevin, NASA/Langley Research Center
Voss, Linda, Dell Services
Walters, James Britton, NASA/Johnson Space Center
Watson, Michael, NASA/Marshall Space Flight Center
Weiland, Karen, NASA/Glenn Research Center
Wiedeman, Grace, Dell Services
Wiedenmannott, Ulrich, NASA/Glenn Research Center
Witt, Elton, NASA/Johnson Space Center
Woytach, Jeffrey, NASA/Glenn Research Center
Wright, Michael, NASA/Marshall Space Flight Center
Yu, Henry, NASA/Kennedy Space Center

1.0 Introduction

1.1 Purpose

This handbook is intended to provide general guidance and information on systems engineering that will be useful to the NASA community. It provides a generic description of Systems Engineering (SE) as it should be applied throughout NASA. A goal of the handbook is to increase awareness and consistency across the Agency and advance the practice of SE. This handbook provides perspectives relevant to NASA and data particular to NASA.

This handbook should be used as a companion for implementing NPR 7123.1, Systems Engineering Processes and Requirements, as well as the Center-specific handbooks and directives developed for implementing systems engineering at NASA. It provides a companion reference book for the various systems engineering-related training offered under NASA's auspices.

1.2 Scope and Depth

This handbook describes systems engineering best practices that should be incorporated in the development and implementation of large and small NASA programs and projects. The engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects. The scope of this handbook includes systems engineering functions regardless of whether they are performed by a manager or an engineer, in-house or by a contractor.

There are many Center-specific handbooks and directives as well as textbooks that can be consulted for in-depth tutorials. For guidance on systems engineering for information technology projects, refer to the Office of the Chief Information Officer Information Technology Systems Engineering Handbook Version 2.0. For guidance on entrance and exit criteria for milestone reviews of software projects, refer to NASA-HDBK-2203, NASA Software Engineering Handbook. A NASA systems engineer can also participate in the NASA Engineering Network (NEN) Systems Engineering Community of Practice, located at https://nen.nasa.gov/web/se. This Web site includes many resources useful to systems engineers, including document templates for many of the work products and milestone review presentations required by the NASA SE process.
This handbook is applicable to NASA space flight projects of all sizes and to research and development programs and projects. While all 17 processes are applicable to all projects, the amount of formality, depth of documentation, and timescales are varied as appropriate for the type, size, and complexity of the project. References to "documents" are intended to include not only paper or digital files but also models, graphics, drawings, or other appropriate forms that capture the intended information.

For a more in-depth discussion of the principles provided in this handbook, refer to the NASA Expanded Guidance for SE document, which can be found at https://nen.nasa.gov/web/se/doc-repository. This handbook is an abridged version of that reference.
2.0 Fundamentals of Systems Engineering

At NASA, "systems engineering" is defined as a methodical, multi-disciplinary approach for the design, realization, technical management, operations, and retirement of a system. A "system" is the combination of elements that function together to produce the capability required to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose; that is, all things required to produce system-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected.¹ It is a way of looking at the "big picture" when making technical decisions. It is a way of achieving stakeholder functional, physical, and operational performance requirements in the intended use environment over the planned life of the system within cost, schedule, and other constraints. It is a methodology that supports the containment of the life cycle cost of a system. In other words, systems engineering is a logical way of thinking.

Systems engineering is the art and science of developing an operable system capable of meeting requirements within often opposed constraints. Systems engineering is a holistic, integrative discipline, wherein the contributions of structural engineers, electrical engineers, mechanism designers, power engineers, human factors engineers, and many more disciplines are evaluated and balanced, one against another, to produce a coherent whole that is not dominated by the perspective of a single discipline.²

Systems engineering seeks a safe and balanced design in the face of opposing interests and multiple, sometimes conflicting constraints. The systems engineer should develop the skill for identifying and focusing efforts on assessments to optimize the overall design and not favor one system/subsystem at the expense of another while constantly validating that the goals of the operational system will be met. The art is in knowing when and where to probe. Personnel with these skills are usually tagged as "systems engineers." They may have other titles—lead systems engineer,

¹ Eberhardt Rechtin, Systems Architecting of Organizations: Why Eagles Can't Swim.
² Comments on systems engineering throughout Chapter 2.0 are extracted from the speech "System Engineering and the Two Cultures of Engineering" by Michael D. Griffin, NASA Administrator.
technical manager, chief engineer—but for this document, the term "systems engineer" is used.

The exact role and responsibility of the systems engineer may change from project to project depending on the size and complexity of the project and from phase to phase of the life cycle. For large projects, there may be one or more systems engineers. For small projects, the project manager may sometimes perform these practices. But whoever assumes those responsibilities, the systems engineering functions should be performed. The actual assignment of the roles and responsibilities of the named systems engineer may also therefore vary. The lead systems engineer ensures that the system technically fulfills the defined needs and requirements and that a proper systems engineering approach is being followed. The systems engineer oversees the project's systems engineering activities as performed by the technical team and directs, communicates, monitors, and coordinates tasks. The systems engineer reviews and evaluates the technical aspects of the project to ensure that the systems/subsystems engineering processes are functioning properly and evolves the system from concept to product. The entire technical team is involved in the systems engineering process.

The systems engineer usually plays the key role in leading the development of the concept of operations (ConOps) and resulting system architecture, defining boundaries, defining and allocating requirements, evaluating design tradeoffs, balancing technical risk between systems, defining and assessing interfaces, and providing oversight of verification and validation activities, as well as many other tasks. The systems engineer typically leads the technical planning effort and has the prime responsibility in documenting many of the technical plans, requirements and specification documents, verification and validation documents, certification packages, and other technical documentation.

In summary, the systems engineer is skilled in the art and science of balancing organizational, cost, and technical interactions in complex systems. The systems engineer and supporting organization are vital to supporting program and Project Planning and Control (PP&C) with accurate and timely cost and schedule information for the technical activities. Systems engineering is about tradeoffs and compromises; it uses a broad crosscutting view of the system rather than a single discipline view. Systems engineering is about looking at the "big picture" and not only ensuring that they get the design right (meet requirements) but that they also get the right design (enable operational goals and meet stakeholder expectations).

Systems engineering plays a key role in the project organization. Managing a project consists of three main objectives: managing the technical aspects of the project, managing the project team, and managing the cost and schedule. As shown in FIGURE 2.0-1, these three functions are interrelated. Systems engineering is focused on the technical characteristics of decisions including technical, cost, and schedule and on providing these to the project manager. The Project Planning and Control (PP&C) function is responsible for identifying and controlling the cost and schedules of the project. The project manager has overall responsibility for managing the project team and ensuring that the project delivers a technically correct system within cost and schedule. Note that there are areas where the two cornerstones of project management, SE and PP&C, overlap. In these areas, SE provides the technical aspects or inputs whereas PP&C provides the programmatic, cost, and schedule inputs.

This document focuses on the SE side of the diagram. The practices/processes are taken from NPR 7123.1, NASA Systems Engineering Processes and Requirements. Each process is described in much greater detail in subsequent chapters of this
[FIGURE 2.0-1  SE in Context of Overall Project Management: a diagram of project management activities, the 17 systems engineering processes (the system design, product realization, and technical management process sets), and the Project Planning and Control (PP&C) functions, with their common areas of overlap such as stakeholders, risks, configuration management, data, reviews, and schedule.]

document, but an overview is given in the following subsections of this chapter.

2.1 The Common Technical Processes and the SE Engine

There are three sets of common technical processes in NPR 7123.1, NASA Systems Engineering Processes and Requirements: system design, product realization, and technical management. The processes in each set and their interactions and flows are illustrated by the NPR systems engineering "engine" shown in FIGURE 2.1-1. The processes of the SE engine are used to develop and realize the end products. This chapter provides the application context of the 17 common technical processes required in NPR 7123.1. The system design processes, the product realization processes, and the technical management processes are discussed in more detail in Chapters 4.0, 5.0, and 6.0, respectively. Processes 1 through 9 indicated in FIGURE 2.1-1 represent the tasks in the execution of a project. Processes 10 through 17 are crosscutting tools for carrying out the processes.
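The grouping just described can be written out as plain data. This is an illustrative sketch only (names and numbering transcribed from FIGURE 2.1-1), not an artifact of NPR 7123.1 itself:

```python
# The 17 common technical processes of NPR 7123.1, grouped into the
# three sets named in the text. Numbering follows FIGURE 2.1-1.
SE_PROCESSES = {
    "System Design": {
        1: "Stakeholder Expectations Definition",
        2: "Technical Requirements Definition",
        3: "Logical Decomposition",
        4: "Design Solution Definition",
    },
    "Product Realization": {
        5: "Product Implementation",
        6: "Product Integration",
        7: "Product Verification",
        8: "Product Validation",
        9: "Product Transition",
    },
    "Technical Management": {
        10: "Technical Planning",
        11: "Requirements Management",
        12: "Interface Management",
        13: "Technical Risk Management",
        14: "Configuration Management",
        15: "Technical Data Management",
        16: "Technical Assessment",
        17: "Decision Analysis",
    },
}

# Processes 1-9 are the tasks executed in a project; 10-17 are the
# crosscutting tools for carrying out those tasks.
execution = {n for g in ("System Design", "Product Realization") for n in SE_PROCESSES[g]}
crosscutting = set(SE_PROCESSES["Technical Management"])
print(len(execution), len(crosscutting))  # 9 8
```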


[Figure: Requirements flow down from the level above; realized products go to the level above.

System Design Processes
• Requirements Definition Processes: 1. Stakeholder Expectations Definition; 2. Technical Requirements Definition
• Technical Solution Definition Processes: 3. Logical Decomposition; 4. Design Solution Definition

Technical Management Processes (crosscutting)
• Technical Planning Process: 10. Technical Planning
• Technical Control Processes: 11. Requirements Management; 12. Interface Management; 13. Technical Risk Management; 14. Configuration Management; 15. Technical Data Management
• Technical Assessment Process: 16. Technical Assessment
• Technical Decision Analysis Process: 17. Decision Analysis

Product Realization Processes
• Design Realization Processes: 5. Product Implementation; 6. Product Integration
• Evaluation Processes: 7. Product Verification; 8. Product Validation
• Product Transition Process: 9. Product Transition

Requirements flow down to the level below; realized products come from the level below. The system design processes are applied to each product layer down through the system structure; the product realization processes are applied to each product layer up through the system structure.]

FIGURE 2.1-1  The Systems Engineering Engine (NPR 7123.1)
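The down-and-up pattern of the engine can be illustrated with a toy product tree. The node names below are invented and the code is only a sketch of the idea, not NASA software: the system design processes walk the structure top-down, and the product realization processes walk it back bottom-up.

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    """A node in a hypothetical system structure (product tree)."""
    name: str
    children: list["Product"] = field(default_factory=list)

def design(node: Product, log: list) -> None:
    # System design processes: applied from the top of the structure down,
    # until products can be built, bought, or reused.
    log.append(f"design {node.name}")
    for child in node.children:
        design(child, log)

def realize(node: Product, log: list) -> None:
    # Product realization processes: applied from the lowest-level products
    # up, integrating, verifying, and validating toward the top.
    for child in node.children:
        realize(child, log)
    log.append(f"realize {node.name}")

# Hypothetical three-level system structure for illustration.
system = Product("spacecraft", [
    Product("avionics", [Product("flight-computer")]),
    Product("propulsion"),
])

log: list = []
design(system, log)   # top-down pass
realize(system, log)  # bottom-up pass
print(log)
```

Note that the root is designed first but realized last, mirroring the figure's recursive flow down and back up the system structure.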

• System Design Processes: The four system design processes shown in FIGURE 2.1-1 are used to define and baseline stakeholder expectations, generate and baseline technical requirements, decompose the requirements into logical and behavioral models, and convert the technical requirements into a design solution that will satisfy the baselined stakeholder expectations. These processes are applied to each product of the system structure from the top of the structure to the bottom until the lowest products in any system structure branch are defined to the point where they can be built, bought, or reused. All other products in the system structure are realized by implementation or integration.

• Product Realization Processes: The product realization processes are applied to each operational/mission product in the system structure starting from the lowest level product and working up to higher level integrated products. These processes are used to create the design solution for each product (through buying, coding, building, or reusing) and to verify, validate, and transition up to the next hierarchical level those products that satisfy their design solutions and meet stakeholder


expectations as a function of the applicable life cycle phase.

• Technical Management Processes: The technical management processes are used to establish and evolve technical plans for the project, to manage communication across interfaces, to assess progress against the plans and requirements for the system products or services, to control technical execution of the project through to completion, and to aid in the decision-making process.

The processes within the SE engine are used both iteratively and recursively. As defined in NPR 7123.1, "iterative" is the "application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements," whereas "recursive" is defined as adding value to the system "by the repeated application of processes to design next lower layer system products or to realize next upper layer end products within the system structure. This also applies to repeating application of the same processes to the system structure in the next life cycle phase to mature the system definition and satisfy phase success criteria." The technical processes are applied recursively and iteratively to break down the initializing concepts of the system to a level of detail concrete enough that the technical team can implement a product from the information. Then the processes are applied recursively and iteratively to

TABLE 2.1-1  Alignment of the 17 SE Processes to AS9100

SE Process | AS9100 Requirement
Stakeholder Expectations Definition | Customer Requirements
Technical Requirements Definition | Planning of Product Realization
Logical Decomposition | Design and Development Input
Design Solution Definition | Design and Development Output
Product Implementation | Control of Production
Product Integration | Control of Production
Product Verification | Verification
Product Validation | Validation
Product Transition | Control of Work Transfers; Post Delivery Support; Preservation of Product
Technical Planning | Planning of Product Realization; Review of Requirements; Measurement, Analysis and Improvement
Requirements Management | Design and Development Planning; Purchasing
Interface Management | Configuration Management
Technical Risk Management | Risk Management
Configuration Management | Configuration Management; Identification and Traceability; Control of Nonconforming Product
Technical Data Management | Control of Documents; Control of Records; Control of Design and Development Changes
Technical Assessment | Design and Development Review
Decision Analysis | Measurement, Analysis and Improvement; Analysis of Data


integrate the smallest product into greater and larger systems until the whole of the system or product has been assembled, verified, validated, and transitioned.

For a detailed example of how the SE Engine could be used, refer to the NASA Expanded Guidance for SE document at https://nen.nasa.gov/web/se/doc-repository.

AS9100 is a widely adopted and standardized quality management system developed for the commercial aerospace industry. Some NASA Centers have chosen to certify to the AS9100 quality system and may require their contractors to follow NPR 7123.1. TABLE 2.1-1 shows how the 17 NASA SE processes align with AS9100.

2.2 An Overview of the SE Engine by Project Phase

FIGURE 2.2-1 conceptually illustrates how the SE engine is used during each phase of a project (Pre-Phase A through Phase F). The life cycle phases are described in TABLE 2.2-1. FIGURE 2.2-1 is a conceptual diagram. For full details, refer to the poster version of this figure, which is located at https://nen.nasa.gov/web/se/doc-repository.

The uppermost horizontal portion of this chart is used as a reference to project system maturity, as the project progresses from a feasible concept to an as-deployed system; phase activities; Key Decision Points (KDPs); and major project reviews. The next major horizontal band shows the technical development

[Figure: Under the Formulation/Approval/Implementation banner, the phases run left to right: Pre-Phase A: Concept Studies; Phase A: Concept & Technology Development; Phase B: Preliminary Design & Technology Completion; Phase C: Final Design & Fabrication; Phase D: System Assembly, Integration & Test, Launch; Phase E: Operations & Sustainment; Phase F: Closeout. A maturity row tracks Feasible Concept, Top-Level Architecture, Functional Baseline, Allocated Baseline, Product Baseline, and As-Deployed Baseline, with rows beneath it for Key Decision Points and Major Reviews. A Technical Development band shows the system design and product realization processes (handbook Sections 4.1 through 4.4 and 5.1 through 5.5) cycling within each phase through Phase D, and a Technical Management band shows the technical management processes (Sections 6.1 through 6.8) cycling within every phase.]

FIGURE 2.2-1  Miniature Version of the Poster-Size NASA Project Life Cycle
Process Flow for Flight and Ground Systems Accompanying this Handbook


TABLE 2.2-1  Project Life Cycle Phases

Pre-Phase A: Concept Studies (Pre-Formulation)
Purpose: To produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected. Determine feasibility of desired system; develop mission concepts; draft system-level requirements; assess performance, cost, and schedule feasibility; identify potential technology needs and scope.
Typical Outcomes: Feasible system concepts in the form of simulations, analysis, study reports, models, and mock-ups

Phase A: Concept and Technology Development (Formulation)
Purpose: To determine the feasibility and desirability of a suggested new system and establish an initial baseline compatibility with NASA's strategic plans. Develop final mission concept, system-level requirements, needed system technology developments, and program/project technical management plans.
Typical Outcomes: System concept definition in the form of simulations, analysis, engineering models and mock-ups, and trade study definition

Phase B: Preliminary Design and Technology Completion (Formulation)
Purpose: To define the project in enough detail to establish an initial baseline capable of meeting mission needs. Develop system structure end product (and enabling product) requirements and generate a preliminary design for each system structure end product.
Typical Outcomes: End products in the form of mock-ups, trade study results, specification and interface documents, and prototypes

Phase C: Final Design and Fabrication (Implementation)
Purpose: To complete the detailed design of the system (and its associated subsystems, including its operations systems), fabricate hardware, and code software. Generate final designs for each system structure end product.
Typical Outcomes: End product detailed designs, end product component fabrication, and software development

Phase D: System Assembly, Integration and Test, Launch (Implementation)
Purpose: To assemble and integrate the system (hardware, software, and humans), meanwhile developing confidence that it is able to meet the system requirements. Launch and prepare for operations. Perform system end product implementation, assembly, integration and test, and transition to use.
Typical Outcomes: Operations-ready system end product with supporting related enabling products

Phase E: Operations and Sustainment (Implementation)
Purpose: To conduct the mission and meet the initially identified need and maintain support for that need. Implement the mission operations plan.
Typical Outcomes: Desired system

Phase F: Closeout (Implementation)
Purpose: To implement the systems decommissioning/disposal plan developed in Phase E and perform analyses of the returned data and any returned samples.
Typical Outcomes: Product closeout

processes (steps 1 through 9) in each project phase. The SE engine cycles five times from Pre-Phase A through Phase D. Note that NASA's management has structured Phases C and D to "split" the technical development processes in half in Phases C and D to ensure closer management control. The engine is bound by a dashed line in Phases C and D. Once a project enters into its operational state (Phase E) and closes out (Phase F), the technical work shifts to activities commensurate with these last two project phases. The next major horizontal band shows the eight technical management processes (steps 10 through 17) in each project phase. The SE engine cycles the technical management processes seven times from Pre-Phase A through Phase F.


2.3 Example of Using the SE Engine

In Pre-Phase A, the SE engine is used to develop the initial concepts; clearly define the unique roles of humans, hardware, and software in performing the mission's objectives; establish the system functional and performance boundaries; develop/identify a preliminary/draft set of key high-level requirements; define one or more initial Concept of Operations (ConOps) scenarios; realize these concepts through iterative modeling, mock-ups, simulation, or other means; and verify and validate that these concepts and products would be able to meet the key high-level requirements and ConOps. The operational concept must include scenarios for all significant operational situations, including known off-nominal situations. To develop a useful and complete set of scenarios, important malfunctions and degraded-mode operational situations must be considered. The importance of early ConOps development cannot be overstated. As system requirements become more detailed and contain more complex technical information, it becomes harder for the stakeholders and users to understand what the requirements are conveying; i.e., it may become more difficult to visualize the end product. The ConOps can serve as a check in identifying missing or conflicting requirements.

Note that this Pre-Phase A initial concepts development work is not the formal verification and validation program that is performed on the final product; rather, it is a methodical run-through ensuring that the concepts being developed in Pre-Phase A are able to meet the likely requirements and expectations of the stakeholders. Concepts are developed to the lowest level necessary to ensure that they are feasible and to a level that reduces the risk low enough to satisfy the project. Academically, this process could proceed down to the circuit board level for every system; however, that would involve a great deal of time and money. There may be a higher level or tier of product than circuit board level that would enable designers to accurately determine the feasibility of accomplishing the project, which is the purpose of Pre-Phase A.

During Phase A, the recursive use of the SE engine is continued, this time taking the concepts and draft key requirements that were developed and validated during Pre-Phase A and fleshing them out to become the set of baseline system requirements and ConOps. During this phase, key areas of high risk might be simulated to ensure that the concepts and requirements being developed are good ones and to identify verification and validation tools and techniques that will be needed in later phases.

During Phase B, the SE engine is applied recursively to further mature requirements and designs for all products in the developing product tree and perform verification and validation of concepts to ensure that the designs are able to meet their requirements. Operational designs and mission scenarios are evaluated, and the feasibility of execution within design capabilities and cost estimates is assessed.

Phase C again uses the left side of the SE engine to finalize all requirement updates, finalize the ConOps validation, develop the final designs to the lowest level of the product tree, and begin fabrication.

Phase D uses the right side of the SE engine to recursively perform the final implementation, integration, verification, and validation of the end product, and at the final pass, transition the end product to the user.

The technical management processes of the SE engine are used in Phases E and F to monitor performance; control configuration; and make decisions associated with the operations, sustaining engineering, and closeout of the system. Any new capabilities or upgrades of the existing system reenter the SE engine as new developments.
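The phase-by-phase pattern (full engine passes through Phase D, with the technical management processes continuing through Phase F) can be tallied in a few lines. This is an illustrative sketch using the phase names from TABLE 2.2-1:

```python
# Life cycle phases, from TABLE 2.2-1.
PHASES = ["Pre-Phase A", "Phase A", "Phase B", "Phase C", "Phase D",
          "Phase E", "Phase F"]

# The technical development processes (steps 1-9) cycle once per phase
# from Pre-Phase A through Phase D, per Section 2.2.
tech_dev_passes = [p for p in PHASES if p not in ("Phase E", "Phase F")]

# The technical management processes (steps 10-17) cycle in every phase,
# Pre-Phase A through Phase F.
tech_mgmt_passes = list(PHASES)

print(len(tech_dev_passes), len(tech_mgmt_passes))  # 5 7
```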


2.4 Distinctions between Product Verification and Product Validation

From a process perspective, the Product Verification and Product Validation processes may be similar in nature, but the objectives are fundamentally different:

• Verification of a product shows proof of compliance with requirements—that the product can meet each "shall" statement as proven through performance of a test, analysis, inspection, or demonstration (or combination of these).

• Validation of a product shows that the product accomplishes the intended purpose in the intended environment—that it meets the expectations of the customer and other stakeholders as shown through performance of a test, analysis, inspection, or demonstration.

Verification testing relates back to the approved requirements set and can be performed at different stages in the product life cycle. The approved specifications, drawings, parts lists, and other configuration documentation establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, later modifications could be costly or cause major performance problems.

Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on end products for the purpose of determining the effectiveness and suitability of the product for use in mission operations by typical users. Validation can be performed in each development phase using phase products (e.g., models) and not only at delivery using end products.

It is appropriate for verification and validation methods to differ between phases as designs advance. The ultimate success of a program or project may relate to the frequency and diligence of validation efforts during the design process, especially in Pre-Phase A and Phase A, during which corrections in the direction of product design might still be made cost-effectively. The question should be continually asked, "Are we building the right product for our users and other stakeholders?" The selection of the verification or validation method is based on engineering judgment as to which is the most effective way to reliably show the product's conformance to requirements or that it will operate as intended and described in the ConOps.

2.5 Cost Effectiveness Considerations

The objective of systems engineering is to see that the system is designed, built, and can be operated so that it accomplishes its purpose safely in the most cost-effective way possible considering performance, cost, schedule, and risk. A cost-effective and safe system should provide a particular kind of balance between effectiveness and cost. This balance is an indefinite one because there are usually many designs that meet the cost-effectiveness condition.

Design trade studies, an important part of the systems engineering process, often attempt to find designs that provide the best combination of cost and effectiveness. At times there are alternatives that either reduce costs without reducing effectiveness or increase effectiveness without increasing cost. In such "win-win" cases, the systems engineer's decision is easy. When the alternatives in a design trade study require trading cost for effectiveness, the decisions become harder.

NASA SYSTEMS ENGINEERING HANDBOOK 11
2.0  Fundamentals of Systems Engineering

THE SYSTEMS ENGINEER’S DILEMMA

At each cost-effective solution:

• To reduce cost at constant risk, performance must be reduced.
• To reduce risk at constant cost, performance must be reduced.
• To reduce cost at constant performance, higher risks must be accepted.
• To reduce risk at constant performance, higher costs must be accepted.

In this context, time in the schedule is often a critical resource, so that schedule behaves like a kind of cost.
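A toy trade study makes the dilemma concrete. The alternatives and numbers below are invented for illustration; the sketch keeps only the non-dominated (Pareto-efficient) designs, among which any further choice must trade one attribute for another:

```python
# Hypothetical design alternatives: (name, cost, performance, risk).
# Lower cost and risk are better; higher performance is better.
alternatives = [
    ("A", 100, 0.90, 0.20),
    ("B", 120, 0.95, 0.20),  # trades higher cost for higher performance
    ("C", 110, 0.88, 0.25),  # worse than A on every attribute
    ("D", 100, 0.90, 0.10),  # same cost/performance as A, lower risk
]

def dominates(x, y):
    """True if x is at least as good as y on every attribute and
    strictly better on at least one."""
    _, cx, px, rx = x
    _, cy, py, ry = y
    at_least_as_good = cx <= cy and px >= py and rx <= ry
    strictly_better = cx < cy or px > py or rx < ry
    return at_least_as_good and strictly_better

# Keep only alternatives not dominated by any other alternative.
pareto = [x for x in alternatives
          if not any(dominates(y, x) for y in alternatives)]
print([name for name, *_ in pareto])
```

Alternative C drops out because A beats it everywhere, and D supersedes A outright; the surviving candidates differ only by trading cost and risk against performance, which is exactly where the harder decisions begin.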

FIGURE 2.5-1 shows that the life cycle costs of a program or project tend to get "locked in" early in design and development. The cost curves clearly show that late identification of and fixes to problems cost considerably more later in the life cycle. Conversely, descopes taken later versus earlier in the project life cycle result in reduced cost savings. This figure, obtained from the Defense Acquisition University, is an example of how these costs are determined by the early concepts and designs. The numbers will vary from project to project, but the general shape of the curves and the message they send will be similar. For example, the figure shows that during design, only about 15% of the costs might be expended, but the design itself will commit about 75% of the life cycle costs. This is because the way the system is designed will determine how expensive it will be to test, manufacture, integrate, operate, and sustain. If these factors have not been considered during design, they pose significant cost risks later in the life cycle. Also note that the cost to change the design increases later in the life cycle. If the project waits until verification to do any type of test or analysis, any problems found will have a significant cost impact to redesign and reverify.

The technical team may have to choose among designs that differ in terms of numerous attributes. A variety of methods have been developed that can be used to help uncover preferences between attributes and to quantify subjective assessments of relative value. When this can be done, trades between attributes can be assessed quantitatively. Often, however, the attributes are incompatible. In the end, decisions need to be made in spite of the given variety of attributes. There are several decision analysis techniques (Section 6.8) that can aid in complex decision analysis. The systems engineer should always keep in mind the information that needs to be available to help the decision-makers choose the most cost-effective option.

2.6 Human Systems Integration (HSI) in the SE Process

As noted at the beginning of NPR 7123.1, the "systems approach is applied to all elements of a system (i.e., hardware, software, human systems integration)." In short, the systems engineering approach must equally address and integrate these three key elements: hardware, software, and human systems


[Figure: Cumulative percentage of life cycle cost plotted against time, from MCR through DR/DRR. The committed life cycle cost curve climbs quickly through the early milestones (roughly 45%, 75%, and 90%, approaching 100% later), while the costs actually expended climb slowly: about 8% in concept, 15% through design, 20% in development, 50% through production/test, and 100% through operations and disposal. The cost to change the design direction grows correspondingly, from about 3-6x during design, to 20-100x during production/test, to 500-1000x during operations through disposal.

MCR  Mission Concept Review
SRR  System Requirements Review
SDR  System Definition Review
PDR  Preliminary Design Review
CDR  Critical Design Review
SIR  System Integration Review
ORR  Operational Readiness Review
DR/DRR  Decommissioning/Disposal Readiness Review

Adapted from INCOSE-TP-2003-002-04, 2015]

FIGURE 2.5-1  Life-Cycle Cost Impacts from Early Phase Decision-Making

integration. Therefore, the human element is something that integration and systems engineering processes must address. The definition of "system" in NPR 7123.1 is inclusive; i.e., a system is "the combination of elements that function together to produce the capability required to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose." For additional information and guidance on this, refer to Section 2.6 of the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.

2.7 Competency Model for Systems Engineers

TABLE 2.7-1 provides a summary of the Competency Model for Systems Engineering. For more information on the NASA SE Competency Model, refer to http://appel.nasa.gov/competency-model/.

There are four levels of proficiency associated with each of these competencies:

• Team Practitioner/Technical Engineer
• Team Lead/Subsystem Lead
• Project Systems Engineer
• Chief Engineer


TABLE 2.7-1  NASA Systems Engineering Competency Model

Competency Area | Competency | Description
SE 1.0 SE 1.1 Eliciting and defining use cases, scenarios, concept of operations and stakeholder
System Stakeholder expectations. This includes identifying stakeholders, establishing support strategies,
Design Expectation establishing a set of Measures of Effectiveness (MOEs), validating stakeholder
Definition & expectation statements, and obtaining commitments from the customer and other
Management stakeholders, as well as using the baselined stakeholder expectations for product
validation during product realization

SE 1.2 Transforming the baseline stakeholder expectations into unique, quantitative, and
Technical measurable technical requirements expressed as “shall” statements that can be
Requirements used for defining the design solution. This includes analyzing the scope of the
Definition technical problems to be solved, defining constraints affecting the designs, defining
the performance requirements, validating the resulting technical requirement
statements, defining the Measures of Performance (MOPs) for each MOE, and
defining appropriate Technical Performance Measures (TPMs) by which technical
progress will be assessed.

SE 1.3 Transforming the defined set of technical requirements into a set of logical
Logical decomposition models and their associated set of derived technical requirements for
Decomposition lower levels of the system, and for input to the design solution efforts. This includes
decomposing and analyzing by function, time, behavior, data flow, object, and other
models. It also includes allocating requirements to these decomposition models,
resolving conflicts between derived requirements as revealed by the models,
defining a system architecture for establishing the levels of allocation, and validating
the derived technical requirements.

SE 1.4 Translating the decomposition models and derived requirements into one or more
Design design solutions, and using the Decision Analysis process to analyze each alternative
Solution and for selecting a preferred alternative that will satisfy the technical requirements.
Definition A full technical data package is developed describing the selected solution. This
includes generating a full design description for the selected solution; developing a
set of ‘make-to,’ ‘buy-to,’ and ‘reuse-to’ specifications; and initiating the development or
acquisition of system products and enabling products.

SE 2.0 SE 2.1 Generating a specific product through buying, making, or reusing so as to satisfy
Product Product the design requirements. This includes preparing the implementation strategy;
Realization Implementation building or coding the product; reviewing vendor technical information; inspecting
delivered, built, or reused products; and preparing product support documentation
for integration.

SE 2.2 Assembling and integrating lower-level validated end products into the desired end
Product product of the higher-level product. This includes preparing the product integration
Integration strategy, performing detailed planning, obtaining products to integrate, confirming
that the products are ready for integration, preparing the integration environment,
and preparing product support documentation.

SE 2.3 Proving the end product conforms to its requirements. This includes preparing for
Product the verification efforts, analyzing the outcomes of verification (including identifying
Verification anomalies and establishing recommended corrective actions), and preparing a
product verification report providing the evidence of product conformance with the
applicable requirements.

SE 2.0 SE 2.4 Confirming that a verified end product satisfies the stakeholder expectations for
Product Product its intended use when placed in its intended environment and ensuring that any
Realization Validation anomalies discovered during validation are appropriately resolved prior to product
transition. This includes preparing to conduct product validation, performing
the product validation, analyzing the results of validation (including identifying
anomalies and establishing recommended corrective actions), and preparing a
product validation report providing the evidence of product conformance with the
stakeholder expectations baseline.

SE 2.5 Transitioning the verified and validated product to the customer at the next level
Product in the system structure. This includes preparing to conduct product transition,
Transition evaluating the product and enabling product readiness for product transition,
preparing the product for transition (including handling, storing, and shipping
preparation), preparing sites, and generating required documentation to accompany
the product

SE 3.0 SE 3.1 Planning for the application and management of each common technical process,
Technical Technical as well as identifying, defining, and planning the technical effort necessary to
Management Planning meet project objectives. This includes preparing or updating a planning strategy
for each of the technical processes, and determining deliverable work products
from technical efforts; identifying technical reporting requirements; identifying
entry and success criteria for technical reviews; identifying product and process
measures to be used; identifying critical technical events; defining cross domain
interoperability and collaboration needs; defining the data management approach;
identifying the technical risks to be addressed in the planning effort; identifying
tools and engineering methods to be employed; and defining the approach to
acquire and maintain technical expertise needed. This also includes preparing the
Systems Engineering Management Plan (SEMP) and other technical plans; obtaining
stakeholder commitments to the technical plans; and issuing authorized technical
work directives to implement the technical work

SE 3.2 Managing the product requirements, including providing bidirectional traceability,
Requirements and managing changes to establish requirement baselines over the life cycle of the
Management system products. This includes preparing or updating a strategy for requirements
management; selecting an appropriate requirements management tool; training
technical team members in established requirement management procedures;
conducting expectation and requirements traceability audits; managing expectation
and requirement changes; and communicating expectation and requirement change
information

SE 3.3 Establishing and using formal interface management to maintain internal and
Interface external interface definition and compliance among the end products and enabling
Management products. This includes preparing interface management procedures, identifying
interfaces, generating and maintaining interface documentation, managing changes
to interfaces, disseminating interface information, and conducting interface control

SE 3.4 Technical Risk Management: Examining on a continual basis the risks of technical deviations from the plans, and identifying potential technical problems before they occur. Planning, invoking, and performing risk-handling activities as needed across the life of the product or project to mitigate impacts on meeting technical objectives. This includes developing the strategy for technical risk management; identifying technical risks and conducting technical risk assessment; preparing for technical risk mitigation; monitoring the status of each technical risk; and implementing technical risk mitigation and contingency action plans when applicable thresholds have been triggered.

(continued)

NASA SYSTEMS ENGINEERING HANDBOOK 15
2.0  Fundamentals of Systems Engineering

SE 3.0 Technical Management (continued)

SE 3.5 Configuration Management: Identifying the configuration of the product at various points in time, systematically controlling changes to the configuration of the product, maintaining the integrity and traceability of product configuration, and preserving the records of the product configuration throughout its life cycle. This includes establishing configuration management strategies and policies, identifying baselines to be under configuration control, maintaining the status of configuration documentation, and conducting configuration audits.

SE 3.6 Technical Data Management: Identifying and controlling product-related data throughout its life cycle; acquiring, accessing, and distributing data needed to develop, manage, operate, support, and retire system products; managing and disposing of data as records; analyzing data use; obtaining technical data feedback for managing the contracted technical efforts; assessing the collection of appropriate technical data and information; maintaining the integrity and security of the technical data; effectively managing authoritative data that defines, describes, analyzes, and characterizes a product life cycle; and ensuring consistent, repeatable use of effective Product Data and Life-cycle Management processes, best practices, interoperability approaches, methodologies, and traceability. This includes establishing technical data management strategies and policies; maintaining revision, status, and history of stored technical data and associated metadata; providing approved, published technical data; providing technical data to authorized parties; and collecting and storing required technical data.

SE 3.7 Technical Assessment: Monitoring progress of the technical effort and providing status information for support of the system design, product realization, and technical management efforts. This includes developing technical assessment strategies and policies; assessing technical work productivity; assessing product quality; tracking and trending technical metrics; and conducting technical, peer, and life cycle reviews.

SE 3.8 Technical Decision Analysis: Evaluating technical decision issues, identifying decision criteria, identifying alternatives, analyzing alternatives, and selecting alternatives. Performed throughout the system life cycle to formulate candidate decision alternatives and evaluate their impacts on health and safety, technical, cost, and schedule performance. This includes establishing guidelines for determining which technical issues are subject to formal analysis processes; defining the criteria for evaluating alternative solutions; identifying alternative solutions to address decision issues; selecting evaluation methods; selecting recommended solutions; and reporting the results and findings with recommendations, impacts, and corrective actions.
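One common evaluation method for technical decision analysis of this kind is a weighted decision matrix. The sketch below is purely illustrative; the criteria, weights, scores, and alternative names are invented for the example and are not drawn from NPR 7123.1:

```python
# Illustrative weighted decision matrix for technical decision analysis.
# Criteria, weights, and scores are invented for this example.
CRITERIA = {"safety": 0.4, "cost": 0.3, "schedule": 0.3}  # weights sum to 1.0

def weighted_score(alternative: dict) -> float:
    """Weighted sum of per-criterion scores (higher is better)."""
    return sum(CRITERIA[c] * alternative[c] for c in CRITERIA)

alternatives = {
    "Design A": {"safety": 8, "cost": 6, "schedule": 7},
    "Design B": {"safety": 9, "cost": 4, "schedule": 6},
}
# The recommended solution is the highest-scoring alternative.
recommended = max(alternatives, key=lambda name: weighted_score(alternatives[name]))
```

In this toy case Design A scores 7.1 against Design B's 6.6, so it would be the recommended solution, reported along with the criteria and impacts that drove the selection.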

3.0 NASA Program/Project Life Cycle
One of the fundamental concepts used within NASA for the management of major systems is the program/project life cycle, which categorizes everything that should be done to accomplish a program or project into distinct phases that are separated by Key Decision Points (KDPs). KDPs are the events at which the decision authority determines the readiness of a program/project to progress to the next phase of the life cycle (or to the next KDP). Phase boundaries are defined so that they provide natural points for “go” or “no-go” decisions. Decisions to proceed may be qualified by liens that should be removed within an agreed-to time period. A program or project that fails to pass a KDP may be allowed to try again later after addressing deficiencies that precluded passing the KDP, or it may be terminated.

All systems start with the recognition of a need or the discovery of an opportunity and proceed through various stages of development to the end of the project. While the most dramatic impacts of the analysis and optimization activities associated with systems engineering are obtained in the early stages, decisions that affect cost continue to be amenable to the systems approach even as the end of the system lifetime approaches.

Decomposing the program/project life cycle into phases organizes the entire process into more manageable pieces. The program/project life cycle should provide managers with incremental visibility into the progress being made at points in time that fit with the management and budgetary environments.

For NASA projects, the life cycle is defined in the applicable governing document:

• For space flight projects: NPR 7120.5, NASA Space Flight Program and Project Management Requirements
• For information technology: NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Management Requirements
• For NASA research and technology: NPR 7120.8, NASA Research and Technology Program and Project Management Requirements
• For software: NPR 7150.2, NASA Software Engineering Requirements


For example, NPR 7120.5 defines the major NASA life cycle phases as Formulation and Implementation. For space flight systems projects, the NASA life cycle phases of Formulation and Implementation divide into the following seven incremental pieces. The phases of the project life cycle are:

Program Pre-Formulation:
• Pre-Phase A: Concept Studies

Program Formulation:
• Phase A: Concept and Technology Development
• Phase B: Preliminary Design and Technology Completion

Program Implementation:
• Phase C: Final Design and Fabrication
• Phase D: System Assembly, Integration and Test, Launch
• Phase E: Operations and Sustainment
• Phase F: Closeout

FIGURE 3.0-1 is taken from NPR 7120.5 and provides the life cycle for NASA space flight projects and identifies the KDPs and reviews that characterize the phases. More information concerning life cycles can be found in the NASA Expanded Guidance for SE document at https://nen.nasa.gov/web/se/doc-repository and in SP-2014-3705, NASA Space Flight Program and Project Management Handbook.

TABLE 3.0-1 is taken from NPR 7123.1 and represents the product maturity for the major SE products developed and matured during the product life cycle.

[Figure: flow chart of the NASA space flight project life cycle, showing the Formulation and Implementation phases (Pre-Phase A: Concept Studies through Phase F: Closeout); KDPs A through F; life-cycle gates, documents, and major events (FAD, FA, preliminary and baseline project plans, launch, end of mission, and final archival of data); the human space flight and robotic mission life-cycle reviews (MCR, SRR, SDR or MDR, PDR, CDR/PRR, SIR, ORR, FRR or MRR, PLAR, CERR, DR, DRR, and PFAR for re-flights); and supporting reviews (peer reviews, subsystem PDRs, subsystem CDRs, and system reviews). Footnotes note that flexibility is allowed in the timing, number, and content of reviews as long as equivalent information is provided at each KDP; that PRR is needed only when there are multiple copies of systems; that for robotic missions the SRR and MDR may be combined; that SAR generally applies to human space flight; that ASM timing is determined by the MDAA; and that red triangles represent life-cycle reviews requiring Standing Review Boards (SRBs).]

FIGURE 3.0-1 NASA Space Flight Project Life Cycle from NPR 7120.5E


TABLE 3.0-1  SE Product Maturity from NPR 7123.1
[Table: for each major SE product (stakeholder identification, concept definition, measure of effectiveness definition, cost and schedule for the technical effort, SEMP, requirements, Technical Performance Measures definition, architecture definition, allocation of requirements to the next lower level, required leading indicator trends, design solution definition, interface definitions, implementation plans (make/code, buy, reuse), integration plans, verification and validation plans and results, transportation criteria and instructions, operations plans, operational procedures, certification (flight/use), decommissioning plans, and disposal plans), the table shows the expected maturity state (Approach, Initial, Preliminary, Baseline, Update, Approve, or Final) at each life-cycle review (MCR, SRR, MDR/SDR, PDR, CDR, SIR, ORR, FRR, DR, and DRR), mapped to phases Pre-Phase A through Phase F and to the KDPs for uncoupled/loosely coupled programs, tightly coupled programs, projects, and single-project programs. Items marked ** are required products for that review. The SEMP is baselined at SRR for projects, tightly coupled programs, and single-project programs, and at MDR/SDR for uncoupled and loosely coupled programs.]
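Tables of this shape lend themselves to a small machine-readable encoding for review-gate checks. The sketch below is hypothetical: it encodes just two rows under one plausible reading (SEMP and requirements preliminary at MCR, baselined and required at SRR), not the authoritative table:

```python
# Hypothetical encoding of two product-maturity rows; a trailing "*" marks
# a product required at that review (the handbook table uses "**").
MATURITY = {
    "SEMP":         {"MCR": "Preliminary", "SRR": "Baseline*", "PDR": "Update"},
    "Requirements": {"MCR": "Preliminary", "SRR": "Baseline*", "PDR": "Update"},
}

def required_products(review: str) -> list:
    """Products whose maturity entry at the given review is marked required."""
    return sorted(p for p, states in MATURITY.items()
                  if states.get(review, "").endswith("*"))
```

Under this encoding, a gate check at SRR would flag both the SEMP and the requirements as required products, while PDR would require neither of these two rows.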


3.1 Program Formulation
The program Formulation Phase establishes a cost-effective program that is demonstrably capable of meeting Agency and mission directorate goals and objectives. The program Formulation Authorization Document (FAD) authorizes a Program Manager (PM) to initiate the planning of a new program and to perform the analyses required to formulate a sound program plan. The lead systems engineer provides the technical planning and concept development for this phase of the program life cycle. Planning includes identifying the major technical reviews that are needed and associated entrance and exit criteria. Major reviews leading to approval at KDP I are the SRR, SDR, PDR, and governing Program Management Council (PMC) review. A summary of the required gate products for the program Formulation Phase can be found in the governing NASA directive (e.g., NPR 7120.5 for space flight programs, NPR 7120.7 for IT projects, NPR 7120.8 for research and technology projects). Formulation for all program types is the same, involving one or more program reviews followed by KDP I where a decision is made approving a program to begin implementation.

3.2 Program Implementation
During the program Implementation phase, the PM works with the Mission Directorate Associate Administrator (MDAA) and the constituent project

SPACE FLIGHT PROGRAM FORMULATION

Purpose
To establish a cost-effective program that is demonstrably capable of meeting Agency and mission
directorate goals and objectives

Typical Activities and Their Products for Space Flight Programs
• Identify program stakeholders and users
• Develop program requirements based on user expectations and allocate them to initial projects
• Identify NASA risk classification
• Define and approve program acquisition strategies
• Develop interfaces to other programs
• Start developing technologies that cut across multiple projects within the program
• Derive initial cost estimates and approve a program budget based on the project’s life cycle costs
• Perform required program Formulation technical activities defined in NPR 7120.5
• Satisfy program Formulation reviews’ entrance/success criteria detailed in NPR 7123.1
• Develop a clear vision of the program’s benefits and usage in the operational era and document it in a ConOps
Reviews
• MCR (pre-Formulation)
• SRR
• SDR


SPACE FLIGHT PROGRAM IMPLEMENTATION

Purpose
To execute the program and constituent projects and ensure that the program continues to contribute to
Agency goals and objectives within funding constraints

Typical Activities and Their Products
• Initiate projects through direct assignment or competitive process (e.g., Request for Proposal (RFP),
Announcement of Opportunity (AO))
• Monitor project’s formulation, approval, implementation, integration, operation, and ultimate
decommissioning
• Adjust program as resources and requirements change
• Perform required program Implementation technical activities from NPR 7120.5
• Satisfy program Implementation reviews’ entrance/success criteria from NPR 7123.1

Reviews
• PSR/PIR (uncoupled and loosely coupled programs only)
• Reviews synonymous (not duplicative) with the project reviews in the project life cycle (see FIGURE 3.0-4)
through Phase D (single-project and tightly coupled programs only)

managers to execute the program plan cost-effectively. Program reviews ensure that the program continues to contribute to Agency and mission directorate goals and objectives within funding constraints. A summary of the required gate products for the program Implementation Phase can be found in the governing NASA directive; e.g., NPR 7120.5 for space flight programs. The program life cycle has two different implementation paths, depending on program type. Each implementation path has different types of major reviews. It is important for the systems engineer to know what type of program a project falls under so that the appropriate scope of the technical work, documentation requirements, and set of reviews can be determined.

3.3 Project Pre-Phase A: Concept Studies
The purpose of Pre-Phase A is to produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected. During Pre-Phase A, a study or proposal team analyzes a broad range of mission concepts that can fall within technical, cost, and schedule constraints and that contribute to program and Mission Directorate goals and objectives. Pre-Phase A effort could include focused examinations on high-risk or high technology development areas. These advanced studies, along with interactions with customers and other potential stakeholders, help the team to identify promising mission concept(s). The key stakeholders (including the customer) are determined and


SPACE FLIGHT PRE‑PHASE A: CONCEPT STUDIES

Purpose
To produce a broad spectrum of ideas and alternatives for missions from which new programs and projects can
be selected. Determine feasibility of desired system; develop mission concepts; draft system-level requirements;
assess performance, cost, and schedule feasibility; identify potential technology needs and scope.

Typical Activities and Products
• Review/identify any initial customer requirements or scope of work, which may include:
>> Mission
>> Science
>> Top-level system
• Identify and involve users and other stakeholders
>> Identify key stakeholders for each phase of the life cycle
>> Capture and baseline expectations as Needs, Goals, and Objectives (NGOs)
>> Define measures of effectiveness
• Develop and baseline the Concept of Operations
>> Identify and perform trade-offs and analyses of alternatives (AoA)
>> Perform preliminary evaluations of possible missions
• Identify risk classification
• Identify initial technical risks
• Identify the roles and responsibilities in performing mission objectives (i.e., technical team, flight, and
ground crew) including training
• Develop plans
>> Develop preliminary SEMP
>> Develop and baseline Technology Development Plan
>> Define preliminary verification and validation approach
• Prepare program/project proposals, which may include:
>> Mission justification and objectives;
>> A ConOps that exhibits clear understanding of how the program’s outcomes will cost-effectively
satisfy mission objectives;
>> High-level Work Breakdown Structures (WBSs);
>> Life cycle rough order of magnitude (ROM) cost, schedule, and risk estimates; and
>> Technology assessment and maturation strategies.
• Satisfy MCR entrance/success criteria from NPR 7123.1

Reviews
• MCR
• Informal proposal review


expectations for the project are gathered from them. If feasible concepts can be found, one or more may be selected to go into Phase A for further development. Typically, the system engineers are heavily involved in the development and assessment of the concept options. In projects governed by NPR 7120.5, the descope options define what the system can accomplish if the resources are not available to accomplish the entire mission. This could be in the form of fewer instruments, a less ambitious mission profile, accomplishing only a few goals, or using cheaper, less capable technology. Descope options can also reflect what the mission can accomplish in case a hardware failure results in the loss of a portion of the spacecraft architecture; for example, what an orbiter can accomplish after the loss of a lander. The success criteria are reduced to correspond with a descoped mission.

Descope options are developed when the NGOs or other stakeholder expectation documentation is developed. The project team develops a preliminary set of mission descope options as a gate product for the MCR, but these preliminary descope options are not baselined or maintained. They are kept in the documentation archive in case they are needed later in the life cycle.

It is important in Pre-Phase A to define an accurate group of stakeholders and users to help ensure that mission goals and operations concepts meet the needs and expectations of the end users. In addition, it is important to estimate the composition of the technical team and identify any unique facility or personnel requirements.

Advanced studies may extend for several years and are typically focused on establishing mission goals and formulating top-level system requirements and ConOps. Conceptual designs may be developed to demonstrate feasibility and support programmatic estimates. The emphasis is on establishing feasibility and desirability rather than optimality. Analyses and designs are accordingly limited in both depth and number of options, but each option should be evaluated for its implications through the full life cycle, i.e., through Operations and Disposal. It is important in Pre-Phase A to develop and mature a clear vision of what problems the proposed program will address, how it will address them, and how the solution will be feasible and cost-effective.

3.4 Project Phase A: Concept and Technology Development
The purpose of Phase A is to develop a proposed mission/system architecture that is credible and responsive to program expectations, requirements, and constraints on the project, including resources. During Phase A, activities are performed to fully develop a baseline mission concept, begin or assume responsibility for the development of needed technologies, and clarify expected reliance on human elements to achieve full system functionality or autonomous system development. This work, along with interactions with stakeholders, helps mature the mission concept and the program requirements on the project. Systems engineers are heavily involved during this phase in the development and assessment of the architecture and the allocation of requirements to the architecture elements.

In Phase A, a team, often associated with a program or informal project office, readdresses the mission concept first developed in Pre-Phase A to ensure that the project justification and practicality are sufficient to warrant a place in NASA's budget. The team's effort focuses on analyzing mission requirements and establishing a mission architecture. Activities become formal, and the emphasis shifts toward optimizing the concept design. The effort addresses more depth and considers many alternatives. Goals and objectives are

SPACE FLIGHT PHASE A:
CONCEPT AND TECHNOLOGY DEVELOPMENT

Purpose
To determine the feasibility and desirability of a suggested new system and establish an initial baseline
compatibility with NASA’s strategic plans. Develop final mission concept, system-level requirements,
needed system technology developments, and program/project technical management plans.

Typical Activities and Their Products
• Review and update documents baselined in Pre-Phase A if needed
• Monitor progress against plans
• Develop and baseline top-level requirements and constraints including internal and external interfaces,
integrated logistics and maintenance support, and system software functionality
• Allocate system requirements to functions and to next lower level
• Validate requirements
• Baseline plans
>> Systems Engineering Management Plan
>> Human Systems Integration Plan
>> Control plans such as the Risk Management Plan, Configuration Management Plan, Data
Management Plan, Safety and Mission Assurance Plan, and Software Development or Management
Plan (See NPR 7150.2)
>> Other crosscutting and specialty plans such as environmental compliance documentation,
acquisition surveillance plan, contamination control plan, electromagnetic interference/
electromagnetic compatibility control plan, reliability plan, quality control plan, parts management
plan, logistics plan
• Develop preliminary Verification and Validation Plan
• Establish human rating plan and perform initial evaluations
• Develop and baseline mission architecture
>> Develop breadboards, engineering units, or models to identify and reduce high-risk concepts
>> Demonstrate that credible, feasible design(s) exist
>> Perform and archive trade studies
>> Initiate studies on human systems interactions
• Initiate environmental evaluation/National Environmental Policy Act process
• Develop initial orbital debris assessment (NASA-STD-8719.14)
• Perform technical management
>> Provide technical cost estimate and range and develop system-level cost-effectiveness model
>> Define the WBS
>> Develop SOWs
>> Acquire systems engineering tools and models

(continued)


>> Establish technical resource estimates
• Identify, analyze and update risks
• Perform required Phase A technical activities from NPR 7120.5 as applicable
• Satisfy Phase A reviews’ entrance/success criteria from NPR 7123.1

Reviews
• SRR
• MDR/SDR

solidified, and the project develops more definition in the system requirements, top-level system architecture, and ConOps. Conceptual designs and analyses (including engineering units and physical models, as appropriate) are developed and exhibit more engineering detail than in Pre-Phase A. Technical risks are identified in more detail, and technology development needs become focused. A Systems Engineering Management Plan (SEMP) is baselined in Phase A to document how NASA systems engineering requirements and practices of NPR 7123.1 will be addressed throughout the program life cycle.

In Phase A, the effort focuses on allocating functions to particular items of hardware, software, and to humans. System functional and performance requirements, along with architectures and designs, become firm as system tradeoffs and subsystem tradeoffs iterate back and forth, while collaborating with subject matter experts in the effort to seek out more cost-effective designs. A method of determining life cycle cost (i.e., a system-level cost-effectiveness model) is refined in order to compare cost impacts for each of the different alternatives. (Trade studies should precede, rather than follow, system design decisions.) Major products to this point include an accepted functional baseline for the system and its major end items. The project team conducts the security categorization of IT systems required by NPR 2810.1 and Federal Information Processing Standard Publication (FIPS PUB) 199. The effort also produces various engineering and management plans to prepare for managing the project's downstream processes such as verification and operations.

3.5 Project Phase B: Preliminary Design and Technology Completion
The purpose of Phase B is for the project team to complete the technology development, engineering prototyping, heritage hardware and software assessments, and other risk-mitigation activities identified in the project Formulation Agreement (FA) and the preliminary design. The project demonstrates that its planning, technical, cost, and schedule baselines developed during Formulation are complete and consistent; that the preliminary design complies with its requirements; that the project is sufficiently mature to begin Phase C; and that the cost and schedule are adequate to enable mission success with acceptable risk. It is at the conclusion of this phase that the project and the Agency commit to accomplishing the project's objectives for a given cost and schedule. For projects with a Life Cycle Cost (LCC) greater than $250 million, this commitment is made with the Congress and the U.S. Office of Management and Budget (OMB). This external commitment is the Agency Baseline Commitment (ABC). Systems


SPACE FLIGHT PHASE B:
PRELIMINARY DESIGN AND TECHNOLOGY COMPLETION

Purpose
To define the project in enough detail to establish an initial baseline capable of meeting mission needs.
Develop system structure end product (and enabling product) requirements and generate a preliminary
design for each system structure end product.

Typical Activities and Their Products
• Review and update documents baselined in previous phases
• Monitor progress against plans
• Develop the preliminary design
>> Identify one or more feasible preliminary designs including internal and external interfaces
>> Perform analyses of candidate designs and report results
>> Conduct engineering development tests as needed and report results
>> Perform human systems integration assessments
>> Select a preliminary design solution
• Develop operations plans based on matured ConOps
>> Define system operations as well as Principal Investigator (PI)/contract proposal management,
review, and access and contingency planning
• Report technology development results
• Update cost range estimate and schedule data (Note that after PDR changes are incorporated and
costed, at KDP C this will turn into the Agency Baseline Commitment)
• Improve fidelity of models and prototypes used in evaluations
• Identify and update risks
• Develop appropriate level safety data package and security plan
• Develop preliminary plans
>> Orbital Debris Assessment
>> Decommissioning Plan
>> Disposal Plan
• Perform required Phase B technical activities from NPR 7120.5 as applicable
• Satisfy Phase B reviews’ entrance/success criteria from NPR 7123.1

Reviews
• PDR
• Safety review


engineers are involved in this phase to ensure the preliminary designs of the various systems will work together, are compatible, and are likely to meet the customer expectations and applicable requirements.

During Phase B, activities are performed to establish an initial project baseline, which (according to NPR 7120.5 and NPR 7123.1) includes “a formal flow down of the project-level performance requirements to a complete set of system and subsystem design specifications for both flight and ground elements” and “corresponding preliminary designs.” The technical requirements should be sufficiently detailed to establish firm schedule and cost estimates for the project. It also should be noted, especially for AO-driven projects, that Phase B is where the top-level requirements and the requirements flowed down to the next level are finalized and placed under configuration control. While the requirements should be baselined in Phase A, changes resulting from the trade studies and analyses in late Phase A and early Phase B may result in changes or refinement to system requirements.

It is important in Phase B to validate design decisions against the original goals and objectives and ConOps. All aspects of the life cycle should be considered, including design decisions that affect training, operations resource management, human factors, safety, habitability and environment, and maintainability and supportability.

The Phase B baseline consists of a collection of evolving baselines covering technical and business aspects of the project: system (and subsystem) requirements and specifications, designs, verification and operations plans, and so on in the technical portion of the baseline; and schedules, cost projections, and management plans in the business portion. Establishment of baselines implies the implementation of configuration management procedures. (See Section 6.5.)

Phase B culminates in a series of PDRs, containing the system-level PDR and PDRs for lower level end items as appropriate. The PDRs reflect the successive refinement of requirements into designs. Design issues uncovered in the PDRs should be resolved so that final design can begin with unambiguous design-to specifications. From this point on, almost all changes to the baseline are expected to represent successive refinements, not fundamental changes. As noted in FIGURE 2.5-1, significant design changes at and beyond Phase B become increasingly expensive.

3.6 Project Phase C: Final Design and Fabrication
The purpose of Phase C is to complete and document the detailed design of the system that meets the detailed requirements and to fabricate, code, or otherwise realize the products. During Phase C, activities are performed to establish a complete design (product baseline), fabricate or produce hardware, and code software in preparation for integration. Trade studies continue and results are used to validate the design against project goals, objectives, and ConOps. Engineering test units more closely resembling actual hardware are built and tested to establish confidence that the design will function in the expected environments. Human subjects representing the user population participate in operations evaluations of the design, use, maintenance, training procedures, and interfaces. Engineering specialty and crosscutting analysis results are integrated into the design, and the manufacturing process and controls are defined and valid. Systems engineers are involved in this phase to ensure the final detailed designs of the various systems will work together, are compatible, and are likely to meet the customer expectations and applicable requirements. During fabrication, the systems engineer is available to answer questions and work any interfacing issues that might arise.

NASA SYSTEMS ENGINEERING HANDBOOK 27
3.0  NASA Program/Project Life Cycle

SPACE FLIGHT PHASE C: FINAL DESIGN AND FABRICATION

Purpose
To complete the detailed design of the system (and its associated subsystems, including its operations
systems), fabricate hardware, and code software. Generate final designs for each system structure end
product.

Typical Activities and Their Products
• Review and update documents baselined in previous phases
• Monitor progress against plans
• Develop and document hardware and software detailed designs
>> Fully mature and define selected preliminary designs
>> Add remaining lower level design specifications to the system architecture
>> Perform and archive trade studies
>> Perform development testing at the component or subsystem level
>> Fully document final design and develop data package
• Develop/refine and baseline plans
>> Interface definitions
>> Implementation plans
>> Integration plans
>> Verification and validation plans
>> Operations plans
• Develop/refine preliminary plans
>> Decommissioning and disposal plans, including human capital transition
>> Spares
>> Communications (including command and telemetry lists)
• Develop/refine procedures for
>> Integration
>> Manufacturing and assembly
>> Verification and validation
• Fabricate (or code) the product
• Identify and update risks
• Monitor project progress against project plans
• Prepare launch site checkout and post-launch activation and checkout
• Finalize appropriate level safety data package and updated security plan
• Identify opportunities for preplanned product improvement
• Refine orbital debris assessment
• Perform required Phase C technical activities from NPR 7120.5 as applicable
• Satisfy Phase C review entrance/success criteria from NPR 7123.1


Reviews
• CDR
• PRR
• SIR
• Safety review

All the planning initiated back in Phase A for the testing and operational equipment, processes and analysis, integration of the crosscutting and engineering specialty analysis, and manufacturing processes and controls is implemented. Configuration management continues to track and control design changes as detailed interfaces are defined. At each step in the successive refinement of the final design, corresponding integration and verification activities are planned in greater detail. During this phase, technical parameters, schedules, and budgets are closely tracked to ensure that undesirable trends (such as an unexpected growth in spacecraft mass or increase in its cost) are recognized early enough to take corrective action. These activities focus on preparing for the CDR, Production Readiness Review (PRR) (if required), and the SIR.

Phase C contains a series of CDRs containing the system-level CDR and CDRs corresponding to the different levels of the system hierarchy. A CDR for each end item should be held prior to the start of fabrication/production for hardware and prior to the start of coding of deliverable software products. Typically, the sequence of CDRs reflects the integration process that will occur in the next phase; that is, from lower level CDRs to the system-level CDR. Projects, however, should tailor the sequencing of the reviews to meet the needs of the project. If there is a production run of products, a PRR will be performed to ensure the production plans, facilities, and personnel are ready to begin production. Phase C culminates with an SIR. Training requirements and preliminary mission operations procedures are created and baselined. The final product of this phase is a product ready for integration.

3.7 Project Phase D: System Assembly, Integration and Test, Launch

The purpose of Phase D is to assemble, integrate, verify, validate, and launch the system. These activities focus on preparing for the Flight Readiness Review (FRR)/Mission Readiness Review (MRR). Activities include assembly, integration, verification, and validation of the system, including testing the flight system to expected environments within margin. Other activities include updating operational procedures, rehearsals and training of operating personnel and crew members, and implementation of the logistics and spares planning. For flight projects, the focus of activities then shifts to prelaunch integration and launch. Systems engineering is involved in all aspects of this phase including answering questions, providing advice, resolving issues, assessing results of the verification and validation tests, ensuring that the V&V results meet the customer expectations and applicable requirements, and providing information to decision makers for go/no-go decisions.

The planning for Phase D activities was initiated in Phase A. For IT projects, refer to the IT Systems Engineering Handbook. The planning for the activities should be performed as early as possible since


SPACE FLIGHT PHASE D:
SYSTEM ASSEMBLY, INTEGRATION AND TEST, LAUNCH

Purpose
To assemble and integrate the system (hardware, software, and humans), while developing confidence
that it will be able to meet the system requirements. Launch and prepare for operations. Perform system
end product implementation, assembly, integration and test, and transition to use.

Typical Activities and Their Products
• Update documents developed and baselined in previous phases
• Monitor project progress against plans
• Identify and update risks
• Integrate/assemble components according to the integration plans
• Perform verification and validation on assemblies according to the V&V Plan and procedures
>> Perform system qualification verifications, including environmental verifications
>> Perform system acceptance verifications and validation(s) (e.g., end-to-end tests encompassing all
elements; i.e., space element, ground system, data processing system)
>> Assess and approve verification and validation results
>> Resolve verification and validation discrepancies
>> Archive documentation for verifications and validations performed
>> Baseline verification and validation report
• Prepare and baseline
>> Operator’s manuals
>> Maintenance manuals
>> Operations handbook
• Prepare launch, operations, and ground support sites including training as needed
>> Train initial system operators and maintainers
>> Train on contingency planning
>> Confirm telemetry validation and ground data processing
>> Confirm system and support elements are ready for flight
>> Provide support to the launch and checkout of the system
>> Perform planned on-orbit operational verification(s) and validation(s)
• Document lessons learned
• Perform required Phase D technical activities from NPR 7120.5
• Satisfy Phase D reviews’ entrance/success criteria from NPR 7123.1

Reviews
• Test Readiness Reviews (TRRs)
• System Acceptance Review (SAR) or pre-Ship Review
• ORR


• FRR
• System functional and physical configuration audits
• Safety review
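The verification and validation bookkeeping implied by the bullets above (perform the verifications, resolve discrepancies, then baseline the report) can be sketched as a simple status ledger. This is an illustrative sketch only; the activity and status names are assumptions, not NPR terminology.

```python
# Illustrative V&V ledger: each verification/validation activity carries a
# status, and the V&V report can be baselined only when every result has
# been approved (i.e., all discrepancies have been resolved).
ledger = {
    "qualification: vibration":   "approved",
    "qualification: thermal-vac": "approved",
    "acceptance: end-to-end":     "discrepancy",
}

def can_baseline_vv_report(ledger):
    """True only when no open discrepancies or unapproved results remain."""
    return all(status == "approved" for status in ledger.values())

print(can_baseline_vv_report(ledger))  # -> False
ledger["acceptance: end-to-end"] = "approved"  # discrepancy resolved and approved
print(can_baseline_vv_report(ledger))  # -> True
```

A real project would also archive the documentation for each entry, as the activities list above requires.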

changes at this point can become costly. Phase D concludes with a system that has been shown to be capable of accomplishing the purpose for which it was created.

3.8 Project Phase E: Operations and Sustainment

The purpose of Phase E is to conduct the prime mission to meet the initially identified need and to maintain support for that need. The products of the phase are the results of the mission and performance of the system.

Systems engineering personnel continue to play a role during this phase since integration often overlaps with operations for complex systems. Some programs have repeated operations/flights which require configuration changes and new mission objectives with each occurrence. And systems with complex sustainment needs or human involvement will likely require evaluation and adjustments that may be beyond the scope of operators to perform. Specialty engineering disciplines, like maintainability and logistics servicing, will be performing tasks during this phase as well. Such tasks may require reiteration and/or recursion of the common systems engineering processes.

Systems engineering personnel also may be involved in in-flight anomaly resolution. Additionally, software development may continue well into Phase E. For example, software for a planetary probe may be developed and uplinked while in-flight. Another example would be new hardware developed for space station increments.

This phase encompasses the evolution of the system only insofar as that evolution does not involve major changes to the system architecture. Changes of that scope constitute new “needs,” and the project life cycle starts over. For large flight projects, there may be an extended period of cruise, orbit insertion, on-orbit assembly, and initial shakedown operations. Near the end of the prime mission, the project may apply for a mission extension to continue mission activities or attempt to perform additional mission objectives.

For additional information on systems engineering in Phase E, see Appendix T.

3.9 Project Phase F: Closeout

The purpose of Phase F is to implement the systems decommissioning and disposal planning and analyze any returned data and samples. The products of the phase are the results of the mission. The system engineer is involved in this phase to ensure all technical information is properly identified and archived, to answer questions, and to resolve issues as they arise.

Phase F deals with the final closeout of the system when it has completed its mission; the time at which this occurs depends on many factors. For a flight system that returns to Earth after a short mission duration, closeout may require little more than de-integrating the hardware and returning it to its owner. On flight projects of long duration, closeout may proceed according to established plans or may begin as a result of unplanned events, such as failures. Refer to NASA Policy Directive (NPD) 8010.3, Notification of Intent to Decommission or Terminate Operating Space Systems and Terminate Missions,


SPACE FLIGHT PHASE E: OPERATIONS AND SUSTAINMENT

Purpose
To conduct the mission and meet the initially identified need and maintain support for that need. Implement
the mission operations plan.

Typical Activities and Their Products
• Conduct launch vehicle performance assessment
• Commission and activate science instruments
• Conduct the intended prime mission(s)
• Provide sustaining support as planned
>> Implement spares plan
>> Collect engineering and science data
>> Train replacement operators and maintainers
>> Train the flight team for future mission phases (e.g., planetary landed operations)
>> Maintain and approve operations and maintenance logs
>> Maintain and upgrade the system
>> Identify and update risks
>> Address problem/failure reports
>> Process and analyze mission data
>> Apply for mission extensions, if warranted
• Prepare for deactivation, disassembly, decommissioning as planned (subject to mission extension)
• Capture lessons learned
• Complete post-flight evaluation reports
• Develop final mission report
• Perform required Phase E technical activities from NPR 7120.5
• Satisfy Phase E reviews’ entrance/success criteria from NPR 7123.1

Reviews
• Post-Launch Assessment Review (PLAR)
• Critical Event Readiness Review (CERR)
• Post-Flight Assessment Review (PFAR) (human space flight only)
• DR
• System upgrade review
• Safety review


PHASE F: CLOSEOUT

Purpose
To implement the systems decommissioning/disposal plan developed in Phase E and perform analyses of
the returned data and any returned samples.

Typical Activities and Their Products
• Dispose of the system and supporting processes
• Document lessons learned
• Baseline mission final report
• Archive data
• Perform required Phase F technical activities from NPR 7120.5
• Satisfy Phase F reviews’ entrance/success criteria from NPR 7123.1

Reviews
• DRR

for terminating an operating mission. Alternatively, technological advances may make it uneconomical to continue operating the system either in its current configuration or an improved one.

To limit space debris, NPR 8715.6, NASA Procedural Requirements for Limiting Orbital Debris, provides requirements for removing Earth-orbiting robotic satellites from their operational orbits at the end of their useful life. For Low Earth Orbit (LEO) missions, the satellite is usually deorbited. For small satellites, this is accomplished by allowing the orbit to slowly decay until the satellite eventually burns up in Earth’s atmosphere. Larger, more massive satellites and observatories should be designed to demise or deorbit in a controlled manner so that they can be safely targeted for impact in a remote area of the ocean. The Geostationary (GEO) satellites at 35,790 km above the Earth cannot be practically deorbited, so they are boosted to a higher orbit well beyond the crowded operational GEO orbit.

In addition to uncertainty about when this part of the phase begins, the activities associated with safe closeout of a system may be long and complex and may affect the system design. Consequently, different options and strategies should be considered during the project’s earlier phases along with the costs and risks associated with the different options.

3.10 Funding: The Budget Cycle

For a description of the NASA Budget Cycle, refer to the NASA Expanded Guidance for Systems Engineering document found at https://nen.nasa.gov/web/se/doc-repository. See also Section 5.8 of NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook.


3.11 Tailoring and Customization of NPR 7123.1 Requirements

In this section, the term requirements refers to the “shall” statements imposed from Agency directives. This discussion focuses on the tailoring of the requirements contained in NPR 7123.1.

3.11.1 Introduction

NASA policy recognizes the need to accommodate the unique aspects of each program or project to achieve mission success in an efficient and economical manner. Tailoring is a process used to accomplish this.

NPR 7123.1 defines tailoring as “the process used to seek relief from SE NPR requirements consistent with program or project objectives, allowable risk, and constraints.” Tailoring results in deviations or waivers (see NPR 7120.5, Section 3.5) to SE requirements and is documented in the next revision of the SEMP (e.g., via the Compliance Matrix).

Since NPR 7123.1 was written to accommodate programs and projects regardless of size or complexity, the NPR requirements leave considerable latitude for interpretation. Therefore, the term “customization” is introduced and is defined as “the modification of recommended SE practices that are used to accomplish the SE requirements.” Customization does not require waivers or deviations, but significant customization should be documented in the SEMP.

Tailoring and customization are essential systems engineering tools that are an accepted and expected part of establishing the proper SE NPR requirements for a program or project. Although tailoring is expected for all sizes of projects and programs, small projects present opportunities and challenges that are different from those of large, traditional projects such as the Shuttle, International Space Station, Hubble Space Telescope, and Mars Science Laboratory.

While the technical aspects of small projects are generally narrower and more focused, they can also be challenging when their objectives are to demonstrate advanced technologies or provide “one of a kind” capabilities. At the same time, their comparatively small budgets and restricted schedules dictate lean and innovative implementation approaches to project management and systems engineering. Tailoring and customization allow programs and projects to be successful in achieving technical objectives within cost and schedule constraints. The key is effective tailoring that reflects lessons learned and best practices. Tailoring the SE requirements and customizing the SE best practices to the specific needs of the project helps to obtain the desired benefits while eliminating unnecessary overhead. To accomplish this, an acceptable risk posture must be understood and agreed upon by the project, customer/stakeholder, Center management, and independent reviewers. Even with this foundation, however, the actual process of appropriately tailoring SE requirements and customizing NPR 7123.1 practices to a specific project can be complicated and arduous. Effective approaches and experienced mentors make the tailoring process for any project more systematic and efficient.

Chapter 6 of the NASA Software Engineering Handbook provides guidance on tailoring SE requirements for software projects.

3.11.2 Criteria for Tailoring

NPR 8705.4, Risk Classification for NASA Payloads, is intended for assigning a risk classification to projects and programs. It establishes baseline criteria that enable users to define the risk classification level for NASA payloads on human or non-human-rated launch systems or carrier vehicles. It is also a starting point for understanding and defining criteria for tailoring.

The extent of acceptable tailoring depends on several characteristics of the program/project such as the following:


1. Type of mission. For example, the requirements for a human space flight mission are much more rigorous than those for a small robotic mission.

2. Criticality of the mission in meeting the Agency Strategic Plan. Critical missions that absolutely must be successful may not be able to get relief from NPR requirements.

3. Acceptable risk level. If the Agency and the customer are willing to accept a higher risk of failure, some NPR requirements may be waived.

4. National significance. A project that has great national significance may not be able to get relief from NPR requirements.

5. Complexity. Highly complex missions may require more NPR requirements in order to keep systems compatible, whereas simpler ones may not require the same level of rigor.

6. Mission lifetime. Missions with a longer lifetime need to more strictly adhere to NPR requirements than short-lived programs/projects.

7. Cost of mission. Higher cost missions may require stricter adherence to NPR requirements to ensure proper program/project control.

8. Launch constraints. If there are several launch constraints, a project may need to be more fully compliant with Agency requirements.

3.11.3 Tailoring SE NPR Requirements Using the Compliance Matrix

NPR 7123.1 includes a Compliance Matrix (Appendix H.2) to assist programs and projects in verifying that they meet the specified NPR requirements. The Compliance Matrix documents the program/project’s compliance or intent to comply with the requirements of the NPR, or the justification for tailoring. The Compliance Matrix can be used to assist in identifying where major customization of the way the NPR requirements will be accomplished (e.g., in formality and rigor) is needed and to communicate that customization to the stakeholders. The tailoring process (which can occur at any time in the program or project’s life cycle) results in deviations or waivers to the NPR requirements depending on the timing of the request. Deviations and waivers of the requirements can be submitted separately to the Designated Governing Authority or via the Compliance Matrix. The Compliance Matrix is attached to the Systems Engineering Management Plan (SEMP) when submitted for approval. Alternatively, if there is no stand-alone SEMP and the contents of the SEMP are incorporated into another document such as the project plan, the Compliance Matrix can be captured within that plan.

FIGURE 3.11-1 illustrates a notional tailoring process for a space flight project. Project management (such as the project manager, the Principal Investigator, or the task lead) assembles a project team to tailor the NPR requirements codified in the Compliance Matrix. To properly classify the project, the team (chief engineer, lead systems engineer, safety and mission assurance, etc.) needs to understand the building blocks of the project such as the needs, goals, and objectives as well as the appropriate risk posture.

Through an iterative process, the project team goes through the NPR requirements in the Compliance Matrix to tailor the requirements. If available, a tailoring tool with suggested guidelines may make the tailoring process easier. Several NASA Centers, including LaRC and MSFC, have developed tools for use at their Centers which could be adapted for other Centers. Guidance from Subject Matter Experts (SMEs) should be sought to determine the appropriate amount of tailoring for a specific project.
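The Compliance Matrix record-keeping described above can be sketched as a small data structure. This is an illustrative sketch only, not an official NASA tool; the entry fields and status labels are assumptions distilled from the text (full compliance, tailoring with rationale, waiver, deviation, not applicable).

```python
from dataclasses import dataclass

# Assumed status labels for each "shall" statement in the matrix; actual
# programs follow the format of NPR 7123.1 Appendix H.2.
STATUSES = {"fully compliant", "tailored", "waiver", "deviation", "not applicable"}

@dataclass
class MatrixEntry:
    requirement_id: str   # identifier of an SE NPR "shall" statement (invented)
    status: str           # one of STATUSES
    rationale: str = ""   # justification, required for any departure

def ready_for_semp_submission(entries):
    """Flag entries that depart from full compliance without a documented
    rationale, since the matrix is attached to the SEMP for approval."""
    problems = []
    for e in entries:
        if e.status not in STATUSES:
            problems.append(f"{e.requirement_id}: unknown status '{e.status}'")
        elif e.status != "fully compliant" and not e.rationale.strip():
            problems.append(f"{e.requirement_id}: missing tailoring rationale")
    return problems

matrix = [
    MatrixEntry("SE-01", "fully compliant"),
    MatrixEntry("SE-24", "tailored", "Small project; plan folded into the project plan"),
    MatrixEntry("SE-30", "waiver"),  # no rationale recorded yet, so it is flagged
]
print(ready_for_semp_submission(matrix))  # flags only SE-30
```

A real matrix would also record the approving authority and the timing that determines whether a departure is a deviation or a waiver.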


[Figure: notional tailoring flow. Project inputs (needs, goals, and objectives; risk posture) and tailoring tool(s) feed a project team (PM, CE, LSE, S&MA, and advisory teams as necessary) that suggests tailoring, then reviews and refines the tailoring and waiver rationales. The result is reviewed and approved in turn by the Engineering/Projects Directorate, the Program Office, and the Center level, with rejected submissions returned for refinement. The output is the finalized, approved Compliance Matrix attached to the SEMP or project plan.]

FIGURE 3.11-1  Notional Space Flight Products Tailoring Process
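The review-and-approval loop in FIGURE 3.11-1 can be read as a chain of gates with rework on rejection. The sketch below is notional; the gate names come from the figure, while the looping logic and function names are assumptions for illustration.

```python
# Notional approval chain from FIGURE 3.11-1: the project team's suggested
# tailoring is reviewed at successive levels; any rejection sends it back
# for refinement before the Compliance Matrix is finalized and approved.
GATES = ["Engineering/Projects Directorate", "Program Office", "Center-level"]

def route_tailoring(proposal, approves):
    """`approves(gate, proposal) -> bool` stands in for each review board."""
    cycles = 0
    while True:
        cycles += 1
        # First gate to reject, if any; None means all gates approved.
        rejecting_gate = next((g for g in GATES if not approves(g, proposal)), None)
        if rejecting_gate is None:
            return f"approved after {cycles} cycle(s); matrix attached to SEMP"
        proposal += f" [refined per {rejecting_gate} comments]"

# Example: the Program Office rejects once, then every gate approves.
seen = {"rejections": 0}
def example_approves(gate, proposal):
    if gate == "Program Office" and seen["rejections"] == 0:
        seen["rejections"] += 1
        return False
    return True

print(route_tailoring("baseline tailoring", example_approves))
# -> approved after 2 cycle(s); matrix attached to SEMP
```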

The Compliance Matrix provides rationales for each of the NPR requirements to assist in understanding. Once the tailoring is finalized and recorded in the Compliance Matrix with appropriate rationales, the requested tailoring proceeds through the appropriate governance model for approval.

3.11.4 Ways to Tailor a SE Requirement

Tailoring often comes in three areas:

1. Eliminating a requirement that does not apply to the specific program/project.

2. Eliminating a requirement that is overly burdensome (i.e., when the cost of implementing the requirement adds more risk to the project by diverting resources than the risk of not complying with the requirement).

3. Scaling the requirement in a manner that better balances the cost of implementation and the project risk.

Customizing SE practices can include the following:

1. Adjusting the way each of the 17 SE processes is implemented.

2. Adjusting the formality and timing of reviews.

3.11.4.1 Non-Applicable NPR Requirements

Each requirement in NPR 7123.1 is assessed for applicability to the individual project or program. For example, if the project is to be developed completely in-house, the requirements of the NPR’s Chapter 4 on contracts would not be applicable. If a system does not contain software, then none of the NPR requirements for developing and maintaining software would be applicable.
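The applicability assessment described in 3.11.4.1 amounts to filtering the requirement set against the project's characteristics. A minimal sketch, assuming invented requirement identifiers and trait tags:

```python
# Illustrative only: tag each NPR requirement with the project traits it
# presumes, then keep only requirements whose traits the project has
# (e.g., drop contract clauses for an in-house build, or software clauses
# for a system containing no software).
requirements = {
    "SE-10": set(),               # applies universally
    "SE-40": {"has_contract"},    # e.g., NPR Chapter 4 contract requirements
    "SE-55": {"has_software"},    # e.g., software development/maintenance
}

def applicable(reqs, project_traits):
    """Return the identifiers of requirements applicable to this project."""
    return sorted(r for r, needs in reqs.items() if needs <= project_traits)

# An in-house project with no software:
print(applicable(requirements, set()))                             # -> ['SE-10']
# A contracted project that includes software:
print(applicable(requirements, {"has_contract", "has_software"}))  # -> ['SE-10', 'SE-40', 'SE-55']
```

Non-applicable requirements would still be recorded as such in the Compliance Matrix rather than silently dropped.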


3.11.4.2 Adjusting the Scope

Depending on the project or program, some relief on the scope of a requirement may be appropriate. For example, although the governing project management directive (e.g., NPR 7120.5, 7150.2, 7120.7, 7120.8) for a program/project may require certain documents to be stand-alone, the SE NPR does not require any additional stand-alone documents. For small projects, many of the plans can be described in just a few paragraphs or pages. In these types of projects, any NPR requirement stating that the plans need to be stand-alone documents would be too burdensome. In these cases, the information can simply be written and included as part of the project plan or SEMP. If the applicable project management directive (e.g., NPR 7120.5 or NPR 7120.8) requires documents to be stand-alone, a program/project waiver/deviation is needed. However, if there is no requirement or Center expectation for a stand-alone document, a project can customize where that information is recorded and no waiver or deviation is required. Capturing where this information is documented within the systems engineering or project management Compliance Matrix would be useful for clarity.

3.11.4.3 Formality and Timing of Reviews

The governing project management directive identifies the required or recommended life cycle for the specific type of program/project. The life cycle defines the number and timing of the various reviews; however, there is considerable discretion concerning the formality of the review and how to conduct it. NPR 7123.1, Appendix G, provides extensive guidance for suggested review entrance and success criteria. It is expected that the program/project will customize these criteria in a manner that makes sense for their program/project. The SE NPR does not require a waiver/deviation for this customization; however, departures from review elements required by other NPRs need to be addressed by tailoring those documents.

If a program/project decides it does not need one of the required reviews, a waiver or deviation is needed. However, the SE NPR does not specify a minimum amount of spacing for these reviews. A small project may decide to combine the SRR and the SDR (or Mission Definition Review (MDR)), for example. As long as the intent for both reviews is accomplished, the SE NPR does not require a waiver or deviation. (Note that even though the SE NPR does not require it, a waiver or deviation may still be required in the governing project management NPR.) This customization and/or tailoring should be documented in the Compliance Matrix and/or the review plan or SEMP.

Unless otherwise required by the governing project management directives, the formality of the review can be customized as appropriate for the type of program/project. For large projects, it might be appropriate to conduct a very formal review with a formal Review Item Discrepancy (RID)/Request for Action (RFA) process, a summary, and detailed presentations to a wide audience including boards and pre-boards over several weeks. For small projects, that same review might be done in a few hours across a tabletop with a few stakeholders and with issues and actions simply documented in a Word or PowerPoint document.

The NASA Engineering Network Systems Engineering Community of Practice, located at https://nen.nasa.gov/web/se, includes document templates for milestone review presentations required by the NASA SE process.

3.11.5 Examples of Tailoring and Customization

TABLE 3.11-1 shows an example of the types of missions that can be defined based on a system that breaks projects into various types ranging from a very complex type A to a much simpler type F. When tailoring a project, the assignment of specific projects to


TABLE 3.11-1  Example of Program/Project Types

• Description of the Types of Mission: Type A, Human Space Flight or Very Large Science/Robotic Missions; Type B, Non-Human Space Flight or Science/Robotic Missions; Type C, Small Science or Robotic Missions; Type D, Smaller Science or Technology Missions (ISS payload); Type E, Suborbital or Aircraft or Large Ground-based Missions; Type F, Aircraft or Ground-based technology demonstrations
• Priority (Criticality to Agency Strategic Plan) and Acceptable Risk Level: Type A, high priority, very low (minimized) risk; Type B, high priority, low risk; Type C, medium priority, medium risk; Type D, low priority, high risk; Type E, low priority, high risk; Type F, low to very low priority, high risk
• National Significance: Type A, very high; Type B, high; Type C, medium; Type D, medium to low; Type E, low; Type F, very low
• Complexity: Type A, very high to high; Type B, high to medium; Types C and D, medium to low; Type E, low; Type F, low to very low
• Mission Lifetime (Primary Baseline Mission): Type A, long (>5 years); Type B, medium (2–5 years); Types C and D, short (<2 years); Types E and F, N/A
• Cost Guidance (estimate LCC): Type A, high (greater than ~$1B); Type B, high to medium (~$500M–$1B); Type C, medium to low (~$100M–$500M); Type D, low (~$50M–$100M); Type E, ~$10–50M; Type F, less than ~$10–15M
• Launch Constraints: Type A, critical; Type B, medium; Type C, few; Types D and E, few to none; Type F, N/A
• Alternative Research Opportunities or Re-flight Opportunities: Type A, no alternative or re-flight opportunities; Type B, few or no alternative or re-flight opportunities; Type C, some or few alternative or re-flight opportunities; Types D, E, and F, significant alternative or re-flight opportunities
• Achievement of Mission Success Criteria: Type A, all practical measures are taken to achieve minimum risk to mission success, and the highest assurance standards are used; Type B, stringent assurance standards with only minor compromises in application to maintain a low risk to mission success; Type C, medium risk of not achieving mission success may be acceptable, and reduced assurance standards are permitted; Type D, medium or significant risk of not achieving mission success is permitted, and minimal assurance standards are permitted; Types E and F, significant risk of not achieving mission success is permitted, and minimal assurance standards are permitted
• Examples: Type A, HST, Cassini, JIMO, JWST, MPCV, SLS, ISS; Type B, MER, MRO, Discovery payloads, ISS Facility Class payloads, Attached ISS payloads; Type C, ESSP, Explorer payloads, MIDES, ISS complex subrack payloads, PA-1, ARES 1-X, MEDLI, CLARREO, SAGE III, Calipso; Type D, SPARTAN, GAS Can, technology demonstrators, simple ISS express middeck and subrack payloads, SMEX, MISSE-X, EV-2; Type E, IRVE-2, IRVE-3, HiFIRE, HyBoLT, ALHAT, STORRM, Earth Venture I; Type F, DAWNAir, InFlame, Research, technology demonstrations
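One simplified way to read TABLE 3.11-1 is as a scoring rubric. The sketch below suggests a type from the table's approximate life-cycle cost bands alone; a real classification weighs all of the criteria (priority, complexity, lifetime, launch constraints, and so on), and the function and its thresholds are illustrative only.

```python
# Approximate life-cycle cost bands from TABLE 3.11-1 (in $M). This is a
# deliberate simplification: cost is only one of the table's criteria.
def type_from_lcc(lcc_millions):
    bands = [
        (1000, "A"),   # greater than ~$1B
        (500,  "B"),   # ~$500M-$1B
        (100,  "C"),   # ~$100M-$500M
        (50,   "D"),   # ~$50M-$100M
        (10,   "E"),   # ~$10M-$50M
    ]
    for floor, project_type in bands:
        if lcc_millions > floor:
            return project_type
    return "F"         # less than ~$10-15M

print([type_from_lcc(x) for x in (2000, 700, 250, 75, 20, 5)])
# -> ['A', 'B', 'C', 'D', 'E', 'F']
```

As the surrounding text notes, many projects have characteristics of multiple types, so any such single-criterion suggestion is guidance, not a rigid characterization.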


particular types should be viewed as guidance, not as rigid characterization. Many projects will have characteristics of multiple types, so the tailoring approach may permit more tailoring for those aspects of the project that are simpler and more open to risk and less tailoring for those aspects of the project where complexity and/or risk aversion dominate. These tailoring criteria and definitions of project “types” may vary from Center to Center and from Mission Directorate to Mission Directorate according to what is appropriate for their missions. TABLE 3.11-2 shows an example of how the documentation required of

TABLE 3.11-2  Example of Tailoring NPR 7120.5 Required Project Products
(FC = Fully Compliant; T = Tailor; N/A = Not Applicable)

Example Project Technical Products           Type A  Type B  Type C  Type D  Type E  Type F
Concept Documentation                          FC      FC      FC      T       T       T
Mission, Spacecraft, Ground, and Payload
  Architectures                                FC      FC      FC      T       T       T
Project-Level, System, and Subsystem
  Requirements                                 FC      FC      FC      FC      T       T
Design Documentation                           FC      FC      FC      FC      T       T
Operations Concept                             FC      FC      FC      T       T       T
Technology Readiness Assessment
  Documentation                                FC      FC      FC      T       T       T
Human Systems Integration Plan                 FC      FC      FC      T       T       T
Heritage Assessment Documentation              FC      FC      FC      T       T       T
Safety Data Packages                           FC      FC      FC      FC      T       T
ELV Payload Safety Process Deliverables        FC      FC      FC      FC      FC      N/A
Verification and Validation Report             FC      FC      FC      T       T       T
Operations Handbook                            FC      FC      FC      T       T       N/A
End of Mission Plans                           FC      FC      FC      T       T       T
Mission Report                                 FC      FC      T       T       T       T

(continued)


(FC = Fully Compliant; T = Tailor; N/A = Not Applicable)

Example Project Plan Control Plans           Type A  Type B  Type C  Type D  Type E  Type F
Risk Management Plan                           FC      FC      FC      T       T       N/A
Technology Development Plan                    FC      FC      FC      FC      N/A     N/A
Systems Engineering Management Plan            FC      FC      FC      T       T       T
Software Management Plan                       FC      FC      T       T       T       T
Verification and Validation Plan               FC      FC      T       T       T       T
Review Plan                                    FC      FC      FC      T       T       T
Integrated Logistics Support Plan              FC      FC      FC      T       T       N/A
Science Data Management Plan                   FC      FC      FC      T       T       N/A
Integration Plan                               FC      FC      FC      FC      T       T
Configuration Management Plan                  FC      FC      FC      FC      T       T
Technology Transfer (formerly Export)
  Control Plan                                 FC      FC      FC      FC      T       T
Lessons Learned Plan                           FC      FC      FC      FC      T       T
Human Rating Certification Package             FC      N/A     N/A     N/A     N/A     N/A
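A matrix like this can also be carried as structured data so that a team can look up the example compliance level for its project type. The sketch below is purely illustrative: the `expected_compliance` helper and the data layout are assumptions for this example, not part of NPR 7120.5 or NPR 7123.1, and only a few rows are transcribed. "FC" stands for Fully Compliant, "T" for Tailor, and "NA" for Not Applicable.

```python
# Illustrative only: a fragment of the example tailoring table carried as data.
# Levels per product, in project-type order A through F.
TAILORING = {
    "Concept Documentation":              ["FC", "FC", "FC", "T",  "T",  "T"],
    "Risk Management Plan":               ["FC", "FC", "FC", "T",  "T",  "NA"],
    "Technology Development Plan":        ["FC", "FC", "FC", "FC", "NA", "NA"],
    "Human Rating Certification Package": ["FC", "NA", "NA", "NA", "NA", "NA"],
}

PROJECT_TYPES = "ABCDEF"

def expected_compliance(product: str, project_type: str) -> str:
    """Look up the example compliance level for a product and project type."""
    return TAILORING[product][PROJECT_TYPES.index(project_type)]
```

For example, `expected_compliance("Risk Management Plan", "F")` returns `"NA"`, matching the table row above.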

a program/project might also be tailored or customized. The general philosophy is that the simpler, less complex projects should require much less documentation and fewer formal reviews. Project products should be sensibly scaled.

3.11.6  Approvals for Tailoring
Deviations and waivers of the requirements for the SE NPR can be submitted separately to the requirements owners or in bulk using the appropriate Compliance Matrix found in NPR 7123.1 Appendix H. If it is a Center that is requesting tailoring of the NPR requirements for standard use at the Center, Appendix H.1 is completed and submitted to the OCE for approval upon request or as changes to the Center processes occur. If a program/project whose responsibility has been delegated to a Center is seeking a waiver/deviation from the NPR requirements, the Compliance Matrix in Appendix H.2 is used. In these cases, the Center Director or designee will approve the waiver/deviation.


The result of this tailoring, whether for a Center or for a program/project, should also be captured in the next revision of the SEMP along with supporting rationale and documented approvals from the requirement owner. This allows communication of the approved waivers/deviations to the entire project team as well as associated managers. If an independent assessment is being conducted on the program/project, this also allows appropriate modification of expectations and assessment criteria. TABLE 3.11-3 provides some examples of tailoring captured within the H.2 Compliance Matrix.

TABLE 3.11-3  Example Use of a Compliance Matrix

Req ID: SE-05
SE NPR Section: 2.1.5.2
Requirement Statement: For those requirements owned by Center Directors, the technical team shall complete the Compliance Matrix in Appendix H.2 and include it in the SEMP.
Rationale: For programs and projects, the Compliance Matrix in Appendix H.2 is filled out showing that the program/project is compliant with the requirements of this NPR (or a particular Center’s implementation of NPR 7123.1, whichever is applicable) or any tailoring thereof is identified and approved by the Center Director or designee as part of the program/project SEMP.
Req. Owner: CD
Comply?: Fully Compliant

Req ID: SE-06
SE NPR Section: 2.1.6.1
Requirement Statement: The DGA shall approve the SEMP, waiver authorizations, and other key technical documents to ensure independent assessment of technical content.
Rationale: The DGA, who is often the TA, provides an approval of the SEMPs, waivers to technical requirements, and other key technical documents to provide assurance of the applicability and technical quality of the products.
Req. Owner: CD
Comply?: Fully Compliant

Req ID: SE-24
SE NPR Section: 4.2.1
Requirement Statement: The NASA technical team shall define the engineering activities for the periods before contract award, during contract performance, and upon contract completion in the SEMP.
Rationale: It is important for both the government and contractor technical teams to understand what activities will be handled by which organization throughout the product life cycle. The contractor(s) will typically develop a SEMP or its equivalent to describe the technical activities in their portion of the project, but an overarching SEMP is needed that will describe all technical activities across the life cycle whether contracted or not.
Req. Owner: CD
Comply?: Not Applicable
Justification: Project is conducted entirely in-house and therefore there are no contracts involved.
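A pattern visible in these rows is that any requirement not marked “Fully Compliant” carries a justification. That rule can be checked mechanically once the matrix is held as data. The sketch below is a hypothetical illustration, not an official NASA tool; the `ComplianceEntry` record and the checker are assumptions, with fields named after the table headings.

```python
from dataclasses import dataclass

@dataclass
class ComplianceEntry:
    req_id: str        # e.g., "SE-24"
    section: str       # SE NPR section, e.g., "4.2.1"
    owner: str         # requirement owner, e.g., "CD" (Center Director)
    comply: str        # "Fully Compliant", "Tailored", or "Not Applicable"
    justification: str = ""

def missing_justifications(matrix):
    """Return IDs of entries that deviate from full compliance with no rationale."""
    return [e.req_id for e in matrix
            if e.comply != "Fully Compliant" and not e.justification.strip()]

# The three example rows of Table 3.11-3, carried as records.
matrix = [
    ComplianceEntry("SE-05", "2.1.5.2", "CD", "Fully Compliant"),
    ComplianceEntry("SE-06", "2.1.6.1", "CD", "Fully Compliant"),
    ComplianceEntry("SE-24", "4.2.1", "CD", "Not Applicable",
                    "Project is conducted entirely in-house; no contracts involved."),
]
```

Here `missing_justifications(matrix)` returns an empty list; removing SE-24’s justification would flag it.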

4.0 System Design Processes
This chapter describes the activities in the system design processes listed in FIGURE 2.1-1. The chapter is separated into sections corresponding to processes 1 to 4 listed in FIGURE 2.1-1. The tasks within each process are discussed in terms of inputs, activities, and outputs. Additional guidance is provided using examples that are relevant to NASA projects.

The system design processes are interdependent, highly iterative, and recursive processes resulting in a validated set of requirements and a design solution that satisfies a set of stakeholder expectations. There are four system design processes: developing stakeholder expectations, technical requirements, logical decompositions, and design solutions.

FIGURE 4.0-1 illustrates the recursive relationship among the four system design processes. These processes start with a study team collecting and clarifying the stakeholder expectations, including the mission objectives, constraints, design drivers, operational objectives, and criteria for defining mission success. This set of stakeholder expectations and high-level requirements is used to drive an iterative design loop where a straw man architecture/design, the concept of operations, and derived requirements are developed. These three products should be consistent with each other and will require iterations and design decisions to achieve this consistency. Once consistency is achieved, analyses allow the project team to validate the proposed design against the stakeholder expectations. A simplified validation asks the questions: Will the system work as expected? Is the system achievable within budget and schedule constraints? Does the system provide the functionality and fulfill the operational needs that drove the project’s funding approval? If the answer to any of these questions is no, then changes to the design or stakeholder expectations will be required, and the process starts again. This process continues until the system—architecture, ConOps, and requirements—meets the stakeholder expectations.

The depth of the design effort should be sufficient to allow analytical verification of the design to the requirements. The design should be feasible and credible when judged by a knowledgeable independent review team and should have sufficient depth to support cost modeling and operational assessment.

Once the system meets the stakeholder expectations, the study team baselines the products and prepares for the next phase. Often, intermediate levels of decomposition are validated as part of the process. In
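The iterate-until-consistent loop described above can be sketched as control flow. This is a notional illustration only; `develop`, `validate`, and `revise` are parameters invented for the sketch (each stands in for substantial engineering work) and are not handbook terminology.

```python
def design_loop(expectations, develop, validate, revise, max_iterations=100):
    """Notional sketch of the iterative design loop of FIGURE 4.0-1.

    develop(expectations) -> a candidate design (architecture, ConOps,
        and derived requirements)
    validate(design, expectations) -> True when the design works as expected,
        is achievable within budget and schedule, and fulfills the needs
        that drove funding approval
    revise(design, expectations) -> adjusted design drivers or expectations
    """
    for _ in range(max_iterations):
        design = develop(expectations)
        if validate(design, expectations):
            return design  # consistent: baseline the products and proceed
        expectations = revise(design, expectations)
    raise RuntimeError("no consistent design found; revisit the expectations")
```

With toy callables, `design_loop(1, lambda e: 2 * e, lambda d, e: d >= 16, lambda d, e: e + 1)` iterates until the candidate satisfies the validation test.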


FIGURE 4.0‑1  Interrelationships among the System Design Processes. (Figure: the Stakeholder Expectations Definition process captures needs, goals, and objectives, program authority constraints, success criteria, and a ConOps; Requirements Definition develops derived and allocated requirements (functional, performance, interface, operational, safety, “ilities”), which are validated as a set and checked for meeting the ConOps, iterating expectations, ConOps, and requirements until consistent; Logical Decomposition develops functional flow, temporal flow, behavioral, data flow, and states and modes views against decomposition success criteria; Design Solution Definition develops the architecture and evaluates and compares it against the ConOps, recursing to successively lower levels of decomposition until the lowest level is reached and products flow to the product realization processes.)

SYSTEM DESIGN KEYS

• Successfully understanding and defining the mission objectives and the concept of operations are keys
to capturing the stakeholder expectations, which will translate into quality requirements and operational
efficiencies over the life cycle of the project.
• Complete and thorough requirements traceability is a critical factor in successful validation of
requirements.
• Clear and unambiguous requirements will help avoid misunderstanding when developing the overall
system and when making major or minor changes.
• Document all decisions made during the development of the original design concept in the technical data
package. This will make the original design philosophy and negotiation results available to assess future
proposed changes and modifications against.
• The validation of a design solution is a continuing recursive and iterative process during which the design
solution is evaluated against stakeholder expectations.


the next level of decomposition, the baselined derived (and allocated) requirements become the set of high-level requirements for the decomposed elements and the process begins again. These system design processes are primarily applied in Pre-Phase A and continue through Phase C.

The system design processes during Pre-Phase A focus on producing a feasible design that will lead to Formulation approval. During Phase A, alternative designs and additional analytical maturity are pursued to optimize the design architecture. Phase B results in a preliminary design that satisfies the approval criteria. During Phase C, detailed, build-to designs are completed.

This is a simplified description intended to demonstrate the recursive relationship among the system design processes. These processes should be used as guidance and tailored for each study team depending on the size of the project and the hierarchical level of the study team. The next sections describe each of the four system design processes and their associated products for a given NASA mission.

4.1  Stakeholder Expectations Definition

The Stakeholder Expectations Definition Process is the initial process within the SE engine that establishes the foundation from which the system is designed and the product is realized. The main purpose of this process is to identify who the stakeholders are and how they intend to use the product. This is usually accomplished through use-case scenarios (sometimes referred to as Design Reference Missions (DRMs)) and the ConOps.

4.1.1  Process Description
FIGURE 4.1-1 provides a typical flow diagram for the Stakeholder Expectations Definition Process and identifies typical inputs, outputs, and activities to consider in defining stakeholder expectations.

4.1.1.1  Inputs
Typical inputs needed for the Stakeholder Expectations Definition Process include the following:

• Initial Customer Expectations: These are the needs, goals, objectives, desires, capabilities, and other constraints that are received from the customer for the product within the product layer. For the top-tier products (final end item), these are the expectations of the originating customer who requested the product. For an end product within the product layer, these are the expectations of the recipient of the end item when transitioned.

• Other Stakeholder Expectations: These are the expectations of key stakeholders other than the customer. For example, such stakeholders may be the test team that will be receiving the transitioned product (end product and enabling products), or the trainers that will be instructing the operators, or managers that are accountable for the product at this layer.

• Customer Flow-down Requirements: These are any requirements that are being flowed down or allocated from a higher level (i.e., parent requirements). They are helpful in establishing the expectations of the customer at this layer.

4.1.1.2  Process Activities
4.1.1.2.1  Identify Stakeholders
A “stakeholder” is a group or individual that is affected by or has a stake in the product or project. The key players for a project/product are called the key stakeholders. One key stakeholder is always the


FIGURE 4.1‑1  Stakeholder Expectations Definition Process. (Figure: inputs from the project are Initial Customer Expectations, Other Stakeholder Expectations, and Customer Flow-down Requirements. Activities: establish list of stakeholders; elicit stakeholder expectations; establish operations concept and support strategies; define stakeholder expectations in acceptable statements; analyze expectation statements for measures of effectiveness; validate that defined expectation statements reflect bidirectional traceability; obtain stakeholder commitments to the validated set of expectations; baseline stakeholder expectations; and capture work products from stakeholder expectations activities. Outputs: Validated Stakeholder Expectations, to the Technical Requirements Definition and the Requirements and Interface Management processes; Concept of Operations and Enabling Product Support Strategies, to the Technical Requirements Definition and Configuration Management processes; and Measures of Effectiveness, to the Technical Requirements Definition and Technical Data Management processes.)

“customer.” The customer may vary depending on where the systems engineer is working in the PBS. For example, at the topmost level, the customer may be the person or organization that is purchasing the product. For a systems engineer working three or four levels down in the PBS, the customer may be the leader of the team that takes the element and integrates it into a larger assembly. Regardless of where the systems engineer is working within the PBS, it is important to understand what is expected by the customer.

Other interested parties are those who affect the project by providing broad, overarching constraints within which the customers’ needs should be achieved. These parties may be affected by the resulting product, the manner in which the product is used, or have a responsibility for providing life cycle support services. Examples include Congress, advisory planning teams, program managers, maintainers, and mission partners. It is important that the list of stakeholders be identified early in the process, as well as the primary stakeholders who will have the most significant influence over the project.

The customer and users of the system are usually easy to identify. The other key stakeholders may be more difficult to identify, and they may change depending on the type of the project and the phase the project is in. TABLE 4.1-1 provides some


TABLE 4.1-1  Stakeholder Identification throughout the Life Cycle

Life-Cycle Stage  Example Stakeholders
Pre-Phase A       NASA Headquarters, NASA Centers, Presidential Directives, NASA advisory committees, the National Academy of Sciences
Phase A           Mission Directorate, customer, potential users, engineering disciplines, safety organization
Phase B           Customer, engineering disciplines, safety, crew, operations, logistics, production facilities, suppliers, principal investigators
Phase C           Customer, engineering disciplines, safety, crew, operations, logistics, production facilities, suppliers, principal investigators
Phase D           Customer, engineering disciplines, safety, crew, operations, training, logistics, verification team, Flight Readiness Board members
Phase E           Customer, system managers, operations, safety, logistics, sustaining team, crew, principal investigators, users
Phase F           Customer, NASA Headquarters, operators, safety, planetary protection, public

examples of stakeholders in the life cycle phase that should be considered.

4.1.1.2.2  Understand Stakeholder Expectations
Thoroughly understanding the customer and other key stakeholders’ expectations for the project/product is one of the most important steps in the systems engineering process. It provides the foundation upon which all other systems engineering work depends. It helps ensure that all parties are on the same page and that the product being provided will satisfy the customer. When the customer, other stakeholders, and the systems engineer mutually agree on the functions, characteristics, behaviors, appearance, and performance the product will exhibit, it sets more realistic expectations on the customer’s part and helps prevent significant requirements creep later in the life cycle.

Through interviews/discussions, surveys, marketing groups, e-mails, a Statement of Work (SOW), an initial set of customer requirements, or some other means, stakeholders specify what is desired as an end state or as an item to be produced and put bounds on the achievement of the goals. These bounds may encompass expenditures (resources), time to deliver, life cycle support expectations, performance objectives, operational constraints, training goals, or other less obvious quantities such as organizational needs or geopolitical goals. This information is reviewed, summarized, and documented so that all parties can come to an agreement on the expectations.

FIGURE 4.1-2 shows the type of information needed when defining stakeholder expectations and depicts how the information evolves into a set of high-level requirements. The yellow lines depict validation paths. Examples of the types of information that would be defined during each step are also provided.

Defining stakeholder expectations begins with the mission authority and strategic objectives that the mission is meant to achieve. Mission authority changes depending on the category of the mission. For example, science missions are usually driven by NASA Science Mission Directorate strategic plans, whereas the exploration missions may be driven by a Presidential directive. Understanding the objectives of the mission helps ensure that the project team is working toward a common vision. These goals and


FIGURE 4.1‑2  Information Flow for Stakeholder Expectations. (Figure: information flows from Mission Goals, to Mission Objectives, to Operational Objectives, to Success Criteria, to Design Drivers. Mission Goals: Agency Strategic Plans, Announcements of Opportunity, Road Maps, Directed Missions. Mission Objectives: science objectives, exploration objectives, technology demonstration objectives, technology development objectives, programmatic objectives. Operational Objectives (operational drivers): integration and test, launch, on-orbit, transfer, surface, science data distribution, maintenance, logistics, etc. Success Criteria: measurements (what measurements? how well?) and explorations (what explorations? what goals?). Design Drivers (mission drivers): launch date, mission duration, orbit, cost constraints, etc.)

objectives form the basis for developing the mission, so they need to be clearly defined and articulated.

The project team should also identify the constraints that may apply. A “constraint” is a condition that is to be met. Sometimes a constraint is dictated by external factors such as orbital mechanics, an existing system that must be utilized (external interface), a regulatory restriction, or the state of technology; sometimes constraints are the result of the overall budget environment. Concepts of operation and constraints also need to be included in defining the stakeholder expectations. These identify how the system should be operated to achieve the mission objectives.

NOTE: It is extremely important to involve stakeholders in all phases of a project. Such involvement should be built in as a self-correcting feedback loop that will significantly enhance the chances of mission success. Involving stakeholders in a project builds confidence in the end product and serves as a validation and acceptance with the target audience.

In identifying the full set of expectations, the systems engineer will need to interact with various communities, such as those working in the areas of orbital debris, space asset protection, human systems integration, quality assurance, and reliability. Ensuring that a complete set of expectations is captured will help prevent “surprise” features from arising later in the life cycle. For example, space asset protection may require additional encryption for the forward link commands, additional shielding or filtering for RF systems, use of a different frequency, or other design changes that might be costly to add to a system that has already been developed.

4.1.1.2.3  Identify Needs, Goals, and Objectives
In order to define the goals and objectives, it is necessary to elicit the needs, wants, desires, capabilities, external interfaces, assumptions, and constraints from the stakeholders. Arriving at an agreed-to set of goals and objectives can be a long and arduous task. Proactive iteration with the stakeholders throughout the systems engineering process is the way that all parties can come to a true understanding of what should be done and what it takes to do the job. It is important


to know who the primary stakeholders are and who has the decision authority to help resolve conflicts.

Needs, Goals, and Objectives (NGOs) provide a mechanism to ensure that everyone (implementer, customer, and other stakeholders) is in agreement at the beginning of a project in terms of defining the problem that needs to be solved and its scope. NGOs are not contractual requirements or designs.

Needs are defined in the answer to the question “What problem are we trying to solve?” Goals address what must be done to meet the needs; i.e., what the customer wants the system to do. Objectives expand on the goals and provide a means to document specific expectations. (Rationale should be provided where needed to explain why the need, goal, or objective exists, any assumptions made, and any other information useful in understanding or managing the NGO.)

Well-written NGOs provide clear traceability from the needs, then to the goals, and then to objectives. For example, if a given goal does not support a need, or an objective does not support a goal, it should not be part of the integrated set of NGOs. This traceability helps ensure that the team is actually providing what is needed.

The following definitions (source: Applied Space Systems Engineering, edited by Larson, Kirkpatrick, Sellers, Thomas, and Verma) are provided to help the reader interpret the NGOs contained in this product.

• Need: A single statement that drives everything else. It should relate to the problem that the system is supposed to solve but not be the solution. The need statement is singular. Trying to satisfy more than one need requires a trade between the two, which could easily result in failing to meet at least one, and possibly several, stakeholder expectations.

• Goals: An elaboration of the need, which constitutes a specific set of expectations for the system. Goals address the critical issues identified during the problem assessment. Goals need not be in a quantitative or measurable form, but they should allow us to assess whether the system has achieved them.

• Objectives: Specific target levels of outputs the system must achieve. Each objective should relate to a particular goal. Generally, objectives should meet four criteria. (1) They should be specific enough to provide clear direction, so developers, customers, and testers will understand them. They should aim at results and reflect what the system needs to do but not outline how to implement the solution. (2) They should be measurable, quantifiable, and verifiable. The project needs to monitor the system’s success in achieving each objective. (3) They should be aggressive but attainable, challenging but reachable, and targets need to be realistic. Objectives “To Be Determined” (TBD) may be included until trade studies occur, operations concepts solidify, or technology matures. Objectives need to be feasible before requirements are written and systems designed. (4) They should be results-oriented, focusing on desired outputs and outcomes, not on the methods used to achieve the target (what, not how). It is important to always remember that objectives are not requirements. Objectives are identified during pre-Phase A development and help with the eventual formulation of a requirements set, but it is the requirements themselves that are contractually binding and will be verified against the “as-built” system design.

These stakeholder expectations are captured and are considered as initial until they can be further refined through development of the concept of operations and final agreement by the stakeholders.
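The NGO traceability described above (every goal supports a need; every objective supports a goal) can be checked mechanically once NGOs are captured as structured data. A minimal sketch follows; the dictionary layout is an assumption for illustration, not a NASA-prescribed format.

```python
def orphan_ngos(needs, goals, objectives):
    """Return goals that trace to no need and objectives that trace to no goal.

    `needs` is a set of need IDs; `goals` maps a goal ID to the ID of the
    need it supports; `objectives` maps an objective ID to the ID of the
    goal it supports. Orphans should be removed from the integrated set.
    """
    orphan_goals = [g for g, need in goals.items() if need not in needs]
    orphan_objectives = [o for o, goal in objectives.items() if goal not in goals]
    return orphan_goals, orphan_objectives

# Hypothetical NGO set: G2 and O2 break the traceability chain.
needs = {"N1"}
goals = {"G1": "N1", "G2": "N9"}        # G2 supports no captured need
objectives = {"O1": "G1", "O2": "G3"}   # O2 supports no captured goal
```

Here `orphan_ngos(needs, goals, objectives)` flags `G2` and `O2` for removal or repair.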


4.1.1.2.4  Establish Concept of Operations and Support Strategies
After the initial stakeholder expectations have been established, the development of a Concept of Operations (ConOps) will further ensure that the technical team fully understands the expectations and how they may be satisfied by the product, and that understanding has been agreed to by the stakeholders. This may lead to further refinement of the initial set of stakeholder expectations if gaps or ambiguous statements are discovered. These scenarios and concepts of how the system will behave provide an implementation-free understanding of the stakeholders’ expectations by defining what is expected without addressing how (the design) to satisfy the need. It captures required behavioral characteristics and the manner in which people will interact with the system. Support strategies include provisions for fabrication, test, deployment, operations, sustainment, and disposal.

The ConOps is an important component in capturing stakeholder expectations and is used in defining requirements and the architecture of a project. It stimulates the development of the requirements and architecture related to the user elements of the system. It serves as the basis for subsequent definition documents such as the operations plan, launch and early orbit plan, and operations handbook, and it provides the foundation for the long-range operational planning activities such as operational facilities, staffing, and network scheduling.

The ConOps is an important driver in the system requirements and therefore should be considered early in the system design processes. Thinking through the ConOps and use cases often reveals requirements and design functions that might otherwise be overlooked. For example, adding system requirements to allow for communication during a particular phase of a mission may require an additional antenna in a specific location that may not be required during the nominal mission. The ConOps should include scenarios for all significant operational situations, including known off-nominal situations. To develop a useful and complete set of scenarios, important malfunctions and degraded-mode operational situations should be considered. The ConOps is also an important aid to characterizing life cycle staffing goals and function allocation between humans and systems. In walking through the accomplishment of mission objectives, it should become clear when decisions need to be made as to what the human operators are contributing vs. what the systems are responsible for delivering.

The ConOps should consider all aspects of operations, including nominal and off-nominal operations during integration, test, and launch through disposal. Typical information contained in the ConOps includes a description of the major phases; operation timelines; operational scenarios and/or DRMs (see FIGURE 4.1-3 for an example of a DRM); fault management strategies; description of human interaction and required training; end-to-end communications strategy; command and data architecture; operational facilities; integrated logistic support (resupply, maintenance, and assembly); staffing levels and required skill sets; and critical events. The operational scenarios describe the dynamic view of the systems’ operations and include how the system is perceived to function throughout the various modes and mode transitions, including interactions with external interfaces, responses to anticipated hazards and faults, and during failure mitigations. For exploration missions, multiple DRMs make up a ConOps. The design and performance analysis leading to the requirements should satisfy all of them.

Additional information on the development of the ConOps is discussed in Section 4.1.2.1 of the NASA Expanded Guidance for Systems Engineering document found at https://nen.nasa.gov/web/se/doc-repository.


FIGURE 4.1‑3  Example of a Lunar Sortie DRM Early in the Life Cycle. (Figure: a Crew Exploration Vehicle and Lunar Surface Access Module (LSAM) depart low Earth orbit on an Earth Departure Stage, which is then expended; the LSAM performs lunar orbit injection into a 100 km low lunar orbit; after the lunar surface sortie, the ascent stage is expended; and the crew returns to Earth with a direct or skip land entry.)

CONCEPT OF OPERATIONS VS. OPERATIONS CONCEPT

Concept of Operations
Developed early in Pre-Phase A by the technical team, describes the overall high-level concept of how the
system will be used to meet stakeholder expectations, usually in a time sequenced manner. It describes
the system from an operational perspective and helps facilitate an understanding of the system goals. It
stimulates the development of the requirements and architecture related to the user elements of the system.
It serves as the basis for subsequent definition documents and provides the foundation for the long-range
operational planning activities.

Operations Concept
A description of how the flight system and the ground system are used together to ensure that the concept
of operation is reasonable. This might include how mission data of interest, such as engineering or scientific
data, are captured, returned to Earth, processed, made available to users, and archived for future reference.
It is typically developed by the operational team. (See NPR 7120.5.)
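The time-sequenced nature of a ConOps can be made concrete in a simple data model. The following Python sketch is illustrative only; the phase names, durations, and systems are hypothetical values loosely echoing the lunar sortie DRM of Figure 4.1-3, not content from NPR 7120.5:

```python
from dataclasses import dataclass, field

@dataclass
class MissionPhase:
    """One time-sequenced step in a concept of operations."""
    name: str
    duration_days: float
    systems_used: list[str] = field(default_factory=list)

# Hypothetical lunar-sortie phases, loosely echoing Figure 4.1-3.
conops = [
    MissionPhase("Launch to low Earth orbit", 0.2, ["launch vehicle"]),
    MissionPhase("Trans-lunar injection", 0.1, ["Earth departure stage"]),
    MissionPhase("Lunar orbit insertion", 3.0, ["LSAM"]),
    MissionPhase("Surface operations", 7.0, ["LSAM", "EVA suits"]),
    MissionPhase("Return and direct Earth entry", 4.0, ["crew exploration vehicle"]),
]

# Walking the phases in order supports the kind of analysis the ConOps
# enables, e.g., totaling mission duration or asking which systems must
# communicate during each phase.
total = sum(p.duration_days for p in conops)
```

Walking such a phase list, asking for each phase which systems must operate, communicate, or be maintained, is one way that thinking through the ConOps surfaces requirements that might otherwise be overlooked.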


Appendix S contains one possible outline for developing a ConOps. The specific sections of the ConOps will vary depending on the scope and purpose of the project.

4.1.1.2.5  Define Stakeholder Expectations in Acceptable Statements
Once the ConOps has been developed, any gaps or ambiguities have been resolved, and understanding between the technical team and stakeholders about what is expected/intended for the system/product has been achieved, the expectations can be formally documented. This often comes in the form of NGOs, mission success criteria, and design drivers. These may be captured in a document, spreadsheet, model, or other form appropriate to the product.

The design drivers will be strongly dependent upon the ConOps, including the operational environment, orbit, and mission duration requirements. For science missions, the design drivers include, at a minimum, the mission launch date, duration, and orbit, as well as operational considerations. If alternative orbits are to be considered, a separate concept is needed for each orbit. Exploration missions should consider the destination, duration, operational sequence (and system configuration changes), crew interactions, maintenance and repair activities, required training, and in situ exploration activities that allow the exploration to succeed.

4.1.1.2.6  Analyze Expectation Statements for Measures of Effectiveness
The mission success criteria define what the mission needs to accomplish to be successful. This could be in the form of science objectives for science missions, an exploration concept for human exploration missions, or a technological goal for technology demonstration missions. The success criteria also define how well the concept measurements or exploration activities should be accomplished. The success criteria capture the stakeholder expectations and, along with programmatic requirements and constraints, are used within the high-level requirements.

Measures of Effectiveness (MOEs) are the measures of success that are designed to correspond to accomplishment of the system objectives as defined by the stakeholder's expectations. They are stated from the stakeholder's point of view and represent criteria that are to be met in order for the stakeholder to consider the project successful. As such, they can be synonymous with mission/project success criteria. MOEs are developed when the NGOs or other stakeholder expectation documentation is developed. Additional information on MOEs is contained in Section 6.7.2.4 of the NASA Expanded Guidance for SE document at https://nen.nasa.gov/web/se/doc-repository.

4.1.1.2.7  Validate That Defined Expectation Statements Reflect Bidirectional Traceability
The NGOs or other stakeholder expectation documentation should also capture the source of the expectation. Depending on the location within the product layer, the expectation may be traced to an NGO or a requirement of a higher layer product, to organizational strategic plans, or to other sources. Later functions and requirements will be traced to these NGOs. The use of a requirements management tool, model, or other application is particularly useful in capturing and tracing expectations and requirements.

4.1.1.2.8  Obtain Stakeholder Commitments to the Validated Set of Expectations
Once the stakeholder and the technical team are in agreement with the expressed stakeholder expectations and the concept of operations, signatures or other forms of commitment are obtained. In order to obtain these commitments, a concept review is typically held on a formal or informal basis depending on the scope and complexity of the system (see Section 6.7). The stakeholder expectations (e.g., NGOs), MOEs, and concept of operations are presented, discussed, and refined as necessary to achieve final


agreement. This agreement shows that both sides have committed to the development of this product.

4.1.1.2.9  Baseline Stakeholder Expectations
The set of stakeholder expectations (e.g., NGOs and MOEs) and the concept of operations that are agreed upon are now baselined. Any further changes will be required to go through a formal or informal (depending on the nature of the product) approval process involving both the stakeholder and the technical team.

4.1.1.2.10  Capture Work Products
In addition to developing, documenting, and baselining stakeholder expectations, the ConOps and MOEs discussed above and other work products from this process should be captured. These may include key decisions made, supporting decision rationale and assumptions, and lessons learned in performing these activities.

4.1.1.3  Outputs
Typical outputs for capturing stakeholder expectations include the following:

• Validated Stakeholder Expectations: These are the agreed-to set of expectations for this product layer. They are typically captured in the form of needs, goals, and objectives with constraints and assumptions identified. They may also be in the form of models or other graphical forms.

• Concept of Operations: The ConOps describes how the system will be operated during the life cycle phases that will meet stakeholder expectations. It describes the system characteristics from an operational perspective and helps facilitate an understanding of the system goals and objectives and other stakeholder expectations. Examples would be the ConOps document, model, or a Design Reference Mission (DRM).

• Enabling Product Support Strategies: These include any special provisions that might be needed for fabrication, test, deployment, operations, sustainment, and disposal of the end product. They identify what support will be needed and any enabling products that will need to be developed in order to generate the end product.

• Measures of Effectiveness: A set of MOEs is developed based on the stakeholder expectations. These are measures that represent expectations that are critical to the success of the system, and failure to satisfy these measures will cause the stakeholder to deem the system unacceptable.

Other outputs that might be generated:

• Human/Systems Function Allocation: This describes the interaction of the hardware and software systems with all personnel and their supporting infrastructure. In many designs (e.g., human space flight), human operators are a critical total-system component, and the roles and responsibilities of the humans-in-the-system should be clearly understood. This should include all human/system interactions required for a mission, including assembly, ground operations, logistics, in-flight and ground maintenance, in-flight operations, etc.

4.1.2  Stakeholder Expectations Definition Guidance
Refer to Section 4.1.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• Concept of Operations (including examples),
• protection of space assets, and
• identification of stakeholders for each phase.
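To illustrate the Measures of Effectiveness output described above, here is a hedged Python sketch of MOEs captured as records that trace back to stakeholder NGOs; the identifiers and statements are invented for illustration and are not from the handbook:

```python
from dataclasses import dataclass

@dataclass
class MeasureOfEffectiveness:
    """A stakeholder-stated success criterion (may be qualitative)."""
    moe_id: str
    statement: str
    traces_to: str  # the NGO or expectation this MOE corresponds to

# Hypothetical MOEs for a notional science mission.
moes = [
    MeasureOfEffectiveness("MOE-1", "Return at least 100 GB of science data", "NGO-3"),
    MeasureOfEffectiveness("MOE-2", "Operate for no less than 2 years on orbit", "NGO-1"),
]

# Every MOE should trace back to a documented stakeholder expectation.
assert all(m.traces_to.startswith("NGO") for m in moes)
```

Keeping the trace field alongside each MOE makes the later bidirectional-traceability and baselining activities a matter of simple bookkeeping rather than archaeology.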


4.2  Technical Requirements Definition

The Technical Requirements Definition Process transforms the stakeholder expectations into a definition of the problem and then into a complete set of validated technical requirements expressed as "shall" statements that can be used for defining a design solution for the Product Breakdown Structure (PBS) and related enabling products. The process of requirements definition is a recursive and iterative one that develops the stakeholders' requirements, product requirements, and lower level product/component requirements. The requirements should enable the description of all inputs, outputs, and required relationships between inputs and outputs, including constraints and system interactions with operators, maintainers, and other systems. The requirements documents organize and communicate requirements to the customer, other stakeholders, and the technical community.

NOTE: The team must not rely solely on the requirements received to design and build the system. Communication and iteration with the relevant stakeholders are essential to ensure a mutual understanding of each requirement. Otherwise, the designers run the risk of misunderstanding and implementing an unwanted solution to a different interpretation of the requirements. This iterative stakeholder communication is a critically important part of project validation. Always confirm that the right products and results are being developed.

Technical requirements definition activities apply to the definition of all technical requirements from the program, project, and system levels down to the lowest level product/component requirements document.

4.2.1  Process Description
FIGURE 4.2-1 provides a typical flow diagram for the Technical Requirements Definition Process and identifies typical inputs, outputs, and activities to consider in addressing technical requirements definition.

4.2.1.1  Inputs
Typical inputs needed for the requirements process include the following:

• Baselined Stakeholder Expectations: This is the agreed-to set of stakeholder expectations (e.g., needs, goals, objectives, assumptions, constraints, external interfaces) for the product(s) of this product layer.

• Baselined Concept of Operations: This describes how the system will be operated during the life cycle phases to meet stakeholder expectations. It describes the system characteristics from an operational perspective and helps facilitate an understanding of the system goals, objectives, and constraints. It includes scenarios, use cases, and/or Design Reference Missions (DRMs) as appropriate for the project. It may be in the form of a document, graphics, videos, models, and/or simulations.

• Baselined Enabling Support Strategies: These describe the enabling products that were identified in the Stakeholder Expectations Definition Process as needed to develop, test, produce, operate, or dispose of the end product. They also include descriptions of how the end product will be supported throughout the life cycle.

• Measures of Effectiveness: These MOEs were identified during the Stakeholder Expectations Definition Process as measures that the stakeholders deemed necessary to meet in order for the project to be considered a success (i.e., to meet success criteria).


[Figure: flow diagram. Inputs from the Stakeholder Expectations Definition, Configuration Management, and Interface Management Processes (baselined stakeholder expectations, baselined concept of operations, baselined enabling support strategies, and measures of effectiveness) feed the activities: analyze scope of problem; define design and product constraints; define functional and behavioral expectations in technical terms; define performance requirements for each defined functional and behavioral expectation; define technical requirements in acceptable "shall" statements; validate technical requirements; define measures of performance for each measure of effectiveness; define technical performance measures; establish the technical requirements baseline; and capture work products. Outputs: validated technical requirements (to the Logical Decomposition, Requirements Management, and Technical Data Management Processes), measures of performance, and technical performance measures (to the Technical Assessment Process).]
FIGURE 4.2‑1  Technical Requirements Definition Process

Other inputs that might be useful in determining the technical requirements:

• Human/Systems Function Allocation: This describes the interaction of the hardware and software systems with all personnel and their supporting infrastructure. When human operators are a critical total-system component, the roles and responsibilities of the humans-in-the-system should be clearly understood. This should include all human/system interactions required for a mission, including assembly, ground operations, logistics, in-flight and ground maintenance, in-flight operations, etc.

4.2.1.2  Process Activities
4.2.1.2.1  Define Constraints, Functional and Behavioral Expectations
The top-level requirements and expectations are initially assessed to understand the technical problem to be solved (scope of the problem) and establish the design boundary. This boundary is typically established by performing the following activities:

• Defining constraints that the design needs to adhere to or that limit how the system will be used. The constraints typically cannot be changed based on trade-off analyses.

• Identifying those elements that are already under design control and cannot be changed. This helps

establish those areas where further trades will be made to narrow potential design solutions.

• Identifying external and enabling systems with which the system should interact and establishing physical and functional interfaces (e.g., mechanical, electrical, thermal, human, etc.).

• Defining functional and behavioral expectations for the range of anticipated uses of the system as identified in the ConOps. The ConOps describes how the system will be operated and the possible use-case scenarios.

4.2.1.2.2  Define Requirements
A complete set of project requirements includes those that are decomposed and allocated down to design elements through the PBS and those that cut across product boundaries. Requirements allocated to the PBS can be functional requirements (what functions need to be performed), performance requirements (how well these functions should be performed), and interface requirements (product to product interaction requirements). Crosscutting requirements include environmental, safety, human factors, and those that originate from the "-ilities" and from Design and Construction (D&C) standards. FIGURE 4.2-2 is a general overview of the flow of requirements, what they are called, and who is responsible (owns) for approving waivers.

• Functional requirements define what functions need to be performed to accomplish the objectives.
• Performance requirements define how well the system needs to perform the functions.

With an overall understanding of the constraints, physical/functional interfaces, and functional/behavioral expectations, the requirements can be further defined by establishing performance and other technical criteria. The expected performance is expressed as a quantitative measure to indicate how well each product function needs to be accomplished.

EXAMPLE OF FUNCTIONAL AND PERFORMANCE REQUIREMENTS

Initial Function Statement
The Thrust Vector Controller (TVC) shall provide vehicle control about the pitch and yaw axes.

This statement describes a high-level function that the TVC must perform. The technical team needs to
transform this statement into a set of design-to functional and performance requirements.

Functional Requirements with Associated Performance Requirements
• The TVC shall gimbal the engine a maximum of 9 degrees, ± 0.1 degree.
• The TVC shall gimbal the engine at a maximum rate of 5 degrees/second ± 0.3 degrees/second.
• The TVC shall provide a force of 40,000 pounds, ± 500 pounds.
• The TVC shall have a frequency response of 20 Hz, ± 0.1 Hz.
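The TVC requirements above lend themselves to a structured representation. The following Python sketch is a simplification: it treats each requirement as a nominal value with a symmetric tolerance band, and the requirement IDs are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PerformanceRequirement:
    req_id: str
    parameter: str
    nominal: float
    tolerance: float
    units: str

    def is_met(self, measured: float) -> bool:
        # Met when the measured value falls within nominal +/- tolerance.
        return abs(measured - self.nominal) <= self.tolerance

# The four TVC performance requirements from the example box.
tvc_requirements = [
    PerformanceRequirement("TVC-01", "max gimbal angle", 9.0, 0.1, "degrees"),
    PerformanceRequirement("TVC-02", "max gimbal rate", 5.0, 0.3, "degrees/second"),
    PerformanceRequirement("TVC-03", "actuation force", 40_000.0, 500.0, "pounds"),
    PerformanceRequirement("TVC-04", "frequency response", 20.0, 0.1, "Hz"),
]
```

A verification campaign could then compare each measured parameter against its band; for example, `tvc_requirements[2].is_met(40_200.0)` is true because 40,200 pounds is within ±500 of 40,000.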


[Figure: flow, type, and ownership of requirements. At the program level, "programmatic" requirements (mission directorate imposed, self-imposed, and derived; e.g., "At least one major element shall be provided by the international community") are all owned by the Program/Project. At the project level, technical requirements (program imposed, self-imposed, and derived; e.g., "The spacecraft shall provide a direct Earth entry capability for 11500 m/s or greater," or "The system shall have a 1.4 factor of safety") likewise flow to lower level systems and are owned by the Technical Authority. Note: some requirements are invoked by OCE, OSMA, and OCHMO directives, technical standards, and Center institutional requirements.]
FIGURE 4.2-2  Flow, Type and Ownership of Requirements

NOTE: Requirements can be generated from non-obvious stakeholders and may not directly support the current mission and its objectives, but instead provide an opportunity to gain additional benefits or information that can support the Agency or the Nation. Early in the process, the systems engineer can help identify potential areas where the system can be used to collect unique information that is not directly related to the primary mission. Often outside groups are not aware of the system goals and capabilities until it is almost too late in the process.

Technical requirements come from a number of sources, including functional, performance, interface, environmental, safety, and human-interface needs; standards; and support of the "-ilities" such as reliability, sustainability, producibility, and others. Consideration and inclusion of all types of requirements is needed in order to form a complete and consistent set of technical requirements from which the system will be architected and designed. FIGURE 4.2-3 shows an example of parent and child requirement flowdown.

4.2.1.2.3  Define Requirements in Acceptable Statements
Finally, the requirements should be defined in acceptable "shall" statements, which are complete sentences with a single "shall" per statement. Rationale for the requirement should also be captured to ensure the reason and context of the requirement is understood. The Key Driving Requirements (KDRs) should be identified. These are requirements that can have a large impact on cost or schedule when implemented. A KDR can have any priority or criticality. Knowing the impact that a KDR has on the design allows better management of requirements.

See Appendix C for guidance and a checklist on how to write good requirements and Appendix E for validating requirements. A well-written requirements


[Figure: requirements flowdown. Mission authority and mission objectives, shaped by the customer and by programmatics (cost, schedule, constraints, mission classification), produce the mission requirements. Implementing organizations, environmental and other requirements, assumptions, and institutional constraints and guidelines feed the system functional requirements and system performance requirements, which flow down to Subsystem A through Subsystem X functional and performance requirements, each carrying allocated requirements and derived requirements.]
FIGURE 4.2‑3  The Flowdown of Requirements

document provides several specific benefits to both the stakeholders and the technical team, as shown in TABLE 4.2-1.

It is useful to capture information about each of the requirements, called metadata, for future reference and use. Many requirements management tools will request or have options for storing this type of information. TABLE 4.2-2 provides examples of the types of metadata that might be useful.

4.2.1.2.4  Validate Technical Requirements
An important part of requirements definition is the validation of the requirements against the stakeholder


TABLE 4.2-1  Benefits of Well-Written Requirements

Benefit: Establish the basis for agreement between the stakeholders and the developers on what the product is to do.
Rationale: The complete description of the functions to be performed by the product specified in the requirements will assist the potential users in determining if the product specified meets their needs or how the product should be modified to meet their needs. During system design, requirements are allocated to subsystems (e.g., hardware, software, and other major components of the system), people, or processes.

Benefit: Reduce the development effort because less rework is required to address poorly written, missing, and misunderstood requirements.
Rationale: The Technical Requirements Definition Process activities force the relevant stakeholders to rigorously consider all of the requirements before design begins. Careful review of the requirements can reveal omissions, misunderstandings, and inconsistencies early in the development cycle when these problems are easier to correct, thereby reducing costly redesign, remanufacture, recoding, and retesting in later life cycle phases.

Benefit: Provide a basis for estimating costs and schedules.
Rationale: The description of the product to be developed as given in the requirements is a realistic basis for estimating project costs and can be used to evaluate bids or price estimates.

Benefit: Provide a baseline for verification and validation.
Rationale: Organizations can develop their verification and validation plans much more productively from a good requirements document. Both system and subsystem test plans and procedures are generated from the requirements. As part of the development, the requirements document provides a baseline against which compliance can be measured. The requirements are also used to provide the stakeholders with a basis for acceptance of the system.

Benefit: Facilitate transfer.
Rationale: The requirements make it easier to transfer the product. Stakeholders thus find it easier to transfer the product to other parts of their organization, and developers find it easier to transfer it to new stakeholders or reuse it.

Benefit: Serve as a basis for enhancement.
Rationale: The requirements serve as a basis for later enhancement or alteration of the finished product.

TABLE 4.2-2  Requirements Metadata

Requirement ID: Provides a unique numbering system for sorting and tracking.
Rationale: Provides additional information to help clarify the intent of the requirements at the time they were written. (See the "Rationale" box below on what should be captured.)
Traced from: Captures the bidirectional traceability between parent requirements and lower level (derived) requirements and the relationships between requirements.
Owner: Person or group responsible for writing, managing, and/or approving changes to this requirement.
Verification method: Captures the method of verification (test, inspection, analysis, demonstration) and should be determined as the requirements are developed.
Verification lead: Person or group assigned responsibility for verifying the requirement.
Verification level: Specifies the level in the hierarchy at which the requirements will be verified (e.g., system, subsystem, element).
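For teams keeping requirements in a database or management tool, the metadata of TABLE 4.2-2 might be represented as in this Python sketch; the field names follow the table, while the requirement IDs and text are hypothetical placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Requirement text plus the metadata fields of Table 4.2-2."""
    req_id: str
    text: str
    rationale: str
    traced_from: list[str] = field(default_factory=list)
    owner: str = ""
    verification_method: str = ""  # test, inspection, analysis, or demonstration
    verification_lead: str = ""
    verification_level: str = ""   # e.g., system, subsystem, element

# Hypothetical entries: a parent system requirement and a derived child.
reqs = {
    "SYS-10": Requirement("SYS-10", "The spacecraft shall ...", "Parent mission need"),
    "SS-42": Requirement("SS-42", "The subsystem shall ...", "Derived from SYS-10",
                         traced_from=["SYS-10"], verification_method="test"),
}

def untraced(reqs: dict[str, Requirement]) -> list[str]:
    """IDs of requirements whose listed parents are missing from the set."""
    return [r.req_id for r in reqs.values()
            if any(parent not in reqs for parent in r.traced_from)]
```

Storing the "traced from" field as data makes checks such as `untraced(reqs)` trivial, which supports the traceability review described in the validation steps that follow.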


RATIONALE

The rationale should be kept up to date and include the following information:

• Reason for the Requirement: Often the reason for the requirement is not obvious, and it may be lost if
not recorded as the requirement is being documented. The reason may point to a constraint or concept
of operations. If there is a clear parent requirement or trade study that explains the reason, then it should
be referenced.
• Document Assumptions: If a requirement was written assuming the completion of a technology
development program or a successful technology mission, the assumption should be documented.
• Document Relationships: The relationships with the product’s expected operations (e.g., expectations
about how stakeholders will use a product) should be documented. This may be done with a link to the
ConOps.
• Document Design Constraints: Constraints imposed by the results from decisions made as the design
evolves should be documented. If the requirement states a method of implementation, the rationale
should state why the decision was made to limit the solution to this one method of implementation.

expectations, the mission objectives and constraints, the concept of operations, and the mission success criteria. Validating requirements can be broken into six steps:

1. Are the Requirements Written Correctly? Identify and correct requirements "shall" statement format errors and editorial errors.

2. Are the Requirements Technically Correct? A few trained reviewers from the technical team identify and remove as many technical errors as possible before having all the relevant stakeholders review the requirements. The reviewers should check that the requirement statements (a) have bidirectional traceability to the baselined stakeholder expectations; (b) were formed using valid assumptions; and (c) are essential to and consistent with designing and realizing the appropriate product solution form that will satisfy the applicable product life cycle phase success criteria.

3. Do the Requirements Satisfy Stakeholders? All relevant stakeholder groups identify and remove defects.

4. Are the Requirements Feasible? All requirements should make technical sense and be possible to achieve.

5. Are the Requirements Verifiable? All requirements should be stated in a fashion and with enough information that it will be possible to verify the requirement after the end product is implemented.

6. Are the Requirements Redundant or Overspecified? All requirements should be unique (not redundant to other requirements) and necessary to meet the required functions, performance, or behaviors.
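Steps 1 and 6 lend themselves to simple automated screening before human review. The sketch below is illustrative Python only; real requirements tooling would apply far more rules than these two:

```python
def format_issues(statement: str) -> list[str]:
    """Flag basic 'shall' statement format errors (step 1)."""
    issues = []
    shall_count = statement.lower().split().count("shall")
    if shall_count == 0:
        issues.append("missing 'shall'")
    elif shall_count > 1:
        issues.append("more than one 'shall' in a single statement")
    if not statement.rstrip().endswith("."):
        issues.append("not a complete sentence")
    return issues

def redundant_ids(reqs: dict[str, str]) -> list[tuple[str, str]]:
    """Pairs of requirement IDs with identical normalized text (step 6)."""
    seen = {}
    dupes = []
    for rid, text in reqs.items():
        key = " ".join(text.lower().split())
        if key in seen:
            dupes.append((seen[key], rid))
        else:
            seen[key] = rid
    return dupes
```

For example, `format_issues("The TVC shall gimbal the engine.")` returns an empty list, while a statement with no "shall" or with two "shall" clauses is flagged for editorial correction. Checks like these only screen out mechanical defects; the technical, feasibility, and stakeholder reviews in steps 2 through 5 still require human judgment.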


Requirements validation results are often a deciding factor in whether to proceed with the next process of Logical Decomposition or Design Solution Definition. The project team should be prepared to: (1) demonstrate that the project requirements are complete and understandable; (2) demonstrate that evaluation criteria are consistent with requirements and the operations and logistics concepts; (3) confirm that requirements and MOEs are consistent with stakeholder needs; (4) demonstrate that operations and architecture concepts support mission needs, goals, objectives, assumptions, guidelines, and constraints; and (5) demonstrate that the process for managing change in requirements is established, documented in the project information repository, and communicated to stakeholders.

4.2.1.2.5  Define MOPs and TPMs
Measures of Performance (MOPs) define the performance characteristics that the system should exhibit when fielded and operated in its intended environment. MOPs are derived from the MOEs but are stated in more technical terms from the supplier's point of view. Typically, multiple MOPs, which are quantitative and measurable, are needed to satisfy an MOE, which can be qualitative. From a verification and acceptance point of view, MOPs reflect the system characteristics deemed necessary to achieve the MOEs.

Technical Performance Measures (TPMs) are physical or functional characteristics of the system associated with or established from the MOPs that are deemed critical or key to mission success. The TPMs are monitored during implementation by comparing the current actual achievement or best estimate of the parameters with the values that were anticipated for the current time and projected for future dates. They are used to confirm progress and identify deficiencies that might jeopardize meeting a critical system requirement or put the project at cost or schedule risk.

For additional information on MOPs and TPMs, their relationship to each other and to MOEs, and examples of each, see Section 6.7.2.6.2 of the NASA Expanded Guidance for SE document at https://nen.nasa.gov/web/se/doc-repository.

4.2.1.2.6  Establish Technical Requirements Baseline
Once the technical requirements are identified and validated to be good (clear, correct, complete, and achievable) requirements, and agreement has been gained by the customer and key stakeholders, they are baselined and placed under configuration control. Typically, a System Requirements Review (SRR) is held to allow comments on any needed changes and to gain agreement on the set of requirements so that it may be subsequently baselined. For additional information on the SRR, see Section 6.7.

4.2.1.2.7  Capture Work Products
The work products generated during the above activities should be captured along with key decisions that were made, any supporting decision rationale and assumptions, and lessons learned in performing these activities.

4.2.1.3  Outputs
• Validated Technical Requirements: This is the approved set of requirements that represents a complete description of the problem to be solved and requirements that have been validated and approved by the customer and stakeholders. Examples of documents that capture the requirements are a System Requirements Document (SRD), Project Requirements Document (PRD), Interface Requirements Document (IRD), and a Software Requirements Specification (SRS).

• Measures of Performance: These are the identified quantitative measures that, when met by the design solution, help ensure that one or more MOEs will be satisfied. There may be


two or more MOPs for each MOE. See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for further details.

• Technical Performance Measures: These are the set of performance measures that are monitored and trended by comparing the current actual achievement of the parameters with that expected or required at the time. TPMs are used to confirm progress and identify deficiencies. See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for further details.

4.2.2  Technical Requirements Definition Guidance
Refer to Section 4.2.2 of the NASA Expanded Guidance for SE document at https://nen.nasa.gov/web/se/doc-repository for additional information on:

• types of requirements,
• requirements databases, and
• the use of technical standards.

4.3  Logical Decomposition

Logical decomposition is the process for creating the detailed functional requirements that enable NASA programs and projects to meet the stakeholder expectations. This process identifies the "what" that should be achieved by the system at each level to enable a successful project. Logical decomposition utilizes functional analysis to create a system architecture and to decompose top-level (or parent) requirements and allocate them down to the lowest desired levels of the project.

The Logical Decomposition Process is used to:

• Improve understanding of the defined technical requirements and the relationships among the requirements (e.g., functional, performance, behavioral, and temporal), and

• Decompose the parent requirements into a set of logical decomposition models and their associated sets of derived technical requirements for input to the Design Solution Definition Process.

4.3.1  Process Description
FIGURE 4.3-1 provides a typical flow diagram for the Logical Decomposition Process and identifies typical inputs, outputs, and activities to consider in addressing logical decomposition.

4.3.1.1  Inputs
Typical inputs needed for the Logical Decomposition Process include the following:

• Technical Requirements: A validated set of requirements that represent a description of the problem to be solved, have been established by functional and performance analysis, and have been approved by the customer and other stakeholders. Examples of documents that capture the requirements are an SRD, PRD, and IRD.

• Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or product effectiveness and customer satisfaction. These measures are MOEs, MOPs, and a special subset of these called TPMs. See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for further details.

4.3.1.2  Process Activities
4.3.1.2.1  Define One or More Logical Decomposition Models
The key first step in the Logical Decomposition Process is establishing the system architecture model. The system architecture activity defines the

NASA SYSTEMS ENGINEERING HANDBOOK 62
4.0  System Design Processes

[Figure 4.3-1 flow diagram. Inputs: Baselined Technical Requirements (from the Technical Requirements Definition and Configuration Management Processes) and Measures of Performance (from the Technical Requirements Definition and Technical Data Management Processes). Activities: define one or more logical decomposition models; allocate technical requirements to logical decomposition models to form a set of derived technical requirements; resolve derived technical requirements conflicts; validate the resulting set of derived technical requirements; establish the derived technical requirements baseline; capture work products from logical decomposition activities. Outputs: Derived Technical Requirements (to the Design Solution Definition and Requirements and Interface Management Processes), Logical Decomposition Models (to the Design Solution Definition and Configuration Management Processes), and Logical Decomposition Work Products (to the Technical Data Management Process).]
FIGURE 4.3‑1  Logical Decomposition Process

underlying structure and relationships of hardware, software, humans-in-the-loop, support personnel, communications, operations, etc., that provide for the implementation of Agency, mission directorate, program, project, and subsequent levels of the requirements. System architecture activities drive the partitioning of system elements and requirements to lower level functions and requirements to the point that design work can be accomplished. Interfaces and relationships between partitioned subsystems and elements are defined as well.

Once the top-level (or parent) functional requirements and constraints have been established, the system designer uses functional analysis to begin to formulate a conceptual system architecture. The system architecture can be seen as the strategic organization of the functional elements of the system, laid out to enable the roles, relationships, dependencies, and interfaces between elements to be clearly defined and understood. It is strategic in its focus on the overarching structure of the system and how its elements fit together to contribute to the whole, instead of on the particular workings of the elements themselves. It enables the elements to be developed separately from each other while ensuring that they work together effectively to achieve the top-level (or parent) requirements.

Much like the other elements of functional decomposition, the development of a good system-level architecture is a creative, recursive, collaborative, and iterative process that combines an excellent understanding of the project’s end objectives and constraints with an equally good knowledge of various potential technical means of delivering the end products.

Focusing on the project’s ends, top-level (or parent) requirements, and constraints, the system architect should develop at least one, but preferably multiple,


concept architectures capable of achieving program objectives. Each architecture concept involves specification of the functional elements (what the pieces do), their relationships to each other (interface definition), and the ConOps, i.e., how the various segments, subsystems, elements, personnel, units, etc., will operate as a system when distributed by location and environment from the start of operations to the end of the mission.

The development process for the architectural concepts should be recursive and iterative, with feedback from stakeholders and external reviewers, as well as from subsystem designers and operators, provided as often as possible to increase the likelihood of effectively achieving the program’s desired ends while reducing the likelihood of cost and schedule overruns.

In the early stages of development, multiple concepts are generated. Cost and schedule constraints will ultimately limit how long a program or project can maintain multiple architectural concepts. For all NASA programs, architecture design is completed during the Formulation Phase. For most NASA projects (and tightly coupled programs), the baselining of a single architecture happens during Phase A. Architectural changes at higher levels occasionally occur as decomposition to lower levels produces complexity in design, cost, or schedule that necessitates such changes. However, as noted in FIGURE 2.5-1, the later in the development process that changes occur, the more expensive they become.

Aside from the creative minds of the architects, there are multiple tools that can be utilized to develop a system’s architecture. These are primarily modeling and simulation tools, functional analysis tools, architecture frameworks, and trade studies. (For example, one way of doing architecture is the Department of Defense (DOD) Architecture Framework (DODAF).) A search concept is developed, and analytical models of the architecture, its elements, and their operations are developed with increased fidelity as the project evolves. Functional decomposition, requirements development, and trade studies are subsequently undertaken. Multiple iterations of these activities feed back to the evolving architectural concept as the requirements flow down and the design matures.

4.3.1.2.2  Allocate Technical Requirements, Resolve Conflicts, and Baseline
Functional analysis is the primary method used in system architecture development and functional requirement decomposition. It is the systematic process of identifying, describing, and relating the functions a system should perform to fulfill its goals and objectives. Functional analysis identifies and links system functions, trade studies, interface characteristics, and rationales to requirements. It is usually based on the ConOps for the system of interest.

Three key steps in performing functional analysis are:

1. Translate top-level requirements into functions that should be performed to accomplish the requirements.

2. Decompose and allocate the functions to lower levels of the product breakdown structure.

3. Identify and describe functional and subsystem interfaces.

The process involves analyzing each system requirement to identify all of the functions that need to be performed to meet the requirement. Each function identified is described in terms of inputs, outputs, failure modes, consequence of failure, and interface requirements. The process is repeated from the top down so that sub-functions are recognized as part of larger functional areas. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path.
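The three steps above can be sketched as a lightweight data model; this is an illustrative sketch only, and the function names, interface items, and PBS elements below are hypothetical, not taken from the handbook:

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """A function identified during functional analysis."""
    name: str
    inputs: list            # items the function consumes
    outputs: list           # items the function produces
    failure_modes: list
    allocated_to: str       # PBS element the function is allocated to
    sub_functions: list = field(default_factory=list)

# Step 1: translate a (hypothetical) top-level requirement into a function.
comm = Function("Communicate with crew", ["commands"], ["downlinked data"],
                ["loss of signal"], "Comm Segment")

# Step 2: decompose and allocate sub-functions to lower PBS levels.
comm.sub_functions = [
    Function("Encode telemetry", ["raw telemetry"], ["encoded data"],
             ["encoder fault"], "Comm Segment/Avionics"),
    Function("Transmit RF signal", ["encoded data"], ["RF carrier"],
             ["amplifier failure"], "Comm Segment/RF Subsystem"),
]

# Step 3: identify functional interfaces by matching one sub-function's
# outputs to another's inputs, tracing an end-to-end path.
traces = []
for producer in comm.sub_functions:
    for consumer in comm.sub_functions:
        shared = set(producer.outputs) & set(consumer.inputs)
        if producer is not consumer and shared:
            traces.append(f"{producer.name} -> {consumer.name}")

print(traces)
```

Describing each function by its inputs, outputs, and failure modes, as the text prescribes, is what makes the interface tracing in step 3 a simple set intersection.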


The process is recursive and iterative and continues until all desired levels of the architecture/system have been analyzed, defined, and baselined. There will almost certainly be alternative ways to decompose functions. For example, there may be several ways to communicate with the crew: Radio Frequency (RF), laser, Internet, etc. Therefore, the outcome is highly dependent on the creativity, skills, and experience of the engineers doing the analysis. As the analysis proceeds to lower levels of the architecture and system, and the system is better understood, the systems engineer should keep an open mind and a willingness to go back and change previously established architecture and system requirements. These changes will then have to be decomposed down through the architecture and sub-functions again, with the recursive process continuing until the system is fully defined with all of the requirements understood and known to be viable, verifiable, and internally consistent. Only at that point should the system architecture and requirements be baselined.

4.3.1.2.3  Capture Work Products
The other work products generated during the Logical Decomposition Process should be captured along with key decisions made, supporting decision rationale and assumptions, and lessons learned in performing the activities.

4.3.1.3  Outputs
Typical outputs of the Logical Decomposition Process include the following:

• Logical Decomposition Models: These models define the relationship of the requirements and functions and their behaviors. They include the system architecture models that define the underlying structure and relationship of the elements of the system (e.g., hardware, software, humans-in-the-loop, support personnel, communications, operations, etc.) and the basis for the partitioning of requirements into lower levels to the point that design work can be accomplished.

• Derived Technical Requirements: These are requirements that arise from the definitions of the selected architecture that were not explicitly stated in the baselined requirements that served as an input to this process. Both the baselined and derived requirements are allocated to the system architecture and functions.

• Logical Decomposition Work Products: These are the other products generated by the activities of this process.

4.3.2 Logical Decomposition Guidance
Refer to Section 4.3.2 and Appendix F in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• Product Breakdown Structures and
• Functional Analysis Techniques.

4.4 Design Solution Definition

The Design Solution Definition Process is used to translate the high-level requirements derived from the stakeholder expectations and the outputs of the Logical Decomposition Process into a design solution. This involves transforming the defined logical decomposition models and their associated sets of derived technical requirements into alternative solutions. These alternative solutions are then analyzed through detailed trade studies that result in the selection of a preferred alternative. This preferred alternative is then fully defined into a final design solution that satisfies the technical requirements. This design solution definition is used to generate the end product specifications that are used to produce


the product and to conduct product verification. This process may be further refined depending on whether there are additional subsystems of the end product that need to be defined.

4.4.1  Process Description
FIGURE 4.4-1 provides a typical flow diagram for the Design Solution Definition Process and identifies typical inputs, outputs, and activities to consider in addressing design solution definition.

4.4.1.1 Inputs
There are several fundamental inputs needed to initiate the Design Solution Definition Process:

• Technical Requirements: These are the customer and stakeholder needs that have been translated into a complete set of validated requirements for the system, including all interface requirements.

• Logical Decomposition Models: Requirements are analyzed and decomposed by one or more different methods (e.g., function, time, behavior, data flow, states, modes, system architecture, etc.) in order to gain a more comprehensive understanding of their interaction and behaviors. (See the definition of a model in Appendix B.)
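The traceability these inputs carry forward (baselined parent requirements allocated through a logical decomposition model to yield derived requirements) can be sketched as follows; all requirement IDs, requirement text, and element names are hypothetical:

```python
# Sketch of allocating a baselined parent requirement to elements of a
# logical decomposition model and recording the derived requirements
# that result. All IDs and names are illustrative, not from the handbook.

parents = {
    "SYS-010": "The system shall downlink science data at >= 150 Mbps.",
}

# Logical decomposition model: parent requirement -> allocated elements.
allocation = {
    "SYS-010": ["RF Subsystem", "Avionics"],
}

derived = []
for req_id, elements in allocation.items():
    for elem in elements:
        # Each allocation yields one or more derived requirements that
        # the element-level design team works to (IDs are illustrative).
        derived.append({
            "id": f"{req_id}-{elem.split()[0].upper()}",
            "parent": req_id,
            "allocated_to": elem,
        })

# Every derived requirement must trace back to a baselined parent.
assert all(d["parent"] in parents for d in derived)
print([d["id"] for d in derived])
```

The point of the sketch is the invariant in the final assertion: a derived requirement with no baselined parent indicates a traceability gap that should be resolved before baselining.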

[Figure 4.4-1 flow diagram. Inputs: Baselined Logical Decomposition Models and Baselined Derived Technical Requirements (from the Logical Decomposition and Configuration Management Processes). Activities: define alternative design solutions; analyze each alternative design solution; select best design solution alternative; generate full design description of the selected solution; verify the fully defined design solution; initiate development of next lower level products or of enabling products, as needed; baseline design solution specified requirements and design descriptions; capture work products from design solution definition activities. Outputs: System-Specified Requirements and End Product-Specified Requirements (to the Requirements and Interface Management Processes); Initial Subsystem Specifications and Enabling Product Requirements (to the Stakeholder Expectations Definition, Product Implementation, and Requirements and Interface Management Processes); Product Verification Plan (to the Product Verification Process); Product Validation Plan (to the Product Validation Process); Logistics and Operate-To Procedures (to the Technical Data Management Process).]
FIGURE 4.4‑1  Design Solution Definition Process


4.4.1.2 Process Activities
4.4.1.2.1 Define Alternative Design Solutions
The realization of a system over its life cycle involves a succession of decisions among alternative courses of action. If the alternatives are precisely defined and thoroughly understood to be well differentiated in the cost-effectiveness space, then the systems engineer can make choices among them with confidence. To obtain assessments that are crisp enough to facilitate good decisions, it is often necessary to delve more deeply into the space of possible designs than has yet been done, as illustrated in FIGURE 4.4-2. It should be realized, however, that this illustration represents neither the project life cycle, which encompasses the system development process from inception through disposal, nor the product development process by which the system design is developed and implemented.

[Figure: a spiral of successive refinement. Each turn repeats the cycle: recognize need/opportunity; identify and quantify goals; create concepts; do trade studies; select design; increase resolution; implement decision; perform mission.]

FIGURE 4.4‑2  The Doctrine of Successive Refinement

Each “create concepts” step in FIGURE 4.4-2 involves a recursive and iterative design loop driven by the set of stakeholder expectations where a straw man architecture/design, the associated ConOps, and the derived requirements are developed and programmatic constraints such as cost and schedule are considered. These three products should be consistent with each other and will require iterations and design decisions to achieve this consistency. This recursive and iterative design loop is illustrated in FIGURE 4.0-1.

Each “create concepts” step in FIGURE 4.4-2 also involves an assessment of potential capabilities offered by the continually changing state of technology and potential pitfalls captured through experience-based review of prior program/project lessons learned data. It is imperative that there be a continual interaction between the technology development process, crosscutting processes such as human systems integration, and the design process to ensure that the design reflects the realities of the available technology and that overreliance on immature technology is avoided.

Additionally, the state of any technology that is considered enabling should be properly monitored, and care should be taken when assessing the impact of this technology on the concept performance. This interaction is facilitated through a periodic assessment of the design with respect to the maturity of the technology required to implement the design. (See Section 4.4.2.1 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for a more detailed discussion of technology assessment.) These technology elements usually exist at a lower level in the PBS. Although the process of design concept development by the integration of lower level elements is a part of the systems engineering process, there is always a danger that the top-down process cannot keep up with the bottom-up process. Therefore, system architecture issues need to be resolved early so that the system can be modeled with sufficient realism to do reliable trade studies.

As the system is realized, its particulars become clearer—but also harder to change. See the rising


“Cost to Change Design Direction” in FIGURE 2.5‑1. The purpose of systems engineering is to make sure that the Design Solution Definition Process happens in a way that leads to the most functional, safe, and cost-effective final system while working within any given schedule boundaries. The basic idea is that before those decisions that are hard to undo are made, the alternatives should be carefully and iteratively assessed, particularly with respect both to the maturity of the required technology and to stakeholder expectations for efficient, effective operations.

4.4.1.2.2  Create Alternative Design Concepts
Once it is understood what the system is to accomplish, it is possible to devise a variety of ways that those goals can be met. Sometimes, that comes about as a consequence of considering alternative functional allocations and integrating available subsystem design options, all of which can have technologies at varying degrees of maturity. Ideally, as wide a range of plausible alternatives as is consistent with the design organization’s charter should be defined, keeping in mind the current stage in the process of successive refinement. When the bottom-up process is operating, a problem for the systems engineer is that the designers tend to become fond of the designs they create, so they lose their objectivity; the systems engineer should stay an “outsider” so that there is more objectivity. This is particularly true in the assessment of the technological maturity of the subsystems and components required for implementation. There is a tendency on the part of technology developers and project management to overestimate the maturity and applicability of a technology that is required to implement a design. This is especially true of “heritage” equipment. The result is that critical aspects of systems engineering are often overlooked.

The creation of alternative design solutions involves assessment of potential capabilities offered by the continually changing state of technology. A continual interaction between the technology development process and the design process ensures that the design reflects the realities of the available technology. This interaction is facilitated through periodic assessment of the design with respect to the maturity of the technology required to implement the design.

After identifying the technology gaps existing in a given design concept, it is frequently necessary to undertake technology development in order to ascertain viability. Given that resources will always be limited, it is necessary to pursue only the most promising technologies that are required to enable a given concept.

If requirements are defined without fully understanding the resources required to accomplish needed technology developments, then the program/project is at risk. Technology assessment should be done iteratively until requirements and available resources are aligned within an acceptable risk posture. Technology development plays a far greater role in the life cycle of a program/project than has been traditionally considered, and it is the role of the systems engineer to develop an understanding of the extent of program/project impacts—maximizing benefits and minimizing adverse effects. Traditionally, from a program/project perspective, technology development has been associated with the development and incorporation of any “new” technology necessary to meet requirements. However, a frequently overlooked area is that associated with the modification of “heritage” systems incorporated into different architectures and operating in different environments from the ones for which they were designed. If the required modifications and/or operating environments fall outside the realm of experience, then these too should be considered technology development.

To understand whether or not technology development is required—and to subsequently quantify the

associated cost, schedule, and risk—it is necessary to systematically assess the maturity of each system, subsystem, or component in terms of the architecture and operational environment. It is then necessary to assess what is required in the way of development to advance the maturity to a point where it can successfully be incorporated within cost, schedule, and performance constraints. A process for accomplishing this assessment is described in Appendix G. Because technology development has the potential for such significant impacts on a program/project, technology assessment needs to play a role throughout the design and development process from concept development through Preliminary Design Review (PDR). Lessons learned from a technology development point of view should then be captured in the final phase of the program.

On the first turn of the successive refinement in FIGURE 4.4-2, the subject is often general approaches or strategies, sometimes architectural concepts. On the next, it is likely to be functional design, then detailed design, and so on. The reason for avoiding a premature focus on a single design is to permit discovery of the truly best design. Part of the systems engineer’s job is to ensure that the design concepts to be compared take into account all interface requirements. Characteristic questions include: “Did you include the cabling?” or “Did you consider how the maintainers can repair the system?” When possible, each design concept should be described in terms of controllable design parameters so that each represents as wide a class of designs as is reasonable. In doing so, the systems engineer should keep in mind that the potentials for change may include organizational structure, personnel constraints, schedules, procedures, and any of the other things that make up a system. When possible, constraints should also be described by parameters.

4.4.1.2.3  Analyze Each Alternative Design Solution
The technical team analyzes how well each of the design alternatives meets the system objectives (technology gaps, effectiveness, technical achievability, performance, cost, schedule, and risk, both quantified and otherwise). This assessment is accomplished through the use of trade studies. The purpose of the trade study process is to ensure that the system architecture, intended operations (i.e., the ConOps), and design decisions move toward the best solution that can be achieved with the available resources. The basic steps in that process are:

• Devise some alternative means to meet the functional requirements. In the early phases of the project life cycle, this means focusing on system architectures; in later phases, emphasis is given to system designs.

• Evaluate these alternatives in terms of the MOPs and system life cycle cost. Mathematical models are useful in this step not only for forcing recognition of the relationships among the outcome variables, but also for helping to determine what the MOPs should be quantitatively.

• Rank the alternatives according to appropriate selection criteria.

• Drop less promising alternatives and proceed to the next level of resolution, if needed.

The trade study process should be done openly and inclusively. While quantitative techniques and rules are used, subjectivity also plays a significant role. To make the process work effectively, participants should have open minds, and individuals with different skills—systems engineers, design engineers, crosscutting specialty discipline and domain engineers, program analysts, system end users, decision scientists, maintainers, operators, and project


managers—should cooperate. The right quantitative methods and selection criteria should be used. Trade study assumptions, models, and results should be documented as part of the project archives. The participants should remain focused on the functional requirements, including those for enabling products. For an in-depth discussion of the trade study process, see Section 6.8. The ability to perform these studies is enhanced by the development of system models that relate the design parameters to those assessments, but it does not depend upon them.

The technical team should consider a broad range of concepts when developing the system model. The model should define the roles of crew, operators, maintainers, logistics, hardware, and software in the system. It should identify the critical technologies required to implement the mission and should consider the entire life cycle from fabrication to disposal. Evaluation criteria for selecting concepts should be established. Cost is always a limiting factor. However, other criteria, such as time to develop and certify a unit, risk, and reliability, also are critical. This stage cannot be accomplished without addressing the roles of operators and maintainers. These contribute significantly to life cycle costs and to the system reliability. Reliability analysis should be performed based upon estimates of component failure rates for hardware and an understanding of the consequences of these failures. If probabilistic risk assessment models are applied, it may be necessary to include occurrence rates or probabilities for software faults or human error events. These models should include hazard analyses and controls implemented through fault management. Assessments of the maturity of the required technology should be done and a technology development plan developed.

Controlled modification and development of design concepts, together with such system models, often permits the use of formal optimization techniques to find regions of the design space that warrant further investigation.

Whether system models are used or not, the design concepts are developed, modified, reassessed, and compared against competing alternatives in a closed-loop process that seeks the best choices for further development. System and subsystem sizes are often determined during the trade studies. The end result is the determination of bounds on the relative cost-effectiveness of the design alternatives, measured in terms of the quantified system goals. (Only bounds, rather than final values, are possible because determination of the final details of the design is intentionally deferred.) Increasing detail associated with the continually improving resolution reduces the spread between upper and lower bounds as the process proceeds.

4.4.1.2.4  Select the Best Design Solution Alternative
The technical team selects the best design solution from among the alternative design concepts, taking into account subjective factors that the team was unable to quantify, such as robustness, as well as estimates of how well the alternatives meet the quantitative requirements; the maturity of the available technology; and any effectiveness, cost, schedule, risk, or other constraints.

The Decision Analysis Process, as described in Section 6.8, should be used to make an evaluation of the alternative design concepts and to recommend the “best” design solution.

When it is possible, it is usually well worth the trouble to develop a mathematical expression, called an “objective function,” that expresses the values of combinations of possible outcomes as a single measure of cost-effectiveness, as illustrated in FIGURE 4.4-3, even if both cost and effectiveness should be described by more than one measure.


[Figure 4.4-3: notional plot of some aspect of effectiveness (expressed in quantitative units) versus life cycle cost (expressed in constant dollars), showing design concepts A, B, and C as regions of uncertainty.]

FIGURE 4.4‑3  A Quantitative Objective Function, Dependent on Life Cycle Cost and All Aspects of Effectiveness

Note: The different shaded areas indicate different levels of uncertainty. Dashed lines represent constant values of the objective function (cost-effectiveness). Higher values of cost-effectiveness are achieved by moving toward the upper left. A, B, and C are design concepts with different risk patterns.

The objective function (or “cost function”) assigns a real number to candidate solutions or “feasible solutions” in the alternative space or “search space.” A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an “optimal solution.” When achievement of the goals can be quantitatively expressed by such an objective function, designs can be compared in terms of their value. Risks associated with design concepts can cause these evaluations to be somewhat nebulous because they are uncertain and are best described by probability distributions.

In FIGURE 4.4-3, the risks are relatively high for design concept A. There is little risk in either effectiveness or cost for concept B, while the risk of an expensive failure is high for concept C, as is shown by the cloud of probability near the x axis with a high cost and essentially no effectiveness. Schedule factors may affect the effectiveness and cost values and the risk distributions.

The mission success criteria for systems differ significantly. In some cases, effectiveness goals may be much more important than all others. Other projects may demand low costs, have an immutable schedule, or require minimization of some kinds of risks. Rarely (if ever) is it possible to produce a combined quantitative measure that relates all of the important factors, even if it is expressed as a vector with several components. Even when that can be done, it is essential that the underlying factors and relationships be thoroughly revealed to and understood by the systems engineer. The systems engineer should weigh the importance of the unquantifiable factors along with the quantitative data.

Technical reviews of the data and analyses, including technology maturity assessments, are an important part of the decision support packages prepared for the technical team. The decisions that are made are generally entered into the configuration management system as changes to (or elaborations of) the system baseline. The supporting trade studies are archived for future use. An essential feature of the systems engineering process is that trade studies are performed before decisions are made. They can then be baselined with much more confidence.

4.4.1.2.5  Increase the Resolution of the Design
The successive refinement process of FIGURE 4.4-2 illustrates a continuing refinement of the system design. At each level of decomposition, the baselined derived (and allocated) requirements become the set of high-level requirements for the decomposed elements, and the process begins again. One might ask, “When do we stop refining the design?” The answer is that the design effort proceeds to a depth that is sufficient to meet several needs: the design should penetrate sufficiently to allow analytical validation


of the design to the requirements and ConOps; it should also have sufficient depth to support cost and operations modeling and to convince a review team of a feasible design with performance, cost, and risk margins.

The systems engineering engine is applied again and again as the system is realized. As the system is realized, the issues addressed evolve and the particulars of the activity change. Most of the major system decisions (goals, architecture, acceptable life cycle cost, etc.) are made during the early phases of the project, so the successive refinements do not correspond precisely to the phases of the system life cycle. Much of the system architecture can be seen even at the outset, so the successive refinements do not correspond exactly to development of the architectural hierarchy either. Rather, they correspond to the successively greater resolution by which the system is defined.

It is reasonable to expect the system to be defined with better resolution as time passes. This tendency is formalized at some point (in Phase B) by defining a baseline system definition. Usually, the goals, objectives, and constraints are baselined as the requirements portion of the baseline. The entire baseline is then placed under configuration control in an

4.4.1.2.6  Fully Describe the Design Solution
Once the preferred design alternative has been selected and the proper level of refinement has been completed, then the design is fully defined into a final design solution that will satisfy the technical requirements and ConOps. The design solution definition will be used to generate the end product specifications that will be used to produce the product and to conduct product verification. This process may be further refined depending on whether there are additional subsystems of the end product that need to be defined.

The scope and content of the full design description should be appropriate for the product life cycle phase, the phase success criteria, and the product position in the PBS (system structure). Depending on these factors, the form of the design solution definition could be simply a simulation model or a paper study report. The technical data package evolves from phase to phase, starting with conceptual sketches or models and ending with complete drawings, parts lists, and other details needed for product implementation or product integration. Typical output definitions from the Design Solution Definition Process are shown in FIGURE 4.4-1 and are described in Section 4.4.1.3.

4.4.1.2.7 Verify the Design Solution
Once an acceptable design solution has been selected from among the various alternative designs and doc-
attempt to ensure that any subsequent changes are umented in a technical data package, the design
indeed justified and affordable. solution should next be verified against the system
requirements and constraints. A method to achieve
At this point in the systems engineering process, there this verification is by means of a peer review to
is a logical branch point. For those issues for which evaluate the resulting design solution definition.
the process of successive refinement has proceeded far Guidelines for conducting a peer review are discussed
enough, the next step is to implement the decisions at in Section 6.7.2.4.5.
that level of resolution. For those issues that are still
insufficiently resolved, the next step is to refine the In addition, peer reviews play a significant role as a
development further. detailed technical component of higher level tech-
nical and programmatic reviews. For example, the
4.4.1.2.6  Fully Describe the Design Solution peer review of a component battery design can go
Once the preferred design alternative has been into much more technical detail on the battery than
selected and the proper level of refinement has been the integrated power subsystem review. Peer reviews

NASA SYSTEMS ENGINEERING HANDBOOK 72
4.0  System Design Processes

can cover the components of a subsystem down to products should be consistent with each other and
the level appropriate for verifying the design against will require iterations and design decisions to achieve
the requirements. Concerns raised at the peer this consistency. Once consistency is achieved, func-
review might have implications on the power sub- tional analyses allow the study team to validate the
system design and verification and therefore should design against the stakeholder expectations. A sim-
be reported at the next higher level review of the plified validation asks the questions: Does the system
power subsystem. work as expected? How does the system respond to
failures, faults, and anomalies? Is the system afford-
The verification should show that the design solution able? If the answer to any of these questions is no,
definition: then changes to the design or stakeholder expecta-
tions will be required, and the process is started over
• Is realizable within the constraints imposed on again. This process continues until the system—
the technical effort; architecture, ConOps, and requirements—meets the
stakeholder expectations.
• Has specified requirements that are stated in
acceptable statements and have bidirectional This design solution validation is in contrast to
traceability with the technical requirements and the validation of the end product described in the
stakeholder expectations; and end-product validation plan, which is part of the
technical data package. That validation occurs in a
• Has decisions and assumptions made in forming later life cycle phase and is a result of the Product
the solution consistent with its set of technical Validation Process (see Section 5.4) applied to the
requirements and identified system product and realization of the design solution as an end product.
service constraints.
4.4.1.2.9  Identify Enabling Products
This design solution verification is in contrast Enabling products are the life cycle support prod-
to the verification of the end product described ucts and services (e.g., production, test, deployment,
in the end product verification plan which is training, maintenance, and disposal) that facili-
part of the technical data package. That verifi- tate the progression and use of the operational end
cation occurs in a later life cycle phase and is a product through its life cycle. Since the end prod-
result of the Product Verification Process (see uct and its enabling products are interdependent,
Section 5.3) applied to the realization of the design they are viewed as a system. Project responsibility
solution as an end product. thus extends to responsibility for acquiring services
from the relevant enabling products in each life cycle
4.4.1.2.8  Validate the Design Solution phase. When a suitable enabling product does not
The validation of the design solution is a recursive already exist, the project that is responsible for the
and iterative process as shown in FIGURE 4.0-1. end product can also be responsible for creating and
Each alternative design concept is validated against using the enabling product.
the set of stakeholder expectations. The stakeholder
expectations drive the iterative design loop in which Therefore, an important activity in the Design
a straw man architecture/design, the ConOps, and Solution Definition Process is the identification of the
the derived requirements are developed. These three enabling products and personnel that will be required

NASA SYSTEMS ENGINEERING HANDBOOK 73
4.0  System Design Processes

during the life cycle of the selected design solution alternative design concepts. This is a critical point
and then initiating the acquisition or development of in the design process. It puts a stake in the ground
those enabling products and personnel. Need dates and gets everyone on the design team focused on
for the enabling products should be realistically the same concept. When dealing with complex sys-
identified on the project schedules, incorporating tems, it is difficult for team members to design their
appropriate schedule slack. Then firm commitments portion of the system if the system design is a mov-
in the form of contracts, agreements, and/or opera- ing target. The baselined design is documented and
tional plans should be put in place to ensure that the placed under configuration control. This includes the
enabling products will be available when needed to system requirements, specifications, and configura-
support the product life cycle phase activities. The tion descriptions.
enabling product requirements are documented as
part of the technical data package for the Design While baselining a design is beneficial to the design
Solution Definition Process. process, there is a danger if it is exercised too early
in the Design Solution Definition Process. The early
An environmental test chamber is an example of an exploration of alternative designs should be free and
enabling product whose use would be acquired at open to a wide range of ideas, concepts, and imple-
an appropriate time during the test phase of a space mentations. Baselining too early takes the inventive
flight system. nature out of the concept exploration. Therefore,
baselining should be one of the last steps in the
Special test fixtures or special mechanical handling Design Solution Definition Process.
devices are examples of enabling products that would
have to be created by the project. Because of long 4.4.1.3  Outputs
development times as well as oversubscribed facil- Outputs of the Design Solution Definition Process
ities, it is important to identify enabling products are the specifications and plans that are passed on to
and secure the commitments for them as early in the the product realization processes. They contain the
design phase as possible. design-to, build-to, train-to, and code-to documen-
tation that complies with the approved baseline for
4.4.1.2.10  Baseline the Design Solution the system.
As shown earlier in FIGURE 4.0-1, once the selected
system design solution meets the stakeholder expec- As mentioned earlier, the scope and content of the
tations, the study team baselines the products and full design description should be appropriate for the
prepares for the next life cycle phase. Because of the product life cycle phase, the phase success criteria,
recursive nature of successive refinement, intermedi- and the product position in the PBS.
ate levels of decomposition are often validated and
baselined as part of the process. In the next level of Outputs of the Design Solution Definition Process
decomposition, the baselined requirements become include the following:
the set of high-level requirements for the decomposed
elements, and the process begins again. • The System Specification: The system spec-
ification contains the functional baseline for
Baselining a particular design solution enables the the system that is the result of the Design
technical team to focus on one design out of all the Solution Definition Process. The system design

NASA SYSTEMS ENGINEERING HANDBOOK 74
4.0  System Design Processes

specification provides sufficient guidance, con- viewed as part of the system since the end product
straints, and system requirements for the design and its enabling products are interdependent.
engineers to begin developing the design.
• Product Verification Plan: The end-product ver-
• The System External Interface Specifications: ification plan (generated through the Technical
The system external interface specifications Planning Process) provides the content and
describe the functional baseline for the behavior depth of detail necessary to provide full visibil-
and characteristics of all physical interfaces that the ity of all verification activities for the end prod-
system has with the external world. These include uct. Depending on the scope of the end product,
all structural, thermal, electrical, and signal inter- the plan encompasses qualification, acceptance,
faces, as well as the human-system interfaces. prelaunch, operational, and disposal verification
activities for flight hardware and software.
• The End-Product Specifications: The end-prod-
uct specifications contain the detailed build-to • Product Validation Plan: The end-product val-
and code-to requirements for the end product. idation plan (generated through the Technical
They are detailed, exact statements of design par- Planning Process) provides the content and depth
ticulars, such as statements prescribing materials, of detail necessary to provide full visibility of
dimensions, and quality of work to build, install, all activities to validate the end product against
or manufacture the end product. the baselined stakeholder expectations. The plan
identifies the type of validation, the validation
• The End-Product Interface Specifications: The procedures, and the validation environment that
end-product interface specifications contain the are appropriate to confirm that the realized end
detailed build-to and code-to requirements for product conforms to stakeholder expectations.
the behavior and characteristics of all logical and
physical interfaces that the end product has with • Logistics and Operate-to Procedures: The appli-
external elements, including the human-system cable logistics and operate-to procedures for the
interfaces. system describe such things as handling, trans-
portation, maintenance, long-term storage, and
• Initial Subsystem Specifications: The end-prod- operational considerations for the particular
uct subsystem initial specifications provide detailed design solution.
information on subsystems if they are required.
Other outputs may include:
• Enabling Product Requirements: The require-
ments for associated supporting enabling products • Human Systems Integration Plan: The system
provide details of all enabling products. Enabling HSI Plan should be updated to indicate the
products are the life cycle support products, infra- numbers, skills, and development (i.e., training)
structures, personnel, logistics, and services that required for humans throughout the full life cycle
facilitate the progression and use of the opera- deployment and operations of the system.
tional end product through its life cycle. They are

NASA SYSTEMS ENGINEERING HANDBOOK 75
4.0  System Design Processes
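The bidirectional traceability criterion in Section 4.4.1.2.7 can be illustrated with a small, notional check. This sketch is not part of the handbook's process description; the requirement and expectation identifiers and the data structures are hypothetical:

```python
# Illustrative only: a minimal bidirectional-traceability check between
# stakeholder expectations and derived requirements. Identifiers and
# data structures are hypothetical, not from the handbook.

def check_bidirectional_traceability(req_to_expect, expect_to_req):
    """Return a list of traceability problems (empty list = clean)."""
    problems = []
    # Forward: every requirement must trace up to >=1 stakeholder expectation,
    # and each parent expectation must link back to the requirement.
    for req, parents in req_to_expect.items():
        if not parents:
            problems.append(f"{req}: no parent stakeholder expectation")
        for exp in parents:
            if req not in expect_to_req.get(exp, set()):
                problems.append(f"{req} -> {exp}: missing back-link")
    # Backward: every stakeholder expectation must be covered by >=1 requirement.
    for exp, children in expect_to_req.items():
        if not children:
            problems.append(f"{exp}: not allocated to any requirement")
    return problems

# Hypothetical example data: SYS-002 and SE-02 are deliberately untraced.
req_to_expect = {"SYS-001": {"SE-01"}, "SYS-002": set()}
expect_to_req = {"SE-01": {"SYS-001"}, "SE-02": set()}

for problem in check_bidirectional_traceability(req_to_expect, expect_to_req):
    print(problem)
```

In a real project these mappings would come from the requirements management tool rather than hand-built dictionaries; the point is only that traceability must hold in both directions.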

4.4.2 Design Solution Definition
Guidance
Refer to Section 4.4.2 in the NASA Expanded
Guidance for Systems Engineering at https://nen.nasa.
gov/web/se/doc-repository for additional guidance on:

• technology assessment,
• human capability assessment, and
• integrating engineering specialties into the SE process.

5.0 Product Realization
This chapter describes the activities in the product realization processes listed in FIGURE 2.1-1. The chapter is separated into sections corresponding to steps 5 through 9 listed in FIGURE 2.1-1. The processes within each step are discussed in terms of the inputs, the activities, and the outputs. Additional guidance is provided using examples that are relevant to NASA projects.

In the product realization side of the SE engine, five interdependent processes result in systems that meet the design specifications and stakeholder expectations. These products are produced, acquired, reused, or coded; integrated into higher level assemblies; verified against design specifications; validated against stakeholder expectations; and transitioned to the next level of the system. As has been mentioned in previous sections, products can be models and simulations, paper studies or proposals, or hardware and software. The type and level of product depends on the phase of the life cycle and the product’s specific objectives. But whatever the product, all should effectively use the processes to ensure the system meets the intended operational concept.

This effort starts with the technical team taking the output from the system design processes and using the appropriate crosscutting functions, such as data and configuration management, and technical assessments to make, buy, or reuse subsystems. Once these subsystems are realized, they should be integrated to the appropriate level as designated by the appropriate interface requirements. These products are then verified through the Technical Assessment Process to ensure that they are consistent with the technical data package and that “the product was built right.” Once consistency is achieved, the technical team validates the products against the stakeholder expectations to ensure that “the right product was built.” Upon successful completion of validation, the products are transitioned to the next level of the system. FIGURE 5.0-1 illustrates these processes.

This is an iterative and recursive process. Early in the life cycle, paper products, models, and simulations are run through the five realization processes. As the system matures and progresses through the life cycle, hardware and software products are run through these processes. It is important to detect as many errors and failures as possible at the lowest level of integration and early in the life cycle so that changes can be made through the design processes with minimum impact to the project.


FIGURE 5.0-1  Product Realization. The figure depicts the flow from the design processes through the product realization and evaluation processes to transition: Product Implementation (acquire; make/code; reuse), Product Integration (assembly; functional evaluation), Product Verification (functional, environmental, and operational testing in the integration and test environment), Product Validation (operational testing in the mission environment), and Product Transition (delivery to the next higher level in the PBS; delivery to the operational system).

PRODUCT REALIZATION KEYS

• Define and execute production activities.
• Generate and manage requirements for off-the-shelf hardware/software products as for all other products.
• Understand the differences between verification testing and validation testing.
• Consider all customer, stakeholder, technical, programmatic, and safety requirements when evaluating
the input necessary to achieve a successful product transition.
• Analyze for any potential incompatibilities with interfaces as early as possible.
• Completely understand and analyze all test data for trends and anomalies.
• Understand the limitations of the testing and any assumptions that are made.
• Ensure that a reused product meets the verification and validation required for the relevant system in
which it is to be used, as opposed to relying on the original verification and validation it met for the
system of its original use. Then ensure that it meets the same verification and validation as a purchased
product or a built product. The “pedigree” of a reused product in its original application should not be
relied upon in a different system, subsystem, or application.
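The five product realization processes above can be sketched, purely as a notional illustration, as a pipeline applied to one product. The `Product` class, the function names, and the example product are hypothetical stand-ins, not a NASA-defined interface:

```python
# Illustrative only: the five product realization processes applied in order.
# Process names follow the handbook; everything else here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    history: list = field(default_factory=list)

def implement(p): p.history.append("implemented (bought, made/coded, or reused)")
def integrate(p): p.history.append("integrated to the next assembly level")
def verify(p): p.history.append("verified: 'the product was built right'")
def validate(p): p.history.append("validated: 'the right product was built'")
def transition(p): p.history.append("transitioned to the next level of the system")

def realize(p):
    # The five interdependent processes run in sequence here; in practice
    # the flow is iterative and recursive across the product hierarchy.
    for step in (implement, integrate, verify, validate, transition):
        step(p)
    return p

rover_camera = realize(Product("camera subsystem"))
print(*rover_camera.history, sep="\n")
```

The sequential loop is a simplification: as the chapter notes, early life cycle passes run paper products and models through these same steps before any hardware or software exists.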

The next sections describe each of the five product realization processes and their associated products for a given NASA mission.

5.1 Product Implementation

FIGURE 5.1-1  Product Implementation Process. The flow diagram shows the typical inputs (required raw materials, from existing resources or external sources; end product design specifications and configuration documentation, from the Configuration Management Process; and the product implementation–enabling products), the activities (prepare to conduct implementation; if implemented by buying, participate in the purchase of the specified end product; if implemented by making, evaluate the readiness of the product implementation–enabling products and make the specified end product; if implemented by reuse, participate in acquiring the reuse end product; prepare appropriate product support documentation; and capture work products from product implementation activities), and the outputs (the desired end product, to the Product Verification Process; end product documents and manuals, to the Technical Data Management Process; and product implementation work products, to the Product Transition Process).

Product implementation is the first process encountered in the SE engine that begins the movement from the bottom of the product hierarchy up towards the Product Transition Process. This is where the plans, designs, analysis, requirements development, and drawings are realized into actual products. Product implementation is used to generate a specified product of a project or activity through buying, making/coding, or reusing previously developed
hardware, software, models, or studies to generate a product appropriate for the phase of the life cycle. The product should satisfy the design solution and its specified requirements.

The Product Implementation Process is the key activity that moves the project from plans and designs into realized products. Depending on the project and life cycle phase within the project, the product may be hardware, software, a model, simulations, mock-ups, study reports, or other tangible results. These products may be realized through their purchase from commercial or other vendors, through partial or complete reuse of products from other projects or activities, or they may be generated from scratch. The decision as to which of these realization strategies or combination of strategies will be used for the products of this project will have been made early in the life cycle using the Decision Analysis Process.

5.1.1 Process Description
FIGURE 5.1-1 provides a typical flow diagram for the Product Implementation Process and identifies typical inputs, outputs, and activities to consider in addressing product implementation.

5.1.1.1 Inputs
Inputs to the Product Implementation Process depend primarily on the decision about whether the end product will be purchased, developed from scratch, or formed by reusing part or all of products from other projects. Typical inputs are shown in FIGURE 5.1-1.

• Inputs If Purchasing the End Product: If the decision was made to purchase part or all of the products for this project, the end product design specifications are obtained from the configuration management system as well as other applicable documents.

• Inputs If Making/Coding the End Product: For end products that will be made/coded by the technical team, the inputs will be the configuration-controlled design specifications, manufacturing plans, manufacturing processes, manufacturing procedures, and raw materials as provided to or purchased by the project.

• Inputs Needed If Reusing an End Product: For end products that will reuse part or all of products generated by other projects, the inputs may be the documentation associated with the product as well as the product itself. Care should be taken to ensure that these products will indeed meet the specifications and environments for this project. These would have been factors involved in the Decision Analysis Process to determine the make/buy/reuse decision.

• Enabling Products: These would be any enabling products necessary to make, code, purchase, or reuse the product (e.g., drilling fixtures, production facilities, production lines, software development facilities, software test facilities, system integration and test facilities).

5.1.1.2  Process Activities
Implementing the product can take one of three forms:

1. Purchase/buy
2. Make/code
3. Reuse

These three forms will be discussed in the following subsections. FIGURE 5.1-1 shows what kind of inputs, outputs, and activities are performed during product implementation regardless of where in the product hierarchy or life cycle it is. These activities include preparing to conduct the implementation, purchasing/making/reusing the product, and capturing the product implementation work product. In some cases, implementing a product may have aspects of more than one of these forms (such as a build-to-print). In those cases, the appropriate aspects of the applicable forms are used.

5.1.1.2.1  Prepare to Conduct Implementation
Preparing to conduct the product implementation is a key first step regardless of what form of implementation has been selected. For complex projects, implementation strategy and detailed planning or procedures need to be developed and documented. For less complex projects, the implementation strategy and planning need to be discussed, approved, and documented as appropriate for the complexity of the project.

The documentation, specifications, and other inputs also need to be reviewed to ensure they are ready and at an appropriate level of detail to adequately complete the type of implementation form being employed and for the product life cycle phase. For example, if the “make” implementation form is being employed, the design specifications need to be reviewed to ensure they are at a design-to level that allows the product to be developed. If the product is to be bought as a pure Commercial Off-the-Shelf (COTS) item, the specifications need to be checked to make sure they adequately describe the vendor characteristics to narrow to a single make/model of their product line.

Finally, the availability and skills of personnel needed to conduct the implementation as well as the availability of any necessary raw materials, enabling products, or special services should also be reviewed. Any special training necessary for the personnel to perform their tasks needs to be performed by this time. This is a key part of the Acceptance Data Package.
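The three implementation forms, chosen through the Decision Analysis Process, amount to a dispatch on the selected strategy. The following sketch is a notional illustration only; the function names and messages are hypothetical placeholders, not part of the NASA process:

```python
# Illustrative only: dispatching on the implementation form selected in the
# Decision Analysis Process. All names and messages are hypothetical.

def purchase_product(spec):
    return f"purchased {spec} from a vendor against design/purchase specifications"

def make_product(spec):
    return f"made/coded {spec} per configuration-controlled specifications"

def reuse_product(spec):
    return f"acquired {spec} for reuse after checking specifications and environments"

IMPLEMENTATION_FORMS = {
    "buy": purchase_product,
    "make": make_product,
    "reuse": reuse_product,
}

def implement_end_product(form, spec):
    try:
        handler = IMPLEMENTATION_FORMS[form]
    except KeyError:
        raise ValueError(f"unknown implementation form: {form!r}") from None
    return handler(spec)

print(implement_end_product("make", "star tracker bracket"))
```

As the text notes, a real product may mix forms (such as a build-to-print), in which case the applicable aspects of more than one handler would apply.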

5.1.1.2.2 Purchase, Make, or Reuse the Product The NASA technical team should also ensure that
Purchase the Product any enabling products necessary to provide test,
In the first case, the end product is to be purchased operations, maintenance, and disposal support for
from a commercial or other vendor. Design/purchase the product are also ready or provided as defined in
specifications will have been generated during require- the contract.
ments development and provided as inputs. The tech-
nical team needs to review these specifications and Depending on the strategy and roles/responsibilities
ensure they are in a form adequate for the contract of the vendor, a determination/analysis of the ven-
or purchase order. This may include the generation dor’s verification and validation compliance may
of contracts, Statements of Work (SOWs), requests need to be reviewed. This may be done informally
for proposals, purchase orders, or other purchasing or formally as appropriate for the complexity of the
mechanisms. For major end products purchased from product. For products that were verified and validated
a vendor, the responsibilities of the Government and by the vendor, after ensuring that all work products
contractor team should be documented in the SEMP from this phase have been captured, the product may
and Integration Plan. This will define, for example, be ready to enter the Product Transition Process to
whether NASA expects the vendor to provide a fully be delivered to the next higher level or to its final end
verified and validated product or whether the NASA user. For products that the technical team will verify
technical team will be performing those duties. The and validate, the product will be ready for verification
team needs to work with the acquisition team to after ensuring that all work products for this phase
ensure the accuracy of the contract SOW or purchase have been captured.
order and to ensure that adequate documentation,
certificates of compliance, or other specific needs are Make/Code the Product
requested from the vendor. If the strategy is to make or code the product, the
technical team should first ensure that the enabling
For contracted purchases, as proposals come back products are ready. This may include ensuring all
from the vendors, the technical team should work piece parts are available, drawings are complete and
with the contracting officer and participate in the adequate, software design is complete and reviewed,
review of the technical information and in the selec- machines to cut the material are available, interface
tion of the vendor that best meets the design require- specifications are approved, operators are trained and
ments for acceptable cost and schedule. available, manufacturing and/or coding procedures/
processes are ready, software personnel are trained
As the purchased products arrive, the technical team and available to generate code, test fixtures are devel-
should assist in the inspection of the delivered prod- oped and ready to hold products while being gener-
uct and its accompanying documentation. The team ated, and software test cases are available and ready to
should ensure that the requested product was indeed begin model generation.
the one delivered, and that all necessary documenta-
tion, such as source code, operator manuals, certifi- The product is then made or coded in accordance
cates of compliance, safety information, or drawings with the specified requirements, configuration docu-
have been received. mentation, and applicable standards. Software devel-
opment must be consistent with NPR 7150.2, NASA
Software Engineering Requirements. Throughout

NASA SYSTEMS ENGINEERING HANDBOOK 81
5.0  Product Realization

this process, the technical team should work with the used in the decision strategy to make/buy/reuse. If
quality organization to review, inspect, and discuss the new environment is more extreme, requalifica-
progress and status within the team and with higher tion is needed for the component or system. Design
levels of management as appropriate. Progress should factors of safety, margins, and other required design
be documented within the technical schedules. Peer and construction standards should also be assessed.
reviews, audits, unit testing, code inspections, simu- If the program/project requires higher factor of safety
lation checkout, and other techniques may be used or margins, the component may not be usable or a
to ensure the made or coded product is ready for the waiver may have to be approved.
verification process. Some production and coding
can also be separately contracted. This is sometimes The documentation available (e.g., as-built documen-
pursued as a cost control feature providing motiva- tation, user’s guides, operations manuals, discrep-
tion for the design contractor to keep the operations ancy reports, waivers and deviations) from the reuse
costs low and not roll costs into the operations phase product should be reviewed by the technical team so
of a long-term contract. This is also valuable when that they can become completely familiar with the
the design contractor is not well suited for long-term continuing production operations. Small projects and activities often use small manufacturing shops to fabricate the system or major portions of it, and small software companies to code their software. In these cases, the production and software engineers may specify some portion of the hardware production or software coding and request the remaining portions, including as-built documentation, from the manufacturing or software provider. The specified portions are contained as part of the contract statement of work in these cases. The level of process control and information provided to or from the vendor is dependent on the criticality of the systems obtained. As production proceeds and components are produced, there is a need to establish a method (Material Review Boards (MRBs) are typically used for large projects) to review any nonconformance to specifications and disposition whether the components can be accepted, reworked, or scrapped and remade.

Reuse
If the strategy is to reuse a product that already exists, extreme care should be taken to ensure that the product is truly applicable to this project and for the intended uses and the environment in which it will be used. This should have been a major factor in the make/buy/reuse decision. Examine the product and ensure it will meet the requirements in the intended environment. Any supporting manuals, drawings, or other documentation available should also be gathered.

The availability of any supporting or enabling products or infrastructure needed to complete the fabrication, coding, testing, analysis, verification, validation, or shipping of the product needs to be determined. Supporting products may be found in product manufacturing plans, processes, and procedures. If any of these products or services are lacking, they will need to be developed or arranged for before progressing to the next phase.

Special arrangements may need to be made or forms such as nondisclosure agreements may need to be acquired before the reuse product can be received.

A reused product often needs to undergo the same verification and validation as a purchased product or a built product. Relying on prior verification and validation should only be considered if the product's verification and validation documentation meets or exceeds the verification, validation, and documentation requirements of the current project and the documentation demonstrates that the product was

NASA SYSTEMS ENGINEERING HANDBOOK 82
5.0  Product Realization

verified and validated against equivalent requirements (including environments) and expectations. The savings gained from reuse is not necessarily from reduced acceptance-level testing of the flight products, but possibly elimination of the need to fully requalify the item (if all elements are the same, including the environment and operation), elimination of the need to specify all of the internal requirements such as printed circuit board specifications or material requirements, reduced internal data products, or the confidence that the item will pass acceptance test and will not require rework.

5.1.1.2.3  Capture Work Products
Regardless of what implementation form was selected, all work products from the make/buy/reuse process should be captured, including as-built design drawings, design documentation, design models, code listings, model descriptions, procedures used, operator manuals, maintenance manuals, or other documentation as appropriate.

5.1.1.3  Outputs
• End Product for Verification: Unless the vendor performs verification, the made/coded, purchased, or reused end product in a form appropriate for the life cycle phase is provided for the verification process. The form of the end product is a function of the life cycle phase and the placement within the system structure (the form of the end product could be hardware, software, model, prototype, first article for test, or single operational article or multiple production articles).

• End Product Documents and Manuals: Appropriate documentation is also delivered with the end product to the verification process and to the technical data management process. Documentation may include applicable as-built design drawings; close out photos; operation, user, maintenance, or training manuals; applicable baseline documents (configuration information such as as-built specifications or stakeholder expectations); certificates of compliance; or other vendor documentation.

• Product Implementation Work Products: Any additional work products providing reports, records, lessons learned, assumptions, updated CM products, and other outcomes of these activities.

The process is complete when the following activities have been accomplished:

• End products are fabricated, purchased, or reuse modules are acquired.

• End products are reviewed, checked, and ready for verification.

• Procedures, decisions, assumptions, anomalies, corrective actions, lessons learned, etc., resulting from the make/buy/reuse are recorded.

5.1.2  Product Implementation Guidance
Refer to Section 5.1.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• buying off-the-shelf products and
• the need to consider the heritage of products.

5.2 Product Integration
Product integration is a key activity of the systems engineer. Product integration is the engineering of the subsystem interactions and their interactions with the system environments (both natural and induced). Also in this process, lower-level products are assembled into higher-level products and checked


to make sure that the integrated product functions properly and that there are no adverse emergent behaviors. This integration begins during concept definition and continues throughout the system life cycle. Integration involves several activities focused on the interactions of the subsystems and environments. These include system analysis to define and understand the interactions, development testing including qualification testing, and integration with external systems (e.g., launch operations centers, space vehicles, mission operations centers, flight control centers, and aircraft) and objects (i.e., planetary bodies or structures). To accomplish this integration, the systems engineer is active in integrating the different discipline and design teams to ensure system and environmental interactions are being properly balanced by the differing design teams. The result of a well-integrated and balanced system is an elegant design and operation.

Integration begins with concept development, ensuring that the system concept has all necessary functions and major elements and that the induced and natural environment domains in which the system is expected to operate are all identified. Integration continues during requirements development, ensuring that all system and environmental requirements are compatible and that the system has a proper balance of functional utility to produce a robust and efficient system. Interfaces are defined in this phase and are the pathway of system interactions. Interfaces include mechanical (i.e., structure, loads), fluids, thermal, electrical, data, logical (i.e., algorithms and software), and human. These interfaces may include support for assembly, maintenance, and testing functions in addition to the system main performance functions. The interactions that occur through all of these interfaces can be subtle and complex, leading to both intended and unintended consequences. All of these interactions need to be engineered to produce an elegant and balanced system.

Integration during the design phase continues the engineering of these interactions and requires constant analysis and management of the subsystem functions and the subsystem interactions between themselves and with their environments. Analysis of the system interactions and managing the balance of the system is the central function of the systems engineer during the design process. The systems engineer needs to create and maintain a balance between the subsystems, optimizing the system performance over any one subsystem to achieve an elegant and efficient design. The design phase often involves development testing at the component, assembly, or system level. This is a key source of data on system interactions, and the developmental test program should be structured to include subsystem interactions, human-in-the-loop evaluations, and environmental interaction test data as appropriate.

Integration continues during the operations phase, bringing together the system hardware, software, and human operators to perform the mission. The interactions between these three integrated natures of the system need to be managed throughout development and into operations for mission success. The systems engineer, program manager, and the operations team (including the flight crew for crewed missions) need to work together to perform this management. The systems engineer is not only cognizant of these operations team interactions, but is also involved in the design responses and updates to changes in mission parameters and unintended consequences (through fault management).

Finally, integration or de-integration occurs during system closeout (i.e., decommissioning and disposal). The system capabilities to support de-integration and/or disposal need to be engineered into the system from the concept definition phase. The closeout phase involves the safe disposal of flight assets consistent with U.S. policy and law and international


treaties. This disposal can involve the safe reentry and recovery or impact in the ocean, impact on the moon, or solar trajectory. This can also involve the disassembly or repurposing of terrestrial equipment used in manufacturing, assembly, launch, and flight operations. Dispositioning of recovered flight assets also occurs during this phase. Capture of system data and archiving for use in future analysis also occurs. In all of these activities, the systems engineer is involved in ensuring a smooth and logical disassembly of the system and associated program assets.

The Product Integration Process applies not only to hardware and software systems but also to service-oriented solutions, requirements, specifications, plans, and concepts. The ultimate purpose of product integration is to ensure that the system elements function as a whole.

Product integration involves many activities that need to be planned early in the program or project in order to accomplish the integration effectively and in a timely manner. Some integration activities (such as system tests) can require many years of work and costs that need to be identified and approved through the budget cycles. An integration plan should be developed and documented to capture this planning. Small projects and activities may be able to include this as part of their SEMP. Some activities may have their integration plans captured under the integration plan of the sponsoring flight program or R&T program. Larger programs and projects need to have a separate integration plan to clearly lay out the complex analysis and tests that need to occur. An example outline for a separate integration plan is provided in Appendix H.

During project closeout, a separate closeout plan should be produced describing the decommissioning and disposal of program assets. (For example, see National Space Transportation System (NSTS) 60576, Space Shuttle Program, Transition Management Plan). For smaller projects and activities, particularly with short life cycles (i.e., short mission durations), the closeout plans may be contained in the SEMP.

5.2.1 Process Description
FIGURE 5.2-1 provides a typical flow diagram for the Product Integration Process and identifies typical inputs, outputs, and activities to consider in addressing product integration. The activities of the Product Integration Process are truncated to indicate the action and object of the action.

5.2.1.1  Inputs
• Lower-level products to be integrated: These are the products developed in the previous lower-level tier in the product hierarchy. These products will be integrated/assembled to generate the product for this product layer.

• End product design specifications and configuration documentation: These are the specifications, Interface Control Documents (ICDs), drawings, integration plan, procedures, or other documentation or models needed to perform the integration, including documentation for each of the lower-level products to be integrated.

• Product integration-enabling products: These would include any enabling products, such as holding fixtures, necessary to successfully integrate the lower-level products to create the end product for this product layer.

5.2.1.2  Process Activities
This subsection addresses the approach to the implementation of the Product Integration Process, including the activities required to support the process. The basic tasks that need to be established involve the management of internal and external interactions of the various levels of products and operator tasks to support product integration and are as follows:


[FIGURE 5.2-1 is a flow diagram. Inputs: Lower-Level Products to Be Integrated (from the Product Transition Process); End Product Design Specifications and Configuration Documentation (from the Configuration Management Process); Product Integration-Enabling Products (from existing resources or the Product Transition Process). Activities: prepare to conduct product integration; obtain lower-level products for assembly and integration; confirm that received products have been validated; prepare the integration environment for assembly and integration; assemble and integrate the received products into the desired end product; prepare appropriate product support documentation; capture work products from product integration activities. Outputs: Integrated Product (to the Product Verification Process); Product Documents and Manuals, and Product Integration Work Products (to the Technical Data Management Process).]

FIGURE 5.2-1  Product Integration Process

5.2.1.2.1  Prepare to Conduct Product Integration
Prepare to conduct product integration by (1) reviewing the product integration strategy/plan (see Section 6.1.2.4.4), generating detailed planning for the integration, and developing integration sequences and procedures; and (2) determining whether the product configuration documentation is adequate to conduct the type of product integration applicable for the product life cycle phase, location of the product in the system structure, and management phase success criteria.

An integration strategy is developed and documented in an integration plan. This plan, as well as supporting documentation, identifies the optimal sequence of receipt, assembly, and activation of the various components that make up the system. This strategy should use technical, cost, and schedule factors to ensure an assembly, activation, and loading sequence that minimizes cost and assembly difficulties. The larger or more complex the system or the more delicate the element, the more critical the proper sequence becomes, as small changes can cause large impacts on project results.

The optimal sequence of assembly is built from the bottom up as components become sub-elements, elements, and subsystems, each of which should be checked prior to fitting it into the next higher assembly. The sequence will encompass any effort needed


to establish and equip the assembly facilities; e.g., raised floor, hoists, jigs, test equipment, input/output, and power connections. Once established, the sequence should be periodically reviewed to ensure that variations in production and delivery schedules have not had an adverse impact on the sequence or compromised the factors on which earlier decisions were made.

5.2.1.2.2  Obtain Lower-Level Products for Assembly and Integration
Each of the lower-level products that is needed for assembly and integration is obtained from the transitioning lower-level product owners or a storage facility as appropriate. Received products should be inspected to ensure no damages occurred during the transitioning process.

5.2.1.2.3  Confirm That Received Products Have Been Validated
Confirm that the received products that are to be assembled and integrated have been validated to demonstrate that the individual products satisfy the agreed-to set of stakeholder expectations, including interface requirements. This validation can be conducted by the receiving organization or by the providing organization if fully documented or witnessed by the receiving representative.

5.2.1.2.4  Prepare the Integration Environment for Assembly and Integration
Prepare the integration environment in which assembly and integration will take place, including evaluating the readiness of the product integration-enabling products and the assigned workforce. These enabling products may include facilities, equipment jigs, tooling, and assembly/production lines. The integration environment includes test equipment, simulators, models, storage areas, and recording devices.

5.2.1.2.5  Assemble and Integrate the Received Products into the Desired End Product
Assemble and integrate the received products into the desired end product in accordance with the specified requirements, configuration documentation, interface requirements, applicable standards, and integration sequencing and procedures. This activity includes managing, evaluating, and controlling physical, functional, and data interfaces among the products being integrated.

Functional testing of the assembled or integrated unit is conducted to ensure that the assembly is ready to enter verification testing and ready to be integrated into the next level. Typically, all or key representative functions are checked to ensure that the assembled system is functioning as expected. Formal product verification and validation will be performed in the next process.

5.2.1.2.6  Prepare Appropriate Product Support Documentation
Prepare appropriate product support documentation, such as special procedures for performing product verification and product validation. Drawings or accurate models of the assembled system are developed and confirmed to be representative of the assembled system.

5.2.1.2.7  Capture Product Integration Work Products
Capture work products and related information generated while performing the Product Integration Process activities. These work products include system models, system analysis data and assessment reports, derived requirements, the procedures that were used in the assembly, decisions made and supporting rationale, assumptions that were made, identified anomalies and associated corrective actions, lessons learned in performing the assembly, and updated product configuration and support documentation.
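The bottom-up assembly sequence discussed in 5.2.1.2.1, in which every component is checked before it is fitted into the next higher assembly, can be sketched as a traversal of the product hierarchy. This is an illustrative sketch only: the tree and names are invented, and a real sequence must also weigh the technical, cost, and schedule factors described above.

```python
def integration_order(hierarchy: dict, end_product: str) -> list:
    """Return a bottom-up assembly order: every child appears before its parent."""
    order = []

    def visit(item: str) -> None:
        for child in hierarchy.get(item, []):  # leaf items have no entry
            visit(child)
        order.append(item)                     # parent only after all children

    visit(end_product)
    return order

# Hypothetical product tree for illustration.
tree = {
    "spacecraft": ["bus", "payload"],
    "bus": ["structure", "avionics"],
}
print(integration_order(tree, "spacecraft"))
# ['structure', 'avionics', 'bus', 'payload', 'spacecraft']
```

Each name in the resulting order would carry its own functional check before entering the next assembly, in the spirit of 5.2.1.2.5.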


5.2.1.3  Outputs
The following are typical outputs from this process and destinations for the products from this process:

• Integrated product(s) with all system interactions identified and properly balanced.

• Documentation and manuals, including system analysis models, data, and reports supporting flight-readiness rationale and available for future analysis during the operation of the system in the mission-execution phase.

• Work products, including reports, records, and non-deliverable outcomes of product integration activities (to support the Technical Data Management Process); integration strategy document; assembly/check area drawings; system/component documentation sequences and rationale for selected assemblies; interface management documentation; personnel requirements; special handling requirements; system documentation; shipping schedules; test equipment and drivers' requirements; emulator requirements; and identification of limitations for both hardware and software.

5.2.2  Product Integration Guidance
Refer to Section 5.2.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• product integration strategies,
• the relationship to product implementation,
• product integration support,
• product integration of the design solution,
• system analysis, and
• interface system integration.

5.3 Product Verification
The Product Verification Process is the first of the verification and validation processes conducted on an end product. As used in the context of the systems engineering common technical processes, a product is one provided by either the Product Implementation Process or the Product Integration Process in a form suitable for meeting applicable life cycle phase success criteria. Realization is the act of implementing, integrating, verifying, validating, and transitioning the end product for use at the next level up of the system structure or to the customer. At this point, the end product can be referred to as a "realized product" or "realized end product."

Product verification proves that an end product (whether built, coded, bought, or reused) for any element within the system structure conforms to its requirements or specifications. Such specifications and other design description documentation establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems.

From a process perspective, product verification and validation may be similar in nature, but the objectives are fundamentally different. A customer is interested in whether the end product provided will do what the customer intended within the environment of use. Examination of this condition is validation. Simply put, the Product Verification Process answers the critical question, "Was the end product realized right?" The Product Validation Process addresses the equally critical question, "Was the right end product realized?" When cost effective and warranted by analysis, the expense of validation testing alone can be mitigated by combining tests to perform verification and validation simultaneously.


DIFFERENCES BETWEEN VERIFICATION AND VALIDATION TESTING

Testing is a detailed evaluation method used for both verification and validation.

Verification Testing: Verification testing relates back to the approved requirements set (such as an SRD)
and can be performed at different stages in the product life cycle. Verification tests are the official “for
the record” testing performed on a system or element to show that it meets its allocated requirements
or specifications including physical and functional interfaces. Verification tests use instrumentation and
measurements and are generally accomplished by engineers, technicians, or operator-maintainer test
personnel in a controlled environment to facilitate failure analysis.

Validation Testing: Validation relates back to the ConOps document. Validation testing is conducted
under realistic conditions (or simulated conditions) on any end product to determine the effectiveness and
suitability of the product for use in mission operations by typical users and to evaluate the results of such
tests. It ensures that the system is operating as expected when placed in a realistic environment.
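The distinction drawn in the box, judging against the requirements set versus judging against the ConOps, can be made concrete with a toy example. All numbers and names here are invented for illustration.

```python
# Verification: compare measured performance against a specified requirement.
def verify(measured_range_km: float, required_range_km: float) -> bool:
    """Was the product realized right? Judge against the requirement."""
    return measured_range_km >= required_range_km

# Validation: judge the same product against what the user needs operationally
# (from the ConOps), which may differ from what the requirement captured.
def validate(measured_range_km: float, conops_mission_range_km: float) -> bool:
    """Was the right product realized? Judge against the intended use."""
    return measured_range_km >= conops_mission_range_km

measured = 110.0
print(verify(measured, required_range_km=100.0))          # True: meets the spec
print(validate(measured, conops_mission_range_km=150.0))  # False: spec missed the need
```

A product can pass verification yet fail validation when the requirement did not fully capture the operational need, which is why both processes are performed.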

The outcome of the Product Verification Process is confirmation that the end product, whether achieved by implementation or integration, conforms to its specified requirements, i.e., verification of the end product. This subsection discusses the process activities, inputs, outcomes, and potential product deficiencies.

5.3.1 Process Description
FIGURE 5.3-1, taken from NPR 7123.1, provides a typical flow diagram for the Product Verification Process and identifies typical inputs, outputs, and activities to consider in addressing product verification.

5.3.1.1  Inputs
Key inputs to the process are:

• The product to be verified: This product will have been transitioned from either the Product Implementation Process or the Product Integration Process. The product will likely have been through at least a functional test to ensure it was assembled correctly. Any supporting documentation should be supplied with the product.

• Verification plan: This plan will have been developed under the Technical Planning Process and baselined before entering this verification.

• Specified requirements baseline: These are the requirements that have been identified to be verified for this product. Acceptance criteria should have been identified for each requirement to be verified.

• Enabling products: Any other products needed to perform the Product Verification Process. This may include test fixtures and support equipment.

Additional work products such as the ConOps, mission needs and goals, interface control drawings, testing standards and policies, and Agency standards and policies may also be needed to put verification activities into context.
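The specified requirements baseline pairs each requirement to be verified with its acceptance criteria and a planned verification method. One way to hold such entries is sketched below; the identifiers and requirement wording are invented, and the four methods are the standard ones (test, analysis, inspection, demonstration) defined later in this section.

```python
from dataclasses import dataclass
from enum import Enum

class Method(Enum):
    TEST = "test"
    ANALYSIS = "analysis"
    INSPECTION = "inspection"
    DEMONSTRATION = "demonstration"

@dataclass(frozen=True)
class VerificationItem:
    requirement_id: str
    text: str
    method: Method
    acceptance_criteria: str  # agreed pass/fail criteria, set before verification

# Hypothetical baseline entries for illustration.
baseline = [
    VerificationItem("SYS-101", "The radio shall close the link at 2 Mbps",
                     Method.TEST, "bit error rate below 1e-6 at 2 Mbps"),
    VerificationItem("SYS-205", "The arming pin shall carry a 'Remove Before Flight' flag",
                     Method.INSPECTION, "flag present with black stenciled lettering"),
]

# Every requirement entering verification should already have acceptance criteria.
missing = [item.requirement_id for item in baseline if not item.acceptance_criteria]
assert not missing
```

Such a structure makes the completeness check above mechanical: any entry without acceptance criteria is flagged before verification begins.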


DIFFERENCES BETWEEN VERIFICATION, QUALIFICATION,
ACCEPTANCE AND CERTIFICATION

Verification: Verification is a formal process, using the method of test, analysis, inspection or
demonstration, to confirm that a system and its associated hardware and software components satisfy all
specified requirements. The Verification program is performed once regardless of how many flight units may
be generated (as long as the design doesn’t change).

Qualification: Qualification activities are performed to ensure that the flight unit design will meet functional
and performance requirements in anticipated environmental conditions. A subset of the verification program
is performed at the extremes of the environmental envelope and will ensure the design will operate properly
with the expected margins. Qualification is performed once regardless of how many flight units may be
generated (as long as the design doesn’t change).

Acceptance: A smaller subset of the verification program is selected as criteria for the acceptance program.
The selected Acceptance activities are performed on each of the flight units as they are manufactured and
readied for flight/use. An Acceptance Data Package is prepared for each of the flight units and shipped with
the unit. The acceptance test/analysis criteria are selected to show that the manufacturing/workmanship of
the unit conforms to the design that was previously verified/qualified. Acceptance testing is performed for
each flight unit produced.

Certification: Certification is the audit process by which the body of evidence that results from the
verification activities and other activities is provided to the appropriate certifying authority to indicate the
design is certified for flight/use. The Certification activity is performed once regardless of how many flight
units may be generated.
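The once-per-design versus per-unit distinction in the box can be sketched as a small campaign plan. The test names and unit serial numbers below are invented for illustration.

```python
# Qualification: performed once on a dedicated unit at environmental extremes.
qualification_tests = ["vibration at qual levels", "thermal vacuum at envelope extremes"]

# Acceptance: a smaller workmanship-screening subset, repeated for every flight unit.
acceptance_tests = ["functional checkout", "vibration at acceptance levels"]

flight_units = ["SN001", "SN002", "SN003"]

campaign = [("QUAL-UNIT", test) for test in qualification_tests]  # run once
for unit in flight_units:
    campaign += [(unit, test) for test in acceptance_tests]       # run per unit

# Qualification cost is fixed per design; acceptance cost scales with production.
assert len(campaign) == len(qualification_tests) + len(flight_units) * len(acceptance_tests)
```

Adding a fourth flight unit grows only the acceptance portion of the campaign, while the qualification portion is unchanged as long as the design does not change.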

5.3.1.2  Process Activities
There are five major activities in the Product Verification Process: (1) prepare to conduct product verification; (2) perform verification; (3) analyze verification results; (4) prepare a product verification report; and (5) capture work products generated during the verification activities.

Product Verification is often performed by the developer that produced the end product with participation of the end user and customer. Quality Assurance (QA) personnel are also critical in the verification planning and execution activities.

A verification approach should be adapted (tailored) to the project it supports. The project manager and systems engineer should work with the verification lead engineer to develop a verification approach and plan the activities. Many factors need to be considered in developing this approach and the subsequent verification program. These factors include:


[FIGURE 5.3-1 is a flow diagram. Inputs: End Product to Be Verified (from the Product Implementation Process or Product Integration Process); Specified Requirements Baseline (from the Configuration Management Process); Product Verification Plan; Product Verification Enabling Products (from existing resources or the Product Transition Process). Activities: prepare to conduct product verification; perform the product verification; analyze the outcomes of the product verification; prepare a product verification report; capture work products from product verification activities. Outputs: Verified End Product (to the Product Validation Process); Product Verification Results (to the Technical Assessment Process); Product Verification Report and Product Verification Work Products (to the Technical Data Management Process).]

FIGURE 5.3-1  Product Verification Process

• Project type, especially for flight projects. Verification activities and timing depend on the following:

»» The type of flight article involved (e.g., an experiment, payload, or launch vehicle).

»» For missions required to follow NPR 7120.5, NASA Space Flight Program and Project Management Requirements, NASA payload classification (NPR 8705.4, Risk Classification for NASA Payloads) guidelines are intended to serve as a starting point for establishing the formality of verification approaches that can be adapted to the needs of a specific project based on the "A–D" payload classification. Further flexibility is imparted to projects following NPR 7120.8, NASA Research and Technology Program and Project Management Requirements.

»» Project cost and schedule implications. Verification activities can be significant drivers of a project's cost and schedule, and these implications should be considered early in the development of the verification plan. Trade studies should be performed early in the life cycle to support decisions about verification methods and types and the selection of facility capabilities and locations. For example, a trade study might be made to decide between performing a test at a centralized facility or at several decentralized locations.

»» Risk management should be considered in the development of the verification approach. Qualitative risk assessments and quantitative risk analyses (e.g., a Failure Mode and Effects Analysis (FMEA)) often identify new concerns that can be mitigated by additional verifications, thus increasing the extent of


verification activities. Other risk assessments contribute to trade studies that determine the preferred methods of verification to be used and when those methods should be performed. For example, a trade might be made between performing a model test versus determining model characteristics by a less costly but less revealing analysis. The project manager/systems engineer should determine what risks are acceptable in terms of the project's cost and schedule.

• Availability of verification facilities/sites and transportation assets to move an article from one location to another (when needed). This requires coordination with the Integrated Logistics Support (ILS) engineer.

• Availability of appropriately trained users for interaction with systems having human interfaces.

• Acquisition strategy; i.e., in-house development or system contract. A NASA field Center can often shape a contractor's verification process through the project's SOW.

• Degree of design heritage and hardware/software reuse.

5.3.1.2.1  Product Verification Preparation
In preparation for verification, the verification plan and the specified requirements are collected, reviewed, and confirmed. The product to be verified is obtained (output from the Product Implementation Process or the Product Integration Process) along with any enabling products, such as those representing external interfacing products and support resources (including personnel) that are necessary for verification. Procedures capturing detailed step-by-step activities and based on the verification type and methods are finalized and approved. Development of procedures typically begins during the design phase of the project life cycle and matures as the design is matured. The verification environment is considered as part of procedure development. Operational scenarios are assessed to explore all possible verification activities to be performed. The final element is preparation of the verification environment; e.g., facilities, equipment, tools, measuring devices, and climatic conditions.

When operator or other user interaction is involved, it is important to ensure that humans are properly represented in the verification activities. This includes physical size, skills, knowledge, training, clothing, special gear, and tools. Note: Testing that includes representatives of the human in the system is often referred to as "human-in-the-loop" testing.

NOTE: Depending on the nature of the verification effort and the life cycle phase the program is in, some type of review to assess readiness for verification (as well as validation later) is typically held. In earlier phases of the life cycle, these Test Readiness Reviews (TRRs) may be held informally; in later phases of the life cycle, this review may become a more formal event. TRRs and other technical reviews are an activity of the Technical Assessment Process. On most projects, a number of TRRs with tailored entrance/success criteria are held to assess the readiness and availability of test ranges, test facilities, trained testers, instrumentation, integration labs, support equipment, and other enabling products. Peer reviews are additional reviews that may be conducted formally or informally to ensure readiness for verification (as well as the results of the verification process). Guidelines for conducting a peer review are discussed in Section 6.7.2.4.5.

TABLE 5.3-1 provides an example of the type of information that may be included in a verification procedure and a verification report.

NASA SYSTEMS ENGINEERING HANDBOOK 92
5.0  Product Realization

METHODS OF VERIFICATION

Analysis: The use of mathematical modeling and analytical techniques to predict the suitability of a
design to stakeholder expectations based on calculated data or data derived from lower system structure
end product verifications. Analysis is generally used when a prototype; engineering model; or fabricated,
assembled, and integrated product is not available. Analysis includes the use of modeling and simulation
as analytical tools. A model is a mathematical representation of reality. A simulation is the manipulation of a
model. Analysis can include verification by similarity of a heritage product.

Demonstration: Showing that the use of an end product achieves the individual specified requirement.
It is generally a basic confirmation of performance capability, differentiated from testing by the lack of
detailed data gathering. Demonstrations can involve the use of physical models or mock-ups; for example,
a requirement that all controls shall be reachable by the pilot could be verified by having a pilot perform
flight-related tasks in a cockpit mock-up or simulator. A demonstration could also be the actual operation
of the end product by highly qualified personnel, such as test pilots, who perform a one-time event that
demonstrates a capability to operate at extreme limits of system performance, an operation not normally
expected from a representative operational pilot.

Inspection: The visual examination of a realized end product. Inspection is generally used to verify physical
design features or specific manufacturer identification. For example, if there is a requirement that the safety
arming pin has a red flag with the words “Remove Before Flight” stenciled on the flag in black letters, a
visual inspection of the arming pin flag can be used to determine if this requirement was met. Inspection can
include inspection of drawings, documents, or other records.

Test: The use of an end product to obtain detailed data needed to verify performance or provide sufficient
information to verify performance through further analysis. Testing can be conducted on final end products,
breadboards, brassboards, or prototypes. Testing produces data at discrete points for each specified
requirement under controlled conditions and is the most resource-intensive verification technique. As the
saying goes, “Test as you fly, and fly as you test.” (See Section 5.3.2.5 in the NASA Expanded Guidance for
Systems Engineering at https://nen.nasa.gov/web/se/doc-repository)
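Projects commonly record which of these four methods will verify each specified requirement. A minimal illustrative sketch of such an assignment in Python (the requirement IDs and texts are invented examples, not from the handbook):

```python
# Illustrative sketch: assigning one of the four verification methods
# (Analysis, Demonstration, Inspection, Test) to each specified requirement.
# Requirement identifiers and texts below are hypothetical.
from enum import Enum

class Method(Enum):
    ANALYSIS = "Analysis"
    DEMONSTRATION = "Demonstration"
    INSPECTION = "Inspection"
    TEST = "Test"

requirements = {
    "REQ-001": ("Arming pin flag stenciled 'Remove Before Flight'", Method.INSPECTION),
    "REQ-002": ("All controls reachable by the pilot", Method.DEMONSTRATION),
    "REQ-003": ("Structure survives worst-case axial load", Method.ANALYSIS),
    "REQ-004": ("Downlink bit error rate below specified limit", Method.TEST),
}

def methods_summary(reqs):
    """Count how many requirements are assigned to each verification method."""
    counts = {m: 0 for m in Method}
    for _, (_, method) in reqs.items():
        counts[method] += 1
    return counts

print(methods_summary(requirements)[Method.TEST])  # 1
```

A summary like this is one simple cross-check that no requirement has been left without a planned verification method.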

Outcomes of verification preparation include the following:

• The verification plan, approved procedures, and an appropriate baseline set of specified requirements and supporting configuration documentation are available and on hand;

• Articles/models to be verified and verification-enabling products are on hand, assembled, and integrated with the verification environment according to verification plans and schedules;

• The resources (funding, facilities, and people including appropriately skilled operators) needed


TABLE 5.3-1  Example Information in Verification Procedures and Reports

Verification Procedure:

• Nomenclature and identification of the test article or material;
• Identification of test configuration and any differences from flight operational configuration;
• Identification of objectives and criteria established for the verification by the applicable requirements specification;
• Characteristics and design criteria to be inspected, demonstrated, or tested, including values with tolerances for acceptance or rejection;
• Description, in sequence, of steps, operations, and observations to be taken;
• Identification of computer software required;
• Identification of measuring, test, and recording equipment to be used, specifying range, accuracy, and type;
• Provision for recording equipment calibration or software version data;
• Credentials showing that required computer test programs/support equipment and software have been verified prior to use with flight operational hardware;
• Any special instructions for operating data recording equipment or other automated test equipment as applicable;
• Layouts, schematics, or diagrams showing identification, location, and interconnection of test equipment, test articles, and measuring points and any other associated design or configuration work products;
• Identification of hazardous situations or operations;
• Precautions and safety instructions to ensure safety of personnel and prevent degradation of test articles and measuring equipment;
• Environmental and/or other conditions to be maintained with tolerances;
• Constraints on inspection or testing;
• Provision or instructions for the recording of verification results and other artifacts;
• Special instructions for instances of nonconformance and anomalous occurrences or results; and
• Specifications for facility, equipment maintenance, housekeeping, quality inspection, and safety and handling requirements before, during, and after the total verification activity.

Verification Report:

• Verification objectives and the degree to which they were met;
• Description of verification activity including deviations from nominal results (discrepancies);
• Test configuration and differences from the flight operational configuration;
• Specific result of each activity and each procedure, including the location or link to verification data/artifacts;
• Specific result of each analysis including those associated with test-data analysis;
• Test performance data tables, graphs, illustrations, and pictures;
• Summary of nonconformance/discrepancy reports, including dispositions with approved corrective actions and planned retest activity if available;
• Conclusions and recommendations relative to the success of verification activity;
• Status of Government-Supplied Equipment (GSE) and other enabling support equipment as affected by test;
• Copy of the as-run procedure (may include redlines); and
• Authentication of test results and authorization of acceptability.
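Fields such as those in Table 5.3-1 are often captured as structured records so a report can be checked for completeness against its procedure. A hedged sketch (the field names are illustrative, loosely following the table, not a NASA data format):

```python
# Illustrative sketch: pairing a verification procedure with its report and
# checking that every procedure step produced a recorded result.
# Field names are hypothetical, loosely modeled on Table 5.3-1.
from dataclasses import dataclass, field

@dataclass
class Procedure:
    test_article: str        # nomenclature/identification of the test article
    objectives: str          # objectives and criteria for the verification
    steps: list              # ordered step/operation descriptions

@dataclass
class Report:
    objectives_met: bool
    step_results: dict = field(default_factory=dict)  # step index -> result
    discrepancies: list = field(default_factory=list)

def report_complete(proc: Procedure, rep: Report) -> bool:
    """A report is complete when every procedure step has a recorded result."""
    return all(i in rep.step_results for i in range(len(proc.steps)))

proc = Procedure("Arming pin flag", "Verify stencil per requirement",
                 ["Inspect flag color", "Inspect stencil text"])
rep = Report(objectives_met=True, step_results={0: "pass", 1: "pass"})
print(report_complete(proc, rep))  # True
```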


to conduct the verification are available according to the verification plans and schedules; and

• The verification environment is evaluated for adequacy, completeness, readiness, and integration.

5.3.1.2.2  Perform Product Verification
The actual act of verifying the end product is performed as spelled out in the plans and procedures, and conformance is established with each specified product requirement. The verification lead should ensure that the procedures were followed and performed as planned, the verification-enabling products and instrumentation were calibrated correctly, and the data were collected and recorded for required verification measures.

A verification program may include verifications at several layers in the product hierarchy. Some verifications need to be performed at the lowest component level if the ability to verify a requirement at a higher assembly is not possible. Likewise, there may be verifications at assemblies, subsystems, and system levels. If practicable, a final set of testing with as much of the end-to-end configuration as possible is important.

The purpose of end-to-end testing is to demonstrate interface compatibility and desired total functionality among different elements of a mission system, between systems (the system of interest and external enabling systems), and within a system as a whole. It can involve real or representative input and operational scenarios. End-to-end tests performed on the integrated ground and flight assets include all elements of the flight article (payload or vehicle), its control, stimulation, communications, and data processing to demonstrate that the entire integrated mission system is operating in a manner to fulfill all mission requirements and objectives. End-to-end tests may be performed as part of investigative engineering tests, verification testing, or validation testing. These are some of the most important tests for the systems engineers to participate in or to lead. They review the overall compatibility of the various systems and demonstrate compliance with system-level requirements and whether the system behaves as expected by the stakeholders.

End-to-end testing includes executing complete threads or operational scenarios across multiple configuration items, ensuring that all mission requirements are verified and validated. Operational scenarios are used extensively to ensure that the mission system (or collections of systems) will successfully execute mission requirements. Operational scenarios are a step-by-step description of how the system should operate and interact with its users and its external interfaces (e.g., other systems). Scenarios should be described in a manner that allows engineers to walk through them and gain an understanding of how all the various parts of the system should function and interact as well as verify that the system will satisfy the user’s goals and expectations (MOEs). Operational scenarios should be described for all operational modes, mission phases (e.g., installation, startup, typical examples of normal and contingency operations, shutdown, and maintenance), and critical sequences of activities for all classes of users identified. Each scenario should include events, actions, stimuli, information, and interactions as appropriate to provide a comprehensive understanding of the operational aspects of the system.

FIGURE 5.3-2 presents an example of an end-to-end data flow for a scientific satellite mission. Each arrow in the diagram represents one or more data or control flows between two hardware, software, subsystem, or system configuration items. End-to-end testing verifies that the data flows throughout the multisystem environment are correct, that the system provides the required functionality, and that the outputs at the eventual end points correspond to expected results.


[Diagram: the Planning Community (external system) feeds Mission Planning in the Ground System; the uplink process runs through Command Generation, Software Loads, and Transmission to Spacecraft Execution in the Flight System; Instrument Sets A and B respond to external stimuli (X-rays, visible, ultraviolet, infrared, and microwave); the downlink process returns through Data Capture to Archival and Analysis for the Planning Community.]

FIGURE 5.3-2  Example of End-to-End Data Flow for a Scientific Satellite Mission

Since the test environment is as close an approximation as possible to the operational environment, system performance requirements testing is also included. This figure is not intended to show the full extent of end-to-end testing. Each system shown would need to be broken down into a further level of granularity for completeness.

End-to-end testing is an integral part of the verification and validation of the total (mission) system. It is a set of activities that can be employed during selected hardware, software, and system phases throughout the life cycle using developmental forms and external simulators. However, final end-to-end testing should be done on the flight articles in the flight configuration if possible and prior to deployment and launch. In comparison with configuration item testing, end-to-end testing addresses each configuration item (end product) only down to the level designated by the verification plan (generally, the segment or element) and focuses on external interfaces, which can be hardware, software, or human-based. Internal interfaces (e.g., software subroutine calls, analog-to-digital conversion) of a designated configuration item are not within the scope of end-to-end testing.

When a “discrepancy” is observed (i.e., any variance, lack of agreement, or contradiction with the required or expected outcome, configuration, or result), verification activities should stop and a discrepancy report should be generated. The activities and events leading up to the discrepancy should be analyzed to determine if a nonconforming product exists or there is an issue with the verification procedure or conduct. The Decision Analysis Process should be used to make decisions with respect to needed changes in the verification plans, environment, and/or procedures.

Outcomes of performing product verification include the following:

• A verified product is established with supporting confirmation that the product (in the appropriate


form for the life cycle phase) complies with its specified requirements, and if it does not, a nonconformance report delineating the variance is available.

• A determination is made as to whether the appropriate results were collected and evaluated to show completion of verification objectives throughout their performance envelope.

• A determination is made that the verification product was appropriately integrated with the enabling products and verification environment.

5.3.1.2.3  Analyze Product Verification Results and Report
As the verification activities are completed, the results are collected and analyzed. The data are analyzed for quality, integrity, correctness, consistency, and validity. Any verification discrepancies (anomalies, variations, and out-of-compliance conditions) are identified and reviewed to determine if there is a nonconforming product not resulting from poor verification conduct, procedure, or conditions. If possible, this analysis is performed while the test/analysis configuration is still intact. This allows a quick turnaround in case the data indicate that a correction to the test or analysis run needs to be performed again.

Discrepancies and nonconforming products should be recorded and reported for follow-up action and closure. Verification results should be recorded in a requirements compliance or verification matrix or other method developed during the Technical Requirements Definition Process to trace compliance for each product requirement. Waivers needed as a result of verification to request relief from or modify a requirement are identified.

NOTE: Nonconformance and discrepancy reports may be directly linked with the Technical Risk Management Process. Depending on the nature of the nonconformance, approval through such bodies as a Material Review Board or Configuration Control Board (which typically includes risk management participation) may be required.

System design and product realization process activities may be required to resolve product nonconformance. If the mitigation of the nonconformance results in a change to the product, the verification may need to be planned and performed again.

Outcomes of analyzing the verification results include the following:

• Product nonconformance (not compliant with product requirement) is identified.

• Appropriate replanning, redefinition of requirements, redesign, implementation/integration, modification, and reverification have been accomplished for resolution of the nonconforming product.

• Appropriate facility modifications, procedure corrections, enabling product modification, and reverification have been performed for non-product-related discrepancies.

• Waivers for nonconforming products are accepted.

• Discrepancy and nonconformance reports including corrective actions have been generated as needed.

• The verification report is completed.
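The compliance tracing described above can be sketched as a small verification matrix keyed by requirement, recording the method, the result, and any accepted waiver. All identifiers in this sketch are invented for illustration:

```python
# Illustrative verification/compliance matrix: each requirement traces to a
# verification method, a result, and an optional waiver. IDs are hypothetical.
matrix = [
    {"req": "REQ-001", "method": "Inspection", "result": "pass", "waiver": None},
    {"req": "REQ-002", "method": "Test",       "result": "fail", "waiver": None},
    {"req": "REQ-003", "method": "Analysis",   "result": "fail", "waiver": "WVR-07"},
]

def open_nonconformances(rows):
    """Requirements that failed verification and have no accepted waiver."""
    return [row["req"] for row in rows
            if row["result"] != "pass" and row["waiver"] is None]

print(open_nonconformances(matrix))  # ['REQ-002']
```

A query like this reflects the completion criteria stated later in the section: every requirement needs either documented evidence of compliance or an accepted waiver.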


Re-engineering
Based on analysis of verification results, it could be necessary to re-realize the end product used for verification or to re-engineer the end products assembled and integrated into the product being verified, based on where and what type of nonconformance was found.

Re-engineering could require the reapplication of the system design processes (Stakeholder Expectations Definition Process, Technical Requirements Definition Process, Logical Decomposition Process, and Design Solution Definition Process).

5.3.1.2.4  Capture Product Verification Work Products
Verification work products (inputs to the Technical Data Management Process) take many forms and involve many sources of information. The capture and recording of verification results and related data is a very important, but often underemphasized, step in the Product Verification Process.

Verification results, peer review reports, anomalies, and any corrective action(s) taken should be captured, as should all relevant results from the application of the Product Verification Process (related decisions, rationale for the decisions made, assumptions, and lessons learned).

Outcomes of capturing verification work products include the following:

• Verification work products are recorded, e.g., method of verification, procedures, environments, outcomes, decisions, assumptions, corrective actions, and lessons learned.

• Variations, anomalies, and out-of-compliance conditions have been identified and documented, including the actions taken to resolve them.

• Proof that the realized end product did or did not satisfy the specified requirements is documented.

• The verification report is developed, including:

»» recorded test/verification results/data;
»» version of the set of specified requirements used;
»» version of the product verified;
»» version or standard for tools, data, and equipment used;
»» results of each verification including pass or fail declarations; and
»» discrepancies.

5.3.1.3  Outputs
Key outputs from the process are:

• Verified product ready for validation: After the product is verified, it will next pass through the Product Validation Process.

• Product verification results: Results from executed procedures are passed to technical assessment.

• Product verification report(s): A report shows the results of the verification activities. It includes the requirement that was to be verified and its bidirectional traceability, the verification method used, and reference to any special equipment, conditions, or procedures used. It also includes the results of the verification; any anomalies, variations, or out-of-compliance results noted; and associated corrective actions taken.

• Product verification work products: These include discrepancy and nonconformance reports with identified correction actions; updates to requirements compliance documentation; changes needed to the procedures, equipment or


environment; configuration drawings; calibrations; operator certifications; and other records.

Criteria for completing verification of the product include: (1) documented objective evidence of compliance with requirements or waiver and (2) closure of all discrepancy and nonconformance reports.

5.3.2  Product Verification Guidance
Refer to Section 5.3.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• the verification approach,
• verification in the life cycle,
• verification procedures,
• verification reports,
• end-to-end testing,
• use of modeling and simulations, and
• hardware-in-the-loop testing.

5.4 Product Validation
The Product Validation Process is the second of the verification and validation processes conducted on an implemented or integrated end product. While verification proves whether “the product was done right,” validation proves whether “the right product was done.” In other words, verification provides objective evidence that every “shall” statement in the requirements document or specification was met, whereas validation is performed for the benefit of the customers and users to ensure that the system functions in the expected manner when placed in the intended environment. This is achieved by examining the products of the system at every level of the product structure and comparing them to the stakeholder expectations for that level. A well-structured validation process can save cost and schedule while meeting the stakeholder expectations.

System validation confirms that the integrated realized end products conform to stakeholder expectations as captured in the MOEs, MOPs, and ConOps. Validation also ensures that any anomalies discovered are appropriately resolved prior to product delivery. This section discusses the process activities, methods of validation, inputs and outputs, and potential deficiencies.

See Section 2.4 for a discussion about the distinctions between Product Verification and Product Validation.

5.4.1 Process Description
FIGURE 5.4-1, taken from NPR 7123.1, provides a typical flow diagram for the Product Validation Process and identifies typical inputs, outputs, and activities to consider in addressing product validation.

5.4.1.1  Inputs
Key inputs to the process are:

• End product to be validated: This is the end product that is to be validated and which has successfully passed through the verification process.

• Validation plan: This plan would have been developed under the Technical Planning Process and baselined prior to entering this process. This plan may be a separate document or a section within the Verification and Validation Plan.

• Baselined stakeholder expectations: These would have been developed for the product at this level during the Stakeholder Expectations Definition Process. They include the needs, goals, and objectives as well as the baselined and updated concept of operations and MOEs.

• Any enabling products: These are any special equipment, facilities, test fixtures, applications,


or other items needed to perform the Product Validation Process.

5.4.1.2  Process Activities
The Product Validation Process demonstrates that the end product satisfies its stakeholder (customer and other interested party) expectations (MOEs) within the intended operational environments, with validation performed by anticipated operators and/or users whenever possible. The method of validation is a function of the life cycle phase and the position of the end product within the system structure.

There are five major steps in the validation process: (1) preparing to conduct validation, (2) performing the planned validation, (3) analyzing the validation results, (4) preparing a validation report, and (5) capturing the validation work products.

The objectives of the Product Validation Process are:

• To confirm that the end product fulfills its intended use when operated in its intended environment:

»» Validation is performed for each implemented or integrated and verified end product from the lowest end product in a system structure branch up to the top level end product (the system).
»» Evidence is generated as necessary to confirm that products at each layer of the system structure meet the capability and other operational expectations of the customer/user/operator and other interested parties for that product.

[Diagram: the end product to be validated arrives from the Product Verification Process; the stakeholder expectation baseline arrives from the Configuration Management Process; the product validation plan arrives from the Design Solution and Technical Planning Processes; and validation-enabling products arrive from existing resources or the Product Transition Process. The activities flow from preparing to conduct product validation, through performing the validation, analyzing its outcomes, preparing a product validation report, and capturing the work products. The validated end product goes to the Product Transition Process; product validation results go to the Technical Assessment Process; and the product validation report and work products go to the Technical Data Management Process.]

FIGURE 5.4-1  Product Validation Process


• To ensure the human has been properly integrated into the system:

»» The user interface meets human engineering criteria.
»» Operators and maintainers have the required skills and abilities.
»» Instructions are provided and training programs are in place.
»» The working environment supports crew health and safety.

• To ensure that any problems discovered are appropriately resolved prior to delivery of the end product (if validation is done by the supplier of the product) or prior to integration with other products into a higher level assembled product (if validation is done by the receiver of the product).

5.4.1.2.1  Product Validation Preparation
To prepare for performing product validation, the appropriate set of expectations, including MOEs and MOPs, against which the validation is to be made

METHODS OF VALIDATION

Analysis: The use of mathematical modeling and analytical techniques to predict the suitability of a
design to stakeholder expectations based on calculated data or data derived from lower system structure
end product verifications. Analysis is generally used when a prototype; engineering model; or fabricated,
assembled, and integrated product is not available. Analysis includes the use of modeling and simulation
as analytical tools. A model is a mathematical representation of reality. A simulation is the manipulation of
a model.

Demonstration: Showing that the use of an end product achieves the stakeholder expectations as defined
in the NGOs and the ConOps. It is generally a basic confirmation of behavioral capability, differentiated
from testing by the lack of detailed data gathering. Demonstrations can involve the use of physical models
or mock-ups; for example, an expectation that controls are readable by the pilot in low light conditions
could be validated by having a pilot perform flight-related tasks in a cockpit mock-up or simulator under
those conditions.

Inspection: The visual examination of a realized end product. Inspection is generally used to validate the presence of physical design features or specific manufacturer identification. For example, if there is an expectation that the safety arming pin has a red flag with the words “Remove Before Flight” stenciled on the flag in black letters, a visual inspection of the arming pin flag can be used to determine if this expectation has been met.

Test: The use of an end product to obtain detailed data needed to determine a behavior, or provide
sufficient information to determine a behavior through further analysis. Testing can be conducted on
final end products, breadboards, brassboards, or prototypes. Testing produces information at discrete
points for each specified expectation under controlled conditions and is the most resource-intensive
validation technique.


should be obtained. In addition to the V&V Plan, other documentation such as the ConOps and HSI Plan may be useful. The product to be validated (output from implementation, or integration and verification), as well as the appropriate enabling products and support resources (requirements identified and acquisition initiated by design solution activities) with which validation will be conducted, should be collected. Enabling products include those representing external interfacing products and special test equipment. Support resources include personnel necessary to support validation and operators. Procedures capturing detailed step-by-step activities and based on the validation type and methods are finalized and approved. Development of procedures typically begins during the design phase of the project life cycle and matures as the design is matured. The validation environment is considered as part of procedure development. Operational scenarios are assessed to explore all possible validation activities to be performed. The final element is preparation of the validation environment; e.g., facilities, equipment, software, and climatic conditions.

When operator or other user interaction is involved, it is important to ensure that humans are properly represented in the validation activities. This includes physical size, skills, knowledge, training, clothing, special gear, and tools. When possible, actual end users/operators should be used, and other stakeholders should participate or observe activities as appropriate and practical.

Outcomes of validation preparation include the following:

• The validation plan, approved procedures, supporting configuration documentation, and an appropriate baseline set of stakeholder expectations are available and on hand;

• Enabling products are integrated within the validation environment according to plans and schedules;

• Users/operators and other resources are available according to validation plans and schedules; and

• The validation environment is evaluated for adequacy, completeness, readiness, and integration.

5.4.1.2.2  Perform Product Validation
The act of validating the end product is performed as spelled out in the validation plans and procedures, and the conformance established to each specified stakeholder expectation (MOEs and ConOps) shows that the validation objectives were met. Validation differs from qualification testing: validation testing is focused on the expected environments and operations of the system, whereas qualification testing includes the worst-case loads and environmental requirements within which the system is expected to perform or survive. The verification lead should ensure that the procedures were followed and performed as planned, the validation-enabling products and instrumentation were calibrated correctly, and the data were collected and recorded for required validation measures.

When a discrepancy is observed, the validation should be stopped and a discrepancy report generated. The activities and events leading up to the discrepancy should be analyzed to determine if a nonconforming product exists or there is an issue with the validation procedure, conduct, or conditions. If there are no product issues, the validation is replanned as necessary, the environment preparation anomalies are corrected, and the validation is conducted again with improved or correct procedures and resources. The Decision Analysis Process should be used to make decisions with respect to needed changes to the validation plans, environment, and/or conduct.
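The stop-on-discrepancy conduct described above can be sketched as running the validation steps in order and halting at the first observed discrepancy to generate a report. The step names and expected results in this sketch are hypothetical:

```python
# Sketch of stop-on-discrepancy conduct: run validation steps in order and
# stop at the first observed discrepancy, emitting a discrepancy report.
# Step names and expectations are invented for illustration.
def run_validation(steps):
    """steps: list of (name, observed, expected). Returns (completed, report)."""
    completed = []
    for name, observed, expected in steps:
        if observed != expected:
            report = {"step": name, "observed": observed, "expected": expected}
            return completed, report  # validation stops; report goes to analysis
        completed.append(name)
    return completed, None  # no discrepancy observed

steps = [
    ("power-on", "nominal", "nominal"),
    ("crew display readable", "glare", "readable"),
    ("shutdown", "nominal", "nominal"),  # never reached after the discrepancy
]
done, report = run_validation(steps)
print(done, report["step"])  # ['power-on'] crew display readable
```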


Outcomes of performing validation include the following:

• A validated product is established with supporting confirmation that the appropriate results were collected and evaluated to show completion of validation objectives.

• A determination is made as to whether the fabricated/manufactured or assembled and integrated products (including software or firmware builds and human element allocations) comply with their respective stakeholder expectations.

• A determination is made that the validated product was appropriately integrated with the validation environment and the selected stakeholder expectations set was properly validated.

• A determination is made that the product being validated functions together with interfacing products throughout their operational envelopes.

5.4.1.2.3  Analyze Product Validation Results
Once the validation activities have been completed, the results are collected and the data are analyzed to confirm that the end product provided will supply the customer's needed capabilities within the intended environments of use, validation procedures were followed, and enabling products and supporting resources functioned correctly. The data are also analyzed for quality, integrity, correctness, consistency, and validity, and any unsuitable products or product attributes are identified and reported.

It is important to compare the actual validation results to the expected results. If discrepancies are found, it needs to be determined if they are a result of the test configuration or analysis assumptions or whether they are a true characteristic or behavior of the end product. If it is found to be a result of the test configuration, the configuration should be corrected and the validation repeated. If it is found to be a result of the end product being validated, discussions with the customer should be held and any required system design and product realization process activities should be conducted to resolve deficiencies. The deficiencies along with recommended corrective actions and resolution results should be recorded, and validation should be repeated, as required.

Outcomes of analyzing validation results include the following:

• Product anomalies, variations, deficiencies, nonconformance and/or issues are identified.

• Assurances that appropriate replanning, redefinition of requirements, design, and revalidation have been accomplished for resolution of anomalies, variations, deficiencies or out-of-compliance conditions (for problems not caused by poor validation conduct).

• Discrepancy and corrective action reports are generated as needed.

• The validation report is completed.

Re-engineering
Based on the results of the Product Validation Process, it could become necessary to re-engineer a deficient end product. Care should be taken that correcting a deficiency or set of deficiencies does not generate a new issue with a part or performance that had previously operated satisfactorily. Regression testing, a formal process of rerunning previously used acceptance tests (primarily used for software), is one method to ensure a change does not affect function or performance that was previously accepted.
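Regression testing of this kind is typically automated so that previously accepted behavior is re-checked after every change. A minimal sketch using Python's built-in unittest module; the function under test and its accepted behavior are hypothetical, not from any NASA system:

```python
import unittest

def thruster_duty_cycle(commanded_thrust: float, max_thrust: float = 100.0) -> float:
    """Hypothetical flight-software function under re-engineering: returns a
    duty cycle clamped to [0.0, 1.0] for a commanded thrust level."""
    return max(0.0, min(commanded_thrust / max_thrust, 1.0))

class PreviouslyAcceptedBehavior(unittest.TestCase):
    """Acceptance tests that passed before the change; rerun them after any
    re-engineering to confirm accepted function and performance are preserved."""

    def test_nominal_command(self):
        self.assertAlmostEqual(thruster_duty_cycle(50.0), 0.5)

    def test_command_above_limit_is_clamped(self):
        self.assertEqual(thruster_duty_cycle(150.0), 1.0)

    def test_negative_command_is_clamped_to_zero(self):
        self.assertEqual(thruster_duty_cycle(-10.0), 0.0)

# Rerun the previously accepted suite after each change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PreviouslyAcceptedBehavior)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed:", result.wasSuccessful())
```

A suite like this grows as each new acceptance test passes, so the accepted baseline is always re-checked in full.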


Validation Deficiencies
Validation outcomes can be unsatisfactory for several reasons. One reason is poor conduct of the validation (e.g., enabling products and supporting resources missing or not functioning correctly, untrained operators, procedures not followed, equipment not calibrated, or improper validation environmental conditions) and failure to control other variables not involved in validating a set of stakeholder expectations. A second reason could be a shortfall in the verification process of the end product. This could create the need for:

• Re-engineering end products lower in the system structure that make up the end product that was found to be deficient (i.e., that failed to satisfy validation requirements); and/or

• Re-performing any needed verification and validation processes.

Other reasons for validation deficiencies (particularly when M&S are involved) may be incorrect and/or inappropriate initial or boundary conditions; poor formulation of the modeled equations or behaviors; the impact of approximations within the modeled equations or behaviors; failure to provide the required geometric and physics fidelities needed for credible simulations for the intended purpose; and/or poor spatial, temporal, and perhaps, statistical resolution of physical phenomena used in M&S.

NOTE: Care should be exercised to ensure that the corrective actions identified to remove validation deficiencies do not conflict with the baselined stakeholder expectations without first coordinating such changes with the appropriate stakeholders.

Of course, the ultimate reason for performing validation is to determine if the design itself is the right design for meeting stakeholder expectations. After any and all validation test deficiencies are ruled out, the true value of validation is to identify design changes needed to ensure the program/product's mission. Validation should be performed as early and as iteratively as possible in the SE process since the earlier re-engineering needs are discovered, the less expensive they are to resolve.

Pass Verification but Fail Validation?
Sometimes systems successfully complete verification but then are unsuccessful in some critical phase of the validation process, delaying development and causing extensive rework and possible compromises with the stakeholder. Developing a solid ConOps in early phases of the project (and refining it through the requirements development and design phases) is critical to preventing unsuccessful validation. Similarly, developing clear expectations for user community involvement in the HSI Plan is critical to successful validation. Frequent and iterative communications with stakeholders help to identify operational scenarios and key needs that should be understood when designing and implementing the end product. Should the product fail validation, redesign may be a necessary reality. Review of the understood requirements set, the existing design, operational scenarios, user population numbers and skills, training, and support material may be necessary, as well as negotiations and compromises with the customer, other stakeholders, and/or end users to determine what, if anything, can be done to correct or resolve the situation. This can add time and cost to the overall project or, in some cases, cause the project to fail or be canceled. However, recall from FIGURE 2.5-1 that the earlier design issues are discovered, the less costly the corrective action.
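The deficiency causes discussed above each drive a different corrective path, which can be captured in a simple disposition table. The sketch below mirrors the categories in the text; the enum names and action strings are an illustrative paraphrase, not a NASA-prescribed mapping:

```python
from enum import Enum, auto

class DeficiencyCause(Enum):
    """Root causes of unsatisfactory validation outcomes, per the discussion above."""
    POOR_VALIDATION_CONDUCT = auto()   # untrained operators, uncalibrated equipment, etc.
    VERIFICATION_SHORTFALL = auto()    # lower-level end products failed to satisfy requirements
    MS_FORMULATION = auto()            # M&S boundary conditions, fidelity, or resolution issues
    DESIGN_DOES_NOT_MEET_EXPECTATIONS = auto()

CORRECTIVE_PATH = {
    DeficiencyCause.POOR_VALIDATION_CONDUCT:
        "correct procedures/environment and rerun the validation",
    DeficiencyCause.VERIFICATION_SHORTFALL:
        "re-engineer lower-level products; re-perform verification and validation",
    DeficiencyCause.MS_FORMULATION:
        "revise model formulation and fidelity; rerun the simulations",
    DeficiencyCause.DESIGN_DOES_NOT_MEET_EXPECTATIONS:
        "re-engineer the design in coordination with stakeholders",
}

def disposition(cause: DeficiencyCause) -> str:
    """Map a classified root cause to its recommended corrective path."""
    return CORRECTIVE_PATH[cause]
```

Classifying the cause first keeps a conduct problem from triggering an unnecessary redesign, and vice versa.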


5.4.1.2.4  Prepare Report and Capture Product Validation Work Products
Validation work products (inputs to the Technical Data Management Process) take many forms and involve many sources of information. The capture and recording of validation-related data is a very important, but often underemphasized, step in the Product Validation Process.

Validation results, deficiencies identified, and corrective actions taken should be captured, as should all relevant results from the application of the Product Validation Process (related decisions, rationale for decisions made, assumptions, and lessons learned).

Outcomes of capturing validation work products include the following:

• Work products and related information generated while doing Product Validation Process activities and tasks are recorded; i.e., method of validation conducted, the form of the end product used for validation, validation procedures used, validation environments, outcomes, decisions, assumptions, corrective actions, lessons learned, etc. (often captured in a matrix or other tool—see Appendix E).

• Deficiencies (e.g., variations and anomalies and out-of-compliance conditions) are identified and documented, including the actions taken to resolve.

• Proof is provided that the end product is in conformance with the stakeholder expectation set used in the validation.

• Validation report including:
  »» Recorded validation results/data;
  »» Version of the set of stakeholder expectations used;
  »» Version and form of the end product validated;
  »» Version or standard for tools and equipment used, together with applicable calibration data;
  »» Outcome of each validation including pass or fail declarations; and
  »» Discrepancy between expected and actual results.

NOTE: For systems where only a single deliverable item is developed, the Product Validation Process normally completes acceptance testing of the system. However, for systems with several production units, it is important to understand that continuing verification and validation is not an appropriate approach to use for the items following the first deliverable. Instead, acceptance testing is the preferred means to ensure that subsequent deliverables meet stakeholder expectations.

5.4.1.3  Outputs
Key outputs of validation are:

• Validated end product: This is the end product that has successfully passed validation and is ready to be transitioned to the next product layer or to the customer.

• Product validation results: These are the raw results of performing the validations.

• Product validation report: This report provides the evidence of product conformance with the stakeholder expectations that were identified as being validated for the product at this layer. It includes any nonconformance, anomalies, or other corrective actions that were taken.

• Work products: These include procedures, required personnel training, certifications,


configuration drawings, and other records generated during the validation activities.

Success criteria for this process include: (1) objective evidence of performance and the results of each system-of-interest validation activity are documented, and (2) the validation process should not be considered or designated as complete until all issues and actions are resolved.

5.4.2  Product Validation Guidance
Refer to Section 5.4.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• use of modeling and simulation,
• software validation, and
• taking credit for validation.

5.5 Product Transition
The Product Transition Process is used to transition a verified and validated end product that has been generated by product implementation or product integration to the customer at the next level in the system structure for integration into an end product or, for the top-level end product, transitioned to the intended end user. The form of the product transitioned will be a function of the product life cycle phase success criteria and the location within the system structure of the WBS model in which the end product exists. The systems engineer involvement in this process includes ensuring the product being transitioned has been properly tested and verified/validated prior to being shipped to the next level stakeholder/customer.

Product transition occurs during all phases of the life cycle. During the early phases, the technical team's products are documents, models, studies, and reports. As the project moves through the life cycle, these paper or soft products are transformed through implementation and integration processes into hardware and software solutions to meet the stakeholder expectations. They are repeated with different degrees of rigor throughout the life cycle. The Product Transition Process includes product transitions from one level of the system architecture upward. The Product Transition Process is the last of the product realization processes, and it is a bridge from one level of the system to the next higher level.

The Product Transition Process is the key to bridge from one activity, subsystem, or element to the overall engineered system. As the system development nears completion, the Product Transition Process is again applied for the end product, but with much more rigor since now the transition objective is delivery of the system-level end product to the actual end user. Depending on the kind or category of system developed, this may involve a Center or the Agency and impact thousands of individuals storing, handling, and transporting multiple end products; preparing user sites; training operators and maintenance personnel; and installing and sustaining, as applicable. Examples are transitioning the external tank, solid rocket boosters, and orbiter to Kennedy Space Center (KSC) for integration and flight. Another example is the transition of a software subsystem for integration into a combined hardware/software system.

5.5.1  Process Description
FIGURE 5.5-1 provides a typical flow diagram for the Product Transition Process and identifies typical inputs, outputs, and activities to consider in addressing product transition.

5.5.1.1  Inputs
Inputs to the Product Transition Process depend primarily on the transition requirements, the product that is being transitioned, the form of the product


[Flow diagram: the end product to be delivered (from the Product Validation Process), documentation to accompany the delivered end product (from the Technical Data Management Process), and product transition-enabling products feed the activities: prepare to conduct product transition; evaluate the end product, personnel, and enabling product readiness for product transition; prepare the end product for transition; transition the end product to the customer with required documentation based on the type of transition required; prepare sites, as required, where the end product will be stored, assembled, integrated, installed, used, and/or maintained; and capture work products from product transition activities. Outputs: the transitioned end product with applicable documentation (to the end user or, in a recursive loop, the Product Integration Process); product transition work products (to the Technical Data Management Process); and realized enabling products (to the Product Implementation, Integration, Verification, Validation, and Transition Processes).]

FIGURE 5.5-1  Product Transition Process

transition that is taking place, and the location to which the product is transitioning. Typical inputs are shown in FIGURE 5.5-1 and described below.

• The end product or products to be transitioned (from the Product Validation Process): The product to be transitioned can take several forms. It can be a subsystem component, system assembly, or top-level end product. It can be hardware, analytical models, or software. It can be newly built, purchased, or reused. A product can transition from a lower system product to a higher one by being integrated with other transitioned products. This process may be repeated until the final end product is achieved. Each succeeding transition requires unique input considerations when preparing the validated product for transition to the next level.

Early phase products can take the form of information or data generated from basic or applied research using analytical or physical models and are often in paper or electronic form. In fact, the end product for many NASA research projects or science activities is a report, paper, model, or even an oral presentation. In a sense, the dissemination of information gathered through NASA research and development is an important form of product transition.

• Documentation including manuals, procedures, and processes that accompany the end product (from the Technical Data Management Process): The documentation required for the Product Transition Process depends on the specific end product; its current location within the system


structure; and the requirements identified in various agreements, plans, or requirements documents. Typically, a product has a unique identification (i.e., serial or version number) and may have a pedigree (documentation) that specifies its heritage and current state. Pertinent information may be controlled using a configuration control process or work order system as well as design drawings and test reports. Documentation often includes proof of verification and validation conformance. A COTS product would typically contain a manufacturer's specification or fact sheet. Documentation may include operations manuals, installation instructions, and other information.

The documentation level of detail is dependent upon where the product is within the product hierarchy and the life cycle. Early in the life cycle, this documentation may be conceptual or preliminary in nature. Later in the life cycle, the documentation may be detailed design documents, user manuals, drawings, or other work products. Documentation that is gathered during the input process for the transition phase may require editing, assembling, or repackaging to ensure it is in the required condition for acceptance by the customer.

Special consideration should be given to safety, including clearly identifiable tags and markings that identify the use of hazardous materials, special handling instructions, and storage requirements.

• Product transition-enabling products, including packaging materials; containers; handling equipment; and storage, receiving, and shipping facilities (from existing resources or the Product Transition Process for enabling product realization): Product transition-enabling products may be required to facilitate the implementation, integration, evaluation, transition, training, operations, support, and/or retirement of the transition product at its next higher level or for the transition of the final end product. Some or all of the enabling products may be defined in transition-related agreements, system requirements documents, or project plans. In some cases, product transition-enabling products are developed during the realization of the product itself or may be required to be developed during the transition stage.

As a product is developed, special containers, holders, or other devices may also be developed to aid in the storing and transporting of the product through development and realization. These may be temporary accommodations that do not satisfy all the transition requirements, but allow the product to be initiated into the transition process. In such cases, the temporary accommodations will have to be modified or new accommodations will need to be designed and built or procured to meet specific transportation, handling, storage, and shipping requirements.

Sensitive or hazardous products may require special enabling products such as monitoring equipment, security features, inspection devices, safety devices, and personnel training to ensure adequate safety and environmental requirements are achieved and maintained.

5.5.1.2  Process Activities
Transitioning the product can take one of two forms:

• The delivery of lower system end products to higher ones for integration into another end product; or

• The delivery of the final end product to the customer or user that will use it in its operational environment.


In the first case, the end product is one of perhaps several other pieces that will ultimately be integrated together to form the item. In the second case, the end product is for final delivery to the customer. For example, the end product might be one of several circuit cards that will be integrated together to form the final unit that is delivered. Or that unit might also be one of several units that have to be integrated together to form the final product.

The form of the product transitioned is not only a function of the location of that product within the system product hierarchy, but also a function of the life cycle phase. Early life cycle phase products may be in the form of paper, electronic files, physical models, or technology demonstration prototypes. Later phase products may be preproduction prototypes (engineering models), the final study report, or the flight units.

FIGURE 5.5-1 shows what kind of inputs, outputs, and activities are performed during product transition regardless of where in the product hierarchy or life cycle the product is. These activities include preparing to conduct the transition; making sure the end product, all personnel, and any enabling products are ready for transitioning; preparing the site; and performing the transition including capturing and documenting all work products.

How these activities are performed and what form the documentation takes depends on where the end items are in the product hierarchy and the life cycle phase.

5.5.1.2.1  Prepare to Conduct Transition
The first task is to identify which of the two forms of transition is needed: (1) the delivery of lower system end products to higher ones for integration into another end product; or (2) the delivery of the final end product to the customer or user that will use the end product in its operational environment. The form of the product being transitioned affects transition planning and the kind of packaging, handling, storage, and transportation that is required. The customer and other stakeholder expectations, as well as the specific design solution, may indicate special transition procedures or enabling product needs for packaging, storage, handling, shipping/transporting, site preparation, installation, and/or sustainability. These requirements need to be reviewed during the preparation stage.

Other tasks in preparing to transition a product involve making sure the end product, personnel, and any enabling products are ready for that transition. This includes the availability of the documentation or models that will be sent with the end product, including proof of verification and validation conformance. The appropriateness of detail for that documentation depends upon where the product is within the product hierarchy and the life cycle. Early in the life cycle, this documentation may be preliminary in nature. Later in the life cycle, the documentation may be detailed design documents, user manuals, drawings, or other work products. Procedures necessary for conducting the transition should be reviewed and approved by this time.

Finally, the availability and skills of personnel needed to conduct the transition as well as the availability of any necessary packaging materials/containers, handling equipment, storage facilities, and shipping/transporter services should also be reviewed. Any special training necessary for the personnel to perform their tasks needs to be performed by this time.

5.5.1.2.2  Prepare the Site to Receive the Product
For either of the forms of product transition, the receiving site needs to be prepared to receive the product. Here the end product is stored, assembled, integrated, installed, used, and/or maintained as appropriate for the life cycle phase, position of the end product in the system structure, and customer agreement.


A vast number of key complex activities, many of them outside direct control of the technical team, need to be synchronized to ensure smooth transition to the end user. If transition activities are not carefully controlled, there can be impacts on schedule, cost, and safety of the end product.

A site survey may need to be performed to determine the issues and needs. This should address the adequacy of existing facilities to accept, store, and operate the new end product and identify any logistical-support-enabling products and services required but not planned for. Additionally, any modifications to existing facilities should be planned well in advance of fielding; therefore, the site survey should be made during an early phase in the product life cycle. These may include logistical enabling products and services to provide support for end-product use, operations, maintenance, and disposal. Training for users, operators, maintainers, and other support personnel may need to be conducted. National Environmental Policy Act documentation or approvals may need to be obtained prior to the receipt of the end product.

Prior to shipment or after receipt, the end product may need to be stored in suitable storage conditions to protect and secure the product and prevent damage or the deterioration of it. These conditions should have been identified early in the design life cycle.

5.5.1.2.3  Prepare the Product for Transition
Whether transitioning a product to the next room for integration into the next higher assembly, or for final transportation across the country to the customer, care should be taken to ensure the safe transportation of the product. The requirements for packaging, handling, storage, training, and transportation should have been identified during system design. Preparing the packaging for protection, security, and prevention of deterioration is critical for products placed in storage or when it is necessary to transport or ship between and within organizational facilities or between organizations by land, air, and/or water vehicles. Particular emphasis needs to be on protecting surfaces from physical damage, preventing corrosion, eliminating damage to electronic wiring or cabling, shock or stress damage, heat warping or cold fractures, moisture, and other particulate intrusion that could damage moving parts.

The design requirements should have already addressed the ease of handling or transporting the product such as component staking, addition of transportation hooks, crating, etc. The ease and safety of packing and unpacking the product should also have been addressed. Additional measures may also need to be implemented to show accountability and to securely track the product during transportation. In cases where hazardous materials are involved, special labeling or handling needs, including transportation routes, need to be in place.

5.5.1.2.4  Transition the Product
The end product is then transitioned (i.e., moved, transported, or shipped) with required documentation to the customer based on the type of transition required, e.g., to the next higher level item in the product hierarchy (often called the Product Breakdown Structure (PBS)) for product integration or to the end user. Documentation may include operations manuals, installation instructions, and other information.

The end product is finally installed into the next higher assembly or into the customer/user site using the preapproved installation procedures.

Confirm Ready to Support
After installation, whether into the next higher assembly or into the final customer site, functional and acceptance testing of the end product should be conducted. This ensures no damage from the shipping/handling process has occurred and that the product is ready for support. Any final transitional


work products should be captured as well as documentation of product acceptance.

5.5.1.2.5  Capture Product Transition Work Products
Other work products generated during the transition process are captured and archived as appropriate. These may include site plans, special handling procedures, training, certifications, videos, inspections, or other products from these activities.

5.5.1.3  Outputs
• Delivered end product with applicable documentation: This may take one of two forms:

1. Delivered end product for integration to next level up in system structure: This includes the appropriate documentation. The form of the end product and applicable documentation are a function of the life cycle phase and the placement within the system structure. (The form of the end product could be hardware, software, model, prototype, first article for test, or single operational article or multiple production articles.) Documentation includes applicable draft installation, operation, user, maintenance, or training manuals; applicable baseline documents (configuration baseline, specifications, and stakeholder expectations); and test results that reflect completion of verification and validation of the end product.

2. Delivered operational end product for end users: The appropriate documentation is to accompany the delivered end product as well as the operational end product appropriately packaged. Documentation includes applicable final installation, operation, user, maintenance, or training manuals; applicable baseline documents (configuration baseline, specifications, stakeholder expectations); and test results that reflect completion of verification and validation of the end product. If the end user will perform end product validation, sufficient documentation to support end user validation activities is delivered with the end product.

• Work products from transition activities to technical data management: Work products could include the transition plan, site surveys, measures, training modules, procedures, decisions, lessons learned, corrective actions, etc.

• Realized enabling end products to appropriate life cycle support organization: Some of the enabling products that were developed during the various phases could include fabrication or integration specialized machines; tools; jigs; fabrication processes and manuals; integration processes and manuals; specialized inspection, analysis, demonstration, or test equipment; tools; test stands; specialized packaging materials and containers; handling equipment; storage-site environments; shipping or transportation vehicles or equipment; specialized courseware; instructional site environments; and delivery of the training instruction. For the later life cycle phases, enabling products that are to be delivered may include specialized mission control equipment; data collection equipment; data analysis equipment; operations manuals; specialized maintenance equipment, tools, manuals, and spare parts; specialized recovery equipment; disposal equipment; and readying recovery or disposal site environments.

The process is complete when the following activities have been accomplished:

• For deliveries to the integration path, the end product is delivered to intended usage sites in a condition suitable for integration with other end products or composites of end products.


Procedures, decisions, assumptions, anomalies, corrective actions, lessons learned, etc., resulting from transition for integration are recorded.

• For delivery to the end user path, the end products are installed at the appropriate sites; appropriate acceptance and certification activities are completed; training of users, operators, maintainers, and other necessary personnel is completed; and delivery is closed out with appropriate acceptance documentation.

• Any realized enabling end products are also delivered as appropriate including procedures, decisions, assumptions, anomalies, corrective actions, lessons learned, etc., resulting from transition-enabling products.

5.5.2  Product Transition Guidance
Refer to Section 5.5.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• additional product transition considerations and
• what's next after product transition to the end user.

6.0 Crosscutting Technical Management

This chapter describes the activities in the technical management processes listed in the systems engineering engine (FIGURE 2.1-1). The processes described in Chapters 4 and 5 are performed through the design and realization phases of a product. These processes can occur throughout the product life cycle, from concept through disposal. They may occur simultaneously with any of the other processes. The chapter is separated into sections corresponding to the technical management processes 10 through 17 listed in FIGURE 2.1-1. Each technical management process is discussed in terms of the inputs, the activities, and the outputs. Additional guidance is provided using examples that are relevant to NASA projects.

The technical management processes are the bridges between project management and the technical team. In this portion of the engine, eight processes provide the crosscutting functions that allow the design solution to be developed, realized, and to operate. Even though every technical team member might not be directly involved with these eight processes, they are indirectly affected by these key functions. Every member of the technical team relies on technical planning; management of requirements, interfaces, technical risk, configuration, and technical data; technical assessment; and decision analysis to meet the project's objectives. Without these crosscutting processes, individual members and tasks cannot be integrated into a functioning system that meets the ConOps within cost and schedule. These technical processes also support the project management team in executing project control.

The next sections describe each of the eight technical management processes and their associated products for a given NASA mission.

6.1 Technical Planning
The Technical Planning Process, the first of the eight technical management processes contained in the systems engineering engine, establishes a plan for applying and managing each of the common technical processes that will be used to drive the development of system products and associated work products. This process also establishes a plan for identifying and defining the technical effort required to satisfy the project objectives and life cycle phase success criteria within the cost, schedule, and risk constraints of the project.

This effort starts with the technical team conducting extensive planning early in Pre-Phase A. With


CROSSCUTTING TECHNICAL MANAGEMENT KEYS

• Thoroughly understand and plan the scope of the technical effort by investing time upfront to develop the
technical product breakdown structure, the technical schedule and workflow diagrams, and the technical
resource requirements and constraints (funding, budget, facilities, and long-lead items) that will be the
technical planning infrastructure. The systems engineer also needs to be familiar with the non-technical
aspects of the project.
• Define all interfaces and assign interface authorities and responsibilities to each, both intra- and inter-organizational. This includes understanding potential incompatibilities and defining the transition processes.
• Control of the configuration is critical to understanding how changes will impact the system. For example,
changes in design and environment could invalidate previous analysis results.
• Conduct milestone reviews to enable a critical and valuable assessment to be performed. These reviews
are not to be solely used to meet contractual or scheduling incentives. These reviews have specific
entrance criteria and should be conducted when these are met.
• Understand any biases, assumptions, and constraints that impact the analysis results.
• Place all analysis under configuration control to be able to track the impact of changes and understand
when the analysis needs to be reevaluated.

this early planning, technical team members will understand the roles and responsibilities of each team member, and can establish cost and schedule goals and objectives. From this effort, the Systems Engineering Management Plan (SEMP) and other technical plans are developed and baselined. Once the SEMP and technical plans have been established, they should be synchronized with the project master plans and schedule. In addition, the plans for establishing and executing all technical contracting efforts are identified.

This is a recursive and iterative process. Early in the life cycle, the technical plans are established and synchronized to run the design and realization processes. As the system matures and progresses through the life cycle, these plans should be updated as necessary to reflect the current environment and resources and to control the project's performance, cost, and schedule. At a minimum, these updates will occur at every Key Decision Point (KDP). However, if there is a significant change in the project, such as new stakeholder expectations, resource adjustments, or other constraints, all plans should be analyzed for the impact of these changes on the baselined project.

6.1.1 Process Description

FIGURE 6.1-1 provides a typical flow diagram for the Technical Planning Process and identifies typical inputs, outputs, and activities to consider in addressing technical planning.

6.1.1.1  Inputs

Input to the Technical Planning Process comes from both the project management and technical teams as outputs from the other common technical processes. Initial planning utilizing external inputs from the project to determine the general scope and framework


[Figure: flow diagram. Inputs from the project and other processes (Project Technical Effort Requirements and Project Resource Constraints; Agreements, Capability Needs, Applicable Product-Line Life Cycle Phase; Applicable Policies, Procedures, Standards, and Organizational Processes; Prior Phase or Baseline Plans; Replanning Needs) feed six activities: prepare to conduct technical planning; schedule, organize, and cost the technical work; prepare SEMP and other technical plans; obtain stakeholder commitments to technical plans; issue authorized technical work directives; and capture work products from technical planning activities. Outputs: Cost Estimates, Schedules, and Resource Requests (to project); Product and Process Measures (to Technical Assessment Process); SEMP and Other Technical Plans (to applicable technical processes); Technical Work Directives (to applicable technical teams); Technical Planning Work Products (to Technical Data Management Process).]

FIGURE 6.1-1  Technical Planning Process

of the technical effort will be based on known technical and programmatic requirements, constraints, policies, and processes. Throughout the project's life cycle, the technical team continually incorporates results into the technical planning strategy and documentation and any internal changes based on decisions and assessments generated by the other processes of the SE engine or from requirements and constraints mandated by the project.

• Project Technical Effort Requirements and Project Resource Constraints: The program/project plan provides the project's top-level technical requirements, the available budget allocated to the program/project from the program, and the desired schedule to support overall program needs. Although the budget and schedule allocated to the program/project serve as constraints, the technical team generates a technical cost estimate and schedule based on the actual work required to satisfy the technical requirements. Discrepancies between the allocated budget and schedule and the technical team's actual cost estimate and schedule should be reconciled continuously throughout the life cycle.

• Agreements, Capability Needs, Applicable Product Life Cycle Phase: The program/project plan also defines the applicable life cycle phases and milestones, as well as any internal and external agreements or capability needs required for successful execution. The life cycle phases and


programmatic milestones provide the general framework for establishing the technical planning effort and for generating the detailed technical activities and products required to meet the overall milestones in each of the life cycle phases.

• Applicable Policies, Procedures, Standards, and Organizational Processes: The program/project plan includes all programmatic policies, procedures, standards, and organizational processes that should be adhered to during execution of the technical effort. The technical team should develop a technical approach that ensures the program/project requirements are satisfied and that any technical procedures, processes, and standards to be used in developing the intermediate and final products comply with the policies and processes mandated in the program/project plan.

• Prior Phase or Baseline Plans: The latest technical plans (either baselined or from the previous life cycle phase) from the Data Management or Configuration Management Processes should be used in updating the technical planning for the upcoming life cycle phase.

• Replanning Needs: Technical planning updates may be required based on results from technical reviews conducted in the Technical Assessment Process, issues identified during the Technical Risk Management Process, or from decisions made during the Decision Analysis Process.

6.1.1.2  Process Activities

Technical planning as it relates to systems engineering at NASA is intended to define how the project will be organized, structured, and conducted and to identify, define, and plan how the 17 common technical processes in NPR 7123.1, NASA Systems Engineering Processes and Requirements, will be applied in each life cycle phase for all levels of the product hierarchy (see Section 6.1.2.1 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository) within the system structure to meet product life cycle phase success criteria. A key document capturing and updating the details from the technical planning process is the SEMP.

The SEMP is a subordinate document to the project plan. The project plan defines how the project will be managed to achieve its goals and objectives within defined programmatic constraints. The SEMP defines for all project participants how the project will be technically managed within the constraints established by the project. The SEMP also communicates how the systems engineering management techniques will be applied throughout all phases of the project life cycle.

Technical planning should be tightly integrated with the Technical Risk Management Process (see Section 6.4) and the Technical Assessment Process (see Section 6.7) to ensure corrective action for future activities will be incorporated based on current issues identified within the project.

Technical planning, as opposed to program or project planning, addresses the scope of the technical effort required to develop the system products. While the project manager concentrates on managing the overall project life cycle, the technical team, led by the systems engineer, concentrates on managing the technical aspects of the project. The technical team identifies, defines, and develops plans for performing decomposition, definition, integration, verification, and validation of the system while orchestrating and incorporating the appropriate concurrent and crosscutting engineering. Additional planning includes defining and planning for the appropriate technical reviews, audits, assessments, and status reports and determining crosscutting engineering discipline and/or design verification requirements.
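One lightweight way to record which common technical processes receive significant effort in which life cycle phase is a simple applicability matrix. The sketch below is illustrative only: it uses an invented subset of the 17 processes and invented phase assignments, not the formal tailoring recorded in a project's NPR 7123.1 compliance matrix.

```python
# Illustrative planning matrix: a small, invented subset of the 17 common
# technical processes mapped to the life cycle phases where significant
# effort is planned. Real tailoring lives in the project compliance matrix.
phases = ["Pre-Phase A", "Phase A", "Phase B", "Phase C"]

# process -> set of phases where effort is planned (placeholder values)
applicability = {
    "Stakeholder Expectations Definition": {"Pre-Phase A", "Phase A"},
    "Technical Requirements Definition":   {"Phase A", "Phase B"},
    "Design Solution Definition":          {"Phase B", "Phase C"},
    "Technical Planning":                  set(phases),  # applies throughout
}

# Print a compact X/- matrix, one row per process.
for process, active in applicability.items():
    row = ["X" if p in active else "-" for p in phases]
    print(f"{process:38s} " + " ".join(row))
```

A table like this is only a planning aid; the point is that every process/phase pairing is an explicit decision rather than an omission.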


This section describes how to perform the activities contained in the Technical Planning Process shown in FIGURE 6.1-1. The initial technical planning at the beginning of the project establishes the technical team members; their roles and responsibilities; and the tools, processes, and resources that will be utilized in executing the technical effort. In addition, the expected activities that the technical team will perform and the products it will produce are identified, defined, and scheduled. Technical planning continues to evolve as actual data from completed tasks are received and details of near-term and future activities are known.

6.1.1.2.1  Technical Planning Preparation

For technical planning to be conducted properly, the processes and procedures that are needed to conduct technical planning should be identified, defined, and communicated. As participants are identified, their roles and responsibilities and any training and/or certification activities should be clearly defined and communicated.

Team Selection
Teams engaged in the early part of the technical planning process need to identify the required skill mix for technical teams that will develop and produce a product. Typically, a technical team consists of a mix of both subsystem and discipline engineers. Considering a spacecraft example, subsystem engineers normally have cognizance over development of a particular subsystem (e.g., mechanical, power, etc.), whereas discipline engineers normally provide specific analyses (e.g., flight dynamics, radiation, etc.). The availability of appropriately skilled personnel also needs to be considered.

To an extent, determining the skill mix required for developing any particular product is a subjective process. Due to this, the skill mix is normally determined in consultation with people experienced in leading design teams for a particular mission or technical application. Some of the subjective considerations involved include the product and its requirements, the mission class, and the project phase.

Continuing with a spacecraft example, most teams typically share a common core of required skills, such as subsystem engineering for mechanical, thermal, power, etc. However, the particular requirements of a spacecraft and mission can cause the skill mix to vary. For example, as opposed to robotic space missions, human-rated systems typically add the need for human systems discipline engineering and environmental control and life support subsystem engineering. As opposed to near Earth space missions, deep space missions may add the need for safety and planetary protection discipline engineering specific to contamination of the Earth or remote solar system bodies. And, as opposed to teams designing spacecraft instruments that operate at moderate temperatures, teams designing spacecraft instruments that operate at cryogenic temperatures will need cryogenics subsystem support.

Mission class and project phase may also influence the required team skill mix. For example, with respect to mission class, certain discipline analyses needed for Class A and B missions may not be required for Class D (or lower) missions. And with respect to project phase, some design and analyses may be performed by a single general discipline in Pre-Phase A and Phase A, whereas the need to conduct design and analyses in more detail in Phases B and C may indicate the need for multiple specialized subsystem design and discipline engineering skills.

An example skill mix for a Pre-Phase A technical team tasked to design a cryogenic interferometer space observatory is shown in TABLE 6.1-1 for purposes of illustration. For simplicity, analysis and technology development is assumed to be included


TABLE 6.1-1  Example Engineering Team Disciplines in Pre-Phase A for Robotic Infrared Observatory
Systems Engineering
Mission Systems Engineer
Instrument Systems Engineer
Spacecraft Bus, Flight Dynamics, Launch Vehicle Interface, Ground System Interface Subteam
Flight Dynamics Analysis
Mission Operations (includes ConOps, & interfaces with ground station, mission ops center, science ops center)
Bus Mechanical Subsystem (includes mechanisms)
Bus Power Subsystem (includes electrical harness)
Bus Thermal Subsystem
Bus Propulsion Subsystem
Bus Attitude Control and Determination Subsystem
Bus Avionics Subsystem
Bus Communications Subsystem
Bus Flight Software Subsystem
Integration and Test (bus, observatory)
Launch Vehicle Integration
Radiation Analysis
Orbital Debris/End of Mission Planning Analysis
System Reliability/Fault Tolerance Analysis (includes analysis of instrument)
Instrument Subteam
Mechanical Subsystem
Mechanisms Subsystem
Thermal Subsystem
Cryogenics Subsystem
Avionics Subsystem (incl. Electrical Harness)
Mechanism Drive Electronics Subsystem
Detector Subsystem
Optics Subsystem
Control Subsystem
Metrology Subsystem
Flight Software Subsystem
Integration and Test
Stray Light/Radiometry Analysis
Other Specialty Disciplines (e.g., Contamination Analysis) as needed


in the subsystem or discipline shown. For example, this means "mechanical subsystem" includes both loads and dynamics analysis and mechanical technology development.

Once the processes, people, and roles and responsibilities are in place, a planning strategy may be formulated for the technical effort. A basic technical planning strategy should address the following:

• The communication strategy within the technical team and for up and out communications;
• Identification and tailoring of NASA procedural requirements that apply to each level of the PBS structure;
• The level of planning documentation required for the SEMP and all other technical planning documents;
• Identifying and collecting input documentation;
• The sequence of technical work to be conducted, including inputs and outputs;
• The deliverable products from the technical work;
• How to capture the work products of technical activities;
• How technical risks will be identified and managed;
• The tools, methods, and training needed to conduct the technical effort;
• The involvement of stakeholders in each facet of the technical effort;
• How the NASA technical team will be involved with the technical efforts of external contractors;
• The entry and success criteria for milestones, such as technical reviews and life cycle phases;
• The identification, definition, and control of internal and external interfaces;
• The identification and incorporation of relevant lessons learned into the technical planning;
• The team's approach to capturing lessons learned during the project and how those lessons will be recorded;
• The approach for technology development and how the resulting technology will be incorporated into the project;
• The identification and definition of the technical metrics for measuring and tracking progress to the realized product;
• The criteria for make, buy, or reuse decisions and incorporation criteria for Commercial Off-the-Shelf (COTS) software and hardware;
• The plan to identify and mitigate off-nominal performance;
• The "how-tos" for contingency planning and replanning;
• The plan for status assessment and reporting;
• The approach to decision analysis, including materials needed, skills required, and expectations in terms of accuracy; and
• The plan for managing the human element in the technical activities and product.

By addressing these items and others unique to the project, the technical team will have a basis for understanding and defining the scope of the technical effort, including the deliverable products that the overall technical effort will produce, the schedule and key milestones for the project that the technical team should support, and the resources required by the technical team to perform the work.

A key element in defining the technical planning effort is understanding the amount of work associated with performing the identified activities. Once the scope of the technical effort begins to coalesce, the technical team may begin to define specific planning activities and to estimate the amount of effort and resources required to perform each task. Historically, many projects have underestimated the resources required to perform proper planning activities and have been forced into a position of continuous crisis management in order to keep up with changes in the project.
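The effort-estimation step can be made concrete with a simple roll-up. The sketch below is illustrative only: the tasks, base estimates, and reserve percentages are invented placeholders, and a real project would calibrate contingency from historical data and the complexity and criticality of the work, as discussed above.

```python
# Hypothetical effort roll-up: each task carries a base estimate
# (work-months) plus a contingency reserve keyed to its complexity,
# echoing the guidance that reserves scale with complexity/criticality.
contingency_by_complexity = {"low": 0.10, "medium": 0.25, "high": 0.50}

# (task, base estimate in work-months, complexity) -- invented values
task_estimates = [
    ("requirements definition", 6.0, "medium"),
    ("interface definition",    4.0, "high"),
    ("verification planning",   3.0, "low"),
]

def estimate_with_reserve(base, complexity):
    """Base estimate plus a complexity-driven contingency reserve."""
    return base * (1.0 + contingency_by_complexity[complexity])

total = sum(estimate_with_reserve(b, c) for _, b, c in task_estimates)
print(round(total, 1))  # 16.8 (versus a 13.0 base total with no reserve)
```

Even a toy roll-up like this makes the gap between the reserved and unreserved totals explicit, which is exactly the discrepancy the text says should be reconciled with the allocated budget rather than discovered in crisis.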


TABLE 6.1-2 Examples of Types of Facilities to Consider During Planning

Communications and Tracking Labs Models and Simulation Labs Thermal Chambers
Power Systems Labs Prototype Development Shops Vibration Labs
Propulsion Test Stands Calibration Labs Radiation Labs
Mechanical/Structures Labs Biological Labs Animal Care Labs
Instrumentation Labs Space Materials Curation Labs Flight Hardware Storage Areas
Human Systems Labs Electromagnetic Effects Labs Design Visualization
Guidance and Navigation Labs Materials Labs Wiring Shops
Robotics Labs Vacuum Chambers NDE Labs
Software Development Environment Mission Control Center Logistics Warehouse
Meeting Rooms Training Facilities Conference Facilities
Education/Outreach Centers Server Farms Project Documentation Centers

Identifying Facilities
The planning process also includes identifying the required facilities, laboratories, test beds, and instrumentation needed to build, test, launch, and operate a variety of commercial and Government products. A sample list of the kinds of facilities that might be considered when planning is illustrated in TABLE 6.1-2.

6.1.1.2.2  Define the Technical Work

The technical effort should be defined commensurate with the level of detail needed for the life cycle phase. When performing the technical planning, realistic values for cost, schedule, and labor resources should be used. Whether extrapolated from historical databases or from interactive planning sessions with the project and stakeholders, realistic values should be calculated and provided to the project team. Contingency should be included in any estimate and should be based on the complexity and criticality of the effort. Contingency planning should be conducted. The following are examples of contingency planning:

• Additional, unplanned-for software engineering resources are typically needed during hardware and systems development and testing to aid in troubleshooting errors/anomalies. Frequently, software engineers are called upon to help troubleshoot problems and pinpoint the source of errors in hardware and systems development and testing (e.g., for writing additional test drivers to debug hardware problems). Additional software staff should be planned into the project contingencies to accommodate inevitable component and system debugging and avoid cost and schedule overruns.

• Hardware-In-the-Loop (HWIL) should be accounted for in the technical planning contingencies. HWIL testing is typically accomplished as a debugging exercise where the hardware and software are brought together for the first time in the costly environment of HWIL. If upfront work is not done to understand the messages and errors arising during this test, additional time in the HWIL facility may result in significant cost and schedule impacts. Impacts may be mitigated through upfront planning, such as making appropriate debugging software available to the technical team prior to the test.


• Similarly, Human-In-The-Loop (HITL) evaluations identify contingency operational issues. HITL investigations are particularly critical early in the design process to expose, identify, and cost-effectively correct operational issues—nominal, maintenance, repair, off-nominal, training, etc.—in the required human interactions with the planned design. HITL testing should also be approached as a debugging exercise where hardware, software, and human elements interact and their performance is evaluated. If operational design and/or performance issues are not identified early, the cost of late design changes will be significant.

6.1.1.2.3  Schedule, Organize, and Budget the Technical Effort

Once the technical team has defined the technical work to be done, efforts can focus on producing a schedule and cost estimate for the technical portion of the project. The technical team should organize the technical tasks according to the project WBS in a logical sequence of events, taking into consideration the major project milestones, phasing of available funding, and timing of the availability of supporting resources.

Scheduling
Products described in the WBS are the result of activities that take time to complete. These activities have time precedence relationships among them that may be used to create a network schedule explicitly defining the dependencies of each activity on other activities, the availability of resources, and the receipt of receivables from outside sources. Use of a scheduling tool may facilitate the development and maintenance of the schedule.

Scheduling is an essential component of planning and managing the activities of a project. The process of creating a network schedule provides a standard method for defining and communicating what needs to be done, how long it will take, and how each element of the project WBS might affect other elements. A complete network schedule may be used to calculate how long it will take to complete a project; which activities determine that duration (i.e., critical path activities); and how much spare time (i.e., float) exists for all the other activities of the project.

"Critical path" is the sequence of dependent tasks that determines the longest duration of time needed to complete the project. These tasks drive the schedule and continually change, so they should be updated. The critical path may encompass only one task or a series of interrelated tasks. It is important to identify the critical path and the resources needed to complete the critical tasks along the path if the project is to be completed on time and within its resources. As the project progresses, the critical path will change as the critical tasks are completed or as other tasks are delayed. This evolving critical path with its identified tasks needs to be carefully monitored during the progression of the project.

Network scheduling systems help managers accurately assess the impact of both technical and resource changes on the cost and schedule of a project. Cost and technical problems often show up first as schedule problems. Understanding the project's schedule is a prerequisite for determining an accurate project budget and for tracking performance and progress. Because network schedules show how each activity affects other activities, they assist in assessing and predicting the consequences of schedule slips or accelerations of an activity on the entire project.

For additional information on scheduling, refer to NASA/SP-2010-3403, NASA Schedule Management Handbook.
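The forward- and backward-pass arithmetic behind a network schedule can be sketched briefly. This is an illustrative sketch only: the task names and durations are invented, and a real project would use a dedicated scheduling tool as noted above.

```python
# Illustrative critical-path (CPM) sketch. The forward pass computes
# earliest start/finish times; the backward pass computes latest times;
# float is the slack a task has before it delays the project. Tasks and
# durations (in weeks) are invented for illustration.
from collections import defaultdict

# task -> (duration, list of predecessor tasks), listed in dependency order
tasks = {
    "requirements": (4, []),
    "design":       (8, ["requirements"]),
    "fabrication":  (10, ["design"]),
    "software":     (12, ["design"]),
    "integration":  (6, ["fabrication", "software"]),
}

# Forward pass: a task starts when its latest predecessor finishes.
earliest_finish = {}
for name, (dur, preds) in tasks.items():
    earliest_start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[name] = earliest_start + dur

project_duration = max(earliest_finish.values())

# Backward pass: a task's latest finish is bounded by its successors.
successors = defaultdict(list)
for name, (_, preds) in tasks.items():
    for p in preds:
        successors[p].append(name)

latest_start = {}
for name in reversed(list(tasks)):  # reverse dependency order
    dur = tasks[name][0]
    latest_finish = min((latest_start[s] for s in successors[name]),
                        default=project_duration)
    latest_start[name] = latest_finish - dur

# Float = latest start - earliest start; critical tasks have zero float.
floats = {n: latest_start[n] - (earliest_finish[n] - tasks[n][0])
          for n in tasks}
critical_path = [n for n in tasks if floats[n] == 0]

print(project_duration)       # 30
print(critical_path)          # requirements -> design -> software -> integration
print(floats["fabrication"])  # 2 (weeks of float)
```

In this invented network the software task, not fabrication, sits on the critical path: fabrication can slip two weeks without moving the end date, which is exactly the float-versus-critical-path distinction described above.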


Budgeting
Budgeting and resource planning involve establishing a reasonable project baseline budget and the capability to analyze changes to that baseline resulting from technical and/or schedule changes. The project's WBS, baseline schedule, and budget should be viewed as mutually dependent, reflecting the technical content, time, and cost of meeting the project's goals and objectives. The budgeting process needs to take into account whether a fixed cost cap or fixed cost profile exists. When no such cap or profile exists, a baseline budget is developed from the WBS and network schedule. This specifically involves combining the project's workforce and other resource needs with the appropriate workforce rates and other financial and programmatic factors to obtain cost element estimates. These elements of cost include

• direct labor costs,
• overhead costs,
• other direct costs (travel, data processing, etc.),
• subcontract costs,
• material costs,
• equipment costs,
• general and administrative costs,
• cost of money (i.e., interest payments, if applicable),
• fee (if applicable), and
• contingency (Unallocated Future Expenses (UFE)).

For additional information on cost estimating, refer to the NASA Cost Estimating Handbook and NPR 7120.5, NASA Space Flight Program and Project Management Requirements.

6.1.1.2.4  Prepare the SEMP and Other Technical Plans

Systems Engineering Management Plan
The SEMP is the primary, top-level technical management document for the project and is developed early in the Formulation Phase and updated throughout the project life cycle. The SEMP is driven by the type of project, the phase in the project life cycle, and the technical development risks and is written specifically for each project or project element. While the specific content of the SEMP is tailored to the project, the recommended content is discussed in Appendix J. It is important to remember that the main value of the SEMP is in the work that goes into the planning.

The technical team, working under the overall project plan, develops and updates the SEMP as necessary. The technical team works with the project manager to review the content and obtain concurrence. This allows for thorough discussion and coordination of how the proposed technical activities would impact the programmatic, cost, and schedule aspects of the project. The SEMP provides the specifics of the technical effort and describes the technical processes that will be used, how the processes will be applied using appropriate activities, how the project will be organized to accomplish the activities, and the cost and schedule associated with accomplishing the activities.

The physical length of a SEMP is not what is important. This will vary from project to project. The plan needs to be adequate to address the specific technical needs of the project. It is a living document that is updated as often as necessary to incorporate new information as it becomes available and as the project develops through the Implementation Phase. The SEMP should not duplicate other project documents; however, the SEMP should reference and summarize the content of other technical plans.

The systems engineer and project manager should identify additional required technical plans based on the project scope and type. If plans are not included in the SEMP, they should be referenced and coordinated in the development of the SEMP. Other plans, such as system safety, probabilistic risk assessment, and an HSI Plan, also need to be planned for and coordinated with the SEMP. If a technical plan is a


stand-alone, it should be referenced in the SEMP. Depending on the size and complexity of the project, these may be separate plans or they may be included within the SEMP. Once identified, the plans can be developed, training on these plans established, and the plans implemented. Examples of technical plans in addition to the SEMP are listed in Appendix K.

The SEMP should be developed during pre-formulation. In developing the SEMP, the technical approach to the project's life cycle is developed. This determines the project's length and cost. The development of the programmatic and technical management approaches requires that the key project personnel develop an understanding of the work to be performed and the relationships among the various parts of that work. Refer to Sections 6.1.2.1 and 6.1.1.2 on WBSs and network scheduling, respectively. The SEMP then flows into the project plan to ensure the proper allocation of resources including cost, schedule, and personnel.

The SEMP's development requires contributions from knowledgeable programmatic and technical experts from all areas of the project that can significantly influence the project's outcome. The involvement of recognized experts is needed to establish a SEMP that is credible to the project manager and to secure the full commitment of the project team.

Role of the SEMP
The SEMP is the rule book that describes to all participants how the project will be technically managed. The NASA technical team on the project should have a SEMP to describe how it will conduct its technical management, and each contractor should have a SEMP to describe how it will manage in accordance with both its contract and NASA's technical management practices. Since the SEMP is unique to a project and contract, it should be updated for each significant programmatic change or it will become outmoded and unused and the project could slide into an uncontrolled state. The lead NASA field Center should have its SEMP developed before attempting to prepare an initial cost estimate since activities that incur cost, such as technical risk reduction and human element accounting, need to be identified and described beforehand. The contractor should have its SEMP developed during the proposal process (prior to costing and pricing) because the SEMP describes the technical content of the project, the potentially costly risk management activities, and the verification and validation techniques to be used, all of which should be included in the preparation of project cost estimates. The SEMPs from the supporting Centers should be developed along with the primary project SEMP. The project SEMP is the senior technical management document for the project; all other technical plans should comply with it. The SEMP should be comprehensive and describe how a fully integrated engineering effort will be managed and conducted.

Verification Plan
The verification plan is developed as part of the Technical Planning Process and is baselined at PDR. As the design matures throughout the life cycle, the plan is updated and refined as needed. The task of preparing the verification plan includes establishing the method of verification to be performed, dependent on the life cycle phase; the position of the product in the system structure; the form of the product used; and the related costs of verification of individual specified requirements. The verification methods include analyses, inspection, demonstration, and test. In some cases, the complete verification of a given requirement might require more than one method. For example, to verify the performance of a product may require looking at many use cases. This might be accomplished by running a Monte Carlo simulation (analysis) and also running actual tests on a few of the key cases. The verification plan, typically written

NASA SYSTEMS ENGINEERING HANDBOOK 123
6.0  Crosscutting Technical Management

at a detailed technical level, plays a pivotal role in bottom-up product realization.

A phase product can be verified recursively throughout the project life cycle and on a wide variety of product forms. For example:

• simulated (algorithmic models, virtual reality simulator);
• mock-up (plywood, brassboard, breadboard);
• concept description (paper report);
• engineering unit (fully functional but may not be same form/fit);
• prototype (form, fit, and function);

TYPES OF HARDWARE

Breadboard: A low fidelity unit that demonstrates function only without considering form or fit in the case
of hardware or platform in the case of software. It often uses commercial and/or ad hoc components and is
not intended to provide definitive information regarding operational performance.

Brassboard: A medium fidelity functional unit that typically tries to make use of as much operational
hardware/software as possible and begins to address scaling issues associated with the operational
system. It does not have the engineering pedigree in all aspects, but is structured to be able to operate in
simulated operational environments in order to assess performance of critical functions.

Engineering Unit: A high fidelity unit that demonstrates critical aspects of the engineering processes
involved in the development of the operational unit. Engineering test units are intended to closely resemble
the final product (hardware/software) to the maximum extent possible and are built and tested so as
to establish confidence that the design will function in the expected environments. In some cases, the
engineering unit will become the final product, assuming proper traceability has been exercised over the
components and hardware handling.

Prototype Unit: The prototype unit demonstrates form, fit, and function at a scale deemed to be
representative of the final product operating in its operational environment. A subscale test article provides
fidelity sufficient to permit validation of analytical models capable of predicting the behavior of full-scale
systems in an operational environment.

Qualification Unit: A unit that is the same as the flight unit (form, fit, function, components, etc.) that will be
exposed to the extremes of the environmental criteria (thermal, vibration, etc.). The unit will typically not be
flown due to these off-nominal stresses.

Protoflight Unit: In projects that will not develop a qualification unit, the flight unit may be designated as a
protoflight unit and a limited version of qualification test ranges will be applied. This unit will be flown.

Flight Unit: The end product that will be flown and will typically undergo acceptance level testing.
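The multi-method verification example described earlier, a Monte Carlo simulation (analysis) supplemented by actual tests of a few key cases, can be sketched as follows. This is a minimal illustration only; the latency requirement, the 99 percent threshold, and the performance model are invented for the sketch, not drawn from the handbook.

```python
import random

# Hypothetical requirement: end-to-end latency shall be under 50 ms in at
# least 99 percent of use cases. All numbers here are illustrative.
REQUIRED_LATENCY_MS = 50.0
REQUIRED_SUCCESS_FRACTION = 0.99

def simulated_latency_ms(rng: random.Random) -> float:
    """Stand-in performance model: base latency plus random load effects."""
    return 30.0 + rng.gauss(5.0, 3.0)

def monte_carlo_verification(trials: int = 100_000, seed: int = 1) -> float:
    """Analysis portion: estimate the fraction of simulated use cases
    that meet the requirement across randomly drawn conditions."""
    rng = random.Random(seed)
    passed = sum(simulated_latency_ms(rng) < REQUIRED_LATENCY_MS
                 for _ in range(trials))
    return passed / trials

fraction = monte_carlo_verification()
print(f"Simulated compliance: {fraction:.4f}")
print("PASS" if fraction >= REQUIRED_SUCCESS_FRACTION else "FAIL")
# A complete verification would still anchor this analysis with actual
# tests on a few of the key cases, as the verification plan dictates.
```

The simulation alone would not normally close the requirement; it is the analysis half of a combined analysis-plus-test approach.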


• design verification test units (form, fit, and function is the same, but they may not have flight parts);
• qualification units (identical to flight units but may be subjected to extreme environments); and
• flight units (end product that is flown, including protoflight units).

Verification of the end product—that is, the official “run for the record” verification where the program/project takes credit for meeting a requirement—is usually performed on a qualification, protoflight, or flight unit to ensure its applicability to the flight system. However, with discussion and approval from the program/project and systems engineering teams, verification credit may be taken on lower fidelity units if they can be shown to be sufficiently like the flight units in the areas to be verified.

Any of these types of product forms may be in any of these states:

• produced (built, fabricated, manufactured, or coded);
• reused (modified internal non-developmental products or OTS product); or
• assembled and integrated (a composite of lower-level products).

The conditions and environment under which the product is to be verified should be established and the verification should be planned based on the associated entrance/exit criteria that are identified. The Decision Analysis Process should be used to help finalize the planning details.

NOTE: The final, official verification of the end product should be on a controlled unit. Typically, attempting to “buy off” a “shall” on a prototype is not acceptable; it is usually completed on a qualification, flight, or other more final, controlled unit.

Procedures should be prepared to conduct verification based on the method (e.g., analysis, inspection, demonstration, or test) planned. These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational use scenarios are thought through in order to explore all possible verification activities to be performed.

NOTE: Verification planning begins early in the project life cycle during the requirements development phase. (See Section 4.2.) The verification approach to use should be included as part of requirements development to plan for future activities, to establish special requirements derived from identified verification-enabling products, and to ensure that the requirements are verifiable. Updates to verification planning continue throughout logical decomposition and design development, especially as design reviews and simulations shed light on items under consideration. (See Section 6.1.)

As appropriate, project risk items are updated based on approved verification strategies that cannot duplicate fully integrated test systems, configurations, and/or target operating environments. Rationales, trade space, optimization results, and implications of the approaches are documented in the new or revised risk statements as well as references to accommodate future design, test, and operational changes to the project baseline.

Validation Plan
The validation plan is one of the work products of the Technical Planning Process and is generated during the Design Solution Process to validate the end product against the baselined stakeholder expectations. This plan can take many forms. The plan describes the total Test and Evaluation (T&E) planning from


development of lower-end through higher-end products in the system structure and through operational T&E into production and acceptance. It may combine the verification and validation plans into a single document. (See Appendix I for a sample Verification and Validation Plan outline.)

The methods of validation include test, demonstration, inspection, and analysis. While the name of each method is the same as the name of the methods for verification, the purpose and intent as described above are quite different.

Planning to conduct the product validation is a key first step. The method of validation to be used (e.g., analysis, demonstration, inspection, or test) should be established based on the form of the realized end product, the applicable life cycle phase, cost, schedule, resources available, and location of the system product within the system structure.

An established set or subset of expectations or behaviors to be validated should be identified and the validation plan reviewed (an output of the Technical Planning Process, based on design solution outputs) for any specific procedures, constraints, success criteria, or other validation requirements. The conditions and environment under which the product is to be validated should be established and the validation should be planned based on the relevant life cycle phase and associated success criteria identified. The Decision Analysis Process should be used to help finalize the planning details.

It is important to review the validation plans with relevant stakeholders and to understand the relationship between the context of the validation and the context of use (human involvement). As part of the planning process, validation-enabling products should be identified and scheduling and/or acquisition should be initiated.

Procedures should be prepared to conduct validation based on the method planned (e.g., analysis, inspection, demonstration, or test). These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational and use-case scenarios are thought through in order to explore all possible validation activities to be performed.

Validation is conducted by the user/operator or by the developer as determined by NASA Center directives or the contract with the developers. Systems-level validation (e.g., customer Test and Evaluation (T&E) and some other types of validation) may be performed by an acquirer testing organization. For those portions of validation performed by the developer, appropriate agreements should be negotiated to ensure that validation proof-of-documentation is delivered with the product.

Regardless of the source (buy, make, reuse, assemble and integrate) and the position in the system structure, all realized end products should be validated to demonstrate/confirm satisfaction of stakeholder expectations. Variations, anomalies, and out-of-compliance conditions, where such have been detected, are documented along with the actions taken to resolve the discrepancies. Validation is typically carried out in the intended operational environment or a relevant environment under simulated or actual operational conditions, not necessarily under the tightly controlled conditions usually employed for the Product Verification Process.

Validation of phase products can be performed recursively throughout the project life cycle and on a wide variety of product forms. For example:

• simulated (algorithmic models, virtual reality simulator);
• mock-up (plywood, brassboard, breadboard);


ENVIRONMENTS

Relevant Environment: Not all systems, subsystems, and/or components need to be operated in the
operational environment in order to satisfactorily address performance margin requirements or stakeholder
expectations. Consequently, the relevant environment is the specific subset of the operational environment
that is required to demonstrate critical “at risk” aspects of the final product performance in an operational
environment.

Operational Environment: The environment in which the final product will be operated. In the case of
space flight hardware/software, it is space. In the case of ground-based or airborne systems that are not
directed toward space flight, it is the environments defined by the scope of operations. For software, the
environment is defined by the operational platform.

• concept description (paper report);
• engineering unit (functional but may not be same form/fit);
• prototype (product with form, fit, and function);
• design validation test units (form, fit, and function may be the same, but they may not have flight parts);
• qualification unit (identical to flight unit but may be subjected to extreme environments); and
• flight unit (end product that is flown).

Any of these types of product forms may be in any of these states:

• produced (built, fabricated, manufactured, or coded);
• reused (modified internal non-developmental products or off-the-shelf product); or
• assembled and integrated (a composite of lower level products).

NOTE: The final, official validation of the end product should be for a controlled unit. Typically, attempting final validation against the ConOps on a prototype is not acceptable: it is usually completed on a qualification, flight, or other more final, controlled unit.

NOTE: In planning for validation, consideration should be given to the extent to which validation testing will be done. In many instances, off-nominal operational scenarios and nominal operational scenarios should be utilized. Off-nominal testing offers insight into a system’s total performance characteristics and often assists in identifying the design issues and human-machine interface, training, and procedural changes required to meet the mission goals and objectives. Off-nominal testing as well as nominal testing should be included when planning for validation.

For additional information on technical plans, refer to the following appendices of this document and to Section 6.1.1.2.4 of the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository:


• Appendix H Integration Plan Outline
• Appendix I Verification and Validation Plan Outline
• Appendix J SEMP Content Outline
• Appendix K Technical Plans
• Appendix L Interface Requirements Document Outline
• Appendix M CM Plan Outline
• Appendix R HSI Plan Content Outline
• Appendix S Concept of Operations Annotated Outline

6.1.1.2.5 Obtain Stakeholder Commitments to Technical Plans
Stakeholder Roles in Project Planning
To obtain commitments to the technical plans from the stakeholders, the technical team should ensure that the appropriate stakeholders, including subject domain experts, have a method to provide inputs and to review the project planning for implementation of stakeholder interests.

During the Formulation Phase, the roles of the stakeholders should be defined in the project plan and the SEMP. Review of these plans and the agreements from the stakeholders to the content of these plans constitutes buy-in from the stakeholders to the technical approach. It is essential to identify the stakeholders and get their concurrence on the technical approach.

Later in the project life cycle, stakeholders may be responsible for delivering products to the project. Initial agreements regarding the responsibilities of the stakeholders are key to ensuring that the project technical team obtains the appropriate deliveries from stakeholders.

Stakeholder Involvement in Defining Requirements
The identification of stakeholders is one of the early steps in the systems engineering process. As the project progresses, stakeholder expectations are flowed down through the Logical Decomposition Process, and specific stakeholders are identified for all of the primary and derived requirements. A critical part of the stakeholders’ involvement is in the definition of the technical requirements. As requirements and the ConOps are developed, the stakeholders will be required to agree to these products. Inadequate stakeholder involvement leads to inadequate requirements and a resultant product that does not meet the stakeholder expectations. Status on relevant stakeholder involvement should be tracked and corrective action taken if stakeholders are not participating as planned.

Stakeholder Agreements
Throughout the project life cycle, communication with the stakeholders and commitments from the stakeholders may be accomplished through the use of agreements. Organizations may use an Internal Task Agreement (ITA), a Memorandum of Understanding (MOU), or other similar documentation to establish the relationship between the project and the stakeholder. These agreements are also used to document the customer and provider responsibilities for defining products to be delivered. These agreements should establish the Measures of Effectiveness (MOEs) or Measures of Performance (MOPs) that will be used to monitor the progress of activities. Reporting requirements and schedule requirements should be established in these agreements. Preparation of these agreements will ensure that the stakeholders’ roles and responsibilities support the project goals and that the project has a method to address risks and issues as they are identified.

Stakeholder Support for Forums
During development of the project plan and the SEMP, forums are established to facilitate communication and document decisions during the life cycle of the project. These forums include meetings, working groups, decision panels, and control boards. Each of these forums should establish a charter to define the


scope and authority of the forum and identify necessary voting or nonvoting participants. Ad hoc members may be identified when the expertise or input of specific stakeholders is needed when specific topics are addressed. It is important to ensure that stakeholders have been identified to support the forum.

6.1.1.2.6 Issue Technical Work Directives
The technical team provides technical work directives to Cost Account Managers (CAMs). This enables the CAMs to prepare detailed plans that are mutually consistent and collectively address all of the work to be performed. These plans include the detailed schedules and budgets for cost accounts that are needed for cost management and Earned Value Management (EVM).

Issuing technical work directives is an essential activity during Phase B of a project when a detailed planning baseline is required. If this activity is not implemented, then the CAMs are often left with insufficient guidance for detailed planning. The schedules and budgets that are needed for EVM will then be based on assumptions and local interpretations of project-level information. If this is the case, it is highly likely that substantial variances will occur between the baseline plan and the work performed. Providing technical work directives to CAMs produces a more organized technical team. This activity may be repeated when replanning occurs.

This “technical work directives” step produces: (1) planning directives to CAMs that result in (2) a consistent set of cost account plans. Where EVM is called for, it produces (3) an EVM planning baseline, including a Budgeted Cost of Work Scheduled (BCWS).

This activity is not limited to systems engineering. This is a normal part of project planning wherever there is a need for an accurate planning baseline. For additional information on Technical Work Directives, refer to Section 6.1.1.2.6 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.

6.1.1.2.7 Capture Technical Planning Work Products
The work products from the Technical Planning Process should be managed using either the Technical Data Management Process or the Configuration Management Process as required. Some of the more important products of technical planning (e.g., the WBS, the SEMP, and the schedule) are kept under configuration control and captured using the CM process. The Technical Data Management Process is used to capture trade studies, cost estimates, technical analyses, reports, and other important documents not under formal configuration control. Work products, such as meeting minutes and correspondence (including e-mail) containing decisions or agreements with stakeholders, also should be retained and stored in project files for later reference.

6.1.1.3 Outputs
Typical outputs from technical planning activities are:

• Technical work cost estimates, schedules, and resource needs: e.g., funds, workforce, facilities, and equipment (to the project) within the project resources;

• Product and process measures: Those needed to assess progress of the technical effort and the effectiveness of processes (to the Technical Assessment Process);

• SEMP and other technical plans: Technical planning strategy, WBS, SEMP, HSI Plan, V&V Plan, and other technical plans that support implementation of the technical effort (to all processes; applicable plans to technical processes);


• Technical work directives: e.g., work packages or task orders with work authorization (to applicable technical teams); and

• Technical Planning Process work products: Includes products needed to provide reports, records, and non-deliverable outcomes of process activities (to the Technical Data Management Process).

The resulting technical planning strategy constitutes an outline, or rough draft, of the SEMP. This serves as a starting point for the overall Technical Planning Process after initial preparation is complete. When preparations for technical planning are complete, the technical team should have a cost estimate and schedule for the technical planning effort. The budget and schedule to support the defined technical planning effort can then be negotiated with the project manager to resolve any discrepancies between what is needed and what is available. The SEMP baseline needs to be completed. Planning for the update of the SEMP based on programmatic changes needs to be developed and implemented. The SEMP needs to be approved by the appropriate level of authority.

6.1.2 Technical Planning Guidance
Refer to Section 6.1.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• Work Breakdown Structure (WBS),
• cost definition and modeling, and
• lessons learned.

Additional information on the WBS can also be found in NASA/SP-2010-3404, NASA Work Breakdown Structure Handbook, and on costing in the NASA Cost Estimating Handbook.

6.2 Requirements Management
Requirements management activities apply to the management of all stakeholder expectations, customer requirements, and technical product requirements down to the lowest level product component requirements (hereafter referred to as expectations and requirements). This includes physical, functional, and operational requirements, including those that result from interfaces between the systems in question and other external entities and environments. The Requirements Management Process is used to:

• Identify, control, decompose, and allocate requirements across all levels of the WBS.

• Provide bidirectional traceability.

DEFINITIONS

Traceability: A discernible association between two or more logical entities such as requirements, system
elements, verifications, or tasks.

Bidirectional traceability: The ability to trace any given requirement/expectation to its parent requirement/
expectation and to its allocated children requirements/expectations.
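The two definitions above can be made concrete with a small sketch: store only the upward (child-to-parent) trace and derive the downward (parent-to-children) allocation from it, so any requirement can be traced in both directions. The requirement IDs and the flat-dictionary representation are hypothetical, not a NASA schema.

```python
# Upward trace: each child requirement records its parent. IDs are invented.
parent_of = {
    "SYS-010": "MRD-001",
    "SYS-011": "MRD-001",
    "SUB-100": "SYS-010",
}

def children_of(parent_links: dict[str, str]) -> dict[str, list[str]]:
    """Derive the downward trace (allocated children) by inverting the
    upward trace, giving bidirectional traceability from one record."""
    children: dict[str, list[str]] = {}
    for child, parent in parent_links.items():
        children.setdefault(parent, []).append(child)
    return children

allocation = children_of(parent_of)
print(allocation["MRD-001"])   # children allocated from MRD-001
print(parent_of["SUB-100"])    # parent of SUB-100
```

Real projects record this in a requirements database or modeling tool rather than a script, but the inversion illustrates why a single well-kept trace supports both directions.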


[Figure 6.2-1 flow diagram. Inputs: expectations and requirements to be managed (from system design processes); requirements change requests (from project and Technical Assessment Process); TPM estimation/evaluation results (from Technical Assessment Process); product verification and product validation results (from Product Verification and Validation Processes). Activities: prepare to conduct requirements management; conduct requirements management; conduct expectations and requirements traceability; manage expectations and requirement changes; capture work products from requirements management activities. Outputs: requirements documents (to Configuration Management Process); approved changes to requirements baselines; requirements management work products (to Technical Data Management Process).]

FIGURE 6.2‑1  Requirements Management Process

• Manage the changes to established requirement baselines over the life cycle of the system products.

6.2.1 Process Description
FIGURE 6.2-1 provides a typical flow diagram for the Requirements Management Process and identifies typical inputs, outputs, and activities to consider in addressing requirements management.

6.2.1.1 Inputs
There are several fundamental inputs to the Requirements Management Process.

• Expectations and requirements to be managed: Requirements and stakeholder expectations are identified during the system design processes, primarily from the Stakeholder Expectations Definition Process and the Technical Requirements Definition Process.

• Requirement change requests: The Requirements Management Process should be prepared to deal with requirement change requests that can be generated at any time during the project life cycle or as a result of reviews and assessments as part of the Technical Assessment Process.

• TPM estimation/evaluation results: TPM estimation/evaluation results from the Technical Assessment Process provide an early warning of the adequacy of a design in satisfying selected critical technical parameter requirements. Variances from expected values of product performance may trigger changes to requirements.

• Product verification and validation results: Product verification and product validation results from the Product Verification and Product Validation Processes are mapped into the


requirements database with the goal of verifying and validating all requirements.

6.2.1.2 Process Activities
6.2.1.2.1 Prepare to Conduct Requirements Management
Preparing to conduct requirements management includes gathering the requirements that were defined and baselined during the Requirements Definition Process. Identification of the sources/owners of each requirement should be checked for currency. The organization (e.g., change board) and procedures to perform requirements management are established.

6.2.1.2.2 Conduct Requirements Management
The Requirements Management Process involves managing all changes to expectations and requirements baselines over the life of the product and maintaining bidirectional traceability between stakeholder expectations, customer requirements, technical product requirements, product component requirements, design documents, and test plans and procedures. The successful management of requirements involves several key activities:

• Establish a plan for executing requirements management.

• Receive requirements from the system design processes and organize them in a hierarchical tree structure.

• Maintain bidirectional traceability between requirements.

• Evaluate all change requests to the requirements baseline over the life of the project and make changes if approved by the change board.

• Maintain consistency between the requirements, the ConOps, and the architecture/design, and initiate corrective actions to eliminate inconsistencies.

6.2.1.2.3 Conduct Expectations and Requirements Traceability
As each requirement is documented, its bidirectional traceability should be recorded. Each requirement should be traced back to a parent/source requirement or expectation in a baselined document or identified as self-derived and concurrence on it sought from the next higher level requirements sources. Examples of self-derived requirements are requirements that are locally adopted as good practices or are the result of design decisions made while performing the activities of the Logical Decomposition and Design Solution Processes.

The requirements should be evaluated, independently if possible, to ensure that the requirements trace is correct and that it fully addresses its parent requirements. If it does not, some other requirement(s) should complete fulfillment of the parent requirement and be included in the traceability matrix. In addition, ensure that all top-level parent document requirements have been allocated to the lower level requirements. If there is no parent for a particular requirement and it is not an acceptable self-derived requirement, it should be assumed either that the traceability process is flawed and should be redone or that the requirement is “gold plating” and should be eliminated. Duplication between levels should be resolved. If a requirement is simply repeated at a lower level and it is not an externally imposed constraint, it may not belong at the higher level. Requirements traceability is usually recorded in a requirements matrix or through the use of a requirements modeling application.

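The trace evaluation described above, in which every requirement either traces to a parent or is an accepted self-derived requirement and every top-level requirement is allocated downward, can be expressed as a simple audit. The flat table and its field names are an illustrative sketch, not a NASA schema.

```python
# Illustrative requirement records; 'top_level' marks requirements whose
# parent lives in an external baselined document. IDs are invented.
requirements = {
    "MRD-001": {"parent": None,      "top_level": True,  "self_derived": False},
    "SYS-010": {"parent": "MRD-001", "top_level": False, "self_derived": False},
    "SYS-011": {"parent": None,      "top_level": False, "self_derived": True},
    "SYS-012": {"parent": None,      "top_level": False, "self_derived": False},
}

def trace_audit(reqs):
    """Flag requirements with no parent trace that are not accepted as
    self-derived, and top-level requirements never allocated downward."""
    no_trace = [rid for rid, r in reqs.items()
                if not r["top_level"] and r["parent"] is None
                and not r["self_derived"]]
    parents_used = {r["parent"] for r in reqs.values()
                    if r["parent"] is not None}
    unallocated = [rid for rid, r in reqs.items()
                   if r["top_level"] and rid not in parents_used]
    return no_trace, unallocated

no_trace, unallocated = trace_audit(requirements)
print("Possible flawed trace or gold plating:", no_trace)
print("Top-level requirements not flowed down:", unallocated)
```

A flagged requirement such as SYS-012 above would trigger exactly the judgment the text describes: either repair the trace or eliminate the requirement as gold plating.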

Managing Expectations and Requirement
6.2.1.2.4  6.2.1.2.5  Key Issues for Requirements Management
Changes Requirements Changes
Throughout early Phase A, changes in requirements Effective management of requirements changes
and constraints will occur as they are initially defined requires a process that assesses the impact of the pro-
and matured. It is imperative that all changes be posed changes prior to approval and implementation
thoroughly evaluated to determine the impacts on of the change. This is normally accomplished through
the cost, schedule, architecture, design, interfaces, the use of the Configuration Management Process.
ConOps, and higher and lower level requirements. In order for CM to perform this function, a baseline
Performing functional and sensitivity analyses will configuration should be documented and tools used
ensure that the requirements are realistic and evenly to assess impacts to the baseline. Typical tools used to
allocated. Rigorous requirements verification and validation will ensure that the requirements can be satisfied and conform to mission objectives. All changes should be subjected to a review and approval cycle to maintain traceability and to ensure that the impacts are fully assessed for all parts of the system.

Once the requirements have been validated and reviewed in the System Requirements Review (SRR) in late Phase A, they are placed under formal configuration control. Thereafter, any changes to the requirements should be approved by a Configuration Control Board (CCB) or equivalent authority. The systems engineer, project manager, and other key engineers usually participate in the CCB approval process to assess the impacts of the change, including cost, performance, programmatic, and safety impacts.

Requirement changes during Phases B and C are more likely to cause significant adverse impacts to project cost and schedule. It is even more important that these late changes be carefully evaluated to fully understand their impact on cost, schedule, and technical designs.

The technical team should also ensure that the approved requirements are communicated in a timely manner to all relevant people. Each project should have already established the mechanism to track and disseminate the latest project information. Further information on Configuration Management (CM) can be found in Section 6.5.

analyze the change impacts are as follows:

• Performance Margins: This tool is a list of key performance margins for the system and the current status of each margin. For example, the propellant performance margin compares the propellant available with the propellant necessary to complete the mission. Changes should be assessed for their impact on performance margins.

• CM Topic Evaluators List: This list is developed by the project office to ensure that the appropriate persons are evaluating changes and providing impact assessments. All changes need to be routed to the appropriate individuals to ensure that all impacts of the change have been identified. This list will need to be updated periodically.

• Risk System and Threats List: The risk system can be used to identify risks to the project and the cost, schedule, and technical aspects of each risk. Changes to the baseline can affect the consequences and likelihood of identified risks or can introduce new risks to the project. A threats list is normally used to identify the costs associated with all the risks for the project. Project reserves are used to mitigate the appropriate risks. Analyses of the reserves available versus the needs identified by the threats list assist in prioritizing reserve use.
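The margin check described above can be sketched in code. This is an illustrative sketch only; the quantities, values, and the 5 percent threshold are invented for the example and are not NASA-defined figures.

```python
# Illustrative sketch of a performance-margin check used when assessing a
# proposed change. All names and numbers are hypothetical examples.

def margin_pct(available: float, required: float) -> float:
    """Margin expressed as a percentage of the required amount."""
    return 100.0 * (available - required) / required

# Current propellant budget (hypothetical values, in kg).
available_kg = 2450.0
required_kg = 2200.0
print(f"Propellant margin: {margin_pct(available_kg, required_kg):.1f}%")

# A proposed change adds dry mass, which raises the propellant required.
delta_required_kg = 180.0
new_margin = margin_pct(available_kg, required_kg + delta_required_kg)
print(f"Margin after change: {new_margin:.1f}%")

# Compare against a project-defined threshold before approving the change.
THRESHOLD_PCT = 5.0
if new_margin < THRESHOLD_PCT:
    print("Change erodes margin below threshold; flag for CCB review.")
```

A check of this kind lets the evaluators quantify, rather than merely assert, that a proposed change erodes a key margin.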

NASA SYSTEMS ENGINEERING HANDBOOK 133
6.0  Crosscutting Technical Management

The process for managing requirements changes needs to take into account the distribution of information related to the decisions made during the change process. The Configuration Management Process needs to communicate the requirements change decisions to the affected organizations. During a board meeting to approve a change, actions to update documentation need to be included as part of the change package. These actions should be tracked to ensure that affected documentation is updated in a timely manner.

Requirements Creep
"Requirements creep" is the term used to describe the subtle way that requirements grow imperceptibly during the course of a project. The set of requirements tends to increase relentlessly in size during development, resulting in a system that is more expensive and complex than originally intended. Often the changes are quite innocent, and what appear to be changes to a system are really enhancements in disguise.

However, some requirements creep involves truly new requirements that did not exist, and could not have been anticipated, during the Technical Requirements Definition Process. These new requirements are the result of evolution, and if we are to build a relevant system, we cannot ignore them.

There are several techniques for avoiding or at least minimizing requirements creep:

• The first line of defense is a good ConOps that has been thoroughly discussed and agreed to by the customer and relevant stakeholders.

• In the early requirements definition phase, flush out the conscious, unconscious, and undreamed-of requirements that might otherwise not be stated.

• Establish a strict process for assessing requirement changes as part of the Configuration Management Process.

• Establish official channels for submitting change requests. This will determine who has the authority to generate requirement changes and submit them formally to the CCB (e.g., a contractor-designated representative, project technical leads, customer/science team lead, or user).

• Measure the functionality of each requirement change request and assess its impact on the rest of the system. Compare this impact with the consequences of not approving the change. What is the risk if the change is not approved?

• Determine if the proposed change can be accommodated within the fiscal and technical resource budgets. If it cannot be accommodated within the established resource margins, then the change most likely should be denied.

6.2.1.2.6  Capture Work Products
These products include maintaining and reporting information on the rationale for and disposition and implementation of change actions, current requirement compliance status and expectations, and requirement baselines.

6.2.1.3  Outputs
Typical outputs from the requirements management activities are:

• Requirements Documents: Requirements documents are submitted to the Configuration Management Process when the requirements are baselined. The official controlled versions of these documents are generally maintained in electronic format within the requirements management tool


that has been selected by the project. In this way, they are linked to the requirements matrix with all of its traceable relationships.

• Approved Changes to the Requirements Baselines: Approved changes to the requirements baselines are issued as an output of the Requirements Management Process after careful assessment of all the impacts of the requirements change across the entire product or system. A single change can have a far-reaching ripple effect, which may result in several requirement changes in a number of documents.

• Various Requirements Management Work Products: Requirements management work products are any reports, records, and undeliverable outcomes of the Requirements Management Process. For example, the bidirectional traceability status would be one of the work products used in the verification and validation reports.

6.2.2  Requirements Management Guidance
Refer to Section 6.2.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• the Requirements Management Plan and
• requirements management tools.

6.3  Interface Management
The definition, management, and control of interfaces are crucial to successful programs or projects. Interface management is a process to assist in controlling product development when efforts are divided among parties (e.g., Government, contractors, geographically diverse technical teams, etc.) and/or to define and maintain compliance among the products that should interoperate.

The basic tasks that need to be established involve the management of internal and external interfaces of the various levels of products and operator tasks to support product integration. These basic tasks are as follows:

• Define interfaces;

• Identify the characteristics of the interfaces (physical, electrical, mechanical, human, etc.);

• Ensure interface compatibility at all defined interfaces by using a process documented and approved by the project;

• Strictly control all of the interface processes during design, construction, operation, etc.;

• Identify lower level products to be assembled and integrated (from the Product Transition Process);

• Identify assembly drawings or other documentation that show the complete configuration of the product being integrated, a parts list, and any assembly instructions (e.g., torque requirements for fasteners);

• Identify end-product, design-definition-specified requirements (specifications) and configuration documentation for the applicable work breakdown structure model, including interface specifications, in the form appropriate to satisfy the product life cycle phase success criteria (from the Configuration Management Process); and

• Identify product integration-enabling products (from existing resources or the Product Transition Process for enabling product realization).
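The interface bookkeeping these tasks imply can be sketched as a minimal registry. This is a hedged illustration: the field names and example entries are invented and are not drawn from any NASA data standard; real projects capture this information in IRDs and ICDs under configuration control.

```python
# Minimal sketch of an interface registry entry; field names and example
# data are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Interface:
    ifc_id: str                # unique identifier, e.g., "ICD-001"
    side_a: str                # product/organization on one side
    side_b: str                # product/organization on the other side
    kind: str                  # physical, electrical, mechanical, human, data...
    requirements: list = field(default_factory=list)  # interface requirement IDs
    controlled: bool = False   # True once baselined under configuration control

registry = [
    Interface("ICD-001", "Propulsion", "Avionics", "electrical",
              ["IRD-042", "IRD-043"], controlled=True),
    Interface("ICD-002", "Payload", "Ground System", "data", ["IRD-101"]),
]

# A simple status query: which interfaces are not yet under
# configuration control?
uncontrolled = [i.ifc_id for i in registry if not i.controlled]
print("Interfaces awaiting baseline:", uncontrolled)
```

Even a registry this simple supports the tasks above: it names each interface, records its characteristics, traces it to interface requirements, and flags whether it is under configuration control.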


FIGURE 6.3‑1  Interface Management Process (flow diagram: interface requirements and interface change requests flow in from the user, system design, project, and Technical Assessment Processes; the activities run from preparing or updating interface management procedures, through conducting interface management during system design and product integration and conducting interface control, to capturing work products; interface control documents, approved interface requirement changes, and interface management work products flow out to the Configuration Management and Technical Data Management Processes)

6.3.1  Process Description
FIGURE 6.3-1 provides a typical flow diagram for the Interface Management Process and identifies typical inputs, outputs, and activities to consider in addressing interface management.

6.3.1.1  Inputs
Typical inputs needed to understand and address interface management would include the following:

• Interface Requirements: These include the internal and external functional, physical, and performance interface requirements developed as part of the Technical Requirements Definition Process for the product(s).

• Interface Change Requests: These include changes resulting from program or project agreements or changes on the part of the technical team as part of the Technical Assessment Process.

Other inputs that might be useful are:

• System Description: This allows the design of the system to be explored and examined to determine where system interfaces exist. Contractor arrangements will also dictate where interfaces are needed.

• System Boundaries: Documented physical boundaries, components, and/or subsystems, which are all drivers for determining where interfaces exist.

• Organizational Structure: Decisions on organizational structure dictate interfaces, particularly when there is a need to jointly agree on shared interface parameters of a system. The program and project WBS will also provide organizational interface boundaries.

• Boards Structure: A defined board structure that identifies organizational interface responsibilities.


6.3.1.2  Process Activities
6.3.1.2.1  Prepare or Update Interface Management Procedures
These procedures establish the interface management responsibilities, the process that will be used to maintain and control the internal and external functional and physical interfaces (including human interfaces), and how the change process will be conducted. Training of the technical teams or other support may also be required and planned.

6.3.1.2.2  Conduct Interface Management during System Design Activities
During project Formulation, the ConOps of the product is analyzed to identify both external and internal interfaces. This analysis establishes the origin, destination, stimuli, and special characteristics of the interfaces that need to be documented and maintained. As the system structure and architecture emerge, interfaces will be added and existing interfaces will be changed, and these should be maintained. Thus, the Interface Management Process has a close relationship to other areas, such as requirements definition and configuration management, during this period.

6.3.1.2.3  Conduct Interface Management during Product Integration
During product integration, interface management activities support the review of integration and assembly procedures to ensure interfaces are properly marked and compatible with specifications and interface control documents. The Interface Management Process has a close relationship to verification and validation. Interface control documentation and approved interface requirement changes are used as inputs to the Product Verification Process and the Product Validation Process, particularly where verification test constraints and interface parameters are needed to set the test objectives and test plans. Interface requirements verification is a critical aspect of the overall system verification.

6.3.1.2.4  Conduct Interface Control
Typically, an Interface Working Group (IWG) establishes communication links between those responsible for interfacing systems, end products, enabling products, and subsystems. The IWG has the responsibility to ensure accomplishment of the planning, scheduling, and execution of all interface activities. An IWG is typically a technical team with appropriate technical membership from the interfacing parties (e.g., the project, the contractor, etc.). The IWG may work independently or as a part of a larger change control board.

6.3.1.2.5  Capture Work Products
Work products include the strategy and procedures for conducting interface management, the rationale for interface decisions made, assumptions made in approving or denying an interface change, actions taken to correct identified interface anomalies, lessons learned, and updated support and interface agreement documentation.

6.3.1.3  Outputs
Typical outputs needed to capture interface management would include:

• Interface control documentation: This is the documentation that identifies and captures the interface information and the approved interface change requests. Types of interface documentation include the Interface Requirements Document (IRD), Interface Control Document/Drawing (ICD), Interface Definition Document (IDD), and Interface Control Plan (ICP). These outputs are maintained and approved using the Configuration Management Process and become a part of the overall technical data package for the project.
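As an illustration of the kind of compatibility check an ICD review performs, the sketch below compares the parameter values each side of an interface has documented. The parameter names and values are hypothetical; in practice the comparison is made against the baselined ICD.

```python
# Hypothetical sketch: flag mismatches between the interface parameters
# documented by each side of an interface. Names and values are invented.

side_a = {"connector": "D38999/26", "voltage_V": 28.0, "data_rate_Mbps": 10.0}
side_b = {"connector": "D38999/26", "voltage_V": 28.0, "data_rate_Mbps": 1.0}

def find_mismatches(a: dict, b: dict) -> list:
    """Return the parameter names whose documented values disagree."""
    keys = set(a) | set(b)
    return sorted(k for k in keys if a.get(k) != b.get(k))

mismatches = find_mismatches(side_a, side_b)
for name in mismatches:
    print(f"Incompatible parameter '{name}': "
          f"{side_a.get(name)} vs {side_b.get(name)}")
```

Each mismatch found this way would become an interface anomaly to be corrected and captured as a work product.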


• Approved interface requirement changes: After the interface requirements have been baselined, the Requirements Management Process should be used to identify the need for changes, evaluate the impact of the proposed change, document the final approval/disapproval, and update the requirements documentation/tool/database. For interfaces that require approval from all sides, unanimous approval is required. Changing interface requirements late in the design or implementation life cycle is more likely to have a significant impact on cost, schedule, or technical design/operations.

• Other work products: These include the strategy and procedures for conducting interface management, the rationale for interface decisions made, the assumptions made in approving or denying an interface change, the actions taken to correct identified interface anomalies, the lessons learned in performing the interface management activities, and the updated support and interface agreement documentation.

6.3.2  Interface Management Guidance
Refer to Section 6.3.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• interface requirements documents,
• interface control documents,
• interface control drawings,
• interface definition documents,
• interface control plans, and
• interface management tasks.

6.4  Technical Risk Management
The Technical Risk Management Process is one of the crosscutting technical management processes. Risk is the potential for performance shortfalls, which may be realized in the future, with respect to achieving explicitly established and stated performance requirements. The performance shortfalls may be related to institutional support for mission execution or to any one or more of the following mission execution domains:

• safety
• technical
• cost
• schedule

Systems engineers are involved in this process to help identify potential technical risks, develop mitigation plans, monitor the progress of the technical effort to determine whether new risks arise or old risks can be retired, and be available to answer questions and resolve issues. The following is general guidance on implementing risk management; when implementing risk management on any given program/project, the responsible systems engineer should direct the effort accordingly. This may involve more or less rigor and formality than that specified in governing documents such as NPRs. Of course, if deviating from NPR "requirements," the responsible engineer must follow the deviation approval process. The idea is to tailor the risk management process so that it meets the needs of the individual program/project being executed while working within the bounds of the governing documentation (e.g., NPRs). For detailed information on the Risk Management Process, refer to the NASA Risk Management Handbook (NASA/SP-2011-3422).


FIGURE 6.4-1  Risk Scenario Development (Source: NASA/SP-2011-3421). The figure poses three questions: 1. What can go wrong? (definition of scenarios); 2. How frequently does it happen? (scenario frequency quantification); and 3. What are the consequences? (scenario consequence quantification). Scenario development proceeds from initiating event selection through scenario modeling, scenario frequency evaluation, and consequence modeling, all supported by uncertainty analysis.

Risk is characterized by three basic components:

1. The scenario(s) leading to degraded performance with respect to one or more performance measures (e.g., scenarios leading to injury, fatality, or destruction of key assets; scenarios leading to exceedance of mass limits; scenarios leading to cost overruns; scenarios leading to schedule slippage);

2. The likelihood(s) (qualitative or quantitative) of those scenario(s); and

3. The consequence(s) (qualitative or quantitative severity of the performance degradation) that would result if the scenario(s) were to occur.

Uncertainties are included in the evaluation of likelihoods and consequences.

Scenarios begin with a set of initiating events that cause the activity to depart from its intended state. For each initiating event, other events that are relevant to the evolution of the scenario may (or may not) occur and may have either a mitigating or exacerbating effect on the scenario progression. The frequencies of scenarios with undesired consequences are determined. Finally, the multitude of such scenarios is put together, with an understanding of the uncertainties, to create the risk profile of the system.

This "risk triplet" conceptualization of risk is illustrated in FIGURES 6.4-1 and 6.4-2.

FIGURE 6.4-2  Risk as an Aggregate Set of Risk Triplets. Each triplet pairs the structure of a scenario with its likelihood (and its uncertainty) and its consequence severity (and its uncertainty); the risk is the aggregate set of such triplets.
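The triplet bookkeeping can be sketched in code. This is an illustrative model only: the 1-to-5 qualitative scales, the likelihood-times-consequence ranking heuristic, and the example entries are invented and are not drawn from NPR 8000.4 or any NASA risk system.

```python
# Illustrative sketch of the "risk triplet": scenario, likelihood,
# consequence. The scales and example entries are invented.
from dataclasses import dataclass

@dataclass
class RiskTriplet:
    scenario: str      # what can go wrong
    likelihood: int    # qualitative score, 1 (rare) .. 5 (near certain)
    consequence: int   # qualitative score, 1 (minor) .. 5 (catastrophic)

    def score(self) -> int:
        """A simple ranking heuristic: likelihood x consequence."""
        return self.likelihood * self.consequence

risk_profile = [
    RiskTriplet("Valve contamination causes thrust shortfall", 2, 4),
    RiskTriplet("Late avionics delivery slips integration", 4, 3),
    RiskTriplet("Thermal margin exceeded in safe mode", 1, 5),
]

# Rank the aggregate set of triplets for reporting.
for r in sorted(risk_profile, key=RiskTriplet.score, reverse=True):
    print(f"{r.score():>2}  L{r.likelihood} x C{r.consequence}  {r.scenario}")
```

A real risk system would also carry the uncertainties on likelihood and consequence that the triplet definition calls for; the scalar scores here stand in for those distributions.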


Key Concepts in Risk Management

Risk: Risk is the potential for shortfalls, which may be realized in the future, with respect to achieving explicitly stated requirements. The performance shortfalls may be related to institutional support for mission execution, or related to any one or more of the following mission execution domains: safety, technical, cost, and schedule. Risk is characterized as a set of triplets:

>> The scenario(s) leading to degraded performance in one or more performance measures.
>> The likelihood(s) of those scenarios.
>> The consequence(s), impact, or severity of the impact on performance that would result if those scenarios were to occur.

Uncertainties are included in the evaluation of likelihoods and consequences.

Cost Risk: This is the risk associated with the ability of the program/project to achieve its life-cycle cost objectives and secure appropriate funding. Two risk areas bearing on cost are (1) the risk that the cost estimates and objectives are not accurate and reasonable, and (2) the risk that program execution will not meet the cost objectives as a result of a failure to handle cost, schedule, and performance risks.

Schedule Risk: Schedule risks are those associated with the adequacy of the time estimated and allocated for the development, production, implementation, and operation of the system. Two risk areas bearing on schedule risk are (1) the risk that the schedule estimates and objectives are not realistic and reasonable, and (2) the risk that program execution will fall short of the schedule objectives as a result of a failure to handle cost, schedule, or performance risks.

Technical Risk: This is the risk associated with the evolution of the design and the production of the system of interest affecting the level of performance necessary to meet the stakeholder expectations and technical requirements. The design, test, and production processes (process risk) influence the technical risk and the nature of the product as depicted in the various levels of the PBS (product risk).

Programmatic Risk: This is the risk associated with action or inaction from outside the project, over which the project manager has no control, but which may have significant impact on the project. These impacts may manifest themselves in technical, cost, and/or schedule terms. This includes such activities as: International Traffic in Arms Regulations (ITAR), import/export control, partner agreements with other domestic or foreign organizations, congressional direction or earmarks, Office of Management and Budget (OMB) direction, industrial contractor restructuring, external organizational changes, etc.

Scenario: A sequence of credible events that specifies the evolution of a system or process from a given state to a future state. In the context of risk management, scenarios are used to identify the ways in which a system or process in its current state can evolve to an undesirable state.


Undesired scenario(s) might come from technical or programmatic sources (e.g., a cost overrun, schedule slippage, safety mishap, health problem, malicious activities, environmental impact, or failure to achieve a needed scientific or technological objective or success criterion). Both the likelihood and the consequences may have associated uncertainties.

6.4.1  Risk Management Process Description
FIGURE 6.4-3 provides a typical flow diagram for the Risk Management Process and identifies typical inputs, activities, and outputs to consider in addressing risk management.

6.4.1.1  Inputs
The following are typical inputs to risk management:

• Project Risk Management Plan: The Risk Management Plan is developed under the Technical Planning Process and defines how risk will be identified, mitigated, monitored, and controlled within the project.

• Technical Risk Issues: These are the technical issues identified as the project progresses that pose a risk to the successful accomplishment of the project mission/goals.

• Technical Risk Status Measurements: These are any measures established to help monitor and report the status of project technical risks.

FIGURE 6.4-3  Risk Management Process (flow diagram: the Project Risk Management Plan, technical risk issues, technical risk status measurements, and technical risk reporting requirements flow in from the project, the technical processes, and the Technical Assessment and Decision Analysis Processes; the activities run from preparing a technical risk management strategy through identifying technical risks, conducting technical risk assessment, preparing for risk mitigation, monitoring the status of each risk periodically, implementing mitigation and contingency action plans as triggered, and capturing work products; technical risk mitigation and/or contingency actions, technical risk reports, and work products flow out to the Technical Planning and Technical Data Management Processes)


• Technical Risk Reporting Requirements: These include requirements for how technical risks will be reported, how often, and to whom.

Additional inputs that may be useful include:

• Other Plans and Policies: The Systems Engineering Management Plan, the form of technical data products, and policy input to metrics and thresholds.

• Technical Inputs: Stakeholder expectations, the concept of operations, imposed constraints, tracked observables, the current program baseline, performance requirements, and relevant experience data.

6.4.1.2  Activities
6.4.1.2.1  Prepare a Strategy to Conduct Technical Risk Management
This strategy would include documenting how the program/project risk management plan (as developed during the Technical Planning Process) will be implemented, identifying any additional technical risk sources and categories not captured in the plan, and identifying what will trigger actions and how these activities will be communicated to the internal and external teams.

6.4.1.2.2  Identify Technical Risks
On a continuing basis, the technical team will identify technical risks, including their sources; analyze the potential consequence and likelihood of the risks occurring; and prepare clear risk statements for entry into the program/project risk management system. Coordination with the relevant stakeholders for the identified risks is included. For more information on identifying technical risks, see Section 6.4.2.1.

6.4.1.2.3  Conduct Technical Risk Assessment
Until recently, NASA's Risk Management (RM) approach was based almost exclusively on Continuous Risk Management (CRM), which stresses the management of individual risk issues during implementation. In December 2008, NASA revised its RM approach in order to more effectively foster proactive risk management. The new approach, which is outlined in NPR 8000.4, Agency Risk Management Procedural Requirements, and further developed in NASA/SP-2011-3422, NASA Risk Management Handbook, evolves NASA's risk management to entail two complementary processes: Risk-Informed Decision Making (RIDM) and CRM. RIDM is intended to inform direction-setting systems engineering (SE) decisions (e.g., design decisions) through better use of risk and uncertainty information in selecting alternatives and establishing baseline performance requirements. (For additional RIDM technical information, guidance, and process description, see NASA/SP-2010-576 Version 1, NASA Risk-Informed Decision Making Handbook.)

CRM is then used to manage risks over the course of the development and implementation phases of the life cycle to assure that requirements related to safety, technical performance, cost, and schedule are met. In the past, RM was considered equivalent to the CRM process; now, RM is defined as comprising both the RIDM and CRM processes, which work together to assure proactive risk management as NASA programs and projects are conceived, developed, and executed. FIGURE 6.4-4 illustrates the concept.

FIGURE 6.4-4  Risk Management as the Interaction of Risk-Informed Decision Making and Continuous Risk Management (Source: NASA/SP-2011-3422)


6.4.1.2.4  Prepare for Technical Risk Mitigation
This includes selecting the risks that will be mitigated and more closely monitored, identifying the risk level or threshold that will trigger a risk mitigation action plan, and identifying for each risk which stakeholders will need to be informed when a mitigation/contingency action is determined, as well as which organizations will need to become involved to perform the mitigation/contingency action.

6.4.1.2.5  Monitor the Status of Each Technical Risk Periodically
Risk status needs to be monitored periodically at a frequency identified in the risk plan. Risks that are approaching their trigger thresholds are monitored on a more frequent basis. Reports of the status are made to the appropriate program/project management or board for communication and for decisions on whether to trigger a mitigation action early. Risk status will also be reported at most life cycle reviews.

6.4.1.2.6  Implement Technical Risk Mitigation and Contingency Action Plans as Triggered
When the applicable thresholds are triggered, the technical risk mitigation and contingency action plans are implemented. This includes monitoring the results of the action plan implementation and modifying the plans as necessary, continuing the mitigation until the residual risk and/or consequence impacts are acceptable, and communicating the actions and results to the identified stakeholders. Action plan reports are prepared, and results are reported at appropriate boards and at life cycle reviews.

6.4.1.2.7  Capture Work Products
Work products include the strategy and procedures for conducting technical risk management; the rationale for decisions made; assumptions made in prioritizing, handling, and reporting technical risks and action plan effectiveness; actions taken to correct action plan implementation anomalies; and lessons learned.

6.4.1.3  Outputs
The following are key risk outputs from these activities:

• Technical Risk Mitigation and/or Contingency Actions: Actions taken to mitigate identified risks, or contingency actions taken in case risks are realized.

• Technical Risk Reports: Reports of the technical risk policies, status, remaining residual risks, actions taken, etc., output at the agreed-to frequency and to the agreed-to recipients.

• Work Products: These include the procedures for conducting technical risk management; the rationale for decisions made; selected decision alternatives; assumptions made in prioritizing, handling, and reporting technical risks; and lessons learned.

6.4.2  Risk Management Process Guidance
For additional guidance on risk management, refer to NASA/SP-2010-576, NASA Risk-Informed Decision Making Handbook, and NASA/SP-2011-3422, NASA Risk Management Handbook.
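The threshold-triggered monitoring described above can be sketched as a simple dispositioning rule. The risk scores and the watch/action threshold values are hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical sketch of threshold-triggered risk monitoring. The scores
# and threshold values are invented for illustration.

WATCH_THRESHOLD = 9    # score at which monitoring frequency increases
ACTION_THRESHOLD = 15  # score at which the mitigation plan is triggered

def disposition(score: int) -> str:
    """Map a risk score to a monitoring/mitigation disposition."""
    if score >= ACTION_THRESHOLD:
        return "trigger mitigation plan"
    if score >= WATCH_THRESHOLD:
        return "monitor more frequently"
    return "routine periodic monitoring"

tracked_risks = {"RISK-007": 16, "RISK-012": 10, "RISK-020": 4}
for risk_id, score in tracked_risks.items():
    print(f"{risk_id}: score {score} -> {disposition(score)}")
```

In practice the thresholds, monitoring frequencies, and escalation paths would come from the project's risk management plan rather than from constants in code.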


Together, these standards address what configuration management activities are to be done, when they are to happen in the product life cycle, and what planning and resources are required. Configuration management is a key systems engineering practice that, when properly implemented, provides visibility into a true representation of a product and attains the product's integrity by controlling the changes made to the baseline configuration and tracking such changes. Configuration management ensures that the configuration of a product is known and reflected in product information, that any product change is beneficial and is effected without adverse consequences, and that changes are managed.

CM reduces technical risks by ensuring correct product configurations, distinguishes among product versions, ensures consistency between the product and the information about the product, and avoids the embarrassment cost of stakeholder dissatisfaction and complaint. In general, NASA adopts the CM principles as defined by SAE/EIA 649B, Configuration Management Standard, in addition to implementation as defined by NASA CM professionals and as approved by NASA management.

When applied to the design, fabrication/assembly, system/subsystem testing, integration, and operational and sustaining activities of complex technology items, CM represents the "backbone" of the enterprise structure. It instills discipline and keeps the product attributes and documentation consistent. CM enables all stakeholders in the technical effort, at any given time in the life of a product, to use identical data for development activities and decision-making. CM principles are applied to keep the documentation consistent with the approved product and to ensure that the product conforms to the functional and physical requirements of the approved design.

6.5.1  Process Description
FIGURE 6.5-1 provides a typical flow diagram for the Configuration Management Process and identifies typical inputs, outputs, and activities to consider in addressing CM.

6.5.1.1  Inputs
The inputs for this process are:

• CM Plan: This plan would have been developed under the Technical Planning Process and serves as the overall guidance for this process for the program/project.

• Engineering Change Proposals: These are the requests for changes to the established baselines, in whatever form they may appear throughout the life cycle.

• Expectation, Requirements, and Interface Documents: These baselined documents or models are key to the design and development of the product.

• Approved Requirements Baseline Changes: The approved requests for changes authorize the update of the associated baselined document or model.

• Designated Configuration Items to Be Controlled: As part of technical planning, a list or philosophy would have been developed that identifies the types of items that will need to be placed under configuration control.

6.5.1.2  Process Activities
There are five elements of CM (see FIGURE 6.5-2):

• configuration planning and management,
• configuration identification,

NASA SYSTEMS ENGINEERING HANDBOOK 144
6.0  Crosscutting Technical Management

FIGURE 6.5-1  Configuration Management Process. The flow diagram shows the inputs (the project configuration management plan from the project; engineering change proposals; expectation, requirements, and interface documents from the Requirements and Interface Management Processes; approved requirement baseline changes; and designated configuration items to be controlled), the activities (prepare a strategy to conduct configuration management; identify baselines to be under configuration control; manage configuration change control; maintain the status of configuration documentation; conduct configuration audits; and capture work products from configuration management activities), and the outputs (the list of configuration items under control, current baselines, configuration management reports, and configuration management work products), which flow to the applicable technical processes and the Technical Data Management Process.

• configuration change management,
• Configuration Status Accounting (CSA), and
• configuration verification.

6.5.1.2.1  Prepare a Strategy to Conduct CM
CM planning starts at a program's or project's inception. The CM office should carefully weigh the value of prioritizing resources into CM tools or into CM surveillance of the contractors. Reviews by the Center Configuration Management Organization (CMO) are warranted and will cost resources and time, but correcting systemic CM problems before they erupt into a loss of configuration control is always preferable to explaining why incorrect or misidentified parts are causing major problems in the program/project.

FIGURE 6.5-2  Five Elements of Configuration Management: configuration planning and management, configuration identification, configuration change management, Configuration Status Accounting, and configuration verification.


One of the key inputs to preparing for CM implementation is a strategic plan for the project's complete CM process. This is typically contained in a CM plan. See Appendix M for an outline of a typical CM plan.

This plan has both internal and external uses:

• Internal: It is used within the program/project office to guide, monitor, and measure the overall CM process. It describes all the CM activities and the schedule for implementing those activities within the program/project.

• External: The CM plan is used to communicate the CM process to the contractors involved in the program/project. It establishes consistent CM processes and working relationships.

The CM plan may be a standalone document, or it may be combined with other program/project planning documents. It should describe the criteria for each technical baseline creation, technical approvals, and audits.

6.5.1.2.2  Identify Baseline to Be Under Configuration Control
Configuration identification is the systematic process of selecting, organizing, and stating the product attributes. Identification requires unique identifiers for a product and its configuration documentation. The CM activity associated with identification includes selecting the Configuration Items (CIs), determining the CIs' associated configuration documentation, determining the appropriate change control authority, issuing unique identifiers for both CIs and CI documentation, releasing configuration documentation, and establishing configuration baselines.

NASA has four baselines, each of which defines a distinct phase in the evolution of a product design. The baseline identifies an agreed-to description of the attributes of a CI at a point in time and provides a known configuration to which changes are addressed. Baselines are established by agreeing to (and documenting) the stated definition of a CI's attributes. The approved "current" baseline defines the basis of the subsequent change. The system specification is typically finalized following the SRR. The functional baseline is established at the SDR and will usually transfer to NASA's control at that time for contracting efforts. For in-house efforts, the baseline is set and controlled by the NASA program/project.

The four baselines (see FIGURE 6.5-3) normally controlled by the program, project, or Center are the following:

• Functional Baseline: The functional baseline is the approved configuration documentation that describes a system's or top-level CI's performance requirements (functional, interoperability, and interface characteristics) and the verification required to demonstrate the achievement of those specified characteristics. The functional baseline is established at the SDR by the NASA program/project. The program/project will direct, through contractual agreements, how the functional baselines are managed at the different functional levels (Levels 1-4).

• Allocated Baseline: The allocated baseline is the approved performance-oriented configuration documentation for a CI to be developed that describes the functional, performance, and interface characteristics that are allocated from a higher level requirements document or a CI and the verification required to demonstrate achievement of those specified characteristics. The allocated baseline extends the top-level performance requirements of the functional baseline to sufficient detail for defining the functional and


FIGURE 6.5-3  Evolution of Technical Baseline. The figure traces the baselines against milestone reviews: Level 1 technical requirements and the concept lead to the functional baseline (system specification and major aspects of the architecture and design) through MDR, SRR, and SDR; segment specifications and prime item design-to specifications establish the allocated baseline at PDR, with implementation aspects of the design complete; end item build-to specifications establish the product baseline at CDR, with realization aspects of the design and fabrication complete through SAR and ORR. Accompanying non-configuration information includes partial and complete analyses and studies, program and project plans, developmental data, test plans, test information, manuals, certifications, and product structure.

performance characteristics and for initiating detailed design for a CI. The allocated baseline is usually controlled by the design organization until all design requirements have been verified. The allocated baseline is typically established at the successful completion of the PDR. Prior to CDR, NASA normally reviews design output for conformance to design requirements through incremental deliveries of engineering data. NASA control of the allocated baseline occurs through review of the engineering deliveries as data items.

• Product Baseline: The product baseline is the approved technical documentation that describes the configuration of a CI during the production, fielding/deployment, and operational support phases of its life cycle. The established product baseline is controlled as described in the configuration management plan that was developed during Phase A. The product baseline is typically established at the completion of the CDR. The product baseline describes:

  »» Detailed physical or form, fit, and function characteristics of a CI;
  »» The selected functional characteristics designated for production acceptance testing; and
  »» The production acceptance test requirements.


• As-Deployed Baseline: The as-deployed baseline occurs at the ORR. At this point, the design is considered to be functional and ready for flight. All changes will have been incorporated into the documentation.

6.5.1.2.3  Manage Configuration Change Control
Configuration change management is a process to manage approved designs and the implementation of approved changes. Configuration change management is achieved through the systematic proposal, justification, and evaluation of proposed changes followed by incorporation of approved changes and verification of implementation. Implementing configuration change management in a given program/project requires unique knowledge of the program/project objectives and requirements. The first step establishes a robust and well-disciplined internal NASA Configuration Control Board (CCB) system, which is chaired by someone with program/project change authority. CCB members represent the stakeholders with authority to commit the team they represent. The second step creates configuration change management surveillance of the contractor's activity. The CM office advises the NASA program or project manager to achieve a balanced configuration change management implementation that suits the unique program/project situation. See FIGURE 6.5-4 for an example of a typical configuration change management control process.

FIGURE 6.5-4  Typical Change Control Process. The figure shows the normal configuration change flow across six roles (originator, CM function, reviewers, responsible organization, CCB, and actionees): (1) the originator prepares and submits a change request; (2) the CM function receives and logs the request in the system, screens it, selects reviewers, and releases the package for review; (3) the reviewers review the package and submit comments; (4) comments are collected, tracked, and adjudicated with the reviewers and originator; (5) the CCB meeting is scheduled and an agenda prepared; (6) a decision package is prepared (if needed); (7) the CCB dispositions the change request and assigns action items as needed; (8) actionees complete the assigned actions while CCB minutes are released and actions tracked; (9) the responsible organization executes the approved changes and finalizes documentation; and (10) the release package is verified, the final documentation is released, and the change request is closed.
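The flow above can be sketched as a simple forward-only state machine that logs each transition for later status accounting. This is an illustrative sketch only: the `ChangeRequest` class, the state names, and the transition rule are hypothetical simplifications, not part of any NASA-defined process.

```python
# Hypothetical sketch of the change control flow in FIGURE 6.5-4.
# State names loosely follow steps 1-10 of the figure; they are assumptions.
from dataclasses import dataclass, field

STATES = ["submitted", "in_review", "adjudicated", "at_ccb",
          "approved", "implemented", "closed"]

@dataclass
class ChangeRequest:
    cr_id: str
    description: str
    state: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self, new_state: str) -> None:
        # Enforce forward-only movement through the workflow, one step at a time.
        if STATES.index(new_state) != STATES.index(self.state) + 1:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append(self.state)
        self.state = new_state

cr = ChangeRequest("CR-001", "Update connector spec")
for step in STATES[1:]:
    cr.advance(step)
print(cr.state)  # closed
```

Keeping the transition history with the request is what lets a status accounting system later report each change "from initiation to implementation," as Section 6.5.1.2.4 requires.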


TYPES OF CONFIGURATION MANAGEMENT CHANGES

• Engineering Change: An engineering change is an iteration in the baseline. Changes can be major or
minor. They may or may not include a specification change. Changes affecting an external interface must
be coordinated and approved by all stakeholders affected.

> A “major” change is a change to the baseline configuration documentation that has significant
impact (i.e., requires retrofit of delivered products or affects the baseline specification, cost, safety,
compatibility with interfacing products, or operator or maintenance training).
> A “minor” change corrects or modifies configuration documentation or processes without impact to
the interchangeability of products or system elements in the system structure.

• Waiver: A waiver is a documented agreement intentionally releasing a program or project from meeting a
requirement. (Some Centers use deviations prior to Implementation and waivers during Implementation.)
Authorized waivers do not constitute a change to a baseline.
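As a rough illustration, the major/minor criteria quoted above can be expressed as a single predicate: a change is major if any significant-impact criterion applies, and minor otherwise. The function and flag names below are hypothetical, not drawn from any NASA standard.

```python
# Illustrative only: applies the "major vs. minor" engineering change
# criteria from the box above. Flag names are assumptions for the sketch.
def classify_change(requires_retrofit=False, affects_spec=False,
                    affects_cost=False, affects_safety=False,
                    affects_interfaces=False, affects_training=False):
    """Return 'major' if any significant-impact criterion applies, else 'minor'."""
    if any([requires_retrofit, affects_spec, affects_cost,
            affects_safety, affects_interfaces, affects_training]):
        return "major"
    return "minor"

print(classify_change(affects_safety=True))  # major
print(classify_change())                     # minor
```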

6.5.1.2.4  Maintain the Status of Configuration Documentation
Configuration Status Accounting (CSA) is the recording and reporting of configuration data necessary to manage CIs effectively. An effective CSA system provides timely and accurate configuration information such as:

• Complete current and historical configuration documentation and unique identifiers.

• Status of proposed changes, deviations, and waivers from initiation to implementation.

• Status and final disposition of identified discrepancies and actions identified during each configuration audit.

Some useful purposes of the CSA data include:

• An aid for proposed change evaluations, change decisions, investigations of design problems, warranties, and shelf-life calculations
• Historical traceability
• Software trouble reporting
• Performance measurement data

The following are critical functions or attributes to consider if designing or purchasing software to assist with the task of managing configuration:

• Ability to share data in real time with internal and external stakeholders securely;
• Version control and comparison (track history of an object or product);
• Secure user checkout and check-in;
• Tracking capabilities for gathering metrics (i.e., time, date, who, time in phases, etc.);
• Web based;
• Notification capability via e-mail;
• Integration with other databases or legacy systems;
• Compatible with required support contractors and/or suppliers (i.e., can accept data from a third party as required);
• Integration with drafting and modeling programs as required;


• Provide neutral format viewer for users;
• License agreement allows for multiple users within an agreed-to number;
• Workflow and life cycle management;
• Limited customization;
• Migration support for software upgrades;
• User friendly;
• Consideration for users with limited access;
• Ability to attach standard format files from the desktop;
• Workflow capability (i.e., route a CI as required based on a specific set of criteria); and
• Capable of acting as the one and only source for released information.

6.5.1.2.5  Conduct Configuration Audits
Configuration verification is accomplished by inspecting documents, products, and records; reviewing procedures, processes, and systems of operations to verify that the product has achieved its required performance requirements and functional attributes; and verifying that the product's design is documented. This is sometimes divided into functional and physical configuration audits. (See Section 6.7.2.3 for more on technical reviews.)

6.5.1.2.6  Capture Work Products
These include the strategy and procedures for configuration management, the list of identified configuration items, descriptions of the configuration items, change requests, disposition of the requests, rationale for dispositions, reports, and audit results.

6.5.1.3  Outputs
NPR 7120.5 defines a project's life cycle in progressive phases. Beginning with Pre-Phase A, these phases are grouped under the headings of Formulation and Implementation. Approval is required to transition between these phases. Key Decision Points (KDPs) define the transitions between the phases. CM plays an important role in determining whether a KDP has been met. Major outputs of CM are:

• List of configuration items under control (Configuration Status Accounting (CSA) reports): This output is the list of all the items, documents, hardware, software, models, etc., that were identified as needing to be placed under configuration control. CSA reports are updated and maintained throughout the program and project life cycle.

• Current baselines: Baselines of the current configurations of all items that are on the CM list are made available to all technical teams and stakeholders.

• CM reports: Periodic reports on the status of the CM items should be available to all stakeholders on an agreed-to frequency and at key life cycle reviews.

• Other CM work products: Other work products include the strategy and procedures used for CM; descriptions, drawings, and/or models of the CM items; change requests and their disposition and accompanying rationale; reports; and audit results, as well as any corrective actions needed.

6.5.2  CM Guidance
Refer to Section 6.5.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• the impact of not doing CM,
• warning signs when you know you are in trouble, and
• when it is acceptable to use redline drawings.
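The status accounting described in Section 6.5.1.2.4 — tracking proposed changes, deviations, and waivers from initiation to implementation and reporting their status periodically — can be sketched as follows. The class, method, and status names are hypothetical illustrations, not a NASA-defined schema.

```python
# Minimal Configuration Status Accounting (CSA) sketch. The record kinds
# ("change", "deviation", "waiver") follow Section 6.5.1.2.4; everything
# else is an illustrative assumption.
from collections import defaultdict

class StatusAccounting:
    def __init__(self):
        self._items = {}  # id -> {"kind": ..., "status": ...}

    def initiate(self, item_id, kind):
        # kind: "change", "deviation", or "waiver"
        self._items[item_id] = {"kind": kind, "status": "initiated"}

    def update(self, item_id, status):
        self._items[item_id]["status"] = status

    def report(self):
        # Summarize counts by (kind, status) for a periodic CSA report.
        summary = defaultdict(int)
        for rec in self._items.values():
            summary[(rec["kind"], rec["status"])] += 1
        return dict(summary)

csa = StatusAccounting()
csa.initiate("CR-7", "change")
csa.initiate("WVR-2", "waiver")
csa.update("CR-7", "implemented")
print(csa.report())  # {('change', 'implemented'): 1, ('waiver', 'initiated'): 1}
```

A report keyed by kind and status is one simple way to give stakeholders the "status of proposed changes, deviations, and waivers from initiation to implementation" on an agreed-to frequency.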


6.6 Technical Data Management
The Technical Data Management Process is used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system. Data Management (DM) includes the development, deployment, operations and support, eventual retirement, and retention of appropriate technical data, including mission and science data, beyond system retirement as required by NPR 1441.1, NASA Records Retention Schedules.

DM is illustrated in FIGURE 6.6-1. Key aspects of DM for systems engineering include:

• application of policies and procedures for data identification and control,
• timely and economical acquisition of technical data,
• assurance of the adequacy of data and its protection,
• facilitating access to and distribution of the data to the point of use,
• analysis of data use,
• evaluation of data for future value to other programs/projects, and
• providing access to information written in legacy software.

The Technical Data Management and Configuration Management Processes work side by side to ensure that all information about the project is safe, known, and accessible. Changes to information under configuration control require a Change Request (CR) and are typically approved by a Configuration Control Board. Changes to information under Technical Data Management do not need a CR but still need to be managed by identifying who can make changes to each type of technical data.

6.6.1  Process Description
FIGURE 6.6-1 provides a typical flow diagram for the Technical Data Management Process and identifies typical inputs, outputs, and activities to consider in addressing technical data management.

FIGURE 6.6-1  Technical Data Management Process. Inputs from all technical processes and contractors (technical data products to be managed, and technical data requests) feed the activities: prepare for technical data management implementation; collect and store required technical data; maintain stored technical data; provide technical data to authorized parties; and capture work products from technical data management activities. Outputs to the project, the technical processes, and contractors are the form of technical data products, technical data electronic exchange formats, and delivered technical data.
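The collect/store/provide flow above can be sketched as a small data store in which every product is filed with searchable metadata, the practice the process description recommends for easy retrieval and sorting. The class, field names, and example values below are illustrative assumptions, not an Agency design.

```python
# Sketch of the collect-and-store / provide activities in FIGURE 6.6-1.
# Storing each product with metadata supports retrieval and sorting.
class TechnicalDataStore:
    def __init__(self):
        self._records = []

    def collect(self, name, content, **metadata):
        # Store the data product together with its searchable metadata.
        self._records.append({"name": name, "content": content,
                              "metadata": metadata})

    def search(self, **criteria):
        # Return names of products whose metadata matches every criterion.
        return [r["name"] for r in self._records
                if all(r["metadata"].get(k) == v for k, v in criteria.items())]

store = TechnicalDataStore()
store.collect("thermal-model-v2", "...", discipline="thermal", phase="C")
store.collect("loads-report", "...", discipline="structures", phase="C")
print(store.search(phase="C"))             # ['thermal-model-v2', 'loads-report']
print(store.search(discipline="thermal"))  # ['thermal-model-v2']
```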


6.6.1.1  Inputs
The inputs for this process are:

• Technical data products to be managed: Technical data, regardless of the form or method of recording and whether the data are generated by the contractor or Government during the life cycle of the system being developed. (Electronic technical data should be stored with sufficient metadata to enable easy retrieval and sorting.)

• Technical data requests: External or internal requests for any of the technical data generated by the program/project.

6.6.1.2  Process Activities
Each Center is responsible for policies and procedures for technical DM. NPR 7120.5 and NPR 7123.1 define the need to manage data but leave specifics to the individual Centers. However, NPR 7120.5 does require that DM planning be provided as either a section in the program/project plan or CM plan, or as a separate document. The program or project manager is responsible for ensuring that the data required are captured and stored, data integrity is maintained, and data are disseminated as required.

Other NASA policies address the acquisition and storage of data and not just the technical data used in the life cycle of a system.

6.6.1.2.1  Prepare for Technical Data Management Implementation
The recommended procedure is that the DM plan be a separate plan apart from the program/project plan. DM issues are usually of sufficient magnitude to justify a separate plan. The plan should cover the following major DM topics:

• Identification/definition/management of data sets.

• Control procedures—receipt, modification, review, and approval.

• Guidance on how users can access and search for data.

• Data exchange formats that promote data reuse and help to ensure that data can be used consistently throughout the system, family of systems, or system of systems.

• Data rights and distribution limitations such as export-control Sensitive But Unclassified (SBU).

• Storage and maintenance of data, including master lists where documents and records are maintained and managed.

Prepare a technical data management strategy. This strategy can document how the program/project data management plan will be implemented by the technical effort or, in the absence of such a program-level plan, be used as the basis for preparing a detailed technical data management plan, including:

• Items of data that will be managed according to program/project or organizational policy, agreements, or legislation;

• The data content and format;

• A framework for data flow within the program/project and to/from contractors, including the language(s) to be employed in technical effort information exchanges;

• Technical data management responsibilities and authorities regarding the origin, generation, capture, archiving, security, privacy, and disposal of data products;


• Establishing the rights, obligations, and commitments regarding the retention of, transmission of, and access to data items; and

• Relevant data storage, transformation, transmission, and presentation standards and conventions to be used according to program/project or organizational policy, agreements, or legislative constraints.

• Obtain strategy/plan commitment from relevant stakeholders.

• Prepare procedures for implementing the technical data management strategy for the technical effort and/or for implementing the activities of the technical data management plan.

• Establish a technical database(s) to use for technical data maintenance and storage, or work with the program/project staff to arrange use of the program/project database(s) for managing technical data.

• Establish data collection tools, as appropriate to the technical data management scope and available resources.

• Establish electronic data exchange interfaces in accordance with international standards/agreements and applicable NASA standards.

Train appropriate stakeholders and other technical personnel in the established technical data management strategy/plan, procedures, and data collection tools, as applicable.

Technical Data Identification/Definition
Each program/project determines data needs during the life cycle. Data types may be defined in standard documents. Center and Agency directives sometimes specify the content of documents and are appropriately used for in-house data preparation. The standard description is modified to suit program/project-specific needs, and appropriate language is included in SOWs to implement actions resulting from the data evaluation. "Data suppliers" may be contractors, academia, or the Government. Procurement of data from an outside supplier is a formal procurement action that requires a procurement document; in-house requirements may be handled using a less formal method. Below are the different types of data that might be utilized within a program/project:

• Data
  »» "Data" is defined in general as "recorded information regardless of the form or method of recording." However, the terms "data" and "information" are frequently used interchangeably. To be more precise, data generally should be processed in some manner to generate useful, actionable information.

  »» "Data," as used in SE DM, includes technical data; computer software documentation; and representation of facts, numbers, or data of any nature that can be communicated, stored, and processed to form information required by a contract or agreement to be delivered to, or accessed by, the Government.

  »» Data include those associated with system development, modeling and simulation used in development or test, test and evaluation, installation, parts, spares, repairs, usage data required for product sustainability, and source and/or supplier data.

  »» Data specifically not included in Technical Data Management would be data relating to general NASA workforce operations information, communications information (except where related to a specific requirement), financial transactions, personnel data,


transactional data, and other data of a purely business nature.

• Data Call: A solicitation from Government stakeholders (specifically Integrated Product Team (IPT) leads and functional managers) that identifies and justifies their data requirements from a proposed contracted procurement. Since data provided by contractors have a cost to the Government, a data call (or an equivalent activity) is a common control mechanism used to ensure that the requested data are truly needed. If approved by the data call, a description of each data item needed is then developed and placed on contract.

• Information: Information is generally considered as processed data. The form of the processed data is dependent on the documentation, report, review formats, or templates that are applicable.

• Technical Data Package: A technical data package is a technical description of an item adequate for supporting an acquisition strategy, production, engineering, and logistics support. The package defines the required design configuration and procedures to ensure adequacy of item performance. It consists of all applicable items such as drawings, associated lists, specifications, standards, performance requirements, quality assurance provisions, and packaging details.

• Technical Data Management System: The strategies, plans, procedures, tools, people, data formats, data exchange rules, databases, and other entities and descriptions required to manage the technical data of a program/project.

6.6.1.2.2  Collect and Store Data
Subsequent activities collect, store, and maintain technical data and provide them to authorized parties as required. Some considerations that impact these activities for implementing Technical Data Management include:

• Requirements relating to the flow/delivery of data to or from a contractor should be specified in the technical data management plan and included in the Request for Proposal (RFP) and contractor agreement.

• NASA should not impose changes on existing contractor data management systems unless the program/project technical data management requirements, including data exchange requirements, cannot otherwise be met.

• Responsibility for data inputs into the technical data management system lies solely with the originator or generator of the data.

• The availability/access of technical data lies with the author, originator, or generator of the data in conjunction with the manager of the technical data management system.

• The established availability/access description and list should be baselined and placed under configuration control.

• For new programs/projects, a digital generation and delivery medium is desired. Existing programs/projects should weigh the cost/benefit trades of digitizing hard copy data.

TABLE 6.6-1 defines the tasks required to capture technical data.

6.6.1.2.3  Provide Data to Authorized Parties
All data deliverables should include distribution statements and procedures to protect all data that contain critical technology information, as well as to ensure that limited distribution data, intellectual property data, or proprietary data are properly handled during


DATA COLLECTION CHECKLIST

• Have the frequency of collection and the points in the technical and technical management processes
when data inputs will be available been determined?
• Has the timeline that is required to move data from the point of origin to storage repositories or
stakeholders been established?
• Who is responsible for the input of the data?
• Who is responsible for data storage, retrieval, and security?
• Have necessary supporting tools been developed or acquired?
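The checklist above can be captured as a small plan record so that each question has an explicit answer before collection starts, and unanswered items are surfaced automatically. Everything in this sketch — the class, field names, and example values — is an illustrative assumption, not a NASA template.

```python
# Hypothetical data collection plan mirroring the checklist questions.
from dataclasses import dataclass

@dataclass
class DataCollectionPlan:
    data_item: str
    collection_frequency: str  # when data inputs will be available
    delivery_timeline: str     # point of origin -> repositories/stakeholders
    input_owner: str           # responsible for the input of the data
    storage_owner: str         # responsible for storage, retrieval, security
    tools_ready: bool          # supporting tools developed or acquired

    def gaps(self):
        # Return the checklist items that are still unanswered.
        missing = [name for name in ("collection_frequency", "delivery_timeline",
                                     "input_owner", "storage_owner")
                   if not getattr(self, name)]
        if not self.tools_ready:
            missing.append("tools_ready")
        return missing

plan = DataCollectionPlan("vibration test data", "per test run", "24 h",
                          "test engineer", "", tools_ready=False)
print(plan.gaps())  # ['storage_owner', 'tools_ready']
```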

systems engineering activities. This injunction applies whether the data are hard copy or digital.

As part of overall asset protection planning, NASA has established special procedures for the protection of Critical Program Information (CPI). CPI may include components; engineering, design, or manufacturing processes; technologies; system capabilities and vulnerabilities; and any other information that gives a system its distinctive operational capability.

CPI protection should be a key consideration for the technical data management effort and is part of the asset protection planning process.

6.6.1.3  Outputs
Outputs include timely, secure availability of needed data in various representations to those authorized to receive it. Major outputs from the Technical Data Management Process include the following (see FIGURE 6.6-1):

• Form of Technical Data Products: How each type of data is held and stored, such as textual, graphic, video, etc.

• Technical Data Electronic Exchange Formats: Description and perhaps templates, models, or other ways to capture the formats used for the various data exchanges.

• Delivered Technical Data: The data that were delivered to the requester.

Other work products generated as part of this process include the strategy and procedures used for technical data management, request dispositions, decisions, and assumptions.

6.6.2  Technical Data Management Guidance
Refer to Section 6.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• data security and
• ITAR.

6.7 Technical Assessment
Technical assessment is the crosscutting process used to help monitor technical progress of a program/project through periodic technical reviews and through monitoring of technical indicators such as MOEs, MOPs, Key Performance Parameters (KPPs), and TPMs. The reviews and metrics also provide status

TABLE 6.6-1 Technical Data Tasks

Technical data capture
  Tasks:
  • Collect and store inputs and technical effort outcomes from the technical and technical management processes, including:
    • results from technical assessments;
    • descriptions of methods, tools, and metrics used;
    • recommendations, decisions, assumptions, and impacts of technical efforts and decisions;
    • lessons learned;
    • deviations from plan;
    • anomalies and out-of-tolerances relative to requirements; and
    • other data for tracking requirements.
  • Perform data integrity checks on collected data to ensure compliance with content and format as well as technical data checks to ensure there are no errors in specifying or recording the data.
  • Report integrity check anomalies or variances to the authors or generators of the data for correction.
  • Prioritize, review, and update data collection and storage procedures as part of regularly scheduled maintenance.
  Expected Outcomes:
  • Sharable data needed to perform and control the technical and technical management processes is collected and stored.
  • Stored data inventory.

Technical data maintenance
  Tasks:
  • Implement technical management roles and responsibilities with technical data products received.
  • Manage database(s) to ensure that collected data have proper quality and integrity and are properly retained, secure, and available to those with access authority.
  • Periodically review technical data management activities to ensure consistency and identify anomalies and variances.
  • Review stored data to ensure completeness, integrity, validity, availability, accuracy, currency, and traceability.
  • Perform technical data maintenance, as required.
  • Identify and document significant issues, their impacts, and changes made to technical data to correct issues and mitigate impacts.
  • Maintain, control, and prevent the stored data from being used inappropriately.
  • Store data in a manner that enables easy and speedy retrieval.
  • Maintain stored data in a manner that protects the technical data against foreseeable hazards, e.g., fire, flood, earthquake, etc.
  Expected Outcomes:
  • Records of technical data maintenance.
  • Technical effort data, including captured work products, contractor-delivered documents, and acquirer-provided documents, are controlled and maintained.
  • Status of data stored is maintained to include: version description, timeline, and security classification.

Technical data/information distribution
  Tasks:
  • Maintain an information library or reference index to provide technical data availability and access instructions.
  • Receive and evaluate requests to determine data requirements and delivery instructions.
  • Process special requests for technical effort data or information according to established procedures for handling such requests.
  • Ensure that required and requested data are appropriately distributed to satisfy the needs of the acquirer and requesters in accordance with the agreement, program/project directives, and technical data management plans and procedures.
  • Ensure that electronic access rules are followed before database access is allowed or any requested data are electronically released/transferred to the requester.
  • Provide proof of correctness, reliability, and security of technical data provided to internal and external recipients.
  Expected Outcomes:
  • Access information (e.g., available data, access means, security procedures, time period for availability, and personnel cleared for access) is readily available.
  • Technical data are provided to authorized requesters in the appropriate format, with the appropriate content, and by a secure mode of delivery, as applicable.

Data management system maintenance
  Tasks:
  • Implement safeguards to ensure protection of the technical database and of en route technical data from unauthorized access or intrusion.
  • Establish proof of coherence of the overall technical dataset to facilitate effective and efficient use.
  • Maintain, as applicable, backups of each technical database.
  • Evaluate the technical data management system to identify collection and storage performance issues and problems; satisfaction of data users; and risks associated with delayed or corrupted data, unauthorized access, or survivability of information from hazards such as fire, flood, earthquake, etc.
  • Review systematically the technical data management system, including the database capacity, to determine its appropriateness for successive phases of the Defense Acquisition Framework.
  • Recommend improvements for discovered risks and problems: handle risks identified as part of technical risk management, and control recommended changes through established program/project change management activities.
  Expected Outcomes:
  • Current technical data management system.
  • Technical data are appropriately and regularly backed up to prevent data loss.
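The data-capture tasks in Table 6.6-1 call for integrity checks on collected data to ensure compliance with content and format, with anomalies reported back to the data's authors for correction. A minimal sketch of such a check is shown below; the record fields, valid values, and rules are hypothetical illustrations, not requirements from NPR 7123.1 or this handbook.

```python
# Illustrative sketch of a technical data integrity check: each collected
# record is validated for required content (fields present) and format
# (types and allowed values), and anomalies are gathered for reporting
# back to the record's author. Field names and rules are hypothetical.

REQUIRED_FIELDS = {"record_id", "source_process", "date", "classification"}
VALID_CLASSIFICATIONS = {"public", "internal", "restricted"}

def check_record(record: dict) -> list[str]:
    """Return a list of integrity-check anomalies for one data record."""
    anomalies = []
    for field in sorted(REQUIRED_FIELDS - record.keys()):
        anomalies.append(f"missing required field: {field}")
    if "classification" in record and record["classification"] not in VALID_CLASSIFICATIONS:
        anomalies.append(f"invalid classification: {record['classification']!r}")
    if "date" in record and not isinstance(record["date"], str):
        anomalies.append("date must be an ISO-8601 string")
    return anomalies

def check_collection(records: list[dict]) -> dict[str, list[str]]:
    """Map record_id -> anomalies, so variances can be routed for correction."""
    return {str(r.get("record_id", "?")): a
            for r in records if (a := check_record(r))}

good = {"record_id": 1, "source_process": "verification",
        "date": "2016-08-24", "classification": "internal"}
bad = {"record_id": 2, "source_process": "design", "classification": "secret"}
print(check_collection([good, bad]))
# {'2': ['missing required field: date', "invalid classification: 'secret'"]}
```

In practice such checks would run as part of the regularly scheduled maintenance of the data collection and storage procedures, with the anomaly report feeding the correction loop described in the table.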

information to support assessing system design, product realization, and technical management decisions.

NASA has multiple review cycle processes for both space flight programs and projects (see NPR 7120.5) and research and technology programs and projects (see NPR 7120.8, NASA Research and Technology Program and Project Management Requirements). These different review cycles all support the same basic goals but with differing formats and formalities based on the particular program or project needs.

6.7.1 Process Description
FIGURE 6.7-1 provides a typical flow diagram for the Technical Assessment Process and identifies typical inputs, outputs, and activities to consider in addressing technical assessment. Technical assessment is focused on providing a periodic assessment of the program/project's technical and programmatic status and health at key points in the life cycle. Six criteria are considered in this assessment process: alignment with and contribution to Agency strategic goals; adequacy of the management approach; adequacy of the technical approach; adequacy of the integrated cost and schedule estimates and funding strategy; adequacy and availability of non-budgetary resources; and adequacy of the risk management approach.

6.7.1.1 Inputs
Typical inputs needed for the Technical Assessment Process would include the following:

• Technical Plans: These are the planning documents that will outline the technical reviews/assessment process as well as identify the technical product/process measures that will be tracked and assessed to determine technical progress. Examples of these plans are the program (or project) plan, SEMP (if applicable), review plans (which may be part of the program or project plan), ILS plan, and EVM plan (if applicable). These plans contain the information and descriptions of the program/project's alignment with and contribution to Agency strategic goals, its management approach, its technical approach, its integrated cost and schedule, its budget, resource allocations, and its risk management approach.

• Technical Process and Product Measures: These are the identified technical measures that will be


[Figure 6.7-1 is a flow diagram. Inputs: Product and Process Measures and Technical Plans (from the Technical Planning Process); Risk Reporting and Technical Cost and Schedule Status Reports (from the project); Product Measurements (from the Product Verification and Product Validation Processes); and Decision Support Recommendations and Impacts (from the Decision Analysis Process). Activities: prepare the strategy for conducting technical assessments; assess technical work productivity (measure progress against plans); assess technical product quality (measure progress against requirements); conduct horizontal and vertical progress technical reviews; and capture work products from technical assessment activities. Outputs: Assessment Results/Findings (to the Technical Planning, Technical Risk Management, and Requirements Management Processes); Analysis Support Requests (to the Decision Analysis Process); Technical Review Reports and Corrective Action Recommendations (to the project and the Technical Data Management Process, and to the Technical Planning, Requirements, and Interface Management Processes); and Work Products from Technical Assessment (to the Technical Data Management Process).]

FIGURE 6.7-1 Technical Assessment Process

assessed or tracked to determine technical progress. These measures are also referred to as MOEs, MOPs, KPPs, and TPMs. (See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.) They provide indications of the program/project's performance in key management, technical, cost (budget), schedule, and risk areas.

• Reporting Requirements: These are the requirements on the methodology in which the status of the technical measures will be reported with regard to management, technical, cost (budget), schedule, and risk. The requirements apply internally to the program/project and are used externally by the Centers and Mission Directorates to assess the performance of the program or project. The methodology and tools used for reporting the status will be established on a project-by-project basis.

6.7.1.2 Process Activities

6.7.1.2.1 Prepare Strategy for Conducting Technical Assessments
As outlined in FIGURE 6.7-1, the technical plans provide the initial inputs into the Technical Assessment Process. These documents outline the technical reviews/assessment approach as well as identify the technical measures that will be tracked and assessed to determine technical progress. An important part of the technical planning is determining what is needed in time, resources, and performance to complete a system that meets desired goals and objectives. Project managers need visibility into the progress of those plans in order to exercise proper management control. Typical activities in determining progress against the identified technical measures include status reporting and assessing the data. Status reporting will identify where the project stands with regard to a particular technical measure. Assessing will


analytically convert the output of the status reporting into a more useful form from which trends can be determined and variances from expected results can be understood. Results of the assessment activity then feed into the Decision Analysis Process (see Section 6.8) where potential corrective action may be necessary.

These activities together form the feedback loop depicted in FIGURE 6.7-2.

[Figure 6.7-2 is a loop diagram: Execute (Re-)Planning feeds Status Reporting, which feeds Assessing. If status is OK, the loop returns to status reporting; if status is not OK, Decision-making follows and feeds back into (re-)planning.]

FIGURE 6.7-2 Planning and Status Reporting Feedback Loop

This loop takes place on a continual basis throughout the project life cycle. This loop is applicable at each level of the project hierarchy. Planning data, status reporting data, and assessments flow up the hierarchy with appropriate aggregation at each level; decisions cause actions to be taken down the hierarchy. Managers at each level determine (consistent with policies established at the next higher level of the project hierarchy) how often and in what form status data should be reported and assessments should be made. In establishing these status reporting and assessment requirements, some principles of good practice are as follows:

• Use an agreed-upon set of well-defined technical measures. (See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.)

• Report these technical measures in a consistent format at all project levels.

• Maintain historical data for both trend identification and cross-project analyses.

• Encourage a logical process of rolling up technical measures (e.g., use the WBS or PBS for project progress status).

• Support assessments with quantitative risk measures.

• Summarize the condition of the project by using color-coded (red, yellow, and green) alert zones for all technical measures.

6.7.1.2.2 Assess Technical Work Productivity and Product Quality and Conduct Progress Reviews
Regular, periodic (e.g., monthly) tracking of the technical measures is recommended, although some measures should be tracked more often when there is rapid change or cause for concern. Key reviews, such as PDRs and CDRs, or status reviews are points at which technical measures and their trends should be carefully scrutinized for early warning signs of potential problems. Should there be indications that existing trends, if allowed to continue, will yield an unfavorable outcome, corrective action should begin as soon as practical. Section 6.7.2.6.1 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository provides additional information on status reporting and assessment techniques for costs and schedules (including EVM), technical performance, and systems engineering process metrics.

The measures are predominantly assessed during the program and project technical reviews. Typical activities performed for technical reviews include (1) identifying, planning, and conducting phase-to-phase technical reviews; (2) establishing each review's purpose, objective, and entry and success criteria; (3)


establishing the makeup of the review team; and (4) identifying and resolving action items resulting from the review. Section 6.7.2.3 summarizes the types of technical reviews typically conducted on a program/project, for both space flight and research and technology programs/projects, and the role of these reviews in supporting management decision processes. That section also identifies some general principles for holding reviews but leaves explicit direction for executing a review to the program/project team to define.

The process of executing technical assessment has close relationships to other areas, such as risk management, decision analysis, and technical planning. These areas may provide input into the Technical Assessment Process or benefit from its outputs.

TABLE 6.7-1 provides a summary of the types of reviews for a space flight project, their purpose, and timing.

6.7.1.2.3 Capture Work Products
The work products generated during these activities should be captured along with key decisions made, supporting decision rationale and assumptions, and lessons learned in performing the Technical Assessment Process.

6.7.1.3 Outputs
Typical outputs of the Technical Assessment Process would include the following:

• Assessment Results, Findings, and Recommendations: This is the collective data on the established measures from which trends can be determined and variances from expected results can be understood. Results then feed into the Decision Analysis Process where corrective action may be necessary.

• Technical Review Reports/Minutes: This is the collective information coming out of each review that captures the results, recommendations, and actions with regard to meeting the review's success criteria.

• Other Work Products: These would include strategies and procedures for technical assessment, key decisions and associated rationale, assumptions, and lessons learned.

6.7.2 Technical Assessment Guidance
Refer to Section 6.7.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

• the basis of technical reviews,
• audits,
• Key Decision Points,
• required technical reviews for space flight projects,
• other reviews, and
• status reporting and assessment (including MOEs, MOPs, KPPs, TPMs, EVM, and other metrics).

Additional information is also available in NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook.

6.8 Decision Analysis
The purpose of this section is to provide an overview of the Decision Analysis Process, highlighting selected tools and methodologies. Decision Analysis is a framework within which analyses of diverse types are applied to the formulation and characterization of decision alternatives that best implement the decision-maker's priorities given the decision-maker's state of knowledge.


TABLE 6.7-1 Purpose and Results for Life-Cycle Reviews for Spaceflight Projects

Mission Concept Review (MCR)
  Purpose: The MCR affirms the mission need and evaluates the proposed objectives and the concept for meeting those objectives.
  Timing: The MCR should be completed prior to entering the concept development phase (Phase A).
  Entrance/Success Criteria: The MCR entrance and success criteria are defined in Table G-3 of NPR 7123.1.
  Results of Review: A successful MCR supports the determination that the proposed mission meets the customer need and has sufficient quality and merit to support a field Center management decision to propose further study to the cognizant NASA program Associate Administrator as a candidate Phase A effort.

System Requirements Review (SRR)
  Purpose: The SRR evaluates the functional and performance requirements defined for the system and the preliminary program or project plan and ensures that the requirements and selected concept will satisfy the mission.
  Timing: The SRR is conducted during the concept development phase (Phase A) and before conducting the SDR or MDR.
  Entrance/Success Criteria: The SRR entrance and success criteria for a program are defined in Table G-1 of NPR 7123.1. The SRR entrance and success criteria for projects and single-project programs are defined in Table G-4 of NPR 7123.1.
  Results of Review: Successful completion of the SRR freezes program/project requirements and leads to a formal decision by the cognizant program Associate Administrator to proceed with proposal request preparations for project implementation.

Mission Definition Review (MDR)/System Definition Review (SDR)
  Purpose: Sometimes called the MDR by robotic projects and the SDR by human flight projects, this review evaluates whether the proposed architecture is responsive to the functional and performance requirements and that the requirements have been allocated to all functional elements of the mission/system.
  Timing: The MDR/SDR is conducted during the concept development phase (Phase A) prior to KDP B and the start of preliminary design.
  Entrance/Success Criteria: The MDR/SDR entrance and success criteria for a program are defined in Table G-2 of NPR 7123.1. The MDR/SDR entrance and success criteria for projects and single-project programs are defined in Table G-5 of NPR 7123.1.
  Results of Review: A successful MDR/SDR supports the decision to further develop the system architecture/design and any technology needed to accomplish the mission. The results reinforce the mission/system's merit and provide a basis for the system acquisition strategy. As a result of successful completion, the mission/system and its operation are well enough understood to warrant design and acquisition of the end items.

Preliminary Design Review (PDR)
  Purpose: The PDR demonstrates that the preliminary design meets all system requirements with acceptable risk and within the cost and schedule constraints and establishes the basis for proceeding with detailed design. It shows that the correct design options have been selected, interfaces have been identified, and verification methods have been described. The PDR should address and resolve critical, system-wide issues and show that work can begin on detailed design.
  Timing: The PDR occurs near the completion of the preliminary design phase (Phase B) as the last review in the Formulation Phase.
  Entrance/Success Criteria: The entrance and success criteria for the PDR are defined in Table G-6 of NPR 7123.1.
  Results of Review: As a result of successful completion of the PDR, the design-to baseline is approved. A successful review result also authorizes the project to proceed into the Implementation Phase and toward final design.

Critical Design Review (CDR)
  Purpose: The CDR demonstrates that the maturity of the design is appropriate to support proceeding with full-scale fabrication, assembly, integration, and test. The CDR determines if the technical effort is on track to complete the system development, meeting mission performance requirements within the identified cost and schedule constraints.
  Timing: The CDR occurs during the final design phase (Phase C).
  Entrance/Success Criteria: The entrance and success criteria for the CDR are defined in Table G-7 of NPR 7123.1.
  Results of Review: As a result of successful completion of the CDR, the build-to baseline, production, and verification plans are approved. A successful review result also authorizes coding of deliverable software (according to the build-to baseline and coding standards presented in the review) and system qualification testing and integration. All open issues should be resolved with closure actions and schedules.

Production Readiness Review (PRR)
  Purpose: A PRR is held for projects developing or acquiring multiple or similar systems greater than three, or as determined by the project. The PRR determines the readiness of the system developers to efficiently produce the required number of systems. It ensures that the production plans; fabrication, assembly, and integration-enabling products; and personnel are in place and ready to begin production.
  Timing: The PRR occurs during the final design phase (Phase C).
  Entrance/Success Criteria: The entrance and success criteria for the PRR are defined in Table G-8 of NPR 7123.1.
  Results of Review: As a result of successful completion of the PRR, the final production build-to baseline, production, and verification plans are approved. Approved drawings are released and authorized for production. A successful review result also authorizes coding of deliverable software (according to the build-to baseline and coding standards presented in the review) and system qualification testing and integration. All open issues should be resolved with closure actions and schedules.

System Integration Review (SIR)
  Purpose: An SIR ensures segments, components, and subsystems are on schedule to be integrated into the system, and that integration facilities, support personnel, and integration plans and procedures are on schedule to support integration.
  Timing: The SIR occurs at the end of the final design phase (Phase C) and before the systems assembly, integration, and test phase (Phase D) begins.
  Entrance/Success Criteria: The entrance and success criteria for the SIR are defined in Table G-9 of NPR 7123.1.
  Results of Review: As a result of successful completion of the SIR, the final as-built baseline and verification plans are approved. Approved drawings are released and authorized to support integration. All open issues should be resolved with closure actions and schedules. The subsystems/systems integration procedures, ground support equipment, facilities, logistical needs, and support personnel are planned for and are ready to support integration.

System Acceptance Review (SAR)
  Purpose: The SAR verifies the completeness of the specific end products in relation to their expected maturity level and assesses compliance to stakeholder expectations. It also ensures that the system has sufficient technical maturity to authorize its shipment to the designated operational facility or launch site.
  Entrance/Success Criteria: The entrance and success criteria for the SAR are defined in Table G-11 of NPR 7123.1.
  Results of Review: As a result of successful completion of the SAR, the system is accepted by the buyer, and authorization is given to ship the hardware to the launch site or operational facility and to install software and hardware for operational use.

Operational Readiness Review (ORR)
  Purpose: The ORR examines the actual system characteristics and procedures used in the system or end product's operation. It ensures that all system and support (flight and ground) hardware, software, personnel, procedures, and user documentation accurately reflect the deployed state of the system.
  Entrance/Success Criteria: The entrance and success criteria for the ORR are defined in Table G-12 of NPR 7123.1.
  Results of Review: As a result of successful ORR completion, the system is ready to assume normal operations.

Flight Readiness Review (FRR)
  Purpose: The FRR examines tests, demonstrations, analyses, and audits that determine the system's readiness for a safe and successful flight or launch and for subsequent flight operations. It also ensures that all flight and ground hardware, software, personnel, and procedures are operationally ready.
  Entrance/Success Criteria: The entrance and success criteria for the FRR are defined in Table G-13 of NPR 7123.1.
  Results of Review: As a result of successful FRR completion, technical and procedural maturity exists for system launch and flight authorization and, in some cases, initiation of system operations.

Post-Launch Assessment Review (PLAR)
  Purpose: A PLAR is a post-deployment evaluation of the readiness of the spacecraft systems to proceed with full, routine operations. The review evaluates the status, performance, and capabilities of the project evident from the flight operations experience since launch. This can also mean assessing readiness to transfer responsibility from the development organization to the operations organization. The review also evaluates the status of the project plans and the capability to conduct the mission with emphasis on near-term operations and mission-critical events.
  Timing: This review is typically held after the early flight operations and initial checkout.
  Entrance/Success Criteria: The entrance and success criteria for the PLAR are defined in Table G-14 of NPR 7123.1.
  Results of Review: As a result of successful PLAR completion, the system is ready to assume in-space operations.

Critical Event Readiness Review (CERR)
  Purpose: A CERR confirms the project's readiness to execute the mission's critical activities during flight operation. These include orbital insertion, rendezvous and docking, re-entry, scientific observations/encounters, etc.
  Entrance/Success Criteria: The CERR entrance and success criteria for a program are defined in Table G-15 of NPR 7123.1.
  Results of Review: As a result of successful CERR completion, the system is ready to assume (or resume) in-space operations.

Post-Flight Assessment Review (PFAR)
  Purpose: The PFAR evaluates the activities from the flight after recovery. The review identifies all anomalies that occurred during the flight and mission and determines the actions necessary to mitigate or resolve the anomalies for future flights.
  Entrance/Success Criteria: The entrance and success criteria for the PFAR are defined in Table G-16 of NPR 7123.1.
  Results of Review: As a result of successful PFAR completion, the report documenting flight performance and recommendations for future missions is complete and all anomalies have been documented and dispositioned.

Decommissioning Review (DR)
  Purpose: The DR confirms the decision to terminate or decommission the system and assesses the readiness of the system for the safe decommissioning and disposal of system assets.
  Timing: The DR is normally held near the end of routine mission operations upon accomplishment of planned mission objectives. It may be advanced if some unplanned event gives rise to a need to prematurely terminate the mission, or delayed if operational life is extended to permit additional investigations.
  Entrance/Success Criteria: The entrance and success criteria for the DR are defined in Table G-17 of NPR 7123.1.
  Results of Review: A successful DR completion ensures that the decommissioning and disposal of system items and processes are appropriate and effective.

Disposal Readiness Review (DRR)
  Purpose: A DRR confirms the readiness for the final disposal of the system assets.
  Timing: The DRR is held as major assets are ready for final disposal.
  Entrance/Success Criteria: The DRR entrance and success criteria for a program are defined in Table G-18 of NPR 7123.1.
  Results of Review: A successful DRR completion ensures that the disposal of system items and processes is appropriate and effective.
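The status reporting and assessment practices of Section 6.7.1.2, which feed these reviews, include summarizing the condition of each technical measure with color-coded (red, yellow, and green) alert zones. The idea can be sketched as follows; the measure (mass), planned allocation, threshold percentages, and monthly values are all hypothetical illustrations.

```python
# Illustrative TPM status check: compare a measured value against its
# planned value and flag a green/yellow/red alert zone based on how far
# the actual exceeds the plan. Thresholds and values are hypothetical.

def tpm_status(planned: float, actual: float,
               yellow_pct: float = 5.0, red_pct: float = 10.0) -> str:
    """Classify a 'lower is better' TPM (e.g., mass in kg) by the
    percentage by which the actual value exceeds the planned value."""
    overage = 100.0 * (actual - planned) / planned
    if overage >= red_pct:
        return "red"
    if overage >= yellow_pct:
        return "yellow"
    return "green"

# Monthly mass-growth tracking against a planned allocation of 1200 kg:
planned_mass = 1200.0
for month, measured in [("Jan", 1180.0), ("Feb", 1268.0), ("Mar", 1335.0)]:
    print(month, measured, tpm_status(planned_mass, measured))
# Jan 1180.0 green
# Feb 1268.0 yellow
# Mar 1335.0 red
```

A worsening trend such as the one above, spotted between reviews, is exactly the early warning sign that should trigger corrective action before a PDR or CDR rather than at it.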

The Decision Analysis Process is used in support of decision-making bodies to help evaluate technical, cost, and schedule issues, alternatives, and their uncertainties. Decision models have the capacity for accepting and quantifying human subjective inputs: judgments of experts and preferences of decision makers.

The outputs from this process support the decision authority's difficult task of deciding among competing alternatives without complete knowledge; therefore, it is critical to understand and document the assumptions and limitations of any tool or methodology and integrate them with other factors when deciding among viable options.

6.8.1 Process Description
A typical process flow diagram is provided in FIGURE 6.8-1, including inputs, activities, and outputs. The first step in the process is understanding the decision to be made in the context of the system/mission. Understanding the decision needed requires knowledge of the intended outcome in terms of technical performance, cost, and schedule. For an issue that follows the decision analysis process, the next step is defining the decision criteria: the measures that are important for characterizing the options for making a decision. With these defined, a set of alternative solutions can be defined for evaluation. These solutions should cover the full decision space as defined by the understanding of the decision and the definition of the decision criteria. The need for specific decision analysis tools (defined in Section 6.8.3 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository) can then be determined, and the tools employed to support the formulation of a solution. Following completion of the analysis, a description of how each alternative compares with the decision criteria can be captured for submission to the decision-making body or authority. A recommendation is typically provided from the decision analysis but is not always required, depending on the discretion of the decision-making body. A decision analysis report should be generated including: the decision to be made,


[Figure 6.8-1 is a flow diagram. Inputs: Decision Need, Alternatives, Issues, or Problems and Supporting Data for evaluating alternative solutions (from all technical processes); and Analysis Support Requests (from the Technical Assessment Process). Activities: establish guidelines to determine which technical issues are subject to a formal analysis/evaluation process; identify alternative solutions to address decision issues; select evaluation methods and tools; evaluate alternative solutions with the established criteria and selected methods; select recommended solutions from the alternatives based on the evaluation criteria; report analysis results with recommendations, impacts, and corrective actions; and capture work products from decision analysis activities. Outputs: Alternative Selection Recommendations and Impacts (to all technical processes); Decision Support Recommendations and Impacts (to the Technical Assessment Process); and Work Products from Decision Analysis (to the Technical Data Management Process).]

FIGURE 6.8-1 Decision Analysis Process
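The flow described above, in which defined decision criteria are applied to a set of alternatives to produce a ranked recommendation, can be sketched as a simple weighted-sum evaluation. The criteria, weights, and scores below are hypothetical, and a real trade study may instead use the more formal tools described in Section 6.8.3 of the NASA Expanded Guidance for Systems Engineering.

```python
# Illustrative weighted-sum evaluation of decision alternatives against
# decision criteria, as a sketch of the flow in Figure 6.8-1. Criteria,
# weights, and scores are hypothetical, not a prescribed NASA method.

criteria_weights = {"performance": 0.5, "cost": 0.3, "schedule": 0.2}

# Scores on a common 1-10 scale (higher is better) for each alternative.
alternatives = {
    "Architecture A": {"performance": 8, "cost": 5, "schedule": 6},
    "Architecture B": {"performance": 6, "cost": 8, "schedule": 7},
    "Architecture C": {"performance": 7, "cost": 6, "schedule": 9},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Sum each criterion score weighted by its importance."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a], criteria_weights),
                 reverse=True)
for alt in ranking:
    print(f"{alt}: {weighted_score(alternatives[alt], criteria_weights):.2f}")
# The top-ranked alternative becomes the candidate recommendation
# submitted to the decision-making body, along with the evaluation record.
```

The full evaluation record, not just the winner, belongs in the decision analysis report so that the comparison against the criteria can be reviewed and the decision revisited if assumptions change.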

decision criteria, alternatives, evaluation methods, evaluation process and results, recommendation, and final decision.

Decision analysis covers a wide range of timeframes. Complex, strategic decisions may require weeks or months to fully assess all alternatives and potential outcomes. Decisions can also be made in hours or in a few days, especially for smaller projects or activities. Decisions are also made in emergency situations. Under such conditions, process steps, procedures, and meetings may be combined. In these cases, the focus of the systems engineer is on obtaining accurate decisions quickly. Once the decision is made, the report can be generated. The report is usually generated in an ongoing fashion during the decision analysis process. However, for quick or emergency decisions, the report information may be captured after the decision has been made.

Not all decisions require the same amount of analysis effort. The level and rigor required in a specific situation depend essentially on how clear-cut the decision is. If there is enough uncertainty in the alternatives' performance that the decision might change if that uncertainty were to be reduced, then consideration needs to be given to reducing that uncertainty. A robust decision is one that is based on sufficient technical evidence and characterization of uncertainties to determine that the selected alternative best reflects


Examples of Decisions
• Architecture A vs. Architecture B vs. Architecture C
• Extending the life of existing systems
• Contingency Plan A vs. Contingency Plan B
• Changing requirements
• Launch or no launch
• Making changes to existing systems
• Responding to operational occurrences in real time
• Technology A vs. Technology B
• Prioritization

[Figure 6.8-2 is a flow diagram. Decision alternatives for analysis enter preliminary scoping and determination of the methods to be used, drawing on the spectrum of available qualitative and quantitative risk analysis techniques. The identified techniques are applied to produce risk and performance measurement results. If the resulting ranking/comparison of alternatives is robust, the process proceeds to deliberation and recommending a decision alternative. If not, and if it is net beneficial to reduce uncertainty (or additional uncertainty reduction is necessary per stakeholders), the analysis iterates.]

FIGURE 6.8-2 Risk Analysis of Decision Alternatives

decision-maker preferences and values given the state of knowledge at the time of the decision. This is suggested in FIGURE 6.8-2.

Note that in FIGURE 6.8-2, the phrase “net beneficial” in the decision node “Net beneficial to reduce uncertainty?” is meant to imply consideration of all factors, including whether the project can afford any schedule slip that might be caused by additional information collection and additional analysis.

6.8.1.1  Inputs
The technical, cost, and schedule inputs need to be comprehensively understood as part of the general decision definition. Based on this understanding, decision making can be addressed from a simple meeting to a formal analytical analysis. As illustrated in FIGURE 6.8-2, many decisions do not require extensive analysis and can be readily made with clear input from the responsible engineering and programmatic disciplines. Complex decisions may require more formal decision analysis when contributing factors have complicated or not well defined relationships. Due to this complexity, formal decision analysis has the potential to consume significant resources and time. Typically, its application to a specific decision is warranted only when some of the following conditions are met:

• Complexity: The actual ramifications of alternatives are difficult to understand without detailed analysis;

• Uncertainty: Uncertainty in key inputs creates substantial uncertainty in the ranking of alternatives and points to risks that may need to be managed;

• Multiple Attributes: Greater numbers of attributes cause a greater need for formal analysis; and

• Diversity of Stakeholders: Extra attention is warranted to clarify objectives and formulate TPMs


when the set of stakeholders reflects a diversity of values, preferences, and perspectives.

Satisfaction of all of these conditions is not a requirement for initiating decision analysis. The point is, rather, that the need for decision analysis increases as a function of the above conditions. In addition, often these decisions have the potential to result in high-stakes impacts to cost, safety, or mission success criteria, which should be identified and addressed in the process. When the Decision Analysis Process is triggered, the following are inputs:

• Decision need, identified alternatives, issues, or problems and supporting data: This information would come from all technical, cost, and schedule management processes. It may also include high-level objectives and constraints (from the program/project).

• Analysis support requests: Requests will arise from the technical, cost, and schedule assessment processes.

6.8.1.2  Process Activities
For the Decision Analysis Process, the following activities are typically performed.

It is important to understand the decision needed in the context of the mission and system, which requires knowledge of the intended outcome in terms of technical performance, cost, and schedule. A part of this understanding is the definition of the decision criteria, or the measures that are important to characterize the options for making a decision. The specific decision-making body, whether the program/project manager, chief engineer, line management, or control board, should also be well defined. Based on this understanding, the specific approach to decision-making can then be defined.

Decisions are based on facts, qualitative and quantitative data, engineering judgment, and open communications to facilitate the flow of information throughout the hierarchy of forums where technical analyses and evaluations are presented and assessed and where decisions are made. The extent of technical analysis and evaluation required should be commensurate with the consequences of the issue requiring a decision. The work required to conduct a formal evaluation is significant, and applicability should be based on the nature of the problem to be resolved. Guidelines for use can be determined by the magnitude of the possible consequences of the decision to be made.

6.8.1.2.1  Define the Criteria for Evaluating Alternative Solutions
This step includes identifying the following:

• The types of criteria to consider, such as customer expectations and requirements, technology limitations, environmental impact, safety, risks, total ownership and life cycle costs, and schedule impact;

• The acceptable range and scale of the criteria; and

• The rank of each criterion by its importance.

Decision criteria are requirements for individually assessing the options and alternatives being considered. Typical decision criteria include cost, schedule, risk, safety, mission success, and supportability. However, considerations should also include technical criteria specific to the decision being made. Criteria should be objective and measurable. Criteria should also permit differentiating among options or alternatives. Some criteria may not be meaningful to a decision; however, they should be documented as having been considered. Criteria may be mandatory (i.e., “shall have”) or enhancing. An option that does


not meet mandatory criteria should be disregarded. For complex decisions, criteria can be grouped into categories or objectives.

6.8.1.2.2  Identify Alternative Solutions to Address Decision Issues
With the decision need well understood, alternatives can be identified that fit the mission and system context. There may be several alternatives that could potentially satisfy the decision criteria. Alternatives can be found from design options, operational options, cost options, and/or schedule options.

Almost every decision will have options to choose from. These options are often fairly clear within the mission and system context once the decision need is understood. In cases where the approach has uncertainty, there are several methods to help generate various options. Brainstorming decision options with those knowledgeable of the context and decision can provide a good list of candidate alternatives. A literature search of related systems and approaches may also identify some possible options. All possible options should be considered. This can get unwieldy if a large number of variations is possible. A “trade tree” (discussed later) is an excellent way to prune the set of variations before extensive analysis is undertaken, and to convey to other stakeholders the basis for that pruning.

A good understanding of decision need and criteria will include the definition of primary and secondary factors. Options should be focused on primary factors in the decision as defined by the decision criteria. Non-primary factors (i.e., secondary, tertiary) can be included in evaluations but should not, in general, define separate alternatives. This will require some engineering judgment that is based on the mission and system context as well as the identified decision criteria. Some options may quickly drop out of consideration as the analysis is conducted. It is important to document the fact that these options were considered. A few decisions might only have one option. It is a best practice to document a decision matrix for a major decision even if only one alternative is determined to be viable. (Sometimes doing nothing or not making a decision is an option.)

6.8.1.2.3  Select Evaluation Methods and Tools
Based on the decision to be made, various approaches can be taken to evaluate identified alternatives. These can range from simple discussion meetings with contributing and affected stakeholders to more formal evaluation methods. In selecting the approach, the mission and system context should be kept in mind, and the complexity of the decision analysis should fit the complexity of the mission, system, and corresponding decision.

Evaluation methods and tools/techniques to be used should be selected based on the purpose for analyzing a decision and on the availability of the information used to support the method and/or tool. Typical evaluation methods include: simulations; weighted trade-off matrices; engineering, manufacturing, cost, and technical opportunity trade studies; surveys; human-in-the-loop testing; extrapolations based on field experience and prototypes; user review and comment; and testing. Section 6.8.2 provides several options.

6.8.1.2.4  Evaluate Alternative Solutions with the Established Criteria and Selected Methods
The performance of each alternative with respect to each chosen performance measure is evaluated. In all but the simplest cases, some consideration of uncertainty is warranted. Uncertainty matters in a particular analysis only if there is a non-zero probability that uncertainty reduction could alter the ranking of alternatives. If this condition holds, then it is necessary to consider the value of reducing that uncertainty and act accordingly.
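The robustness test in Section 6.8.1.2.4, asking whether there is a non-zero probability that uncertainty reduction could alter the ranking of alternatives, can be sketched with a short Monte Carlo simulation. The sketch below is illustrative only: the two alternatives, their mean scores, and their standard deviations are hypothetical values, not data from this handbook.

```python
import random

# Hypothetical performance estimates for two alternatives: (mean score,
# standard deviation). Higher score is better. Illustrative values only.
alternatives = {
    "Alternative A": (7.0, 1.5),
    "Alternative B": (6.5, 1.5),
}

def ranking_flip_probability(alts, trials=100_000, seed=1):
    """Estimate the probability that the nominal ranking (by mean score)
    would change once the uncertainty in each score is resolved."""
    rng = random.Random(seed)
    nominal_best = max(alts, key=lambda name: alts[name][0])
    flips = 0
    for _ in range(trials):
        # Draw one possible "true" score for each alternative.
        sampled = {name: rng.gauss(mean, sd) for name, (mean, sd) in alts.items()}
        if max(sampled, key=sampled.get) != nominal_best:
            flips += 1
    return flips / trials

p_flip = ranking_flip_probability(alternatives)
print(f"Probability the ranking would flip: {p_flip:.2f}")
```

With these hypothetical numbers the estimated flip probability comes out near 0.4, signaling a non-robust ranking; per FIGURE 6.8-2, the team would then ask whether reducing that uncertainty is net beneficial given cost and schedule impacts.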


Regardless of the methods or tools used, results should include the following:

• Evaluation of assumptions related to evaluation criteria and of the evidence that supports the assumptions; and

• Evaluation of whether uncertainty in the values for alternative solutions affects the evaluation.

When decision criteria have different measurement bases (e.g., numbers, money, weight, dates), normalization can be used to establish a common base for mathematical operations. The process of “normalization” is making a scale so that all different kinds of criteria can be compared or added together. This can be done informally (e.g., low, medium, high), on a scale (e.g., 1-3-9), or more formally with a tool. No matter how normalization is done, the most important thing to remember is to have operational definitions of the scale. An operational definition is a repeatable, measurable number. For example, “high” could mean “a probability of 67 percent and above.” “Low” could mean “a probability of 33 percent and below.” For complex decisions, decision tools usually provide an automated way to normalize. It is important to question and understand the operational definitions for the weights and scales of the tool.

NOTE: Completing the decision matrix can be thought of as a default evaluation method. Completing the decision matrix is iterative. Each cell for each criterion and each option needs to be completed by the team. Use evaluation methods as needed to complete the entire decision matrix.

6.8.1.2.5  Select Recommended Solutions from the Alternatives Based on the Evaluation Criteria and Report to the Decision-Maker
Once the decision alternative evaluation is completed, recommendations should be brought back to the decision-maker, including an assessment of the robustness of the ranking (i.e., whether the uncertainties are such that reducing them could credibly change the ranking of the alternatives). Generally, a single alternative should be recommended. However, if the alternatives do not significantly differ, or if uncertainty reduction could credibly alter the ranking, the recommendation should include all closely ranked alternatives for a final selection by the decision-maker. In any case, the decision-maker is always free to select any alternative or ask for additional alternatives to be assessed (often with updated guidance on selection criteria). This step includes documenting the information, including assumptions and limitations of the evaluation methods used, and analysis of the uncertainty in the analysis of the alternatives’ performance that justifies the recommendations made and gives the impacts of taking the recommended course of action, including whether further uncertainty reduction would be justifiable.

The highest score (e.g., percentage, total score) is typically the option that is recommended to management. If a different option is recommended, an explanation should be provided as to why the lower score is preferred. Usually, if an alternative having a lower score is recommended, the “risks” or “disadvantages” were too great for the highest-ranking alternative, indicating the scoring methods did not properly rank the alternatives. Sometimes the benefits and advantages of a lower or close score outweigh the highest score. If this occurs, the decision criteria should be reevaluated, not only the weights, but the


basic definitions of what is being measured for each alternative. The criteria should be updated, with concurrence from the decision-maker, to more correctly reflect the suitability of each alternative.

6.8.1.2.6  Report Analysis Results
These results are reported to the appropriate stakeholders with recommendations, impacts, and corrective actions.

6.8.1.2.7  Capture Work Products
These work products may include the decision analysis guidelines, strategy, and procedures that were used; analysis/evaluation approach; criteria, methods, and tools used; analysis/evaluation assumptions made in arriving at recommendations; uncertainties; sensitivities of the recommended actions or corrective actions; and lessons learned.

6.8.1.3  Outputs
6.8.1.3.1  Alternative Selection and Decision Support Recommendations and Impacts
Once the technical team recommends an alternative to a NASA decision-maker (e.g., a NASA board, forum, or panel), all decision analysis information should be documented. The team should produce a report to document all major recommendations to serve as a backup to any presentation materials used. A report in conjunction with a decision matrix provides clearly documented rationale for the presentation materials (especially for complex decisions). Decisions are typically captured in meeting minutes and should be captured in the report as well. Based on the mission and system context and the decision made, the report may be a simple white paper or a more formally formatted document. The important characteristic of the report is the content, which fully documents the decision needed, assessments done, recommendations, and decision finally made.

This report includes the following:

• mission and system context for the decision
• decision needed and intended outcomes
• decision criteria
• identified alternative solutions
• decision evaluation methods and tools employed
• assumptions, uncertainties, and sensitivities in the evaluations and recommendations
• results of all alternative evaluations
• alternative recommendations
• final decision made with rationale
• lessons learned

Typical information captured in a decision report is shown in TABLE 6.8-1.

6.8.2  Decision Analysis Guidance
Refer to Section 6.8.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on decision analysis methods supporting all SE processes and phases, including:

• trade studies,
• cost-benefit analysis,
• influence diagrams,
• decision trees,
• analytic hierarchy process,
• Borda counting, and
• utility analysis.

Additional information on tools for decision making can be found in NASA Reference Publication 1358, System Engineering “Toolbox” for Design-Oriented Engineers, located at https://nen.nasa.gov/web/se/doc-repository.
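As a concrete illustration of the normalization and weighted decision matrix discussed in this section, the sketch below scores three options against three criteria. The options, criteria, weights, and raw scores are invented for illustration and are not from this handbook. Min-max normalization maps each criterion onto a common 0-to-1 scale, with 1 always best, so that criteria with different measurement bases can be weighted and summed.

```python
# Hypothetical criteria: weight, raw scores per option, and whether a
# higher raw value is better. Illustrative values only.
criteria = {
    "Life cycle cost ($M)":    (0.4, {"Option A": 120.0, "Option B": 90.0, "Option C": 150.0}, False),
    "Performance margin (%)":  (0.4, {"Option A": 25.0,  "Option B": 15.0, "Option C": 30.0},  True),
    "Schedule risk (1-3-9)":   (0.2, {"Option A": 3.0,   "Option B": 1.0,  "Option C": 9.0},   False),
}

def normalize(scores, higher_is_better):
    """Min-max normalize raw scores to [0, 1], where 1 is always best."""
    lo, hi = min(scores.values()), max(scores.values())
    span = hi - lo or 1.0  # guard against all-identical scores
    return {opt: ((s - lo) if higher_is_better else (hi - s)) / span
            for opt, s in scores.items()}

def weighted_totals(criteria):
    """Sum each option's weighted, normalized score across all criteria."""
    totals = {}
    for weight, scores, higher_is_better in criteria.values():
        for opt, norm in normalize(scores, higher_is_better).items():
            totals[opt] = totals.get(opt, 0.0) + weight * norm
    return totals

totals = weighted_totals(criteria)
for opt, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{opt}: {score:.2f}")
```

With these invented numbers, Option A (about 0.62) and Option B (0.60) are closely ranked while Option C trails (0.40); per the guidance in Section 6.8.1.2.5, both closely ranked alternatives would be carried forward to the decision-maker, together with the operational definitions behind each weight and scale.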


TABLE 6.8‑1  Typical Information to Capture in a Decision Report

1  Executive Summary: Provide a short half-page executive summary of the report:
• Recommendation (short summary—1 sentence)
• Problem/issue requiring a decision (short summary—1 sentence)

2  Problem/Issue Description: Describe the problem/issue that requires a decision. Provide background, history, the decision maker(s) (e.g., board, panel, forum, council), and decision recommendation team, etc.

3  Decision Matrix Setup Rationale: Provide the rationale for setting up the decision matrix:
• Criteria selected
• Options selected
• Weights selected
• Evaluation methods selected
Provide a copy of the setup decision matrix.

4  Decision Matrix Scoring Rationale: Provide the rationale for the scoring of the decision matrix. Provide the results of populating the scores of the matrix using the evaluation methods selected.

5  Final Decision Matrix: Cut and paste the final spreadsheet into the document. Also include any important snapshots of the decision matrix.

6  Risk/Benefits: For the final options being considered, document the risks and benefits of each option.

7  Recommendation and/or Final Decision: Describe the recommendation that is being made to the decision maker(s) and the rationale for why the option was selected. Can also document the final decision in this section.

8  Dissent: If applicable, document any dissent with the recommendation. Document how dissent was addressed (e.g., decision matrix, risk).

9  References: Provide any references.

A  Appendices: Provide the results of the literature search, including lessons learned, previous related decisions, and previous related dissent. Also document any detailed data analysis and risk analysis used for the decision. Can also document any decision metrics.

Appendix A: Acronyms

AADL Architecture Analysis and Design Language
AD2 Advancement Degree of Difficulty Assessment
AIAA American Institute of Aeronautics and Astronautics
AO Announcement of Opportunity
AS9100 Aerospace Quality Management Standard
ASME American Society of Mechanical Engineers
ASQ American Society for Quality
CAIB Columbia Accident Investigation Board
CCB Configuration Control Board
CDR Critical Design Review
CE Concurrent Engineering or Chief Engineer
CEQ Council on Environmental Quality
CERR Critical Event Readiness Review
CHSIP Commercial Human Systems Integration Processes
CI Configuration Item
CM Configuration Management
CMO Configuration Management Organization
ConOps Concept of Operations
COSPAR Committee on Space Research
COTS Commercial Off-The-Shelf
CPI Critical Program Information
CR Change Request
CRM Continuous Risk Management
CSA Configuration Status Accounting
D&C Design and Construction
DDT&E Design, Development, Test, and Evaluation
DM Data Management
DOD (U.S.) Department of Defense
DODAF DOD Architecture Framework
DR Decommissioning Review
DRM Design Reference Mission
DRR Disposal Readiness Review
EDL Entry, Descent, and Landing
EEE Electrical, Electronic, and Electromechanical
EFFBD Enhanced Functional Flow Block Diagram
EIA Electronic Industries Alliance
EMC Electromagnetic Compatibility
EMI Electromagnetic Interference
EO (U.S.) Executive Order
EOM End of Mission
EVM Earned Value Management
FA Formulation Agreement
FAD Formulation Authorization Document
FAR Federal Acquisition Regulation
FCA Functional Configuration Audit
FFBD Functional Flow Block Diagram
FIPS Federal Information Processing Standard
FM Fault Management
FMEA Failure Modes and Effects Analysis
FMR Financial Management Requirements
FRR Flight Readiness Review
FTE Full Time Equivalent
GEO Geostationary
GOTS Government Off-The-Shelf
GSE Government-Supplied Equipment or Ground Support Equipment
GSFC Goddard Space Flight Center
HCD Human-Centered Design
HF Human Factors
HITL Human-In-The-Loop
HQ Headquarters
HSI Human Systems Integration
HSIP Human System Integration Plan
HWIL HardWare-In-the-Loop
I&T Integration and Test
ICD Interface Control Document/Drawing
ICP Interface Control Plan
IDD Interface Definition Document


IDEF0 Integration Definition (for functional modeling)
IEEE Institute of Electrical and Electronics Engineers
ILS Integrated Logistics Support
INCOSE International Council on Systems Engineering
IPT Integrated Product Team
IRD Interface Requirements Document
ISO International Organization for Standardization
IT Information Technology
ITA Internal Task Agreement
ITAR International Traffic in Arms Regulation
IV&V Independent Verification and Validation
IVHM Integrated Vehicle Health Management
IWG Interface Working Group
JCL Joint (cost and schedule) Confidence Level
JPL Jet Propulsion Laboratory
KBSI Knowledge Based Systems, Inc.
KDP Key Decision Point
KDR Key Driving Requirement
KPP Key Performance Parameter
KSC Kennedy Space Center
LCC Life Cycle Cost
LEO Low Earth Orbit or Low Earth Orbiting
M&S Modeling and Simulation or Models and Simulations
MBSE Model-Based Systems Engineering
MCR Mission Concept Review
MDAA Mission Directorate Associate Administrator
MDR Mission Definition Review
MEL Master Equipment List
MODAF (U.K.) Ministry of Defense Architecture Framework
MOE Measure of Effectiveness
MOP Measure of Performance
MOTS Modified Off-The-Shelf
MOU Memorandum of Understanding
MRB Material Review Board
MRR Mission Readiness Review
MSFC Marshall Space Flight Center
NASA (U.S.) National Aeronautics and Space Administration
NEN NASA Engineering Network
NEPA National Environmental Policy Act
NFS NASA FAR Supplement
NGO Needs, Goals, and Objectives
NIAT NASA Integrated Action Team
NID NASA Interim Directive
NOA New Obligation Authority
NOAA (U.S.) National Oceanic and Atmospheric Administration
NODIS NASA Online Directives Information System
NPD NASA Policy Directive
NPR NASA Procedural Requirements
NRC (U.S.) Nuclear Regulatory Commission
NSTS National Space Transportation System
OCE (NASA) Office of the Chief Engineer
OCIO (NASA) Office of the Chief Information Officer
OCL Object Constraint Language
OMB (U.S.) Office of Management and Budget
ORR Operational Readiness Review
OTS Off-the-Shelf
OWL Web Ontology Language
PBS Product Breakdown Structure
PCA Physical Configuration Audit or Program Commitment Agreement
PD/NSC (U.S.) Presidential Directive/National Security Council
PDR Preliminary Design Review
PFAR Post-Flight Assessment Review
PI Performance Index or Principal Investigator
PIR Program Implementation Review
PKI Public Key Infrastructure
PLAR Post-Launch Assessment Review
PM Program Manager or Project Manager
PMC Program Management Council
PPD (U.S.) Presidential Policy Directive
PRA Probabilistic Risk Assessment
PRD Project Requirements Document
PRR Production Readiness Review
QA Quality Assurance
QVT Query View Transformations
R&M Reliability and Maintainability
R&T Research and Technology
RACI Responsible, Accountable, Consulted, Informed
REC Record of Environmental Consideration
RF Radio Frequency


RFA Requests for Action
RFP Request for Proposal
RID Review Item Discrepancy or Review Item Disposition
RIDM Risk-Informed Decision-Making
RM Risk Management
RMA Rapid Mission Architecture
RUL Remaining Useful Life
SAR System Acceptance Review or Safety Analysis Report (DOE)
SBU Sensitive But Unclassified
SDR Program/System Definition Review
SE Systems Engineering
SECoP Systems Engineering Community of Practice
SEMP Systems Engineering Management Plan
SI International System of Units (French: Système international d’unités)
SIR System Integration Review
SMA Safety and Mission Assurance
SME Subject Matter Expert
SOW Statement Of Work
SP Special Publication
SRD System Requirements Document
SRR Program/System Requirements Review
SRS Software Requirements Specification
STI Scientific and Technical Information
STS Space Transportation System
SysML System Modeling Language
T&E Test and Evaluation
TA Technical Authority
TBD To Be Determined
TBR To Be Resolved
ToR Terms of Reference
TPM Technical Performance Measure
TRL Technology Readiness Level
TRR Test Readiness Review
TVC Thrust Vector Controller
UFE Unallocated Future Expenses
UML Unified Modeling Language
V&V Verification and Validation
WBS Work Breakdown Structure
WYE Work Year Equivalent
XMI XML Metadata Interchange
XML Extensible Markup Language

Appendix B: Glossary

Acceptable Risk: The risk that is understood and agreed to by the program/project, governing authority, mission directorate, and other customer(s) such that no further specific mitigating action is required.

Acquisition: The process for obtaining the systems, research, services, construction, and supplies that NASA needs to fulfill its missions. Acquisition, which may include procurement (contracting for products and services), begins with an idea or proposal that aligns with the NASA Strategic Plan and fulfills an identified need and ends with the completion of the program or project or the final disposition of the product or service.

Activity: A set of tasks that describe the technical effort to accomplish a process and help generate expected outcomes.

Advancement Degree of Difficulty Assessment (AD2): The process to develop an understanding of what is required to advance the level of system maturity.

Allocated Baseline (Phase C): The allocated baseline is the approved performance-oriented configuration documentation for a CI to be developed that describes the functional and interface characteristics that are allocated from a higher level requirements document or a CI and the verification required to demonstrate achievement of those specified characteristics. The allocated baseline extends the top-level performance requirements of the functional baseline to sufficient detail for initiating manufacturing or coding of a CI. The allocated baseline is controlled by NASA. The allocated baseline(s) is typically established at the Preliminary Design Review.

Analysis: Use of mathematical modeling and analytical techniques to predict the compliance of a design to its requirements based on calculated data or data derived from lower system structure end product validations.

Analysis of Alternatives: A formal analysis method that compares alternative approaches by estimating their ability to satisfy mission requirements through an effectiveness analysis and by estimating their life cycle costs through a cost analysis. The results of these two analyses are used together to produce a cost-effectiveness comparison that allows decision makers to assess the relative value or potential programmatic returns of the alternatives. An analysis of alternatives broadly examines multiple elements of program or project alternatives (including technical performance, risk, LCC, and programmatic aspects).

Analytic Hierarchy Process: A multi-attribute methodology that provides a proven, effective means to deal with complex decision-making and can assist with identifying and weighting selection criteria, analyzing the data collected for the criteria, and expediting the decision-making process.

Anomaly: The unexpected performance of intended function.


Approval: Authorization by a required management official to proceed with a proposed course of action. Approvals are documented.

Approval (for Implementation): The acknowledgment by the decision authority that the program/project has met stakeholder expectations and formulation requirements, and is ready to proceed to implementation. By approving a program/project, the decision authority commits the budget resources necessary to continue into implementation. Approval (for Implementation) is documented.

Architecture (System): Architecture is the high-level unifying structure that defines a system. It provides a set of rules, guidelines, and constraints that defines a cohesive and coherent structure consisting of constituent parts, relationships and connections that establish how those parts fit and work together. It addresses the concepts, properties and characteristics of the system and is represented by entities such as functions, functional flows, interfaces, relationships, resource flow items, physical elements, containers, modes, links, communication resources, etc. The entities are not independent but interrelated in the architecture through the relationships between them (NASA HQ).

Architecture (ISO Definition): Fundamental concepts or properties of a system in its environment embodied in its elements, relationships, and in the principles of its design and evolution (ISO 42010).

As-Deployed Baseline: The as-deployed baseline occurs at the Operational Readiness Review. At this point, the design is considered to be functional and ready for flight. All changes will have been incorporated into the documentation.

Automated: Automation refers to the allocation of system functions to machines (hardware or software) versus humans.

Autonomous: Autonomy refers to the relative locations and scope of decision-making and control functions between two locations within a system or across the system boundary.

Baseline: An agreed-to set of requirements, designs, or documents that will have changes controlled through a formal approval and monitoring process.

Bidirectional Traceability: The ability to trace any given requirement/expectation to its parent requirement/expectation and to its allocated children requirements/expectations.

Brassboard: A medium fidelity functional unit that typically tries to make use of as much operational hardware/software as possible and begins to address scaling issues associated with the operational system. It does not have the engineering pedigree in all aspects, but is structured to be able to operate in simulated operational environments in order to assess performance of critical functions.

Breadboard: A low fidelity unit that demonstrates function only, without respect to form or fit in the case of hardware, or platform in the case of software. It often uses commercial and/or ad hoc components and is not intended to provide definitive information regarding operational performance.

Component Facilities: Complexes that are geographically separated from the NASA Center or institution to which they are assigned, but are still part of the Agency.

Concept of Operations (ConOps) (Concept Documentation): Developed early in Pre-Phase A, the ConOps describes the overall high-level concept of how the system will be used to meet stakeholder expectations, usually in a time-sequenced manner. It describes the system from an operational perspective and helps facilitate an understanding of the system


goals. It stimulates the development of the requirements and architecture related to the user elements of the system. It serves as the basis for subsequent definition documents and provides the foundation for the long-range operational planning activities.

Concurrence: A documented agreement by a management official that a proposed course of action is acceptable.

Concurrent Engineering: Design in parallel rather than serial engineering fashion. It is an approach to product development that brings manufacturing, testing, assurance, operations, and other disciplines into the design cycle to ensure all aspects are incorporated into the design and thus reduce overall product development time.

Configuration Items (CI): Any hardware, software, or combination of both that satisfies an end use function and is designated for separate configuration management. For example, configuration items can be referred to by an alphanumeric identifier, which also serves as the unchanging base for the assignment of serial numbers to uniquely identify individual units of the CI.

Configuration Management Process: A management discipline that is applied over a product's life cycle to provide visibility into and to control changes to performance and functional and physical characteristics. It ensures that the configuration of a product is known and reflected in product information, that any product change is beneficial and is effected without adverse consequences, and that changes are managed.

Context Diagram: A diagram that shows external systems that impact the system being designed.

Continuous Risk Management: A systematic and iterative process that efficiently identifies, analyzes, plans, tracks, controls, communicates, and documents risks associated with implementation of designs, plans, and processes.

Contract: A mutually binding legal relationship obligating the seller to furnish the supplies or services (including construction) and the buyer to pay for them. It includes all types of commitments that obligate the Government to an expenditure of appropriated funds and that, except as otherwise authorized, are in writing. In addition to bilateral instruments, contracts include (but are not limited to) awards and notices of awards; job orders or task letters issued under basic ordering agreements; letter contracts; orders, such as purchase orders under which the contract becomes effective by written acceptance or performance; and bilateral contract modifications. Contracts do not include grants and cooperative agreements.

Contractor: An individual, partnership, company, corporation, association, or other service having a contract with the Agency for the design, development, manufacture, maintenance, modification, operation, or supply of items or services under the terms of a contract to a program or project. Research grantees, research contractors, and research subcontractors are excluded from this definition.

Control Account Manager: A manager responsible for a control account and for the planning, development, and execution of the budget content for those accounts.

Control Gate (or milestone): A defined point in the program/project life cycle where the decision authority can evaluate progress and determine next actions. These may include a key decision point, life cycle review, or other milestones identified by the program/project.

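The Continuous Risk Management cycle defined above (identify, analyze, plan, track, control, communicate, and document) is often supported by a scored risk register. The sketch below is illustrative only, not a NASA tool: the 5x5 likelihood-by-consequence scoring is one common convention, and the `Risk` class and the sample risk statement are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One risk record, scored on a common 5x5 likelihood/consequence scale."""
    statement: str
    likelihood: int   # 1 (rare) .. 5 (near certain)
    consequence: int  # 1 (negligible) .. 5 (catastrophic)
    history: list = field(default_factory=list)  # tracking log of prior scores

    @property
    def score(self) -> int:
        # Combined score (1..25) used for plan/control threshold decisions
        return self.likelihood * self.consequence

    def track(self, note: str, likelihood: int, consequence: int) -> None:
        """Record a tracking update; analysis is repeated each review cycle."""
        self.history.append((note, self.likelihood, self.consequence))
        self.likelihood, self.consequence = likelihood, consequence

# Identify and analyze a hypothetical risk
r = Risk("Battery cell supplier may slip delivery", likelihood=4, consequence=3)
assert r.score == 12
# Track: a mitigation (second source qualified) lowers the likelihood
r.track("Second source qualified", likelihood=2, consequence=3)
assert r.score == 6 and len(r.history) == 1
```

The same loop repeats throughout the life cycle: each `track()` call documents the prior assessment before recording the new one.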

Cost-Benefit Analysis: A methodology to determine the advantage of one alternative over another in terms of equivalent cost or benefits. It relies on totaling positive factors and subtracting negative factors to determine a net result.

Cost-Effectiveness Analysis: A systematic quantitative method for comparing the costs of alternative means of achieving the same equivalent benefit for a specific objective.

Critical Design Review: A review that demonstrates that the maturity of the design is appropriate to support proceeding with full-scale fabrication, assembly, integration, and test, and that the technical effort is on track to complete the system development, meeting performance requirements within the identified cost and schedule constraints.

Critical Event (or key event): An event in the operations phase of the mission that is time-sensitive and is required to be accomplished successfully in order to achieve mission success. These events should be considered early in the life cycle as drivers for system design.

Critical Event Readiness Review: A review that evaluates the readiness of a project's flight system to execute the critical event during flight operation.

Customer: The organization or individual that has requested a product and will receive the product to be delivered. The customer may be an end user of the product, the acquiring agent for the end user, or the requestor of the work products from a technical effort. Each product within the system hierarchy has a customer.

Data Management: DM is used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system.

Decision Analysis Process: A methodology for making decisions that offers techniques for modeling decision problems mathematically and finding optimal decisions numerically. The methodology entails identifying alternatives, one of which should be decided upon; possible events, one of which occurs thereafter; and outcomes, each of which results from a combination of decision and event.

Decision Authority: The individual authorized by the Agency to make important decisions for programs and projects under his or her authority.

Decision Matrix: A methodology for evaluating alternatives in which valuation criteria are typically displayed in rows on the left side of the matrix and alternatives are the column headings of the matrix. A "weight" is typically assigned to each criterion.

Decision Support Package: Documentation submitted in conjunction with formal reviews and change requests.

Decision Tree: A decision model that displays the expected consequences of all decision alternatives by making discrete all "chance" nodes, and, based on this, calculating and appropriately weighting the possible consequences of all alternatives.

Decommissioning Review: A review that confirms the decision to terminate or decommission a system and assesses the readiness for the safe decommissioning and disposal of system assets. The DR is normally held near the end of routine mission operations upon accomplishment of planned mission objectives. It may be advanced if some unplanned event gives rise to a need to prematurely terminate the mission, or delayed if operational life is extended to permit additional investigations.

Deliverable Data Item: Consists of technical data, such as requirements specifications, design


documents, management data plans, and metrics reports, that have been identified as items to be delivered with an end product.

Demonstration: Showing that the use of an end product achieves the individual specified requirement (verification) or stakeholder expectation (validation). It is generally a basic confirmation of performance capability, differentiated from testing by the lack of detailed data gathering. Demonstrations can involve the use of physical models or mock-ups; for example, a requirement that all controls shall be reachable by the pilot could be verified by having a pilot perform flight-related tasks in a cockpit mock-up or simulator. A demonstration could also be the actual operation of the end product by highly qualified personnel, such as test pilots, who perform a one-time event that demonstrates a capability to operate at extreme limits of system performance.

Derived Requirements: Requirements arising from constraints, consideration of issues implied but not explicitly stated in the high-level direction provided by NASA Headquarters and Center institutional requirements, factors introduced by the selected architecture, and the design. These requirements are finalized through requirements analysis as part of the overall systems engineering process and become part of the program or project requirements baseline.

Descope: As a verb, take out of (or remove from) the scope of a project. As a noun, as in "performance descope," it indicates the process or the result of the process of narrowing the scope; i.e., removing part of the original scope.

Design Solution Definition Process: The process used to translate the outputs of the logical decomposition process into a design solution definition. It includes transforming the defined logical decomposition models and their associated sets of derived technical requirements into alternative solutions and analyzing each alternative to be able to select a preferred alternative and fully define that alternative into a final design solution that will satisfy the technical requirements.

Designated Governing Authority: For the technical effort, this is the Center Director or the person who has been designated by the Center Director to ensure the appropriate level of technical management oversight. For large programs, this will typically be the Engineering Technical Authority. For smaller projects, this function can be delegated to line managers.

Detection: Determination that system state or behavior is different from expected performance.

Diagnosis: Determining the possible locations and/or causes of an anomaly or a failure.

Discrepancy: Any observed variance from, lack of agreement with, or contradiction to the required or expected outcome, configuration, or result.

Earned Value: The sum of the budgeted cost for tasks and products that have actually been produced (completed or in progress) at a given time in the schedule.

Earned Value Management: A tool for measuring and assessing project performance through the integration of technical scope with schedule and cost objectives during the execution of the project. EVM provides quantification of technical progress,


enabling management to gain insight into project status and project completion costs and schedules. Two essential characteristics of successful EVM are EVM system data integrity and carefully targeted monthly EVM data analyses (i.e., risky WBS elements).

Emergent Behavior: An unanticipated behavior shown by a system due to interactions between large numbers of simple components of that system.

End Product: The hardware/software or other product that performs the operational functions. This product is to be delivered to the next product layer or to the final customer.

Enabling Products: The life cycle support products and services (e.g., production, test, deployment, training, maintenance, and disposal) that facilitate the progression and use of the operational end product through its life cycle. Since the end product and its enabling products are interdependent, they are viewed as a system. Project responsibility thus extends to acquiring services from the relevant enabling products in each life cycle phase. When a suitable enabling product does not already exist, the project that is responsible for the end product may also be responsible for creating and using the enabling product.

Engineering Unit: A high fidelity unit that demonstrates critical aspects of the engineering processes involved in the development of the operational unit. Engineering test units are intended to closely resemble the final product (hardware/software) to the maximum extent possible and are built and tested so as to establish confidence that the design will function in the expected environments. In some cases, the engineering unit will become the final product, assuming that proper traceability has been exercised over the components and hardware handling.

Enhanced Functional Flow Block Diagram: A block diagram that represents control flows and data flows as well as system functions and flow.

Entrance Criteria: Guidance for minimum accomplishments each project needs to fulfill prior to a life cycle review.

Environmental Impact: The direct, indirect, or cumulative beneficial or adverse effect of an action on the environment.

Environmental Management: The activity of ensuring that program and project actions and decisions that potentially impact or damage the environment are assessed and evaluated during the formulation and planning phase and reevaluated throughout implementation. This activity is performed according to all NASA policy and Federal, state, and local environmental laws and regulations.

Establish (with respect to processes): The act of developing policy, work instructions, or procedures to implement process activities.

Evaluation: The continual self- and independent assessment of the performance of a program or project and incorporation of the evaluation findings to ensure adequacy of planning and execution according to plan.

Extensibility: The ability of a decision to be extended to other applications.

Failure: The inability of a system, subsystem, component, or part to perform its required function within specified limits (Source: NPR 8715.3 and Avizienis 2004).

Failure Tolerance: The ability to sustain a certain number of failures and still retain capability (Source:


NPR 8705.2). A function should be preserved despite the presence of any of a specified number of coincident, independent failure causes of specified types.

Fault: A physical or logical cause, which explains a failure (Source: Avizienis 2004).

Fault Identification: Determining the possible locations of a failure or anomaly cause(s), to a defined level of granularity.

Fault Isolation: The act of containing the effects of a fault to limit the extent of failure.

Fault Management: A specialty engineering discipline that encompasses practices that enable an operational system to contain, prevent, detect, diagnose, identify, respond to, and recover from conditions that may interfere with nominal mission operations.

Fault Tolerance: See "Failure Tolerance."

Feasible: Initial evaluations show that the concept credibly falls within the technical, cost, and schedule constraints for the project.

Flexibility: The ability of a decision to support more than one current application.

Flight Readiness Review: A review that examines tests, demonstrations, analyses, and audits that determine the system's readiness for a safe and successful flight/launch and for subsequent flight operations. It also ensures that all flight and ground hardware, software, personnel, and procedures are operationally ready.

Float: The amount of time that a task in a project network schedule can be delayed without causing a delay to subsequent tasks or the project completion date.

Formulation Phase: The first part of the NASA management life cycle defined in NPR 7120.5 where system requirements are baselined, feasible concepts are determined, a system definition is baselined for the selected concept(s), and preparation is made for progressing to the Implementation Phase.

Functional Analysis: The process of identifying, describing, and relating the functions a system should perform to fulfill its goals and objectives.

Functional Baseline (Phase B): The functional baseline is the approved configuration documentation that describes a system's or top-level CIs' performance requirements (functional, interoperability, and interface characteristics) and the verification required to demonstrate the achievement of those specified characteristics.

Functional Configuration Audit (FCA): Examines the functional characteristics of the configured product and verifies that the product has met, via test results, the requirements specified in its functional baseline documentation approved at the PDR and CDR plus any approved changes thereafter. FCAs will be conducted on both hardware- and software-configured products and will precede the PCA of the configured product.

Functional Decomposition: A subfunction under logical decomposition and design solution definition, it is the examination of a function to identify subfunctions necessary for the accomplishment of that function and functional relationships and interfaces.

Functional Flow Block Diagram: A block diagram that defines system functions and the time sequence of functional events.

Gantt Chart: A bar chart depicting start and finish dates of activities and products in the WBS.

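The Float entry above can be made concrete with the standard critical path method: a forward pass computes each task's earliest start/finish, a backward pass computes its latest start/finish, and float is the difference. This is an illustrative sketch only; the four-task network and its durations are invented.

```python
# tasks: name -> (duration in days, list of predecessor task names)
tasks = {
    "design":     (3, []),
    "build":      (5, ["design"]),
    "write_docs": (2, ["design"]),
    "test":       (2, ["build", "write_docs"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
# Dict insertion order is already a valid topological order here.
ES, EF = {}, {}
for name, (dur, preds) in tasks.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_end = max(EF.values())  # 10 days

# Backward pass: latest start (LS) and latest finish (LF).
LS, LF = {}, {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    succs = [s for s, (_, ps) in tasks.items() if name in ps]
    LF[name] = min((LS[s] for s in succs), default=project_end)
    LS[name] = LF[name] - dur

# Float: how far each task can slip without delaying the project.
float_ = {n: LS[n] - ES[n] for n in tasks}
assert float_ == {"design": 0, "build": 0, "write_docs": 3, "test": 0}
```

Tasks with zero float ("design", "build", "test") form the critical path; "write_docs" can slip three days without moving the completion date.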

Goal: Goals elaborate on the need and constitute a specific set of expectations for the system. They further define what we hope to accomplish by addressing the critical issues identified during the problem assessment. Goals need not be in a quantitative or measurable form, but they must allow us to assess whether the system has achieved them.

Government Mandatory Inspection Points: Inspection points required by Federal regulations to ensure 100 percent compliance with safety/mission-critical attributes when noncompliance can result in loss of life or loss of mission.

Health Assessment: The activity under Fault Management that carries out detection, diagnosis, and identification of faults and prediction of fault propagation states into the future.

Health Monitoring: The activity under Fault Management that implements system state data collection, storage, and reporting through sensing and communication.

Heritage (or legacy): Refers to the original manufacturer's level of quality and reliability that is built into the parts, which have been proven by (1) time in service, (2) number of units in service, (3) mean time between failure performance, and (4) number of use cycles.

Human-Centered Design: An approach to the development of interactive systems that focuses on making systems usable by ensuring that the needs, abilities, and limitations of the human user are met throughout the system's life cycle.

Human Factors Engineering: The discipline that studies human-system interfaces and provides requirements, standards, and guidelines to ensure the human component of an integrated system is able to function as intended.

Human Systems Integration: An interdisciplinary and comprehensive management and technical process that focuses on the integration of human considerations into the system acquisition and development processes to enhance human system design, reduce life cycle ownership cost, and optimize total system performance.

Implementation Phase: The part of the NASA management life cycle defined in NPR 7120.5 where the detailed design of system products is completed and the products to be deployed are fabricated, assembled, integrated, and tested, and the products are deployed to their customers or users for their assigned use or mission.

Incommensurable Costs: Costs that cannot be easily measured, such as controlling pollution on launch or mitigating debris.

Influence Diagram: A compact graphical and mathematical representation of a decision state. Its elements are decision nodes, chance nodes, value nodes, and arrows to indicate the relationships among these elements.

Inspection: The visual examination of a realized end product. Inspection is generally used to verify physical design features or specific manufacturer identification. For example, if there is a requirement that the safety arming pin has a red flag with the words "Remove Before Flight" stenciled on the flag in black letters, a visual inspection of the arming pin flag can be used to determine if this requirement was met.

Integrated Logistics Support: The management, engineering activities, analysis, and information management associated with design requirements definition, material procurement and distribution, maintenance, supply replacement, transportation, and disposal that are identified by space flight and ground systems supportability objectives.

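The Earned Value and Earned Value Management entries earlier in this glossary rest on three standard quantities — planned value, earned value, and actual cost — and the indices derived from them. A minimal illustration with invented numbers follows; the variance and index formulas are the standard EVM definitions, and EAC = BAC / CPI is one common estimate-at-completion form.

```python
# Standard earned value quantities for a reporting period (numbers invented):
PV = 100_000  # Planned Value: budgeted cost of work scheduled to date
EV = 80_000   # Earned Value: budgeted cost of work actually performed
AC = 90_000   # Actual Cost: what the performed work actually cost

CV = EV - AC   # Cost Variance: negative -> over cost
SV = EV - PV   # Schedule Variance: negative -> behind schedule
CPI = EV / AC  # Cost Performance Index: < 1.0 -> over cost
SPI = EV / PV  # Schedule Performance Index: < 1.0 -> behind schedule

BAC = 500_000        # Budget At Completion
EAC = BAC / CPI      # one common Estimate At Completion formula

assert (CV, SV) == (-10_000, -20_000)
assert round(CPI, 3) == 0.889 and SPI == 0.8
assert round(EAC) == 562_500  # current cost efficiency projects a ~12.5% overrun
```

Here the project has earned only $80k of its $100k planned work while spending $90k, so both indices are below 1.0 and the completion estimate grows accordingly.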

Interface Management Process: The process to assist in controlling product development when efforts are divided among parties (e.g., Government, contractors, geographically diverse technical teams) and/or to define and maintain compliance among the products that should interoperate.

Iterative: Application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements. (See "recursive" and "repeatable.")

Key Decision Point: The event at which the decision authority determines the readiness of a program/project to progress to the next phase of the life cycle (or to the next KDP).

Key Event (or Critical Event): See "Critical Event."

Key Performance Parameter: Those capabilities or characteristics (typically engineering-based or related to health and safety or operational performance) considered most essential for successful mission accomplishment. They characterize the major drivers of operational performance, supportability, and interoperability.

Knowledge Management: A collection of policies, processes, and practices relating to the use of intellectual- and knowledge-based assets in an organization.

Least-Cost Analysis: A methodology that identifies the least-cost project option for meeting the technical requirements.

Liens: Requirements or tasks not satisfied that have to be resolved within a certain assigned time to allow passage through a control gate to proceed.

Life Cycle Cost (LCC): The total of the direct, indirect, recurring, nonrecurring, and other related expenses both incurred and estimated to be incurred in the design, development, verification, production, deployment, prime mission operation, maintenance, support, and disposal of a project, including closeout, but not extended operations. The LCC of a project or system can also be defined as the total cost of ownership over the project or system's planned life cycle from Formulation (excluding Pre–Phase A) through Implementation (excluding extended operations). The LCC includes the cost of the launch vehicle.

Logical Decomposition Models: Mathematical or visual representations of the relationships between requirements as identified in the Logical Decomposition Process.

Logical Decomposition Process: A process used to improve understanding of the defined technical requirements and the relationships among the requirements (e.g., functional, behavioral, performance, and temporal) and to transform the defined set of technical requirements into a set of logical decomposition models and their associated set of derived technical requirements for lower levels of the system and for input to the Design Solution Definition Process.

Logistics (or Integrated Logistics Support): See "Integrated Logistics Support."

Loosely Coupled Program: Programs that address specific objectives through multiple space flight projects of varied scope. While each individual project has an assigned set of mission objectives, architectural and technological synergies and strategies that benefit the program as a whole are explored during the formulation process. For instance, Mars orbiters designed for more than one Mars year in orbit are required to carry a communication system to support present and future landers.

Maintain (with respect to establishment of processes): The act of planning the process, providing


resources, assigning responsibilities, training people, managing configurations, identifying and involving stakeholders, and monitoring process effectiveness.

Maintainability: The measure of the ability of an item to be retained in or restored to specified conditions when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance.

Margin: The allowances carried in budget, projected schedules, and technical performance parameters (e.g., weight, power, or memory) to account for uncertainties and risks. Margins are allocated in the formulation process based on assessments of risks and are typically consumed as the program/project proceeds through the life cycle.

Master Equipment List (MEL): The MEL is a listing of all the parts of a system and includes pertinent information such as serial numbers, model numbers, manufacturer, equipment type, system/element it is located within, etc.

Measure of Effectiveness (MOE): A measure by which a stakeholder's expectations are judged in assessing satisfaction with products or systems produced and delivered in accordance with the associated technical effort. The MOE is deemed to be critical not only to the acceptability of the product by the stakeholder but also to operational/mission usage. A MOE is typically qualitative in nature or not able to be used directly as a design-to requirement.

Measure of Performance (MOP): A quantitative measure that, when met by the design solution, helps ensure that a MOE for a product or system will be satisfied. These MOPs are given special attention during design to ensure that the MOEs with which they are associated are met. There are generally two or more measures of performance for each MOE.

Metric: The result of a measurement taken over a period of time that communicates vital information about the status or performance of a system, process, or activity. A metric should drive appropriate action.

Mission: A major activity required to accomplish an Agency goal or to effectively pursue a scientific, technological, or engineering opportunity directly related to an Agency goal. Mission needs are independent of any particular system or technological solution.

Mission Concept Review: A review that affirms the mission/project need and examines the proposed mission's objectives and the ability of the concept to fulfill those objectives.

Mission Definition Review: A life cycle review that evaluates whether the proposed mission/system architecture is responsive to the program mission/system functional and performance requirements and whether requirements have been allocated to all functional elements of the mission/system.

Mitigation: An action taken to mitigate the effects of a fault towards achieving existing or redefined system goals.

Model: A model is a physical, mathematical, or logical representation of reality.

Need: A single statement that drives everything else. It should relate to the problem that the system is supposed to solve, but not be the solution.

Nonconforming product: Software, hardware, or a combination of the two, whether produced or acquired, that is identified as not meeting documented requirements.

Objective: Specific target levels of outputs the system must achieve. Each objective should relate to a


particular goal. Generally, objectives should meet four criteria:

1. Specific: Objectives should aim at results and reflect what the system needs to do, but they don't outline how to implement the solution. They need to be specific enough to provide clear direction, so developers, customers, and testers can understand them.

2. Measurable: Objectives need to be quantifiable and verifiable. The project needs to monitor the system's success in achieving each objective.

3. Aggressive, but attainable: Objectives need to be challenging but reachable, and targets need to be realistic. At first, objectives "To Be Determined" (TBD) may be included until trade studies occur, operations concepts solidify, or technology matures. But objectives need to be feasible before starting to write requirements and design systems.

4. Results-oriented: Objectives need to focus on desired outputs and outcomes, not on the methods used to achieve the target (what, not how).

Objective Function (sometimes Cost Function): A mathematical expression of the values of combinations of possible outcomes as a single measure of cost-effectiveness.

Operational Environment: The environment in which the final product will be operated. In the case of space flight hardware/software, it is space. In the case of ground-based or airborne systems that are not directed toward space flight, it is the environments defined by the scope of operations. For software, the environment is defined by the operational platform.

Operational Readiness Review: A review that examines the actual system characteristics and the procedures used in the system or product's operation and ensures that all system and support (flight and ground) hardware, software, personnel, procedures, and user documentation accurately reflect the deployed state of the system and are operationally ready.

Operations Concept: A description of how the flight system and the ground system are used together to ensure that the concept of operation is reasonable. This might include how mission data of interest, such as engineering or scientific data, are captured, returned to Earth, processed, made available to users, and archived for future reference. (Source: NPR 7120.5)

Optimal Solution: A feasible solution that best meets criteria when balanced at a system level.

Other Interested Parties (Stakeholders): A subset of "stakeholders," other interested parties are groups or individuals who are not customers of a planned technical effort but may be affected by the resulting product or the manner in which the product is realized or used, or who have a responsibility for providing life cycle support services.

Peer Review: Independent evaluation by internal or external subject matter experts who do not have a vested interest in the work product under review. Peer reviews can be planned, focused reviews conducted on selected work products by the producer's peers to identify defects and issues prior to that work product moving into a milestone review or approval cycle.

Performance Standards: Define what constitutes acceptable performance by the provider. Common metrics for use in performance standards include cost and schedule.

Physical Configuration Audits (PCA) or configuration inspection: The PCA examines the physical configuration of the configured product and verifies that the product corresponds to the build-to (or code-to) product baseline documentation previously approved


at the CDR plus the approved changes thereafter. PCAs are conducted on both hardware- and software-configured products.

Post-Flight Assessment Review: Evaluates how well mission objectives were met during a mission, identifies all flight and ground system anomalies that occurred during the flight, and determines the actions necessary to mitigate or resolve the anomalies for future flights of the same spacecraft design.

Post-Launch Assessment Review: A review that evaluates the readiness of the spacecraft systems to proceed with full, routine operations after post-launch deployment. The review also evaluates the status of the project plans and the capability to conduct the mission with emphasis on near-term operations and mission-critical events.

Precedence Diagram: Workflow diagram that places activities in boxes connected by dependency arrows; typical of a Gantt chart.

Preliminary Design Review: A review that demonstrates that the preliminary design meets all system requirements with acceptable risk and within the cost and schedule constraints and establishes the basis for proceeding with detailed design. It will show that the correct design option has been selected, interfaces have been identified, and verification methods have been described.

Process: A set of activities used to convert inputs into desired outputs to generate expected outcomes and satisfy a purpose.

Producibility: A system characteristic associated with the ease and economy with which a completed design can be transformed (i.e., fabricated, manufactured, or coded) into a hardware and/or software realization.

Product: A part of a system consisting of end products that perform operational functions and enabling products that perform life cycle services related to the end product, or a result of the technical efforts in the form of a work product (e.g., plan, baseline, or test result).

Product Baseline (Phase D/E): The product baseline is the approved technical documentation that describes the configuration of a CI during the production, fielding/deployment, and operational support phases of its life cycle. The product baseline describes detailed physical or form, fit, and function characteristics of a CI; the selected functional characteristics designated for production acceptance testing; and the production acceptance test requirements.

Product Breakdown Structure: A hierarchical breakdown of the hardware and software products of a program/project.

Product Implementation Process: A process used to generate a specified product of a product layer through buying, making, or reusing in a form consistent with the product life cycle phase exit (success) criteria and that satisfies the design solution definition-specified requirements (e.g., drawings, specifications).

Product Integration Process: A process used to transform the design solution definition into the desired end product of the product layer through assembly and integration of lower-level validated end products in a form that is consistent with the product life cycle phase exit (success) criteria and that satisfies the design solution definition requirements (e.g., drawings, specifications).

Product Realization: The act of making, buying, or reusing a product, or the assembly and integration of lower-level realized products into a new product, as well as the verification and validation that the

NASA SYSTEMS ENGINEERING HANDBOOK 187
Appendix B: Glossary

product satisfies its appropriate set of requirements and the transition of the product to its customer.

Product Transition Process: A process used to transition a verified and validated end product that has been generated by product implementation or product integration to the customer at the next level in the system structure for integration into an end product or, for the top-level end product, transitioned to the intended end user.

Product Validation Process: A process used to confirm that a verified end product generated by product implementation or product integration fulfills (satisfies) its intended use when placed in its intended environment and to assure that any anomalies discovered during validation are appropriately resolved prior to delivery of the product (if validation is done by the supplier of the product) or prior to integration with other products into a higher-level assembled product (if validation is done by the receiver of the product). The validation is done against the set of baselined stakeholder expectations.

Product Verification Process: A process used to demonstrate that an end product generated from product implementation or product integration conforms to its design solution definition requirements as a function of the product life cycle phase and the location of the product layer end product in the system structure.

Production Readiness Review (PRR): A review for projects developing or acquiring multiple or similar systems greater than three or as determined by the project. The PRR determines the readiness of the system developers to efficiently produce the required number of systems. It ensures that the production plans, fabrication, assembly, integration-enabling products, operational support, and personnel are in place and ready to begin production.

Prognosis: The prediction of a system's future health states, degradation, and Remaining Useful Life (RUL).

Program: A strategic investment by a mission directorate or mission support office that has a defined architecture and/or technical approach, requirements, funding level, and a management structure that initiates and directs one or more projects. A program defines a strategic direction that the Agency has identified as critical.

Program/System Definition Review: A review that examines the proposed program architecture and the flowdown to the functional elements of the system. The proposed program's objectives and the concept for meeting those objectives are evaluated. Key technologies and other risks are identified and assessed. The baseline program plan, budgets, and schedules are presented.

Program Requirements: The set of requirements imposed on the program office, which are typically found in the program plan plus derived requirements that the program imposes on itself.

Program System Requirements Review: A review that evaluates the credibility and responsiveness of a proposed program requirements/architecture to the mission directorate requirements, the allocation of program requirements to the projects, and the maturity of the program's mission/system definition.

Programmatic Requirements: Requirements set by the mission directorate, program, project, and PI, if applicable. These include strategic scientific and exploration requirements, system performance requirements, and schedule, cost, and similar nontechnical constraints.

Project: A specific investment having defined goals, objectives, requirements, life cycle cost, a beginning,

and an end. A project yields new or revised products or services that directly address NASA's strategic needs. The products may be produced or the services performed wholly in-house; by partnerships with Government, industry, or academia; or through contracts with private industry.

Project Plan: The document that establishes the project's baseline for implementation, signed by the responsible program manager, Center Director, project manager, and the MDAA, if required.

Project Requirements: The set of requirements imposed on the project and developer, which are typically found in the project plan plus derived requirements that the project imposes on itself. It includes identification of activities and deliverables (end products and work products) and outputs of the development and operations.

Phase Product: An end product that is to be provided as a result of the activities of a given life cycle phase. The form depends on the phase—a product of early phases might be a simulation or model; a product of later phases may be the (final) end product itself.

Product Form: A representation of a product that depends on the development phase, current use, and maturity. Examples include mock-up, model, engineering unit, prototype unit, and flight unit.

Product Realization: The desired output from the application of the four product realization processes. The form of this product is dependent on the phase of the product life cycle and the phase exit (success) criteria.

Prototype: The prototype unit demonstrates form, fit, and function at a scale deemed to be representative of the final product operating in its operational environment. A subscale test article provides fidelity sufficient to permit validation of analytical models capable of predicting the behavior of full-scale systems in an operational environment. The prototype is used to "wring out" the design solution so that experience gained from the prototype can be fed back into design changes that will improve the manufacture, integration, and maintainability of a single flight item or the production run of several flight items.

Quality Assurance: An independent assessment performed throughout a product's life cycle in order to acquire confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements.

Realized Product: The end product that has been implemented/integrated, verified, validated, and transitioned to the next product layer.

Recovery: An action taken to restore the functions necessary to achieve existing or redefined system goals after a fault/failure occurs.

Recursive: Value is added to the system by the repeated application of processes to design next lower-layer system products or to realize next upper-layer end products within the system structure. This also applies to repeating the application of the same processes to the system structure in the next life cycle phase to mature the system definition and satisfy phase exit (success) criteria.

Relevant Stakeholder: A subset of the term "stakeholder" that applies to people or roles that are designated in a plan for stakeholder involvement. Since "stakeholder" may describe a very large number of people, a lot of time and effort would be consumed by attempting to deal with all of them. For this reason, "relevant stakeholder" is used in most practice statements to describe the people identified to contribute to a specific task.

Relevant Environment: Not all systems, subsystems, and/or components need to be operated in

the operational environment in order to satisfactorily address performance margin requirements or stakeholder expectations. Consequently, the relevant environment is the specific subset of the operational environment that is required to demonstrate critical "at risk" aspects of the final product performance in an operational environment.

Reliability: The measure of the degree to which a system ensures mission success by functioning properly over its intended life. It has a low and acceptable probability of failure, achieved through simplicity, proper design, and proper application of reliable parts and materials. In addition to long life, a reliable system is robust and fault tolerant.

Repeatable: A characteristic of a process that can be applied to products at any level of the system structure or within any life cycle phase.

Requirement: The agreed-upon need, desire, want, capability, capacity, or demand for personnel, equipment, facilities, or other resources or services by specified quantities for specific periods of time or at a specified time expressed as a "shall" statement. Acceptable form for a requirement statement is individually clear, correct, feasible to obtain, unambiguous in meaning, and can be validated at the level of the system structure at which it is stated. In pairs of requirement statements or as a set, collectively, they are not redundant, are adequately related with respect to terms used, and are not in conflict with one another.

Requirements Allocation Sheet: Documents the connection between allocated functions, allocated performance, and the physical system.

Requirements Management Process: A process used to manage the product requirements identified, baselined, and used in the definition of the products of each product layer during system design. It provides bidirectional traceability back to the top product layer requirements and manages the changes to established requirement baselines over the life cycle of the system products.

Risk: In the context of mission execution, risk is the potential for performance shortfalls that may be realized in the future with respect to achieving explicitly established and stated performance requirements. The performance shortfalls may be related to any one or more of the following mission execution domains: (1) safety, (2) technical, (3) cost, and (4) schedule. (Source: NPR 8000.4, Agency Risk Management Procedural Requirements)

Risk Assessment: An evaluation of a risk item that determines (1) what can go wrong, (2) how likely it is to occur, (3) what the consequences are, (4) what the uncertainties associated with the likelihood and consequences are, and (5) what the mitigation plans are.

Risk-Informed Decision Analysis Process: A five-step process focusing first on objectives and next on developing decision alternatives with those objectives clearly in mind and/or using decision alternatives that have been developed under other systems engineering processes. The later steps of the process interrelate heavily with the Technical Risk Management Process.

Risk Management: Risk management includes Risk-Informed Decision-Making (RIDM) and Continuous Risk Management (CRM) in an integrated framework. RIDM informs systems engineering decisions through better use of risk and uncertainty information in selecting alternatives and establishing baseline requirements. CRM manages risks over the course of the development and the Implementation Phase of the life cycle to ensure that

safety, technical, cost, and schedule requirements are met. This is done to foster proactive risk management, to better inform decision-making through better use of risk information, and then to more effectively manage Implementation risks by focusing the CRM process on the baseline performance requirements emerging from the RIDM process. (Source: NPR 8000.4, Agency Risk Management Procedural Requirements) These processes are applied at a level of rigor commensurate with the complexity, cost, and criticality of the program.

Safety: Freedom from those conditions that can cause death, injury, occupational illness, damage to or loss of equipment or property, or damage to the environment.

Search Space (or Alternative Space): The envelope of concept possibilities defined by design constraints and parameters within which alternative concepts can be developed and traded off.

Single-Project Programs: Programs that tend to have long development and/or operational lifetimes, represent a large investment of Agency resources, and have contributions from multiple organizations/agencies. These programs frequently combine program and project management approaches, which they document through tailoring.

Software: Computer programs, procedures, rules, and associated documentation and data pertaining to the development and operation of a computer system. Software also includes Commercial Off-The-Shelf (COTS), Government Off-The-Shelf (GOTS), Modified Off-The-Shelf (MOTS), embedded software, reuse, heritage, legacy, autogenerated code, firmware, and open source software components.

Note 1: For purposes of the NASA Software Release program only, the term "software," as redefined in NPR 2210.1, Release of NASA Software, does not include computer databases or software documentation.

Note 2: Definitions for the terms COTS, GOTS, heritage software, MOTS, legacy software, software reuse, and classes of software are provided in NPR 7150.2, NASA Software Engineering Requirements. (Source: NPD 7120.4, NASA Engineering and Program/Project Management Policy)

Solicitation: The vehicle by which information is solicited from contractors for the purpose of awarding a contract for products or services. Any request to submit offers or quotations to the Government. Solicitations under sealed bid procedures are called "invitations for bids." Solicitations under negotiated procedures are called "requests for proposals." Solicitations under simplified acquisition procedures may require submission of either a quotation or an offer.

Specification: A document that prescribes completely, precisely, and verifiably the requirements, design, behavior, or characteristics of a system or system component. In NPR 7123.1, "specification" is treated as a "requirement."

Stakeholder: A group or individual who is affected by or has an interest or stake in a program or project. There are two main classes of stakeholders. See "customers" and "other interested parties."

Stakeholder Expectations: A statement of needs, desires, capabilities, and wants that are not expressed as a requirement (not expressed as a "shall" statement) is referred to as an "expectation." Once the set of expectations from applicable stakeholders is collected, analyzed, and converted into a "shall" statement, the expectation becomes a requirement. Expectations can be stated in either qualitative (non-measurable) or quantitative (measurable) terms. Requirements are

always stated in quantitative terms. Expectations can be stated in terms of functions, behaviors, or constraints with respect to the product being engineered or the process used to engineer the product.

Stakeholder Expectations Definition Process: A process used to elicit and define use cases, scenarios, concept of operations, and stakeholder expectations for the applicable product life cycle phases and product layer. The baselined stakeholder expectations are used for validation of the product layer end product.

Standing Review Board: The board responsible for conducting independent reviews (life-cycle and special) of a program or project and providing objective, expert judgments to the convening authorities. The reviews are conducted in accordance with approved Terms of Reference (ToR) and life cycle requirements per NPR 7123.1.

State Diagram: A diagram that shows the flow in the system in response to varying inputs in order to characterize the behavior of the system.

Success Criteria: Specific accomplishments that need to be satisfactorily demonstrated to meet the objectives of a technical review so that a technical effort can progress further in the life cycle. Success criteria are documented in the corresponding technical review plan. Formerly referred to as "exit" criteria, a term still used in some NPDs/NPRs.

Surveillance: The monitoring of a contractor's activities (e.g., status meetings, reviews, audits, site visits) for progress and production and to demonstrate fiscal responsibility, ensure crew safety and mission success, and determine award fees for extraordinary (or penalty fees for substandard) contract execution.

System: (1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life cycle support services to the operational end products) that make up a system.

System Acceptance Review: The SAR verifies the completeness of the specific end products in relation to their expected maturity level, assesses compliance to stakeholder expectations, and ensures that the system has sufficient technical maturity to authorize its shipment to the designated operational facility or launch site.

System Definition Review: The Mission/System Definition Review (MDR/SDR) evaluates whether the proposed mission/system architecture is responsive to the program mission/system functional and performance requirements and requirements have been allocated to all functional elements of the mission/system. This review is used for projects and for single-project programs.

System Integration Review: A SIR ensures that segments, components, and subsystems are on schedule to be integrated into the system and that integration facilities, support personnel, and integration plans and procedures are on schedule to support integration.

System Requirements Review: For a program, the SRR is used to ensure that its functional and performance requirements are properly formulated and correlated with the Agency and mission directorate strategic objectives.

For a system/project, the SRR evaluates whether the functional and performance requirements defined for the system are responsive to the program's requirements and ensures that the preliminary project plan and requirements will satisfy the mission.

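As an illustrative aside (not part of the handbook's glossary), the behavior that a State Diagram captures can be exercised as a state-transition table in code. The states, events, and transitions below are invented for illustration only:

```python
# Illustrative sketch: a minimal state-transition table for a hypothetical
# spacecraft mode manager. Each (state, event) pair maps to a new state,
# mirroring the arrows of a state diagram. All names are invented.
TRANSITIONS = {
    ("SAFE", "ground_command_resume"): "NOMINAL",
    ("NOMINAL", "fault_detected"): "SAFE",
    ("NOMINAL", "begin_observation"): "SCIENCE",
    ("SCIENCE", "observation_complete"): "NOMINAL",
    ("SCIENCE", "fault_detected"): "SAFE",
}

def next_state(state: str, event: str) -> str:
    """Return the new state; an input with no defined transition leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "SAFE"
for event in ["ground_command_resume", "begin_observation", "fault_detected"]:
    state = next_state(state, event)
print(state)  # SAFE: a fault during a science observation returns the system to safe mode
```

Walking varying input sequences through such a table is one way to characterize, and test, the behavior a state diagram documents.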
System Safety Engineering: The application of engineering and management principles, criteria, and techniques to achieve acceptable mishap risk within the constraints of operational effectiveness and suitability, time, and cost throughout all phases of the system life cycle.

System Structure: A system structure is made up of a layered structure of product-based WBS models. (See "Work Breakdown Structure" and "Product Breakdown Structure.")

Systems Approach: The application of a systematic, disciplined engineering approach that is quantifiable, recursive, iterative, and repeatable for the development, operation, and maintenance of systems integrated into a whole throughout the life cycle of a project or program.

Systems Engineering (SE) Engine: The SE model shown in Figure 2.1-1 that provides the 17 technical processes and their relationships with each other. The model is called an "SE engine" in that the appropriate set of processes is applied to the products being engineered to drive the technical effort.

Systems Engineering Management Plan (SEMP): The SEMP identifies the roles and responsibility interfaces of the technical effort and specifies how those interfaces will be managed. The SEMP is the vehicle that documents and communicates the technical approach, including the application of the common technical processes; resources to be used; and the key technical tasks, activities, and events along with their metrics and success criteria.

Tailoring: A process used to adjust or seek relief from a prescribed requirement to accommodate the needs of a specific task or activity (e.g., program or project). The tailoring process results in the generation of deviations and waivers depending on the timing of the request. OR (per NPR 7123.1) The process used to seek relief from requirements consistent with program or project objectives, allowable risk, and constraints.

Technical Assessment Process: A process used to help monitor progress of the technical effort and provide status information for support of the system design, product realization, and technical management processes. A key aspect of the process is conducting life cycle and technical reviews throughout the system life cycle.

Technical Cost Estimate: The cost estimate of the technical work on a project created by the technical team based on its understanding of the system requirements and operational concepts and its vision of the system architecture.

Technical Data Management Process: A process used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system. This process is used to capture trade studies, cost estimates, technical analyses, reports, and other important information.

Technical Data Package: An output of the Design Solution Definition Process, it evolves from phase to phase, starting with conceptual sketches or models and ending with complete drawings, parts list, and other details needed for product implementation or product integration.

Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or

product effectiveness and customer satisfaction. Common terms for these measures are Measures Of Effectiveness (MOEs), Measures Of Performance (MOPs), and Technical Performance Measures (TPMs).

Technical Performance Measures: A set of performance measures that are monitored by comparing the current actual achievement of the parameters with that anticipated at the current time and on future dates. TPMs are used to confirm progress and identify deficiencies that might jeopardize meeting a system requirement. Assessed parameter values that fall outside an expected range around the anticipated values indicate a need for evaluation and corrective action. Technical performance measures are typically selected from the defined set of Measures Of Performance (MOPs).

Technical Planning Process: A process used to plan for the application and management of each common technical process. It is also used to identify, define, and plan the technical effort applicable to the product life cycle phase for product layer location within the system structure and to meet project objectives and product life cycle phase exit (success) criteria. A key document generated by this process is the SEMP.

Technical Requirements: A set of requirements imposed on the end products of the system, including the system itself. Also referred to as "product requirements."

Technical Requirements Definition Process: A process used to transform the stakeholder expectations into a complete set of validated technical requirements expressed as "shall" statements that can be used for defining a design solution for the Product Breakdown Structure (PBS) model and related enabling products.

Technical Risk: Risk associated with the achievement of a technical goal, criterion, or objective. It applies to undesired consequences related to technical performance, human safety, mission assets, or environment.

Technical Risk Management Process: A process used to make risk-informed decisions and examine, on a continuing basis, the potential for deviations from the project plan and the consequences that could result should they occur.

Technical Team: A group of multidisciplinary individuals with appropriate domain knowledge, experience, competencies, and skills who are assigned to a specific technical task.

Technology Readiness Assessment Report: A document required for transition from Phase B to Phase C/D demonstrating that all systems, subsystems, and components have achieved a level of technological maturity with demonstrated evidence of qualification in a relevant environment.

Technology Assessment: A systematic process that ascertains the need to develop or infuse technological advances into a system. The technology assessment process makes use of basic systems engineering principles and processes within the framework of the Product Breakdown Structure (PBS). It is a two-step process comprised of (1) the determination of the current technological maturity in terms of Technology Readiness Levels (TRLs) and (2) the determination of the difficulty associated with moving a technology from one TRL to the next through the use of the Advancement Degree of Difficulty Assessment (AD2).

Technology Development Plan: A document required for transition from Phase A to Phase B identifying technologies to be developed, heritage systems to be modified, alternative paths to be pursued, fallback positions and corresponding performance descopes, milestones, metrics, and key decision points. It is incorporated in the preliminary project plan.

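As an illustrative aside (not part of the handbook's glossary), the comparison rule described under Technical Performance Measures, flagging assessed values that fall outside an expected range around the anticipated value, can be sketched in a few lines. All parameter names and numbers below are invented:

```python
# Illustrative TPM tracking sketch: flag parameters whose currently
# assessed value deviates from the anticipated value by more than the
# allowed range, indicating a need for evaluation and corrective action.
def assess_tpms(tpms):
    """Return the names of TPMs whose assessed value is out of range."""
    flagged = []
    for name, anticipated, tolerance, actual in tpms:
        if abs(actual - anticipated) > tolerance:
            flagged.append(name)
    return flagged

tpms = [
    # (name, anticipated value, allowed deviation, current assessed value)
    ("dry mass (kg)", 1450.0, 50.0, 1480.0),   # within range
    ("power margin (W)", 200.0, 40.0, 130.0),  # outside range -> flag
]
print(assess_tpms(tpms))  # ['power margin (W)']
```

In practice the anticipated value and allowed range change over the life cycle, so such a check would be repeated against the profile planned for each future date.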
Technology Maturity Assessment: A process to determine a system's technological maturity based on Technology Readiness Levels (TRLs).

Technology Readiness Level: Provides a scale against which to measure the maturity of a technology. TRLs range from 1, basic technology research, to 9, systems test, launch, and operations. Typically, a TRL of 6 (i.e., technology demonstrated in a relevant environment) is required for a technology to be integrated into an SE process.

Test: The use of a realized end product to obtain detailed data to verify or validate performance or to provide sufficient information to verify or validate performance through further analysis.

Test Readiness Review: A review that ensures that the test article (hardware/software), test facility, support personnel, and test procedures are ready for testing and data acquisition, reduction, and control.

Threshold Requirements: A minimum acceptable set of technical and project requirements; the set could represent the descope position of the project.

Tightly Coupled Programs: Programs with multiple projects that execute portions of a mission(s). No single project is capable of implementing a complete mission. Typically, multiple NASA Centers contribute to the program. Individual projects may be managed at different Centers. The program may also include contributions from other agencies or international partners.

Traceability: A discernible association among two or more logical entities such as requirements, system elements, verifications, or tasks.

Trade Study: A means of evaluating system designs by devising alternative means to meet functional requirements, evaluating these alternatives in terms of the measures of effectiveness and system cost, ranking the alternatives according to appropriate selection criteria, dropping less promising alternatives, and proceeding to the next level of resolution, if needed.

Trade Study Report: A report written to document a trade study. It should include: the system under analysis; system goals, objectives (or requirements, as appropriate to the level of resolution), and constraints; measures and measurement methods (models) used; all data sources used; the alternatives chosen for analysis; computational results, including uncertainty ranges and sensitivity analyses performed; the selection rule used; and the recommended alternative.

Trade Tree: A representation of trade study alternatives in which each layer represents some system aspect that needs to be treated in a trade study to determine the best alternative.

Transition: The act of delivery or moving of a product from one location to another. This act can include packaging, handling, storing, moving, transporting, installing, and sustainment activities.

Uncoupled Programs: Programs implemented under a broad theme and/or a common program implementation concept, such as providing frequent flight opportunities for cost-capped projects selected through AO or NASA Research Announcements. Each such project is independent of the other projects within the program.

Utility: A measure of the relative value gained from an alternative. The theoretical unit of measurement for utility is the "util."

Validated Requirements: A set of requirements that are well formed (clear and unambiguous), complete (agree with customer and stakeholder needs and

expectations), consistent (conflict free), and individually verifiable and traceable to a higher level requirement or goal.

Validation (of a product): The process of showing proof that the product accomplishes the intended purpose based on stakeholder expectations and the Concept of Operations. May be determined by a combination of test, analysis, demonstration, and inspection. (Answers the question, "Am I building the right product?")

Variance: In program control terminology, a difference between actual performance and planned costs or schedule status.

Verification (of a product): Proof of compliance with specifications. Verification may be determined by test, analysis, demonstration, or inspection or a combination thereof. (Answers the question, "Did I build the product right?")

Waiver: A documented authorization releasing a program or project from meeting a requirement after the requirement is put under configuration control at the level the requirement will be implemented.

Work Breakdown Structure (WBS): A product-oriented hierarchical division of the hardware, software, services, and data required to produce the program/project's end product(s) structured according to the way the work will be performed, reflecting the way in which program/project costs, schedule, technical, and risk data are to be accumulated, summarized, and reported.

WBS Model: A WBS model describes a system that consists of end products and their subsystems (which perform the operational functions of the system), the supporting or enabling products, and any other work products (plans, baselines) required for the development of the system.

Workflow Diagram: A scheduling chart that shows activities, dependencies among activities, and milestones.

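As an illustrative aside (not part of the handbook's glossary), the ranking step described under Trade Study, evaluating alternatives against measures of effectiveness and applying a selection rule, is often a weighted sum. The weights, alternatives, and scores below are invented for illustration:

```python
# Illustrative trade study sketch: rank design alternatives with a
# weighted-sum selection rule. All weights, measures, and scores are
# invented; a real study would derive them from the MOEs and cost data.
weights = {"performance": 0.5, "cost": 0.3, "risk": 0.2}

alternatives = {
    # scores normalized to 0-1, higher is better for every measure
    "solar electric propulsion": {"performance": 0.9, "cost": 0.6, "risk": 0.5},
    "chemical propulsion":       {"performance": 0.6, "cost": 0.8, "risk": 0.9},
}

def weighted_score(scores):
    """Apply the selection rule: sum of weight * score over all measures."""
    return sum(weights[m] * scores[m] for m in weights)

ranked = sorted(alternatives, key=lambda a: weighted_score(alternatives[a]), reverse=True)
for alt in ranked:
    print(f"{alt}: {weighted_score(alternatives[alt]):.2f}")
# solar electric propulsion: 0.73
# chemical propulsion: 0.72
```

The near-tie in this toy example is exactly the situation where the uncertainty ranges and sensitivity analyses called out in the Trade Study Report entry matter most.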
Appendix C: How to Write a Good Requirement—Checklist

C.1 Use of Correct Terms

†† Shall = requirement

†† Will = facts or declaration of purpose

†† Should = goal

C.2 Editorial Checklist

Personnel Requirement

†† The requirement is in the form "responsible party shall perform such and such." In other words, use the active, rather than the passive voice. A requirement should state who shall (do, perform, provide, weigh, or other verb) followed by a description of what should be performed.

Product Requirement

†† The requirement is in the form "product ABC shall XYZ." A requirement should state "The product shall" (do, perform, provide, weigh, or other verb) followed by a description of what should be done.

†† The requirement uses consistent terminology to refer to the product and its lower-level entities.

†† Complete with tolerances for qualitative/performance values (e.g., less than, greater than or equal to, plus or minus, 3 sigma root sum squares).

†† Is the requirement free of implementation? (Requirements should state WHAT is needed, NOT HOW to provide it; i.e., state the problem, not the solution. Ask, "Why do you need the requirement?" The answer may point to the real requirement.)

†† Free of descriptions of operations? (Is this a need the product should satisfy or an activity involving the product? Sentences like "The operator shall…" are almost always operational statements, not requirements.)

Example Product Requirements

†† The system shall operate at a power level of…

†† The software shall acquire data from the…

†† The structure shall withstand loads of…

†† The hardware shall have a mass of…

C.3 General Goodness Checklist

†† The requirement is grammatically correct.

†† The requirement is free of typos, misspellings, and punctuation errors.

†† The requirement complies with the project's template and style rules.

†† The requirement is stated positively (as opposed to negatively, i.e., "shall not").

†† The use of "To Be Determined" (TBD) values should be minimized. It is better to use a best

estimate for a value and mark it "To Be Resolved" (TBR) with the rationale, along with what should be done to eliminate the TBR, who is responsible for its elimination, and by when it should be eliminated.

• The requirement is accompanied by an intelligible rationale, including any assumptions. Can you validate (concur with) the assumptions? Assumptions should be confirmed before baselining.
• The requirement is located in the proper section of the document (e.g., not in an appendix).

C.4 Requirements Validation Checklist

Clarity
• Are the requirements clear and unambiguous? (Are all aspects of the requirement understandable and not subject to misinterpretation? Is the requirement free from indefinite pronouns (this, these) and ambiguous terms (e.g., "as appropriate," "etc.," "and/or," "but not limited to")?)
• Are the requirements concise and simple?
• Do the requirements express only one thought per requirement statement: a stand-alone statement, as opposed to multiple requirements in a single statement, or a paragraph that contains both requirements and rationale?
• Does the requirement statement have one subject and one predicate?

Completeness
• Are requirements stated as completely as possible? Have all incomplete requirements been captured as TBDs or TBRs and a complete listing of them maintained with the requirements?
• Are any requirements missing? For example, have any of the following requirements areas been overlooked: functional, performance, interface, environment (development, manufacturing, test, transport, storage, and operations), facility (manufacturing, test, storage, and operations), transportation (among areas for manufacturing, assembling, delivery points, within storage facilities, loading), training, personnel, operability, safety, security, appearance and physical characteristics, and design?
• Have all assumptions been explicitly stated?

Compliance
• Are all requirements at the correct level (e.g., system, segment, element, subsystem)?
• Are requirements free of implementation specifics? (Requirements should state what is needed, not how to provide it.)
• Are requirements free of descriptions of operations? (Don't mix operations with requirements: update the ConOps instead.)
• Are requirements free of personnel or task assignments? (Don't mix personnel/task with product requirements: update the SOW or Task Order instead.)

Consistency
• Are the requirements stated consistently, without contradicting themselves or the requirements of related systems?
• Is the terminology consistent with the user's and sponsor's terminology? With the project glossary?


• Is the terminology consistently used throughout the document? Are the key terms included in the project's glossary?

Traceability
• Are all requirements needed? Is each requirement necessary to meet the parent requirement? Is each requirement a needed function or characteristic? Distinguish between needs and wants. If it is not necessary, it is not a requirement. Ask, "What is the worst that could happen if the requirement was not included?"
• Are all requirements (functions, structures, and constraints) bidirectionally traceable to higher-level requirements or mission or system-of-interest scope (i.e., need(s), goals, objectives, constraints, or concept of operations)?
• Is each requirement stated in such a manner that it can be uniquely referenced (e.g., each requirement is uniquely numbered) in subordinate documents?

Correctness
• Is each requirement correct?
• Is each stated assumption correct? Assumptions should be confirmed before the document can be baselined.
• Are the requirements technically feasible?

Functionality
• Are all described functions necessary and together sufficient to meet mission and system goals and objectives?

Performance
• Are all required performance specifications and margins listed (e.g., consider timing, throughput, storage size, latency, accuracy and precision)?
• Is each performance requirement realistic?
• Are the tolerances overly tight? Are the tolerances defendable and cost-effective? Ask, "What is the worst thing that could happen if the tolerance was doubled or tripled?"

Interfaces
• Are all external interfaces clearly defined?
• Are all internal interfaces clearly defined?
• Are all interfaces necessary, sufficient, and consistent with each other?

Maintainability
• Have the requirements for maintainability of the system been specified in a measurable, verifiable manner?
• Are requirements written so that ripple effects from changes are minimized (i.e., requirements are as weakly coupled as possible)?

Reliability
• Are clearly defined, measurable, and verifiable reliability requirements specified?
• Are there error detection, reporting, handling, and recovery requirements?
• Are undesired events (e.g., single-event upset, data loss or scrambling, operator error) considered and their required responses specified?
• Have assumptions about the intended sequence of functions been stated? Are these sequences required?
• Do these requirements adequately address the survivability after a software or hardware fault of


the system from the point of view of hardware, software, operations, personnel, and procedures?

Verifiability/Testability
• Can the system be tested, demonstrated, inspected, or analyzed to show that it satisfies requirements? Can this be done at the level of the system at which the requirement is stated? Does a means exist to measure the accomplishment of the requirement and verify compliance? Can the criteria for verification be stated?
• Are the requirements stated precisely to facilitate specification of system test success criteria and requirements?
• Are the requirements free of unverifiable terms (e.g., flexible, easy, sufficient, safe, ad hoc, adequate, accommodate, user-friendly, usable, when required, if required, appropriate, fast, portable, light-weight, small, large, maximize, minimize, robust, quickly, easily, clearly, other "-ly" words, other "-ize" words)?

Data Usage
• Where applicable, are "don't care" conditions truly "don't care"? ("Don't care" values identify cases when the value of a condition or flag is irrelevant, even though the value may be important for other cases.) Are "don't care" values explicitly stated? (Correct identification of "don't care" values may improve a design's portability.)
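Several of the checks above are mechanical and can be screened automatically before a human review. The sketch below is a minimal, hypothetical lint pass (not a NASA tool); the word list is only a representative subset of the unverifiable terms cited in C.4, and the function names are invented for illustration:

```python
import re

# Representative subset of the unverifiable terms listed in section C.4.
UNVERIFIABLE = {"flexible", "easy", "sufficient", "safe", "adequate",
                "user-friendly", "appropriate", "fast", "robust",
                "quickly", "easily", "clearly", "maximize", "minimize"}

def screen_requirement(text: str) -> list[str]:
    """Return checklist findings for one requirement statement."""
    findings = []
    lowered = text.lower()
    words = set(re.findall(r"[a-z-]+", lowered))
    if "shall" not in words:
        findings.append("no 'shall': not a binding requirement (C.1)")
    if "shall not" in lowered:
        findings.append("stated negatively; restate positively (C.3)")
    if "tbd" in words:
        findings.append("TBD value: replace with a best estimate marked TBR (C.3)")
    for term in sorted(UNVERIFIABLE & words):
        findings.append(f"unverifiable term '{term}' (C.4)")
    if lowered.count(" shall ") > 1:
        findings.append("multiple 'shall' clauses: split into single thoughts (C.4)")
    return findings

print(screen_requirement("The operator shall easily configure the TBD display."))
```

A pass like this cannot judge correctness, feasibility, or traceability; it only surfaces the vocabulary problems so reviewers can spend their time on the substantive questions in C.4.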

Appendix D: Requirements Verification Matrix

When developing requirements, it is important to identify an approach for verifying the requirements. This appendix provides an example matrix that defines how all the requirements are verified. Only "shall" requirements should be included in these matrices. The matrix should identify each "shall" by unique identifier and be definitive as to the source, i.e., the document from which the requirement is taken. This matrix could be divided into multiple matrices (e.g., one for each requirements document) to delineate sources of requirements, depending on the project. The example is shown to provide suggested guidelines for the minimum information that should be included in the verification matrix.

NOTE: See Appendix I for an outline of the Verification and Validation Plan. The matrix shown here (TABLE D-1) is Appendix C in that outline.

TABLE D-1  Requirements Verification Matrix

Columns:
• Requirement No.: Unique identifier of each requirement.
• Document: Document the requirement is contained within.
• Paragraph: Paragraph number of the requirement.
• Shall Statement: Text (within reason) of the requirement, i.e., the "shall."
• Verification Success Criteria: Success criteria for the requirement.
• Verification Method: The method used to perform the verification and validation (analysis, inspection, demonstration, test).
• Facility or Lab: Facility or laboratory used to perform the verification and validation.
• Phase (a): Phase in which the verification and validation will be performed.
• Acceptance Requirement?: Indicate whether this requirement is also verified during initial acceptance testing of each unit.
• Preflight Acceptance?: Indicate whether this requirement is also verified during any pre-flight or recurring acceptance testing of each unit.
• Performing Organization: Organization responsible for performing the verification requirement.
• Results: Indicate documents that contain the objective evidence that the requirement was satisfied.

Example rows:
• P-1 | xxx | 3.2.1.1 | System Capability: Support Uplinked Data (LDR): System X shall provide a max. forward link ground-to-station uplink of… | 1. System X locks to the forward link at the min and max data rate tolerances. 2. System X locks to the forward link at the min and max operating frequency tolerances. | Test | xxx | 5 | Yes | No | xxx | TPS xxxx
• P-i | xxx | Other paragraphs | Other "shalls" | Other criteria | xxx | xxx | xxx | Yes/No | Yes/No | xxx | Memo xxx in PTRS
• S-i or other unique designator | xxxxx (other specs, ICDs, etc.) | Other paragraphs | Other "shalls" in specs, ICDs, etc. | Other criteria | xxx | xxx | xxx | Yes/No | Yes/No | xxx | Report xxx

(a) Phases defined as: (1) Pre-Declared Development, (2) Formal Box-Level Functional, (3) Formal Box-Level Environmental, (4) Formal System-Level Environmental, (5) Formal System-Level Functional, (6) Formal End-to-End Functional, (7) Integrated Vehicle Functional, (8) On-Orbit Functional.
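In practice a matrix like TABLE D-1 usually lives in a spreadsheet or a requirements management tool. The fragment below is one illustrative (hypothetical) way to hold a row and flag entries missing the minimum information the appendix calls for; the field names follow the table's columns, and the checks are examples, not an official schema:

```python
from dataclasses import dataclass

# One row of a requirements verification matrix (columns as in TABLE D-1).
@dataclass
class VerificationRow:
    req_no: str            # unique identifier, e.g., "P-1"
    document: str          # source document of the "shall"
    paragraph: str         # paragraph number within that document
    shall_statement: str   # text of the requirement
    success_criteria: str  # verification success criteria
    method: str            # analysis | inspection | demonstration | test
    facility: str          # facility or lab
    phase: int             # phase 1-8 (see footnote a)
    acceptance: bool       # also verified during initial acceptance testing?
    preflight: bool        # also verified during pre-flight/recurring acceptance?
    organization: str      # performing organization
    results: str           # objective evidence, e.g., "TPS xxxx"

VALID_METHODS = {"analysis", "inspection", "demonstration", "test"}

def check_row(row: VerificationRow) -> list[str]:
    """Flag rows missing the minimum information described in this appendix."""
    problems = []
    if "shall" not in row.shall_statement.lower():
        problems.append("statement has no 'shall'")
    if row.method not in VALID_METHODS:
        problems.append(f"unknown method {row.method!r}")
    if not 1 <= row.phase <= 8:
        problems.append("phase outside 1-8")
    return problems

row = VerificationRow("P-1", "xxx", "3.2.1.1",
                      "System X shall provide a forward link uplink data rate of…",
                      "System X locks to the forward link at min/max tolerances",
                      "test", "xxx", 5, True, False, "xxx", "TPS xxxx")
assert check_row(row) == []   # row carries the minimum information
```

Keeping each "shall" as a uniquely identified row, as the appendix recommends, is what makes this kind of automated completeness check possible.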
Appendix E: Creating the Validation Plan with a
Validation Requirements Matrix

NOTE: See Appendix I for an outline of the Verification and Validation Plan. The matrix shown here (TABLE E-1) is Appendix D in that outline.

When developing requirements, it is important to identify a validation approach for how additional validation evaluation, testing, analysis, or other demonstrations will be performed to ensure customer/sponsor satisfaction.

There are a number of sources to draw from for creating the validation plan:
• ConOps
• Stakeholder/customer needs, goals, and objectives documentation
• Rationale statements for requirements and in verification requirements
• Lessons learned database
• System architecture modeling
• Test-as-you-fly design goals and constraints
• SEMP, HSIP, V&V plans

Validation products can take the form of a wide range of deliverables, including:
• Stakeholder evaluation and feedback
• Peer reviews
• Physical models of all fidelities
• Simulations
• Virtual modeling
• Tests
• Fit-checks
• Procedure dry-runs
• Integration activities (to inform on-orbit maintenance procedures)
• Phase-level review solicitation and feedback

Particular attention should be paid to the planning for each life cycle phase, since early validation can have a profound impact on the design and cost in the later life cycle phases.

TABLE E-1 shows an example validation matrix.


TABLE E-1  Validation Requirements Matrix

Columns:
• Validation Product #: Unique identifier for the validation product.
• Activity: Describe the evaluation by the customer/sponsor that will be performed.
• Objective: What is to be accomplished by the customer/sponsor evaluation.
• Validation Method: The method used to perform the validation (analysis, inspection, demonstration, or test).
• Facility or Lab: Facility or laboratory in which the verification/validation will be performed. (a)
• Phase: Phase in which the validation will be performed.
• Performing Organization: Organization responsible for coordinating the validation activity.
• Results: Indicate the objective evidence that the validation activity occurred.

Example row:
• 1 | Customer/sponsor will evaluate the candidate displays | 1. Ensure legibility is acceptable. 2. Ensure overall appearance is acceptable. | Test | xxx | Phase A | xxx | TPS 123456

(a) Example: (1) during product selection process, (2) prior to final product selection (if COTS) or prior to PDR, (3) prior to CDR, (4) during box-level functional, (5) during system-level functional, (6) during end-to-end functional, (7) during integrated vehicle functional, (8) during on-orbit functional.

Appendix F: Functional, Timing, and State Analysis

This appendix was removed. For additional guidance
on functional flow block diagrams, requirements allo-
cation sheets/models, N-squared diagrams, timing
analysis, and state analysis refer to Appendix F in the
NASA Expanded Guidance for Systems Engineering
at https://nen.nasa.gov/web/se/doc-repository.

Appendix G: Technology Assessment/Insertion

G.1 Introduction, Purpose, and Scope

In 2014, the Headquarters Office of Chief Engineer and Office of Chief Technologist conducted an Agency-wide study on Technology Readiness Level (TRL) usage and Technology Readiness Assessment (TRA) implementation. Numerous findings, observations, and recommendations were identified, as was a wealth of new guidance, best practices, and clarifications on how to interpret TRL and perform TRAs. These are presently being collected into a NASA TRA Handbook (in work), which will replace this appendix. In the interim, contact HQ/Steven Hirshorn with any specific questions on interpretation and application of TRL/TRA. Although the information contained in this appendix may change, it does provide some information until the TRA Handbook can be completed.

Agency programs and projects frequently require the development and infusion of new technological advances to meet mission goals, objectives, and resulting requirements. Sometimes the new technological advancement being infused is actually a heritage system that is being incorporated into a different architecture and operated in a different environment from that for which it was originally designed. It is important to recognize that the adaptation of heritage systems frequently requires technological advancement. Failure to account for this requirement can result in key steps of the development process being given short shrift—often to the detriment of the program/project. In both contexts of technological advancement (new and adapted heritage), infusion is a complex process that is often dealt with in an ad hoc manner, differing greatly from project to project with varying degrees of success.

Technology infusion frequently results in schedule slips, cost overruns, and occasionally even in cancellations or failures. In post mortem, the root cause of such events is often attributed to "inadequate definition of requirements." If such is indeed the root cause, then correcting the situation is simply a matter of defining better requirements, but this may not be the case—at least not totally.

In fact, there are many contributors to schedule slip, cost overrun, and project cancellation and failure—among them lack of adequate requirements definition. The case can be made that most of these contributors are related to the degree of uncertainty at the outset of the project, and that a dominant factor in the degree of uncertainty is the lack of understanding of the maturity of the technology required to bring the project to fruition and a concomitant lack of understanding of the cost and schedule reserves required to advance the technology from its present state to a point where it can be qualified and successfully infused with a high degree of confidence. Although this uncertainty cannot be eliminated, it can be substantially reduced through the early application of good systems engineering practices focused on understanding the technological requirements; the maturity of the required technology; and the technological advancement required to meet program/project goals, objectives, and requirements.


TABLE G.1-1  Products Provided by the TA as a Function of Program/Project Phase

Gate: KDP A, Transition from Pre-Phase A to Phase A
Product: Requires an assessment of potential technology needs versus current and planned technology readiness levels, as well as potential opportunities to use commercial, academic, and other government agency sources of technology. Included as part of the draft integrated baseline. A Technology Development Plan is baselined that identifies technologies to be developed, heritage systems to be modified, alternative paths to be pursued, fallback positions and corresponding performance descopes, milestones, metrics, and key decision points. An initial Technology Readiness Assessment (TRA) is available.

Gate: KDP B, Transition from Phase A to Phase B
Product: The Technology Development Plan and Technology Readiness Assessment (TRA) are updated and incorporated in the preliminary project plan.

Gate: KDP C, Transition from Phase B to Phase C/D
Product: Requires a TRAR demonstrating that all systems, subsystems, and components have achieved a level of technological maturity with demonstrated evidence of qualification in a relevant environment.

Source: NPR 7120.5.
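Because each Key Decision Point expects specific TA products, projects often track what is still owed before a gate review. The snippet below is a hypothetical illustration of such a lookup built from TABLE G.1-1; the product names are shorthand labels, not official nomenclature:

```python
# Hypothetical tracking of the TA products TABLE G.1-1 requires at each gate.
REQUIRED_TA_PRODUCTS = {
    "KDP A": {"technology needs assessment",
              "Technology Development Plan (baseline)",
              "initial TRA"},
    "KDP B": {"Technology Development Plan (update)", "TRA (update)"},
    "KDP C": {"TRAR"},
}

def missing_products(gate: str, delivered: set[str]) -> set[str]:
    """Products still owed before the given Key Decision Point."""
    return REQUIRED_TA_PRODUCTS[gate] - delivered

print(missing_products("KDP B", {"TRA (update)"}))
# -> {'Technology Development Plan (update)'}
```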

A number of processes can be used to develop the appropriate level of understanding required for successful technology insertion. The intent of this appendix is to describe a systematic process that can be used as an example of how to apply standard systems engineering practices to perform a comprehensive Technology Assessment (TA). The TA comprises two parts: a Technology Maturity Assessment (TMA) and an Advancement Degree of Difficulty Assessment (AD2). The process begins with the TMA, which is used to determine technological maturity via NASA's Technology Readiness Level (TRL) scale. It then proceeds to develop an understanding of what is required to advance the level of maturity through the AD2. It is necessary to conduct TAs at various stages throughout a program/project to provide the Key Decision Point (KDP) products required for transition between phases. (See TABLE G.1-1.)

The initial TMA provides the baseline maturity of the system's required technologies at program/project outset and allows monitoring of progress throughout development. The final TMA is performed just prior to the Preliminary Design Review (PDR). It forms the basis for the Technology Readiness Assessment Report (TRAR), which documents the maturity of the technological advancement required by the systems, subsystems, and components demonstrated through test and analysis. The initial AD2 provides the material necessary to develop preliminary cost and schedule plans and preliminary risk assessments. In subsequent assessments, the information is used to build the Technology Development Plan and, in the process, identify alternative paths, fallback positions, and performance descope options. The information is also vital to preparing milestones and metrics for subsequent Earned Value Management (EVM).

The TMA is performed against the hierarchical breakdown of the hardware and software products of the program/project PBS to achieve a systematic, overall understanding at the system, subsystem, and component levels. (See FIGURE G.1-1.)

G.2 Inputs/Entry Criteria

It is extremely important that a TA process be defined at the beginning of the program/project and that it be performed at the earliest possible stage (concept development) and throughout the program/project through PDR. Inputs to the process will vary in level of detail according to the phase of the program/project, and even though there is a lack of detail in Pre-Phase A, the TA will drive out the major critical


[Figure: hierarchical product breakdown. 1.3 Crew Launch Vehicle contains 1.3.8 Launch Vehicle, which breaks down into 1.3.8.1 First Stage, 1.3.8.2 Upper Stage, and 1.3.8.3 Upper Stage Engine; the Upper Stage decomposes further into 1.3.8.2.4 MPS, 1.3.8.2.5 US RCS, 1.3.8.2.6 FS RCS, 1.3.8.2.7 TVCS, 1.3.8.2.8 Avionics, 1.3.8.2.9 Software, and 1.3.8.2.10 Integrated H/W Test, each with its own lower-level elements (e.g., integrated systems, flight software, EGSE, flight safety system, element testing).]

FIGURE G.1-1  PBS Example

technological advancements required. Therefore, at the beginning of Pre-Phase A, the following should be provided:

• Refinement of TRL definitions.
• Definition of AD2.
• Definition of terms to be used in the assessment process.
• Establishment of meaningful evaluation criteria and metrics that will allow for clear identification of gaps and shortfalls in performance.
• Establishment of the TA team.
• Establishment of an independent TA review team.

G.3 How to Do Technology Assessment

The technology assessment process makes use of basic systems engineering principles and processes. As mentioned previously, it is structured to occur within the framework of the Product Breakdown Structure (PBS) to facilitate incorporation of the results. Using the PBS as a framework has a twofold benefit—it breaks the "problem" down into systems, subsystems, and components that can be more accurately assessed, and it provides the results of the assessment in a format that can be readily used in the generation


of program costs and schedules. It can also be highly beneficial in providing milestones and metrics for progress tracking using EVM. As discussed above, it is a two-step process comprised of (1) the determination of the current technological maturity in terms of TRLs and (2) the determination of the difficulty associated with moving a technology from one TRL to the next through the use of the AD2.

Conceptual Level Activities

The overall process is iterative, starting at the conceptual level during program Formulation, establishing the initial identification of critical technologies, and establishing the preliminary cost, schedule, and risk mitigation plans. Continuing on into Phase A, the process is used to establish the baseline maturity, the Technology Development Plan, and the associated costs and schedule. The final TA consists only of the TMA and is used to develop the TRAR, which validates that all elements are at the requisite maturity level. (See FIGURE G.3-1.)

Even at the conceptual level, it is important to use the formalism of a PBS to avoid allowing important technologies to slip through the cracks. Because of the preliminary nature of the concept, the systems, subsystems, and components will be defined at a level that will not permit detailed assessments to be made. The process of performing the assessment, however, is the same as that used for subsequent, more detailed steps that occur later in the program/project, where systems are defined in greater detail.

Architectural Studies

Once the concept has been formulated and the initial identification of critical technologies made, it is

[Figure: flowchart. Identify systems, subsystems, and components per the hierarchical product breakdown of the WBS → assign a TRL to all components based on an assessment of maturity → assign a TRL to subsystems based on the lowest TRL of their components and the TRL state of integration → assign a TRL to systems based on the lowest TRL of their subsystems and the TRL state of integration → baseline the technology maturity assessment → identify all components, subsystems, and systems that are at lower TRLs than required by the program → perform AD2 on all components, subsystems, and systems that are below the requisite maturity level → outputs: Technology Development Plan, Cost Plan, Schedule Plan, Risk Assessment.]

FIGURE G.3-1  Technology Assessment Process


necessary to perform detailed architecture studies with the Technology Assessment Process intimately interwoven. (See FIGURE G.3-2.)

[Figure: requirements and concepts feed architectural studies, which feed system design; a TRL/AD2 assessment couples the architectural studies to technology maturation, which feeds back into the studies.]

FIGURE G.3-2  Architectural Studies and Technology Development

The purpose of the architecture studies is to refine the end-item system design to meet the overall scientific requirements of the mission. It is imperative that there be a continuous relationship between architectural studies and maturing technology advances. The architectural studies should incorporate the results of the technology maturation, planning for alternative paths and identifying new areas required for development as the architecture is refined. Similarly, it is incumbent upon the technology maturation process to identify requirements that are not feasible and development routes that are not fruitful, and to transmit that information to the architecture studies in a timely manner. It is also incumbent upon the architecture studies to provide feedback to the technology development process relative to changes in requirements. Particular attention should be given to "heritage" systems in that they are often used in

[…] 1980s. The TRL essentially describes the state of a given technology and provides a baseline from which maturity is gauged and advancement defined. (See FIGURE G.4-1.)

Programs are often undertaken without fully understanding either the maturity of key technologies or what is needed to develop them to the required level. It is impossible to understand the magnitude and scope of a development program without having a clear understanding of the baseline technological maturity of all elements of the system. Establishing the TRL is a vital first step on the way to a successful program. A frequent misconception is that, in practice, it is too difficult to determine TRLs and that when you do, the result is not meaningful. On the contrary, identifying TRLs can be a straightforward systems engineering process of determining what was demonstrated and under what conditions it was demonstrated.

Terminology

At first glance, the TRL descriptions in FIGURE G.4-1 appear to be straightforward. It is in the process of trying to assign levels that problems arise. A primary cause of difficulty is terminology; e.g., everyone knows what a breadboard is, but not everyone has the same definition. Also, what is a "relevant environment?" What is relevant to one application may or may not be relevant to another. Many of these terms originated in various branches of engineering and had, at the time, very specific meanings to that