
Springer Aerospace Technology

Jens Eickhoff

Onboard Computers,
Onboard Software and
Satellite Operations
An Introduction

With 169 Figures and 33 Tables

Prof. Dr.-Ing. Jens Eickhoff
Institute of Space Systems (IRS),
University of Stuttgart,
Germany

ISBN 978-3-642-25169-6 e-ISBN 978-3-642-25170-2

DOI 10.1007/978-3-642-25170-2

Springer Heidelberg Dordrecht London New York

Springer Series in Aerospace Technology ISSN 1869-1730 e-ISSN 1869-1749

Library of Congress Control Number: 2011940959

© Springer-Verlag Berlin Heidelberg 2012

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation,
reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of
this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and
permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement,
that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: WMXDesign GmbH, Heidelberg


Cover figure derived from original in ISSN 2191-2696, Issue 2.
Original by Sabine Leib, EADS Cassidian, and Jens Eickhoff.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Foreword
The development of satellites is always driven by their applications: the payload and its supporting satellite infrastructure must be able to fulfill all envisioned tasks, sometimes with a great degree of autonomy. The brain of the satellite is the Onboard Computer, with the Onboard Software providing the functions, procedures and services for the different tasks. Lastly, Spacecraft Operations will succeed only when the Space and Ground Segments are interlinked optimally through appropriate data handling and management concepts.
There are many examples where the flexibility of the spacecraft's operation system determined the failure or success of a mission. Completely unexpected mission scenarios or onboard failures are common situations, especially for science and exploration satellites. Communication satellites also profit from the reliability and flexibility of the onboard systems, as the spectacular recovery of the European Artemis satellite demonstrated in 2003, when Artemis could be repositioned in its correct orbit after 18 months of recovery activities. New rules on avoiding space debris and on deorbiting satellites at the end of their life also call for very robust and flexible onboard computer systems, to ensure full operational capability at the end of the satellite lifetime, when some components like gyros might have already failed.
This book, entitled "Onboard Computers, Onboard Software and Satellite Operations – An Introduction", covers the important aspects of satellite development and operation in a broad yet detailed way. To our knowledge it is the first book covering the whole subject completely, including in particular the interdependencies between the subtopics. It results from a manuscript which has been used and consistently taught as an examined lecture series at the University of Stuttgart for several years. The book is equally applicable for students and for experts of many engineering disciplines. It is suitable for an introductory course as well as a reference text in modern system engineering.

September 2011

Prof. Dr. Hans-Peter Roeser
Managing Director
Institute of Space Systems
University of Stuttgart

Prof. Dr. Volker Liebig
Director of Earth Observation Programmes
European Space Agency
Preface
After I was engaged by the Institute of Space Systems, University of Stuttgart, as System Engineering Coach from industry for the small satellite project "Flying Laptop" at the beginning of 2009, the main challenges in this project turned out to be
● the satellite onboard computer design,
● the onboard software design and
● the spacecraft's operational concept.
The source of this difficulty was neither the spacecraft complexity nor a lack of available industrial technology. It was the fact that none of these topics had so far been addressed in any lecture in Stuttgart, and that no adequate introductory literature existed on the market with which students could be instructed before contributing to such demanding engineering tasks of the satellite program. In particular, no literature addressed the system engineering interdependencies between these three topics. Thus all students and PhD candidates had to be trained in parallel with the spacecraft design, development and verification processes already underway.
From this situation the idea for a lecture evolved, designed to cover all three issues – onboard computers, onboard software and satellite operations – including their interrelations, in a system engineering approach. The lecture was very well received, and after two years the improved manuscript could be enhanced for release as a textbook.
Students' high interest and the demand for study thesis topics, diploma theses and
doctoral theses together with the chance of hands-on experience in the institute's
satellite project clearly confirmed the idea for this lecture series. I hope this book
contributes to imparting background knowledge to the students, enabling them to
professionally begin their industry or agency career in these complex domains of
satellite onboard computer or payload controller design, onboard software or the
operations of spacecraft.

Immenstaad, 2011 Jens Eickhoff


Acknowledgments
This manuscript covers a broad spectrum of technology aspects and would not have become so educational without the availability of instructive graphical material from industry and agencies. I therefore gratefully acknowledge the courtesy of the following figure and photo providers:
● Institute of Space Systems, University of Stuttgart, Germany
● ESA/ESOC Space Operations Center, Darmstadt, Germany
● Astrium GmbH – Satellites, Friedrichshafen, Germany
● Aeroflex, Colorado Springs, USA
● Aeroflex Gaisler, Göteborg, Sweden
● RUAG Aerospace Sweden AB, Göteborg, Sweden
● BAE Systems, Manassas, USA
● DLR/GSOC Space Operations Center, Oberpfaffenhofen, Germany
● Jena Optronik GmbH, Jena, Germany
All figures used from industrial providers are cited with the corresponding source and copyright information.
All publicly available figures from ESA and NASA Internet pages are used according to the copyright and usage conditions cited there, e.g. multimedia@esa.int, and are also cited with the corresponding copyright owner information. Figures and photos under the GFDL or a Creative Commons license taken from Wikipedia are also cited accordingly.
For this book I am especially indebted to Prof. Dr. Volker Liebig for initiating the provision of ESA figure and photo material for the operations chapters 14 and 15, and to Mr. Nic Mardle, CryoSat Spacecraft Operations Manager at ESOC, who carefully selected the appropriate material to optimally complement the text.
Furthermore I'd like to express my gratitude to Prof. Dr. Hans-Peter Röser at the
Institute of Space Systems for engaging me in 2003 as a visiting lecturer and in 2009
as System Engineering Coach for the FLP small satellite project and to my Astrium
site Director in Friedrichshafen, Eckard Settelmeyer, for supporting this part-time
academic coaching activity.
I am very much obliged to Dave T. Haslam who performed the proofreading of the
book manuscript as a native English speaker.
At Springer-Verlag GmbH I was very well supported by Mrs. Carmen Wolf and
Dr. Christoph Baumann concerning all topics on layout and the like which typically
arise during book authoring. Special thanks to Dr. Baumann for considering my draft
cover ideas.
Finally I want to thank my family and especially my wife for her encouragement and motivation, and for bearing with me spending many evenings in front of the computer during lecture development and the later manuscript upgrade to this book.

Most grateful for all the support I received,


Jens Eickhoff
Contents

List of Abbreviations.....................................................................................................XV

Part I Context
1 Introduction.................................................................................................................3
1.1 Design Aspects....................................................................................................4
1.2 Onboard Computers and Data Links..................................................................6
2 Mission / Spacecraft Analysis and Design.................................................................7
2.1 Phases and Tasks in Spacecraft Development..................................................8
2.2 Phase A – Mission Analysis.................................................................................9
2.3 Phase B – Spacecraft Design Definition...........................................................10
2.4 Phase C – Spacecraft Design Refinement.......................................................14
2.5 Phase D – Spacecraft Flight Model Production................................................15
2.5.1 Launcher Selection....................................................................................15
2.5.2 Launch and Early Orbit Phase Engineering..............................................16
2.5.3 Onboard Software and Hardware Design Freeze.....................................17

Part II Onboard Computers


3 Historic Introduction to Onboard Computers............................................................21
3.1 Human Space Mission OBCs...........................................................................23
3.1.1 The NASA Mercury Program.....................................................................23
3.1.2 The NASA Gemini Program.......................................................................24
3.1.3 The NASA Apollo Program........................................................................29
3.1.4 The Space Shuttle Program......................................................................32
3.2 Satellite and Space Probe OBCs......................................................................34
3.2.1 The Generation of digital Sequencers.......................................................34
3.2.2 Transistor based OBCs with CMOS Memory............................................35
3.2.3 Microprocessors in a Space Probe............................................................38
3.2.4 MIL Standard Processors and Ada Programming.....................................41
3.2.5 RISC Processors and Operating Systems on Board.................................42
3.2.6 Today's Technology: Systems on Chip......................................................46
3.3 Onboard Computers of Specific Missions.........................................................49
4 Onboard Computer Main Elements..........................................................................51
4.1 Processors and Top-level Architecture..............................................................54
4.2 Computer Memory............................................................................................56
4.3 Data Buses, Networks and Point-to-Point Connections...................................58
4.3.1 OBC Equipment Interconnections.............................................................58
4.3.2 MIL-STD-1553B.........................................................................................58
4.3.3 SpaceWire.................................................................................................60
4.3.4 CAN-Bus....................................................................................................61
4.4 Transponder Interface.......................................................................................62
4.5 Command Pulse Decoding Unit........................................................................64
4.6 Reconfiguration Units........................................................................................65

4.7 Debug and Service Interfaces...........................................................................66


4.8 Power Supply....................................................................................................68
4.9 Thermal Control Equipment..............................................................................69
5 OBC Mechanical Design..........................................................................................71
6 OBC Development....................................................................................................75
6.1 OBC Model Philosophy.....................................................................................76
6.2 OBC Manufacturing Processes.........................................................................80
7 Special Onboard Computers....................................................................................81

Part III Onboard Software


8 Onboard Software Static Architecture......................................................................87
8.1 Onboard Software Functions............................................................................88
8.2 Operating System and Drivers Layer................................................................91
8.3 Equipment Handlers and OBSW Data Pool.....................................................92
8.4 Application Layer...............................................................................................94
8.5 OBSW Interaction with Ground Control............................................................95
8.6 Service-based OBSW Architecture.................................................................101
8.7 Telecommand Routing and High Priority Commands.....................................111
8.8 Telemetry Downlink and Multiplexing..............................................................113
8.9 Service Interface Stub.....................................................................................115
8.10 Failure Detection, Isolation and Recovery....................................................116
8.11 OBSW Kernel................................................................................................117
9 Onboard Software Dynamic Architecture...............................................................119
9.1 Internal Task Scheduling.................................................................................120
9.2 Channel Acquisition Scheduling......................................................................122
9.3 FDIR Handling.................................................................................................125
9.4 Onboard Control Procedures..........................................................................126
9.5 Service Interface Data Supply........................................................................128
10 Onboard Software Development..........................................................................129
10.1 Onboard Software Functional Analysis.........................................................130
10.2 Onboard Software Requirements Definition.................................................132
10.3 Software Design............................................................................................135
10.3.1 Structured Analysis & Design Technique...............................................136
10.3.2 Hierarchic Object-Oriented Design........................................................138
10.3.3 The Unified Modeling Language – UML................................................140
10.4 Software Implementation and Coding...........................................................147
10.5 Software Verification and Testing..................................................................148
10.5.1 Functional Verification Bench (FVB)......................................................150
10.5.2 Software Verification Facility (SVF).......................................................152
10.5.3 Hybrid System Testbed (STB)...............................................................156
10.5.4 Electrical Functional Model (EFM).........................................................160
10.5.5 Onboard Software Test Sequence.........................................................163
11 OBSW Development Process and Standards......................................................165
11.1 Software Engineering Standards – Overview...............................................166
11.2 Software Classification According to Criticality.............................................169
11.3 Software Standard Application Example.......................................................170

Part IV Satellite Operations


12 Mission Types and Operations Goals..................................................................179
13 The Spacecraft Operability Concept....................................................................185
13.1 Spacecraft Commandability Concept...........................................................187
13.2 Spacecraft Configuration Handling Concept.................................................187
13.3 PUS Tailoring Concept..................................................................................189
13.4 Onboard Process ID Concept.......................................................................190
13.5 Task Scheduling and Channel Acquisition Concept......................................191
13.6 The Spacecraft Mode Concept.....................................................................192
13.6.1 Operational Phases...............................................................................192
13.6.2 System and Subsystem Modes.............................................................193
13.6.3 Equipment States versus Satellite Modes ............................................196
13.7 Mission Timelines..........................................................................................196
13.7.1 LEOP Timeline.......................................................................................197
13.7.2 Commissioning Phase Timeline............................................................198
13.7.3 Nominal Operations Phase Timeline.....................................................199
13.8 Operational Sequences Concept..................................................................200
13.9 System Authentication Concept....................................................................203
13.10 Spacecraft Observability Concept...............................................................204
13.11 Synchronization and Datation Concept.......................................................206
13.12 Science Data Management Concept..........................................................208
13.13 Uplink and Downlink Concept.....................................................................208
13.14 Autonomy Concept......................................................................................211
13.14.1 Definitions and Classifications.............................................................211
13.14.2 Implementations of Autonomy and their Focus...................................214
13.14.3 Autonomy Implementation Conclusions..............................................215
13.15 Redundancy Concept..................................................................................216
13.16 FDIR Concept.............................................................................................219
13.16.1 FDIR Requirements.............................................................................220
13.16.2 FDIR Approach....................................................................................220
13.16.3 FDIR and Safeguarding Hierarchy......................................................222
13.16.4 Safe Mode Implementation..................................................................223
13.17 Satellite Operations Constraints.................................................................225
13.18 Flight Procedures and Testing....................................................................226
14 Mission Operations Infrastructure........................................................................233
14.1 The Flight Operations Infrastructure.............................................................234
14.2 Support Infrastructure...................................................................................240
15 Bringing a Satellite into Operation........................................................................243
15.1 Mission Operations Preparation....................................................................244
15.2 Launch and LEOP Activities..........................................................................246
15.3 Platform and Payload Commissioning Activities...........................................250
Annex: Autonomy Implementation Examples............................................................253
Autonomous onboard SW / HW Components......................................................254
Improvement Technology – Optimizing the Mission Product................................255
Enabling Technology – Autonomous OBSW for Deep Space Probes..................258
References................................................................................................................261
Index..........................................................................................................................277
List of Abbreviations
General Abbreviations
a.m. above mentioned
cf. confer
e.g. Latin: exempli gratia – for example
i.e. Latin: id est – that is
w.r.t. with respect to

Technical Abbreviations
AES Advanced Encryption Standard
AFT Abbreviated Function Test
AGC Apollo Guidance Computer
AIT Assembly, Integration and Test
AOCS Attitude and Orbit Control System
APID Application ID
ASIC Application Specific Integrated Circuit
ATV Automated Transfer Vehicle
BC Bus Controller
BGA Ball Grid Array
BIOS Basic Input / Output System
CADU Channel Access Data Unit
CAN Controller Area Network
CASE Computer-Aided Software Engineering
CDMU Control and Data Management Unit
CDR Critical Design Review
CISC Complex Instruction Set Computer
CLTU Command Link Transfer Unit
CM Apollo Command Module
CPU Central Processing Unit
DDF Design Definition File
DHS Data Handling System
DJF Design Justification File
DLR Deutsches Zentrum für Luft- und Raumfahrt
DMA Direct Memory Access
DMAC Direct Memory Access Controller
DORIS Doppler Orbitography and Radiopositioning Integrated by Satellite
DPS Shuttle Data Processing System
DRD Document Requirement Definition
EBB Elegant Breadboard
ECC ESTRACK Control Center
EDAC Error Detection and Correction
EEPROM Electrically Erasable Programmable Read Only Memory
EFM Electrical Functional Model
EM Engineering Model
EMC Electromagnetic Compatibility
EQM Engineering Qualification Model
ESA European Space Agency

ESD Electrostatic Discharge


ESTRACK ESA Tracking Network
FAR Flight Acceptance Review
FDIR Failure Detection, Isolation and Recovery
FM Flight Model
FOC Flight Operations Center
FOCC Flight Operations Control Center – ESOC Terminology – see FOC
FOD Flight Operations Director
FOM Flight Operations Manual – also called SSUM
FOS Flight Operations Segment
ESOC control Infrastructure including antenna stations
FPGA Field Programmable Gate Array
FVB Functional Verification Bench
G/S Ground Station
GDC Gemini Digital Computer
GEO Geostationary Earth Orbit
GOM Ground Operations Manager
GPL GNU Public License
GPS Global Positioning System
GSWS Galileo Software Standard
HITL Hardware in the Loop
HK Housekeeping
HOOD Hierarchic Object-Oriented Design
HPC High Priority Command
HPTM High Priority Telemetry
HW Hardware
I/O Input / Output
IC Integrated Circuit
IF Interface
IRS Institut für Raumfahrtsysteme,
Institute of Space Systems, University of Stuttgart, Germany
JTAG Joint Test Action Group
LCB Line Control Block
LED Light Emitting Diode
LEO Low Earth Orbit
LEOP Launch and Early Orbit Phase
LGPL Lesser GNU Public License
LM Apollo Lunar Module
LVDS Low Voltage Differential Signal
MAP-ID Multiplexer Access Point Identifier
MC Magnetic Core
MCS Mission Control System
MMFU Mass Memory and Formatting Unit
MMI Man Machine Interface
MMU Memory Management Unit
MSG MeteoSat 2nd Generation
MTL Master Timeline (on board)
MTQ Magnetotorquer
NASA National Aeronautics and Space Administration
NCO Numerically Controllable Oscillator

NRZ Non Return to Zero


NTP Network Time Protocol
OBC Onboard computer
OBCP Onboard Control Procedure
OBDH Onboard Data Handling
OBSW Onboard software
OBSW-DP Onboard software data pool
OBT Onboard time
OIRD Operations Interface Requirements Document
P/E Program / Erase (cycle)
PF Spacecraft Platform
PC Personal Computer
PCB Printed Circuit Board
PCDU Power Control and Distribution Unit
PDGS Payload Data Ground Segment – ESOC Terminology – see PGS
PDHT Payload Data Handling and Transmission
PDR Preliminary Design Review
PFM Proto Flight Model
PGS Payload Ground Segment
PID Process Identifier (for an OBSW process)
PL Payload
PMC Payload Management Computer
PPS Pulse Per Second
PROM Programmable Read Only Memory
PRR Preliminary Requirements Review
PUS ESA Packet Utilization Standard
QM Qualification Model
QR Qualification Review
RAM Random Access Memory
RF Radio Frequency
RISC Reduced Instruction Set Computer
RIU Remote Interface Unit
ROM Read Only Memory
RT Remote Terminal
RTOS Realtime Operating System
RWL Reaction wheel
S/C Spacecraft
SA Solar Array
SADT Structured Analysis and Design Technique
SBC Single Board Computer
SCOE Special Checkout Equipment
SCV Spacecraft Configuration Vector
SDD Software Design Document
SEU Single Event Upset
SIF Service Interface
SMD Surface Mounted Device
SoC System on Chip
SOCD Spacecraft Operations Concept Document
SOM Spacecraft Operations Manager
SPACON Spacecraft Controller

SRD System Requirements Document


SRDB Satellite Reference Database
SRR System Requirements Review
SRS Satellite Requirements Specification
SSR Solid State Recorder
SSS Software System Specification
SSUM Space Segment User Manual – also called FOM
ST Subservice Type (PUS)
STB System Testbench
STR Star Tracker (sometimes also Star Camera)
SUITP Software Unit and Integration Test Plan
SVF Software Verification Facility
SVS Software Validation Specification
SVT System Validation Test
SW Software
TC Telecommand
TM Telemetry
UML Unified Modeling Language
VC Virtual Channel
WS Workstation
The best way to predict the future is to invent it.
Alan Kay

Part I

Context

1 Introduction

Rosetta and Lander Philae © ESA

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012

Although the payloads of a satellite, such as radar or optical instruments, are the principal performance drivers for a spacecraft, the platform control functionality plays a significant role in mission efficiency. Considering key characteristics such as the payload data geolocation precision required by today's Earth observation missions, the requirements on the satellite platform control functionality are increasing continuously. The same trend can be observed for specific missions such as Earth gravity field measurements, for deep space missions and for the latest concepts for Earth observation from geostationary orbit positions.
The platform control functionality is centrally driven by the functionality included in the onboard software, (OBSW), and by the operational flexibility from ground, which is based on onboard software functions and features. The performance of the onboard software itself is driven, and also limited, by the performance of the available onboard computer, (OBC), hardware. Thus the chain of spacecraft operations from ground, complemented by the OBSW and controlling platform and payload equipment via the OBC hardware, is the key system engineering challenge.

1.1 Design Aspects

In a spacecraft, (S/C), development project, however, the initial design requirements do not cover details concerning the onboard computer, the software, the operations procedures and so on. The spacecraft mission concept requirements for a satellite development B/C/D phase are usually laid down in two key documents, namely the
● “System Requirements Document”, (SRD), and the
● “Operations Interface Requirements Document”, (OIRD).
The SRD covers technical requirements on both space and ground segment of the
mission. The OIRD covers requirements on how to operate the spacecraft from
ground.
The S/C manufacturer takes these primary input documents and develops a derived requirements set, exclusively focused on the spacecraft, in a so-called "Satellite Requirements Specification", (SRS). The SRS thus comprises design and performance requirements for all S/C equipment, functionality and performance, especially reflecting
● instrument / payload requirements,
● attitude and orbit control system, (AOCS), design and performance
requirements,
● power subsystem and control requirements,
● thermal subsystem and control requirements,
● onboard data handling subsystem, (DHS), requirements,
● spacecraft “Failure Detection, Isolation and Recovery”, (FDIR), requirements
● and ground segment compatibility requirements.
This is the design baseline for the spacecraft and implicitly for the design of the onboard software features for spacecraft control, and secondarily for the onboard computers and the operations concept. All three domains have to be designed together, complementing each other with their corresponding specifics.

Concerning the onboard computers, the software and the operations concept, a number of aspects have to be taken into consideration. Compared to standard industrial embedded controllers or automotive controllers, the onboard computers have to provide
● significant failure robustness only achievable by internal redundancy,
● electromagnetic compatibility, (EMC), to the space environment conditions
● and in addition radiation robustness against high energetic particles.
● The latter cannot be achieved by standard highly integrated circuit, (IC),
designs as used in today’s PC microprocessors. Space application processors
require a lower circuit integration density and further manufacturing specifics.
● This again results in lower achievable processor clock frequencies (20 to 66 MHz are typical values).
● Furthermore onboard computers today still have to serve a large number of
different types of interfaces such as:
◊ Serial or LVDS interfaces on the transponder side.
◊ Analog and data bus interfaces on platform and payload equipment side.
● And finally also these interface connections at least partly need to be
redundant.
Similar dedicated constraints affect the onboard software of a satellite. The OBSW
needs to be a
● realtime control software,
● allowing both interactive spacecraft remote control and automated/
autonomous control.
● The onboard software concept typically today is a service based architecture
covering several control and input/output, (I/O), levels:
◊ Data I/O handlers and data bus protocols,
◊ control routines for payloads, AOCS, thermal and power subsystems,
◊ up to Failure Detection, Isolation and Recovery routines.
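The service-based layering described above can be pictured as a dispatch from (service type, subtype) pairs to handler routines at the various control and I/O levels. The following Python sketch is purely illustrative: real onboard software is realtime code in C, C++ or Ada on a space-qualified OS, and all handler names here are invented. Only the numbering loosely follows the PUS convention (e.g. service 3 for housekeeping reporting).

```python
# Purely illustrative sketch of a service-based OBSW dispatch layer.
# Real OBSW is realtime C/C++/Ada; the handler names are invented.
# The (type, subtype) keys loosely follow the PUS numbering scheme.

def report_housekeeping(params):
    # would assemble a TM packet from the OBSW data pool
    return "HK report generated"

def command_device(params):
    # would forward the command to the equipment's I/O handler
    return "device command forwarded: %s" % (params,)

SERVICE_TABLE = {
    (3, 25): report_housekeeping,   # housekeeping parameter report
    (2, 1):  command_device,        # device commanding
}

def dispatch_tc(service_type, subtype, params=None):
    """Route an incoming telecommand to its service handler."""
    handler = SERVICE_TABLE.get((service_type, subtype))
    if handler is None:
        # acceptance failure: unknown service, TC is rejected
        return "TC rejected"
    return handler(params)
```

The table-driven structure is what makes such an architecture extensible: adding a service means registering a new handler, not touching the routing logic.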

The operations concept of the spacecraft has to be detailed concerning the:
● command and control of payload and platform via the cited service based
onboard software.
● The operations concept has to be based on the international spacecraft
uplink / downlink data transmission standards.
● The OBSW telecommand / telemetry, (TC/TM), packet management in the
OBSW service architecture must comply with the customer's baseline such as
the ESA “Packet Utilization Standard”, (PUS).
● The S/C mission operations concept has to be elaborated concerning ground
station visibilities, the utilized ground station network, link budgets and
operational timeline commanding from ground.
● Furthermore the operations concept must support
◊ control of all nominal platform and payload functions from ground,
◊ control of all FDIR and recovery operations from ground and
◊ the handling of OBSW updates, mission extension functions and software
patches from ground.
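The packet management cited above rests on the CCSDS space packet format underlying the PUS services. The following Python sketch parses only the 6-byte CCSDS primary header; the example byte values are invented, and the PUS data field header (service type / subtype) is omitted for brevity.

```python
import struct

def parse_ccsds_primary_header(raw: bytes):
    """Parse the 6-byte CCSDS space packet primary header, the
    transport layer underneath PUS telecommand/telemetry packets."""
    word0, word1, length = struct.unpack(">HHH", raw[:6])
    return {
        "version": word0 >> 13,
        "type": (word0 >> 12) & 0x1,      # 1 = telecommand, 0 = telemetry
        "sec_hdr_flag": (word0 >> 11) & 0x1,
        "apid": word0 & 0x7FF,
        "seq_flags": word1 >> 14,
        "seq_count": word1 & 0x3FFF,
        "data_length": length,             # data field length minus one
    }

# TC packet to APID 0x123, sequence count 7, 10-byte data field:
hdr = parse_ccsds_primary_header(bytes([0x19, 0x23, 0xC0, 0x07, 0x00, 0x09]))
assert hdr["apid"] == 0x123 and hdr["type"] == 1 and hdr["seq_count"] == 7
```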
The detailed design requirements for onboard software, onboard computers and
spacecraft operations result from the mission analysis performed and the selected
spacecraft design concept.

1.2 Onboard Computers and Data Links

Figure 1.1: Modular satellite and its onboard computers. © Astrium


Satellite platform control is usually performed via a bi-directional telecommand (TC) /
telemetry (TM) radio data link in S-band (2.0 to 2.2 GHz). Science TM downlink (uni-
directional) is usually performed via X-band (7.25 to 7.75 GHz). For both links usually
the same data protocol standards are applied.
On older satellites or space probes the onboard computer, (OBC), exclusively
controls the S/C platform while a dedicated payload management computer, (PMC),
operates the payload instruments. On newer spacecraft mostly one single OBC
controls both platform as well as instruments. Usually in addition a so-called “Mass
Memory and Formatting Unit”, (MMFU), is on board for storage of both
housekeeping and science telemetry. In case the MMFU is integrated into the OBC,
such computers are often called “Control and Data Management Unit“, (CDMU).

Figure 1.2: Satellite block-diagram with central CDMU. © Astrium GmbH

2 Mission / Spacecraft Analysis and Design

Rosetta approach to Steins © ESA

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012

2.1 Phases and Tasks in Spacecraft Development

The following figure shows the phase breakdown of spacecraft development,
together with the main tasks to be performed within each phase. Figure 2.2
additionally depicts the prescribed review milestones according to ECSS-M-30A.
Phase 0/A – Evaluation of mission and compliant payload design solutions:
● Definition of mission objectives and constraints
● Definition of a mission baseline and alternatives / variants
● Analysis of minimum requirements
● Documentation

Phase B – Conceptualization of mission, payload and spacecraft design:
● Payload requirements analysis
● Definition of alternative payload concepts
● Analysis of resulting spacecraft / orbit / trajectory requirements and constraints
● Standardized documentation

Phase C – Design refinement, design verification:
● System design refinement and design verification
● Development and verification of system and equipment specifications
● Functional algorithm design and performance verification
● Design support regarding interfaces and budgets

Phase D – Production, assembly, integration and test:
● Subcontracting of component manufacture
● Detailed design of components and system layout
● EGSE development and test
● Onboard software development and verification
● Development and validation of test procedures
● Unit and subsystem tests
● Software verification
● System integration and tests
● Validation regarding operational and functional performance
● Development and verification of flight procedures

Phase E – Spacecraft operations:
● Ground segment validation
● Operator training
● Launch
● In orbit commissioning
● Payload calibration
● Performance evaluation
● Prime contractor provides trouble shooting support for spacecraft

Figure 2.1: Tasks in Spacecraft Development Phases.

● MDR Mission Definition Review
● PRR Preliminary Requirements Review
● SRR System Requirements Review
● PDR Preliminary Design Review
● CDR Critical Design Review
● QR Qualification Review
● FAR Flight Acceptance Review

Figure 2.2: Spacecraft development phases and reviews. Source © ECSS-M30A



Mission analysis is already performed in the early phases 0/A of a project. From
these analysis phases result the requirements towards the space and ground
segment of the mission, which are further refined in phase B up to the PDR review.
The system design – also concerning OBCs, OBSW and operations concept – starts
after SRR. Thus over phases A-C up to CDR the following elements must be defined:
● S/C payloads and their functions
● S/C orbit / trajectories / maneuvers
● S/C operational modes
● Required S/C AOCS and platform subsystems
● Used onboard equipment and according design
● Ground / space link equipment
● Onboard functions for system and equipment monitoring and control
● Autonomous functions – e.g. for the “Launch and Early Orbit Phase”, (LEOP),
timeline execution
● FDIR functions, Safe Mode handling etc.
● Test functions
● Identification of functions to be realized in hardware or in software, respectively
All these are essential drivers for OBC and OBSW design, the spacecraft's top level
and subsystem design as well as for the spacecraft operations concept.

2.2 Phase A – Mission Analysis

Mission analysis serves for determining the optimum orbit w.r.t.
● payload mission product quality,
● required target revisit times,
● possible ground station contacts for mission product downlinks and
ground servicing.
Resulting from these are requirements towards
● mission product data storage aboard,
● onboard timelines / autonomy,
● data transmission link budgets.
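A link budget assessment can be sketched numerically: the received power follows from the transmit EIRP, the receive antenna gain and the free-space path loss. The Python fragment below uses the standard path loss formula; all numeric values are purely illustrative, not taken from a real mission.

```python
import math

def free_space_path_loss_db(distance_km, frequency_mhz):
    """Free-space path loss in dB (standard formula with distance
    in km and frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.45

def received_power_dbw(eirp_dbw, rx_gain_db, distance_km, frequency_mhz,
                       misc_losses_db=0.0):
    """Received power: EIRP plus receive gain minus path and misc losses."""
    return (eirp_dbw + rx_gain_db
            - free_space_path_loss_db(distance_km, frequency_mhz)
            - misc_losses_db)

# S-band downlink over a 2000 km slant range at 2200 MHz (illustrative):
fspl = free_space_path_loss_db(2000.0, 2200.0)
assert 165.0 < fspl < 166.0
```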

From this elementary assessment follows the definition of
● characteristics of payload instruments,
● operational orbit and LEOP orbit / trajectory conceptualization,
● S/C geometrical concept:
◊ body mounted solar array, (SA), deployable SA, deployable antennas,
◊ deployable booms,
◊ etc.

Figure 2.3: Example: LEOP orbit ground tracks and station visibility. © Astrium GmbH

Next follows the conceptual requirements definition and technology selection for the
main functional components such as
● AOCS subsystem sensors / actuators,
● power subsystem equipment,
● thermal subsystem equipment,
● data handling subsystem equipment.
And finally come the first definitions on
● elementary PL modes,
● elementary S/C modes,
● plus non functional design data such as budgets (mass, power).
The following is the first of four consecutive tables restating and sketching out, for
each development phase in turn, the subsequently growing level of design detail.

Table 2.1: Phase A design perimeter.

2.3 Phase B – Spacecraft Design Definition

Phase B serves as the first complete design definition on system level. This includes a
number of detailed analyses in various fields. Without claiming completeness, the
most prominent ones shall be cited including their subtasks. One is the
refinement of the orbit definition, which includes
● nominal operations orbit,
● transfer orbits / trajectories including LEOP trajectories,
● orbit control maneuvers and
● de-orbiting / re-orbiting after end of life.
Closely associated with the orbits, maneuvers and trajectories is the definition of the
spacecraft's operational modes in nominal and failure conditions. Figure 2.4 depicts
an example of a spacecraft level mode diagram. It includes notation of possible
transitions between spacecraft modes, identification of the respective transition
triggers, and the commanding required to invoke the according mode transition. At
this level detailed telecommands are obviously not yet defined. However, these
identified modes are already of relevance as they are to be controlled later by the
onboard software.
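The mode diagram logic can later be captured in software as a transition table which the OBSW consults before executing a commanded mode change. The following Python sketch is a simplified illustration with invented mode names; real missions define their own mode sets and transitions.

```python
# Hypothetical spacecraft modes and allowed transitions:
ALLOWED_TRANSITIONS = {
    ("LEOP", "NOMINAL"),
    ("NOMINAL", "ORBIT_CONTROL"),
    ("ORBIT_CONTROL", "NOMINAL"),
    ("NOMINAL", "SAFE"),
    ("ORBIT_CONTROL", "SAFE"),
    ("SAFE", "NOMINAL"),
}

def request_transition(current, target):
    """Reject telecommanded transitions the mode diagram does not allow."""
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

# A transition into safe mode is allowed from nominal mode:
assert request_transition("NOMINAL", "SAFE") == "SAFE"
```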
Figure 2.4: Satellite modes and transitions. © Astrium GmbH

The next step of design refinement in phase B concerns the elaboration of a
complete satellite product tree with all main physical and functional elements, i.e.
including onboard software as a product tree item and possibly any software for
satellite instruments to be developed or software for subsystem controllers. Figure
2.5 shows an example excerpt from such a product tree at the phase B development
stage.

Figure 2.5: Phase B product tree example. © Astrium GmbH



Next after the completion of the spacecraft product tree comes the identification of
the individual types of equipment to be used for the mission – i.e. the selection to
use star tracker X from supplier Y. In the ideal process this selection is foreseen to
be made already at the end of phase B. In real projects however the situation may
arise that certain selected equipment has not yet reached the required qualification
level. In such cases multiple alternative solutions must be kept under consideration.
For those units where dedicated equipment already could be selected, the
equipment modes, transitions, telecommands and telemetry automatically become
available via the supplier documentation.

Figure 2.6: Equipment mode diagram example (PCDU::BatteryBypass_LogicalOperation).
Another step in phase B is a first allocation of such equipment operational modes to
the nominal and non-nominal spacecraft modes respectively. This identifies mode
statuses for the diverse equipment to be switched by the OBSW during spacecraft
mode transitions plus possible unit A/B redundancy configurations.

Figure 2.7: Equipment operational modes versus spacecraft modes. © Astrium GmbH
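Such an allocation can be represented as a simple lookup table from spacecraft mode to target equipment modes, including the A/B redundancy selection. The Python sketch below uses invented unit and mode names purely for illustration.

```python
# Hypothetical allocation table: for each spacecraft mode, the target
# operational mode of each unit (here only the A-side units are used):
MODE_TABLE = {
    "SAFE":    {"STR_A": "STANDBY",  "RW_A": "OFF",  "PAYLOAD": "OFF"},
    "NOMINAL": {"STR_A": "TRACKING", "RW_A": "SPIN", "PAYLOAD": "ON"},
}

def equipment_commands(sc_mode):
    """List the (unit, mode) switch commands the OBSW would issue
    when entering the given spacecraft mode."""
    return sorted(MODE_TABLE[sc_mode].items())

# Entering nominal mode switches the payload on:
assert ("PAYLOAD", "ON") in equipment_commands("NOMINAL")
```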

With this information becoming available, a first set of variables – so-called data
pools – for the OBSW can be defined, namely
● variables to be managed via spacecraft telecommands and telemetry,
● equipment onboard command and telemetry parameters,
● and the complementary set of data bus interface variables to be managed.
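A data pool can be pictured as a set of named, typed variables shared between TC/TM handling, bus I/O and the control algorithms. The following Python sketch is strongly simplified and the variable name is invented; it only illustrates the idea of declared, type-checked pool entries.

```python
# Minimal data pool sketch: named, typed variables with enforced types.
class DataPool:
    def __init__(self):
        self._vars = {}

    def declare(self, name, typ, initial):
        """Register a pool variable with its type and initial value."""
        self._vars[name] = (typ, typ(initial))

    def set(self, name, value):
        typ, _ = self._vars[name]
        self._vars[name] = (typ, typ(value))   # coerce to declared type

    def get(self, name):
        return self._vars[name][1]

pool = DataPool()
pool.declare("battery_voltage", float, 0.0)   # hypothetical TM parameter
pool.set("battery_voltage", 28.1)
assert pool.get("battery_voltage") == 28.1
```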

Table 2.2: Phase B design perimeter.

In phase B of the S/C development the OBSW architectural design already starts,
and the subsequent stages are defined incrementally since OBSW is usually
developed in a stepwise approach. Of the large number of design refinements
performed in the next phase C, only those shall be followed further here which
concern the onboard computers, the software and the S/C operations from ground.

2.4 Phase C – Spacecraft Design Refinement

The first step in phase C is the freeze of the product tree and the completion of the
supplier selection for onboard equipment. These final decisions then allow
● the completion of interface definitions between onboard equipment (hardware,
signal types / levels and data protocols),
● the design consolidation for interfaces between OBC and onboard equipment
◊ either implemented via data buses or
◊ as low level line interfaces via a so-called “Remote Interface Unit”, (RIU)
connected to the core OBC.1
● Furthermore the design for so-called “High Priority Command”, (HPC),
interfaces can be finalized. Such HPC lines are commandable from ground
even when the OBSW has problems or is down for emergency reconfiguration.
● And with the consolidation of the electrical and data handling design via RIU
finally the onboard software variable sets (“data pools”) can be refined for
◊ ground/space TC/TM,
◊ for the core OBC,
◊ for data handled via RIU and
◊ for TC/TM data of onboard equipment like sensors / actuators /
instruments.

Table 2.3: Phase C design perimeter.

1 Such a RIU in most cases is connected via a data bus to the OBC and provides all required types of low level
interfaces like analog, serial, bi-level, pulse for control of simple equipment like heaters, simple sensors etc.
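The channel types offered by such a RIU can be pictured as a map from channel name to interface type and direction. The Python sketch below is a simplified illustration with invented channel names; real RIUs define their channel maps in the equipment ICDs.

```python
# Sketch of a RIU channel map: low-level interface types offered to
# the OBC for simple equipment (channel names are illustrative).
RIU_CHANNELS = {
    "HEATER_7":      ("bi-level", "output"),
    "THERMISTOR_12": ("analog",   "input"),
    "VALVE_2":       ("pulse",    "output"),
}

def acquire(channel):
    """Only input channels can be acquired; output channels are
    commanded, not read."""
    kind, direction = RIU_CHANNELS[channel]
    if direction != "input":
        raise ValueError(f"{channel} is an output channel")
    return kind

# A thermistor reading comes in over an analog input channel:
assert acquire("THERMISTOR_12") == "analog"
```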

After phase C the following design information has been collected:
● Mission concept including orbit, transfer orbits and maneuvers
● Spacecraft product tree
● Spacecraft budgets
● Spacecraft modes and transitions
● Selected equipment types from dedicated suppliers
● Allocation of equipment modes to spacecraft modes
● Equipment modes, interface types
● OBC equipment bus interfaces
● OBC to RIU interfaces
● RIU to equipment interfaces
● High priority command interfaces
● Data pool definitions for
◊ ground / space telecommand / telemetry,
◊ onboard communication and
◊ OBC internal onboard software data pool for OBC internal algorithms

During phase C significant design input for the OBSW is thus consolidated, and the
OBSW development proceeds to detailed design and coding as well as verification
of first versions. The detailed roadmap is project specific.

2.5 Phase D – Spacecraft Flight Model Production

In phase C the design of the spacecraft was completed and Engineering Models of
the diverse onboard equipment (including instruments and payloads) were
developed and qualified. Phase D thereafter is devoted to the production of the S/C
Flight Model. At the beginning of this phase the procurement of all flight models of
the required equipment and of the spacecraft structure and flight harness is
performed by the S/C prime contractor. During the assembly, integration and test,
(AIT), program these are subsequently assembled.

2.5.1 Launcher Selection

Another important step at the beginning of phase D, after project CDR, is the final
selection of the launcher, since for most conventional Earth Observation and
science satellite missions multiple launcher options exist. During the previous design
phases the S/C design has deliberately been formulated for compatibility with the
2-3 most likely carriers. The primary selection of a potential launcher, which is
performed during phase B, already evaluates parameters like
● mass to orbit,
● suitability for the target orbit, depending on inclination, escape velocity and
launcher upper stage reignition requirements,
● overall launcher ΔV.
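The launcher ΔV assessment rests on the Tsiolkovsky rocket equation, Δv = Isp · g0 · ln(m0/mf). The following Python fragment evaluates it for an invented upper stage; the numbers are purely illustrative and not taken from a real launcher.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_delta_v(isp_s, m0_kg, mf_kg):
    """Tsiolkovsky rocket equation: delta-v of one stage, from
    specific impulse and the wet/dry mass ratio."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Illustrative upper stage: Isp 320 s, 12 t wet mass,
# 3 t burnout mass including payload:
dv = stage_delta_v(320.0, 12000.0, 3000.0)
assert 4300 < dv < 4400   # roughly 4.35 km/s
```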
The final selection in phase D then is mainly driven by launch slot availability, cost
and the status of launcher qualification for new types. The following figures show a
typical example of competing launcher systems for Earth Observation satellites of
the 1000 kg class at an orbit altitude of approximately 700 km.

Figure 2.8: Rokot launcher and launch site Plesetsk (62.70° N, 40.35° E). © DLR

Figure 2.9: VEGA launcher and launch site Kourou (5.23° N, 52.79° W). © ESA and DLR

With the final selection of the launcher a number of operational edge conditions
are already implicitly frozen, namely the required interfaces between operations
center and launch site, the first ground contact times and some required antenna
stations. This directly leads over to the topic of engineering the launch and early
orbit phase in detail.

2.5.2 Launch and Early Orbit Phase Engineering

Launch and early orbit phase engineering implies the detailed development of the
automated sequences on board the satellite, starting from separation detection.
These include
● the OBC taking over control of the S/C after being deployed by the launcher's
upper stage,
● automatic position and attitude / rotational rate detection,
● automated rate damping,
● automatic deployment (antennas and solar panels),
● up to the establishment of ground station contact.
Such sequences are subject to tests in the S/C assembly phase prior to launch and
will be treated in more detail in part IV of this book.
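Conceptually such an automated sequence is an ordered list of steps executed after separation detection, with a defined reaction on the first failing step. The following Python sketch is a strong simplification with invented step names; onboard, each step would be a monitored procedure with FDIR backing.

```python
# Sketch of the onboard LEOP sequence as an ordered list of steps,
# executed once separation from the upper stage is detected.
LEOP_STEPS = [
    "take_over_control",
    "determine_attitude_and_rates",
    "damp_rates",
    "deploy_antennas_and_solar_panels",
    "acquire_ground_contact",
]

def run_leop(execute):
    """Run each step in order; stop and report the first failing step
    so FDIR can take over (strongly simplified)."""
    done = []
    for step in LEOP_STEPS:
        if not execute(step):
            return done, step
        done.append(step)
    return done, None

# Nominal case: every step succeeds, the sequence runs to completion:
done, failed = run_leop(lambda step: True)
assert failed is None and done[-1] == "acquire_ground_contact"
```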

Figure 2.10: Launch sequence and satellites deployment in orbit. © DLR

2.5.3 Onboard Software and Hardware Design Freeze


The final design freezes at the beginning of phase D after CDR comprise the definition of
● operationally used unit redundancies and redundancy configurations (not all
combinations are usually foreseen for operational use),
● the applied line interconnection redundancies,
● secondary functions like equipment mode commands, reconfiguration
functions, low level “Failure Detection, Isolation and Recovery”, (FDIR),
● final consolidation of data protocols and bus access sequences,
● finalization of the FDIR concept and last but not least
● functions for S/C AOCS in-orbit characterization and their complements for
payload instrument characterization.

Table 2.4: Phase D design perimeter.


Keep it simple:
As simple as possible,
but no simpler.
Albert Einstein

Part II

Onboard Computers

3 Historic Introduction to Onboard Computers

Apollo 11 launch © NASA

[The body of chapter 3 is corrupted in this copy and could not be recovered.]

4 Onboard Computer Main Elements

OBC Board © IRS / Aeroflex


Figure 1.2 and, in more detail, figure 4.3 show the embedding of an OBC into the
overall spacecraft avionics system. The OBC is connected to the transponders for
interfacing with the ground, has bus interfaces connecting it to intelligent spacecraft
control units and to the “Remote Interface Unit” (RIU) – in figure 4.3 called “I/O
Board” – and finally the OBC has interfaces to dedicated payload computers or
controllers. The RIU couples the OBC to all spacecraft equipment which does not
provide a high-level data bus interface. The following figures show examples of real
OBCs to give an impression of state-of-the-art machines (both ERC32 based). The
CryoSat OBC in figure 4.1 – cabled in the test bench – is an engineering model.

Figure 4.1: OBC Type: LABEN S.P.A. Leonardo.


Photo: Astrium

Figure 4.2: Galileo IOV OBC.


© RUAG Aerospace Sweden AB

  EF

8
DF?'A

 
 
,"$

ED 
 

8!  
 ,% 
+



 
 %
A   A %
 
 =
     


2%    
 


%
+   6 < 
(



       
  A           =       
 
%


 )     ?
 
 
  "(@"(
 A '"@ '"
 
  
 
 %
 
 +
 
 '

 " 



 +  


 <

 
 
 )?
 "@$
!"$#
  8
$

   


+ P                   


   " 


   
 A 


   "F- 9 *
 



 $
 ''H.F ''IE.
 
   

   

      ' "F...  
 
!>#
  

 2

2!2#
" 
 




 ("' 
 




 8'&(
8
%
 

 A  

 

 DD  DE  = 
 %
 A  
 )"$+ 
)
 A  
&
>

 
%)
A

 DD 
 
     


          
  
 
   +         
      
       


  !

 
 


  

  



#?
 +  = 7'&:





 +          7:  =  ' (   '  A  = 
 
   
"( 
 +    
      
   
      



    
  

  +  @  +     
    
 



  
 % 
 
   AC+    A$++"(
A   


8
?8
3.E.H3 
3A3
 3%
?-(
-..0 ( ( 
%

L"(  
/!,0  E5F /!,0  E5F

"$E -F
0( 0($ "$E -F
"0E5-F "0E 5-F

0 E8F +O +O 0E 8F

( E#F ( E #F
" E8F "E8F
+ * + *
0/ E#F 0/E #F
( (
0B1E8F 0B1E 8F
0B
 E 8F 0B
 E8F
0BE 8F 0BE8F
0B"
 E8F 0B"
 E 8F
0B>' E8F 0B>'  E8F
"+(
"+( /EEF "+
 "( E#F  "( E #F
/EEF "+
( ,3E #F ( ,3E#F

 E##F E##F


"+( "+(
 EHF EHF
/EEF "+ /EEF "+
,*E F ,* E F

0* E#F ( ( 


%
 ( ( 
%
 0*E #F

#H #H
8
DD?A )=A  
,"$(&( (A

8
DE?A )=@ 
,"$(&( (A



$ "
8
?$3 3
 %
[3\@
@A

?D (
-..0
EE

" !*>'"#
  
>
     ++"                   3%

EH 
 

  A 



)%
  


 "$!
 DE#= 
  E- + "$
   A '%
A
!
 

#+ "((A

 %

 <


     
     <
      A  
   


        
       7(
    
     :
!(# 
   




           
%  "(  
      
  
     
 
  %
  A + "(A
       
        <
   
  
  <
 %%   
   
 
% 

8!# (*

(A
<

 
%  
?

$*6
  
 %
 



%  A +
 *>
"   AC  
 AC
+  

7'"  :!'"#
7  
  '":! '"#
%  


8  '"
9A

  

T/-.V% 
  7

%"(  :!"(#

"(

 



 
 "( 
 



 

   



 
   
 
 
 ( 

 
 8  "(
 
 
 

 

B *6
7" (    :  !"(# 
      
         ) 
    

     
            

A
!) 

 
 
 
) # AC
 
  '" '" 

"( 
  %
 


 AC
"(
+ "(  

=
 '=
 

 7
"(:!"(# 

 
  

 $

   %


 

 
 EI

    !  %        


#   

 
<
  
  7 
 "(:  !"(# 
          "(
!"(#"( 
 

"(
'
"(     
 
 
       %
  

 




   
         
    
  

%     
 

  
 



   
 
 3 7
 %$:! $#+%  
 <

"(

3   ( 
   (     7    
     
:  +
   

 


    



  
  
   "F-9 *  

 
 
 F-
+    ( 


%

 D.

0

  @  

+ 
   
       
     
  (   
       

   

 % 

 (

%

  
 

 

 
 

 (
+


    

  
+   
    






  

   




   AC

  *6
8 A<

 3  +
 

 


  

 
 AC

   
J
   @


    =          

      @     
 
@
 <
    
AC+
 
 7  

> :!>#


)



  /F-8 %  

%
 
 
  A 

  

 A


  

 
AC   
 
   
  


8    =  <
 
 =
   
A  

 8 "(
A%
  "(! T//1VT/-.V#



 +    *
 6
8 %

 
  
    
   

 
 <


  
   +

 




 A
  =
 =
7 
E0 
 

8
$
:!8$# 
+ 8$

 
 I
 

 

         >       
         

%   
   
 %
          

  8            "(    

      

%    
      "(  
 

  9
  
 



 8  '"  =


$A
'"(   




 





8!5 "  $@>   %% 


 
 

8!5!  $/1  


 
 

8  
<
A )
    
 

 

?
 + 

%
   


<
A
 
 

   +




  
 %
  <




=
 
 A"$    

 
 "$

  % 
 + )   
+ A   

 

  7
 7 

( 

    <
      &'   
%     
   

 
 + 
  <
 
      %
     
       
 %
 
  

 + 
%
  +  
   %
 ! C
 7


7#   
J

 
       

+ 

 
)
 

 

8!5!# (%,"% 995$

+   $  

   93+3/EEFA  
      
   
  
 
   

     
      !   TEFV    TEDV#     

 

 
/1IF  $(
8 

 

 
 
A* '
33'
 
 E1

%

              
      7  
2
:!A2# 
(9 
7A:!A#F/7"+
:
!"+#+  (A
 
 
+

     
!
 DH# 


) 

 

 "+ 
  




%

 
+ 
  
  

)
=





%
 
 "+
 
%
 





A
(
+


A "+/ "+-

A
A

8
DH?'

93+3/EEFA
  

 
 


+  



 %
 
 
(A
A 


   
! 0.-F#

8
DI?
  
 

 ?C




C
     

 9
 /
! 
# -!
  #  D
*  @ F)

  9
  




H. 
 

+ D
%

 

   9
    
 
 
   
  +             9      %
     
 +   
%
 %
  
   /-%

8
%  
 A"+)
?
0
   6 + A/H
 

 
 /F-/H
+  "+
 

 

/H
"
,     6 +   A                 "
+
  +   "  +
         
    "  


  /F- A
$ 
    6 +   A            
    

F/


   ("+

 
 
 +
  
 
(6+ A 
 .
F/  


             +
            
 


 
+ "+


 "
   
 
+

  "+
0,0,, 6+ A" 
% 
    +
      +   

  +
        C
 /F- " 
%
"++  
%
+
  

C

8!5!5 
B

 C

   

 
 

          (  ! (# 
  
  
 

   


 
*(((G("K(! TEEVTEIV#
+    C
    
           3%  




  !9>#            !     /#         


! -# C

  /FEE  C
 
 


    
% 

 %
 - 
 
 
  
   




8
D0?

 

A* '
33'
 
 H/

+  
%


 %
%
!

8
"-F-"D-- #  C
% 


%
%
 
<) 
 

<
!-..
@#
(



  C



 
  -   F  D+  % 
 C
 
        
       
  %
  3   3
)
33


  

 
%

C
   

 % % C
 
"(' 



%


% 
 
 E
 C
 
) 



   %
  

 
          
  
    7


:  
 
    93/EEFA+ 





 DF 9 *FAA  
@ 
%   

+  C







 A    7

:
(   C
    
  
%   
     


   +       
    A
 

 %

  <
  
  C


      
)        
%
A  
   
        


8
D1?A
  , (

8!5!8  >%$

7(* :!(*(*3#

%


%
 
  
 
 
! $#

)
  %  "A &2
&   
 

     
 ! TE0VTH.V# (*

 %



   

%

 
  

(*    
H- 
 

+ (* 
 

 !

#3

 
(*7*"M:!*"M# 
  
)
 
(*


 
%


 

        
 

  
  
  <
     


!(* ("*0-E#
(*3         @      
 9  
&> 3( 
 
  *%

 &


8!8 ,   




+   
    
   
      
   
J


     7
%  
    
 :!#= TIHVT0/V
+
   
                    
  
79
 +$
:!9+$#

< !"8#
  
    
%



+ % AC  



  /   D *  *+  
A'  *    
* &
,(0 
+ *
E "  F
(

 (


+ 
'  ' 
' 
!2
 9 # !2
 9 #
' 
J
 +  ' 
J
 ,  
 
9  '  9    
  
 !
+' 
+' 

 +  
 
+
9  '  9   

3
)


 
+ 
+ +
'  


'  



  ?@
+ 9C +




9   9  <  
 


++8 ++8  % 


 
 (


 
9  9  !  
#
  
J

9+$ 9+$

' 
 9C ' 
 ' 
  
3

 
%


9+$
9  !-
# 9 


 
3
%
 

 

  



" 
% +


8
D/.?+ 
@ 
=+


+  HF

  %    


  
            %
    
%

 AC?
 "
+
 '  +
 2
 '

 + !
 #
 %  
  
+   +       
  

             
)

                7  (    $
:
!($#"8


+  + 
+ 
 3 
 
   

 
A 



 DF 7:


DD
 

  7A$++":
+  
 


8
  D//
 
  
  

A!) 
  #  

 
    
+ 
 
 + 
 


@ <

@
A
A' A

 

! 

  

>&3 >&3
 

     


 

 
A A
' A

 
2 A
 
( 

 

8

>" 


A
>) >
)
 
) &


(8" (8&

   

'  
"3

A2    ' 

<  %
 +


*"M39  *"M39



  

  

8
D//?' 


HD 
 

8!9  "


 & 

( )
  

J
 
( 
 
 
  !(    8'&(#  
        %
   
 

   

  
 AC

  
+
  

% 72
 '

 :!2'#
 
/



  
  
 !



  0I#+ 2'


   

        3   7  '   
  $
:  !'$# ( 
  
  0  A
       
    0  A
      





]]AC
]] ]]AC
]]
]]3\A
]]

@ <

C  


 

! 

  

>&3 >&3
 

     


 

 
A A
' A

 
2 '
   
$

 

8
!'$#
>" 


>) >
)
'
) &

L
(8" (8& 



$

    !'$#
'  
"3

A2    ' 

<  %
 +


*"M39  *"M39



  

  

8
D/-?2
 '

 


  
'$

 '$

+ '$!
 D/-# -EH

 

   0 

 2'  %   

 

-EH 
 0 

  
2'+ 

 '$ 

<
= 7'


$
:!'$# 

  
  
 A 
' 
$
 HE

     

    
 DF    7:    


     2'  

 '$
 
  


 '$
  


 2'
 <


 '$'$
 



 DD  '$


 7A$++":
 A    
 

 

 '$ 
 
 (  '


 

 @<
= '$

8!; 0
   & 

(A 

 '$
  %

 
  

 


  (             
      
 *!
#  
 '$"!# ( 



%


 
 

  
+   

  
  
 %     
%        



   
  = 
     (  = 

     A  +   
 
 %A  



 
  
  
 
!  #

8
D/F?" 


 
A 
 ,(


C   


 

 A
  A

      @   %       AC  8    
HH 
 

 

     




  A   


        %
    
   '$  = 
   %
  2'     

+  


       
%
  
A

 = 
  
 
7  

> :




 
0F  /F-  ( 
          

  
  
     




  
          A 
      !  '"

# + 
 AC   
AC 


 
  3 
A


A
!
@
#
(  
 %  
  %
      

  
   
  72
  '

 
+ :  !2'+#  
 
      
  +        
  
 
2'+



  
 00 
/F/.
'  A 


 

 
 %

 A" 
 %
     @  <
  =      
             =  
%
ACA 
 
2'+ 

8!< " 


 


A%
 
 

 
<

 
  

   
     
 
  
%     +         
7 :  
 AC+
  
    AC                  

"F-9 *"H...%


 
  
'8)  "F-9 * &*$ 

&A%
2% %


  C


    

  +            %
   
      
)  ) AC  + 
  
A%%

  
 
!$#


 
!#


 
 
 7+(&7
 ! TH-VTHFV#

    
 

J 
 7
+( 
&:!+(&#2%

  2C
 
!  
 
# 

 
7+(&7A 
C
 
 + 
 

@+(&
 

%

 3

 

 


 7
%7   


 
%
  HI

A  
 %
    + 

  )         
    

       9 *F  8+

 

0#5#'I, D #,, #,,  >>'0

///<98 />5,
&  0H "  'I, D #
B #
B  >
 "     #!-
(2 #8 $ #8 $ &   0( 
" "%

+ %

+

((& ($  C$

($  C$

($  $
 /+ 
(* C$' $   ' ( 
  $ 1 & 0, , ' ,  -' --

H'5#%*

0#5# B 
+ ' 5#% /+ CJ
0( ' 0 ( "0 (

8
D/D?$+H119 *F38+ 
,()&
(A

    


    AC  ) 
    %
  

  <




  
 
 %
+  


  
 
 C 
%
 
AC




  
7%
   :  !8# 
      
      AC


=   %








%
A
 8
  (
 
"$(& (  ( 8 
   
         !   C
#


               
          AC + 
AC
 <

  
 %
  
 
 
   


%AC
 
 8! THDV#
 +
77 

% 
%
 
   %  %
 

   
@
+ %
 
  AC"(

     A 
        +
  

   
  %
     

 
H0


8
D/E?%
  "F-A
>(0( >(0(
 0,0"D/  0,0"D/
+2( +2(
++(
9( 9(

+





$ " /EEF "+ /EEF "+
8
?8
3.E.H3 
3A3


   
!
DD#
  3%
?-(
-..0 ( ( 
%



 
% 
 -0>@E.> 
   A
L"(  

 

<



 E> # 
 %
-0>

A $
%
 !
DF#

@ 
%
(  < 
  



) 
) 
 7"$:!@# 78
9:
 <
%+  % % 

E.> 
+   
! 
 

A  
 
<

FF> 
!


"@$
!"$#A %
  %
%
@ %#

 A   
%
' H1



 
 <
 
 FF  E>
    


 

  <


%  
     '$  = %   %

A+

  %  "$
 % 
)

 


  
+ @

)
 
  




  C
<
 
 
   


  



8!A ,+  /1 

+ 

A    



 
!'A#



 
 
 
A @

   
%

   
"$  @ 

+ )

A 


=
 
 
 
 @
 


%

%   


 

 A 
   
 
%

  
     
       @      
  %    

 
 
 !


#
$      @  
  

  !   
     
    @
<
  


#     A          

  
  + 

  
           

           

%


+ %


  A   


 


  
  
     %     
  

      
    

 
 
 




  %            
  A  
          
  


  %   
 (    



     
@
 ) 
   '$    
 
           %

    
  
 
   @ )  A=

 @ ! 


  #






   A'A 
   



! )
 3D. ^#+ 
 
<
2

 
%
 
 A 

   



 
!'A#
    

'A
OBC Mechanical Design 71

5 OBC Mechanical Design

OBC board frame © IRS, Uni Stuttgart


The mechanical design of an OBC at first glance seems a rather simple task
compared to the electronics. However, it has to fulfill a number of non-trivial
requirements and must not be underestimated. Not only the interconnection between
OBC PCBs and OBC housing but the entire assembly of the OBC has to withstand
● sine vibration and shock loads during launch and
● permanent temperature cycles (and the resulting mechanical implications) in orbit.
First of all, the chips are mounted with appropriate solder connections onto the
boards. The most common designs used are “Surface Mounted Device” (SMD) or
“Ball Grid Array” (BGA) assemblies respectively.
Since an OBC, as discussed in the previous chapters, consists of multiple printed
circuit boards, the mechanical chassis architecture obviously has to follow this
concept. Therefore each OBC circuit board first of all is mounted into an aluminum
frame. An example is given in figure 5.1. Such a frame has to hold the PCB and the
according connectors. The connectors typically are connected to the board by short
flexible wiring to avoid mechanical loads on the solder points when the board
minimally vibrates due to launcher-induced mechanical loads. Besides the outer
mounting points of the board in the frame there might exist further intermediate
fixation points, which however must not interfere with electronic components nor with
circuits inside the multilayer PCB.

Figure 5.1: OBC CPU board example. © RUAG Aerospace Sweden AB

The entire group of several such frames has to be assembled into an overall OBC
housing which is additionally equipped with mounting stands that allow bolting it to
the S/C structure. The overall chassis design therefore becomes rather complex.
Figures 6.2 and 6.3 show an OBC flight model chassis from the front and rear sides.

Another important aspect of the OBC design is to assure tightness with respect to
electromagnetic emission. The chips in today’s OBCs are clocked at rather high
frequencies, and so are modern bus interfaces, e.g. the already cited SpaceWire.
Even if it is obvious that the data bus cabling from OBC to S/C equipment is shielded,
it has to be considered that the signal lines inside the OBC from connector to the
PCB are single wires (see figure above) – even if they are short. In principle they
induce electromagnetic emission effects. This is the reason why, in the example of
figure 5.1 above, the individual wiring groups from connector to PCB are placed in
dedicated frame “subcompartments”. Similar antenna effects can be induced by
longer lines between chips on a PCB – which should be avoided – and obviously also
by wiring between PCBs inside the OBC.
Even if the electronics inside the OBC do not affect each other, the overall OBC must
be electromagnetically tight against effects induced from the external environment
(such as from solar bursts etc.), which would otherwise directly affect the
aforementioned wiring from connectors to boards or cross-PCB lines.
For those reasons the final chassis, assembled from the multiple PCB frames, must
be a closed metal-on-metal construction, which is usually achieved by mounting the
outer plates with closely positioned screws, as nicely illustrated in the figure below
and in figure 6.3 in the next chapter.

Figure 5.2: OBC chassis example. © IRS, Uni Stuttgart


OBC Development 75

6 OBC Development

SMD Soldering © Aisart / Wikipedia


6.1 OBC Model Philosophy

OBC models differ from mission to mission. In the telecom satellite domain the
highest level of standardization can be achieved, since the platforms of such S/C
really are series products, equipped with more or fewer transponders and positioned
at different geostationary longitudes.
The contrary is true for Earth observation and science spacecraft. Here normally only
a single “Proto Flight Model” (PFM) is built, or a mini series is constructed, such as
ERS-1/2, MetOp 1-3, GRACE (constellation of 2 S/C) or SWARM (constellation of 3
S/C). For each mission the OBCs have to be adapted to a certain extent:
● At least the RIUs differ significantly between missions due to other
instrumentation on board and in most cases due to highly differing AOCS.
● But also OBC cores differ w.r.t. required authentication functions, performance
requirements, memory equipment, high priority line instrumentation (HPCs) as
well as firmware and boot SW setups.
Therefore for each science S/C and each mini or large S/C series, at least one OBC
has to be built. However, for an OBC which represents a design adaptation of a
previous one, the final “Flight Model” (FM) usually cannot be built directly. If a new
OBC furthermore has to implement entirely new technologies for the first time – like a
new data bus type such as SpaceWire, or even a new microprocessor generation –
this requires a complete set of prototypes to be implemented and tested prior to the FM.
In 1995 NASA released a 9-level classification for the definition of technology
maturity (cf. [65]) which – slightly adapted – is also applied by ESA to its projects
(cf. [66]). The “Technology Readiness Levels” are defined as follows, together with
the corresponding intermediate flight hardware prototypes to be implemented:

Table 6.1: TRL levels, their definition and OBC models (key features only).

TRL 9 – Actual system “flight proven” through successful mission operations.
  OBC model: Flight Model – FM
TRL 8 – Actual system completed and “flight qualified” through test and
  demonstration (ground or space).
  OBC model: Flight Model – FM
TRL 7 – System prototype demonstration in a space environment.
  OBC model: Proto Flight Model – PFM
TRL 6 – System / subsystem model or prototype demonstration in a relevant
  environment (ground or space).
  OBC model: Engineering Qualification Model – EQM: The standard of its
  components shall be the highest achievable within the schedule constraints, but
  using the same manufacturer, the same type and the same package² as for the FM.
TRL 5 – Component and / or breadboard validation in relevant environment.
  OBC model: Engineering Model – EM: The EM has to be fully representative of the
  FM except that a lower standard of electrical components may be used. All
  redundancy which will be in the flight standard model shall be provided in the EM
  unless otherwise agreed.
TRL 4 – Component and / or breadboard validation in laboratory environment.
  OBC model: Elegant Breadboard – EBB: The EBB is equipped with commercial
  grade components and the configuration is close to the flight model.
TRL 3 – Analytical and experimental critical function and / or characteristic
  proof-of-concept.
  OBC model: OBC Breadboard Model – BB: May still consist of multiple boards,
  lower standard components and non-representative packaging.
TRL 2 – Technology concept and application formulated.
  OBC model: PCB development boards.
TRL 1 – Basic principles observed and reported.

² Chip package.

The three most important stages with which a spacecraft engineer on the prime
contractor side might be confronted shall be briefly discussed here:

Development boards:
These boards serve for basic OBSW development tasks like
● adaptation of the operating system code,
● bus or other interface driver development,
● boot software development,
● algorithm performance verifications / optimizations
etc.
Depending on the supplier they are offered either as FPGA boards with the IP core
only or with the real target ASIC processor chip. Examples of such boards were
already presented in figures 3.30 and 3.31.
Development boards in most cases are equipped with additional RAM and interfaces
– such as a PCI bus interface – which ease code debugging and integration of the
board into a development computer. Figure 3.31 for example shows an OBSW
development board with a real target processor, diverse I/O interfaces and
connectors for piggy-back boards such as memory extension PCBs.

³ In rare cases, between TRL 6 and 7, an additional “Qualification Model” (QM) is built.

OBC breadboard models:


BBs or EBBs are already built around the target processor board and include the
OBC-internal RIU, as far as one is foreseen for the flight model. Power supply and
thermal control typically are not yet FM representative. Breadboards often provide
additional reset buttons, reconfiguration trigger buttons and activity status LEDs on
the front panel, as can be seen in figure 6.1.
Depending on the computer supplier's model philosophy, the I/O chips and supplier-
specific ICs – such as the CCSDS preprocessor, the reconfiguration logic and
others – are either already available as ASICs or are still implemented in FPGA
technology in the OBC breadboard models.
FPGA based controllers are not entirely representative w.r.t. timing for the OBSW.
IP cores in FPGAs however have the advantage of being reloadable and are still
modifiable at that stage of development. In OBC breadboard models not all chips are
necessarily based on radiation-hard circuitry yet.

Figure 6.1: Galileo IOV OBC Elegant Breadboard.


© RUAG Aerospace Sweden AB

PFM and further FMs:


The Proto Flight Model – and, in cases where more than one computer is built, the
Flight Models – are the final OBC implementations for a space mission. The PFM and
further FMs normally are identical except for their series number coded in PROM and
potentially available S/C authentication codes in a dedicated authentication PROM.
In case a PFM comprises any functionality achieved in FPGA technology, it has to be
burned into radiation-hard FPGA chips – in most cases based on antifuse
technology – such as the Actel® RT-AX® series. In such cases the IP core is also no
longer reloadable after burning.

Figure 6.2: Galileo IOV OBC PFM front side.


© RUAG Aerospace Sweden AB

Figure 6.3: Galileo IOV OBC PFM rear side.


© RUAG Aerospace Sweden AB

6.2 OBC Manufacturing Processes

During OBC development all hardware manufacturing – from PCB production
through soldering processes to assembly processes – is strictly quality controlled. A
large amount of hardware verification activities has to be performed. These cover
● analyses during the design phase,
● reviews of the design by the customer or the agency,
● inspections during board or assembly manufacturing processes
● and finally a large number of tests.

Figure 6.4: Manual SMD soldering. Courtesy © Aeroflex Colorado Springs

Tests are to be performed on various levels and cover the aspects of technical
electronics design, electronics performance, workmanship as well as environmental
compatibility. Therefore tests are to be performed
● at the digital processing level of the OBC boards,
● at analog I/O electrical level and signal quality level (particularly for RIU analog
interfaces),
● at thermal / mechanical level (thermal stress tests and mechanical load tests
against launcher loads),
● at radiation susceptibility level – such as x-ray dose testing –
● and finally concerning the boards’ electromagnetic compatibility (EMC).

Figure 6.5: Electronics inspection. © Jena-Optronik GmbH

Figure 6.6: Environmental test equipment. Courtesy © Aeroflex Colorado Springs
Special Onboard Computers 81

7 Special Onboard Computers

Mass memory units © Astrium GmbH


Inside a satellite the central OBC is usually not the only computer. Besides
computers in instruments, there are typical AOCS equipment components which
include considerable computational power. The most obvious components are
navigation receivers for GPS, Galileo and / or GLONASS. Another class of
equipment requiring significant CPU performance are star trackers. Modern star
trackers are equipped with their own ERC32 or even LEON processor for fast star
map identification and quaternion computation. These units, however, are very
specific electronic equipment.
A further class of electronic components, the “Mass Memory and Formatting Units”
(MMFU), also called “Solid State Recorders” (SSR), are essentially OBCs with
● extremely large storage memory areas,
● very performant data input channels from the payload side for science data
storage
● and fast data output channels to the science data transponders for downlink to
ground via X-band or Ka-band.
Memory is organized in memory banks, and management is performed by SW such
that even the failure of an entire bank does not lead to immediate data loss (cf. [71]).
Some recorders provide integrated data compression units; some suppliers offer
external separate units.

Figure 7.1: TerraSAR-X Solid State Recorder and memory board. © Astrium GmbH

SSRs by standard are built on SDRAM technology, which requires cyclic memory
refresh; thus at least their memory boards may not be power-cycled between
science data acquisition and downlink to ground. Therefore appropriate power
buffering electronics are required for the case where the S/C encounters a power
bus undervoltage condition or similar.
The latest recorder generations are based on non-volatile flash memory technology
(cf. [67]). Flash memory is a storage technology that is non-volatile in case of
power-off. It is used in the popular USB sticks and in camera and mobile phone
storage cards. It is a specific type of EEPROM that is programmed and erased
block-wise. Since stored data is non-volatile and since the devices are very compact
and have no moving parts (in contrast to the ancient tape drives on missions like
Voyager), flash memory is well suited for use in space. Due to the memory
persistence such SSRs are robust against onboard power undervoltage situations.
However, flash memory sustains only a limited number of program / erase cycles
(P/E cycles) before wear begins. High quality ground-based systems today reach
about 1 million P/E cycles and a practically unlimited number of read accesses. To
eliminate wear problems, flash memory storage devices – such as SSD disks for
commercial computers or SSRs for space – today comprise software which monitors
P/E cycles per block and statistically balances P/E accesses over the overall
memory bank.
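As an illustration, the statistical balancing just described can be reduced to a
“pick the least-worn usable block” policy. The sketch below is a simplified model
only – the bank size, the `erase_count` bookkeeping and the `select_block`
routine are invented for this example and do not represent the flight software of
any particular recorder:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_BLOCKS 8          /* illustrative bank size, not a real device */

typedef struct {
    uint32_t erase_count;     /* P/E cycles this block has seen so far */
    int      bad;             /* block marked unusable by bad-block mgmt */
} flash_block_t;

/* Pick the usable block with the fewest P/E cycles so that wear
 * is spread statistically over the whole memory bank. Returns the
 * block index, or -1 if every block is marked bad. */
static int select_block(const flash_block_t bank[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (bank[i].bad)
            continue;
        if (best < 0 || bank[i].erase_count < bank[best].erase_count)
            best = i;
    }
    return best;
}
```

With this policy each write lands on a least-worn block, so after many writes the
P/E counts of all good blocks stay within one cycle of each other, and blocks
flagged as bad are never selected.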
These wear prevention techniques – P/E cycle management by SW, plus bad block
identification and management via SW, plus data stream caching on the input side
and distribution to the various memory banks – however require considerable CPU
power. Therefore, as an example, the Astrium “Integrated Solid State Recorder”
(ISSR) is built on the basis of multiple LEON2 processors. So it becomes obvious
that these systems are real computers and much more than “external solid state
disk drives”.

Figure 7.2: Integrated Solid State Recorder. © Astrium GmbH
Perfection is achieved
only on the point of collapse.
C.N. Parkinson

Part III

Onboard Software
Onboard Software Static Architecture 87

8 Onboard Software Static Architecture


8.1 Onboard Software Functions

Already in the introduction to chapter 7 it was explained that more S/C onboard units
than just the central OBC are in fact computers. And obviously they contain and need
onboard software for operation. Besides the computers cited in chapter 7, quite a
significant number of microcontrollers are hidden in intelligent sensor and actuator
units, which include their own embedded software. Typical components providing
functions achieved in software are
● obviously the S/C platform central OBCs,
● instrument / payload control computers and payload data processors (image
compression units etc.),
● the aforementioned Memory Management and Formatting Units,
● Power Control and Distribution Units,
● and complex AOCS sensors and actuators such as
◊ the previously mentioned star trackers,
◊ GPS / Galileo / GLONASS receivers,
◊ other position sensors such as DORIS receivers,
◊ as well as fiber-optic gyros and
◊ intelligently controlled reaction wheels.

An example of onboard equipment driven by software is shown below, based on the
small university satellite already cited in figure 4.3.

Figure 8.1: Onboard equipment driven by software. © IRS, Uni Stuttgart


A more abstract view on S/C functions and their allocation in equipment and software
based functional blocks is depicted in figure 8.2 below, which also can be cross-
related with the S/C design information available in phase D (cf. table 2.4 on
page 18).

Figure 8.2: Spacecraft function allocation, control data and data links.

An OBSW of a S/C platform control OBC has to implement a number of functions,
namely:
● Telecommand processing for S/C commanding from ground
● Telemetry generation for S/C status monitoring by the ground station

● S/C control comprising of:


◊ Attitude and orbit control
◊ Power control
◊ Thermal control
◊ Payload instruments operation

On top of these, the functions for
● system status monitoring and
● failure detection, isolation and recovery
are implemented on various OBSW hierarchy levels. For understanding an OBSW
architecture implementation with its various functional modules and their interfaces,
a detailed discussion of the so-called static architecture will follow.
Besides this, the dynamics of the OBSW processes are to be analyzed and designed
in detail – the so-called dynamic architecture. This topic follows later in chapter 9.
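As a minimal illustration of the first of the listed functions, the sketch below routes
an incoming telecommand to the application function addressed by its service type.
The service IDs, the two-field packet layout and the handler actions are invented for
this example; real missions use full packet structures and standardized service
definitions rather than this toy dispatch:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal telecommand with a service type, loosely modeled on
 * packet-based TC handling; both fields are illustrative only. */
typedef struct {
    uint8_t service;   /* which onboard function is addressed */
    uint8_t data;      /* single payload byte for the sketch  */
} telecommand_t;

enum { SVC_MODE = 1, SVC_MONITOR = 2 };   /* invented service IDs */

static uint8_t sc_mode;          /* current spacecraft mode          */
static uint8_t monitor_enabled;  /* parameter monitoring on / off    */

/* Route an incoming TC to the application function responsible for
 * it; unknown services are rejected. Returns 0 on success. */
static int dispatch_tc(const telecommand_t *tc)
{
    switch (tc->service) {
    case SVC_MODE:    sc_mode = tc->data;         return 0;
    case SVC_MONITOR: monitor_enabled = tc->data; return 0;
    default:          return -1; /* would raise an onboard event */
    }
}
```

The point of the sketch is the separation of concerns: the TC/TM handling layer
only decodes and routes, while the mode and monitoring logic lives in the
application layer behind the dispatch.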

Figure 8.3: Top level building blocks of an OBSW.

The OBSW static architecture can be broken down into the following main elements:
● Operating system and drivers layer
● OBSW data pool
● Application layer
● OBSW interaction with ground control
● Service-based ground / space architecture
● Telecommand routing
● Telemetry downlink and channel multiplexing
● High priority command handling
● Service interface stub (SIF)
● Failure detection, isolation and recovery

These shall be treated in the following sections. An object-oriented software
implementation concept shall be assumed.
Operating System and Drivers Layer 91

8.2 Operating System and Drivers Layer

The following figure is the first of a series depicting step by step the static
architecture of an OBSW, from the lowest level (closest to processor and
electronics) up to increasingly high-level, system control oriented functional
blocks. Figure 8.4 depicts the operating system level.

[Figure 8.4 shows the lowest layer – boot loader, RTOS and I/O line drivers – with
interfaces to AOCS, platform or payload equipment (or an RIU) and to the
transponder.]
Figure 8.4: Operating system and drivers layer.

The OBC boot loader4 is physically placed in PROM or EEPROM – usually at
physical address 0x0. The boot loader contains all necessary instructions to
initialize the processor so that it can boot the Realtime Operating System,
(RTOS), and the OBSW layers built on top. In most cases this includes initially
copying the OBSW image from PROM to RAM, verifying a memory image checksum, and
starting OBSW execution at a specified RAM address (usually 0x0). The realtime
operating system binary is booted first and the non-operating-system part of the
OBSW is started on top of the RTOS thereafter.
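The copy–verify–start sequence can be sketched as follows. This is a minimal Python sketch for illustration only; a real boot loader runs from PROM and is written in assembly / C for the target processor, and all function and variable names here are assumptions:

```python
# Sketch of the boot loader's copy-and-verify sequence (illustrative only).
import zlib

def boot(prom_image: bytes, stored_crc: int) -> bytes:
    """Copy the OBSW image from (simulated) PROM to RAM and verify it."""
    ram = bytearray(prom_image)            # step 1: copy image from PROM to RAM
    if zlib.crc32(bytes(ram)) != stored_crc:
        # step 2: checksum mismatch -> do not start the corrupted image
        raise RuntimeError("OBSW image checksum verification failed")
    return bytes(ram)                      # step 3: jump to the RAM entry point

image = b"\x7fELF-obsw-image"
assert boot(image, zlib.crc32(image)) == image
```

On checksum failure a real boot loader would typically stay in the boot loader, try a redundant image, or signal the failure, rather than start a corrupted OBSW.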
Drivers for I/O channels from the OBSW to hardware interface lines in and out of
the computer are comparable to IF drivers on conventional PCs (like UART, USB
etc.). Depending on the OBC chip intelligence such drivers either
● are implemented entirely in software
● or are partly realized in the “System on Chip” hardware already.
The level of software implementation also depends
● on the type of electric interface to be driven and
● on the interface data protocol complexity.
4
The boot loader is sometimes imprecisely called the “Basic Input / Output System”, (BIOS), with reference to its
counterpart in ground based PCs.

In System on Chip implementations like the LEON 3 and LEON 4 processors,
interface electronics and lower driver layers such as those for SpaceWire are
realized directly in hardware (cf. figure 3.29). Only the upper control function
calls for the RMAP protocol are implemented as a software stack – and these are
already part of the compatible RTOS, which means they do not need to be
hand-coded by OBSW developers.
A typical exception are MIL bus interfaces: MIL-STD-1553 stacks still require a
certain implementation effort from the OBSW developer. In modern S/C, higher
protocol layers are implemented manually on top of the OSI layers 1, 2 and 4
defined by the bus standard itself, e.g. to allow comfortable direct access to
equipment from ground. An example is to run the ESA “Packet Utilization
Standard” for TC/TM packets (which will be discussed in chapter 8.6) over the
MIL bus.

8.3 Equipment Handlers and OBSW Data Pool

Above the RTOS and the IF drivers reside the equipment handlers. They perform the
command writing from the OBSW via IF drivers to the connected S/C equipment (e.g.
to AOCS actuators, payloads, the PCDU). Furthermore they perform cyclic or
on-request onboard equipment telemetry acquisition – which is not to be confused
with space to ground telemetry.
[Figure 8.5 adds the equipment handlers (e.g. STR handler, RWL handler) between
the I/O line drivers and the OBSW data pool (OBSW DP), into which they write e.g.
STR1/STR2 quaternions and temperatures and RWL 1-4 data.]
Figure 8.5: Equipment handlers and OBSW Data Pool.

The implementation concept for the equipment handlers may be one handler per
equipment type, e.g. one handler for a set of four reaction wheels, (RWL), one
for a set of three magnetotorquers, (MTQ), etc. In that case the handler has to
serve all equipment instances (e.g. all 4 RWLs at a time). The alternative is one
handler instance per equipment unit, instantiated from an equipment specific
class: one handler class for RWLs, one for MTQs etc., and each of the 4 RWLs then
is served by its own instance of the corresponding handler class.
Equipment handlers on the lower end are connected to the signal line drivers, to
which they have to supply the equipment command data in a driver compatible
format – which is not necessarily the format used on the outgoing physical signal
line, data connection or data bus. The handlers pick the 'to be commanded'
parameter values from a central data pool in the OBSW, the “Onboard Software Data
Pool”, (OBSW-DP). Vice versa, when acquiring onboard telemetry from connected
equipment, the equipment handler has to pick the data from the according SW
interface of the RTOS IF driver and place them into the corresponding variable
slots in the OBSW-DP. In both communication directions data format conversions
are usually necessary. And – as will be treated later – certain data consistency
checks on acquired TM are to be applied during cyclic operation.
It should be noted that one equipment handler usually has to access multiple
physical signal line drivers. Using again the example of a modern reaction wheel
such an equipment handler will have to command wheel torque and to acquire wheel
speed telemetry typically via a data bus such as MIL bus. The wheel temperature
however will be acquired via an analog thermistor line. The wheel drive electronics in
most cases will have additional discrete status command lines besides the MIL bus.
As a result the data bus is a typical interface which is shared by multiple equipment
handlers – all those controlling bus connected equipment. The equipment handler to
I/O-line driver ratio is an N-to-M relation and access conflicts have to be avoided by a
well designed time sliced access approach. Therefore the equipment handlers will
again be addressed later when discussing the OBSW dynamic architecture in
chapter 9.
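The reaction wheel example above can be sketched as follows. This is an illustrative Python sketch with assumed class, signal and scaling names (flight code would be C/C++): one handler instance per wheel shares the MIL bus driver with other handlers while owning its dedicated thermistor line, and converts between data pool values and driver formats in both directions.

```python
# Illustrative sketch: one RWL handler instance per wheel (all names assumed).
class MilBusDriver:
    """Stand-in for the shared MIL-STD-1553 bus driver (N-to-M with handlers)."""
    def __init__(self): self.words = {}
    def write(self, rt, word): self.words[rt] = word
    def read(self, rt): return self.words.get(rt, 0)

class ThermistorLine:
    """Stand-in for a dedicated analog acquisition channel."""
    def read_raw(self): return 512          # raw ADC counts

class RwlHandler:
    def __init__(self, wheel_id, bus, therm, dp):
        self.wheel_id, self.bus, self.therm, self.dp = wheel_id, bus, therm, dp

    def command(self):
        # pick the commanded torque from the data pool, convert to driver format
        raw = int(self.dp[f"RWL{self.wheel_id}_TORQUE_CMD_NM"] * 1000)
        self.bus.write(rt=self.wheel_id, word=raw)

    def acquire_tm(self):
        # cyclic onboard TM acquisition back into the data pool, with conversion
        self.dp[f"RWL{self.wheel_id}_SPEED_RPM"] = self.bus.read(rt=self.wheel_id)
        self.dp[f"RWL{self.wheel_id}_TEMP_C"] = self.therm.read_raw() * 0.0625

dp = {"RWL1_TORQUE_CMD_NM": 0.02}
h = RwlHandler(1, MilBusDriver(), ThermistorLine(), dp)
h.command(); h.acquire_tm()
assert dp["RWL1_SPEED_RPM"] == 20 and dp["RWL1_TEMP_C"] == 32.0
```

Note that the time-sliced arbitration of the shared bus between handlers is deliberately omitted here; it belongs to the dynamic architecture of chapter 9.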
The cited OBSW data pool can best be understood as a “vector” containing – in
binary format, not in engineering units – all operational S/C variables which are
processed inside the OBSW, such as (non exhaustively):
● System time
● Sensor data like star tracker quaternions
● Actuator data like RWL speed data or commanded RWL torque data
● Equipment temperatures
● Currents and voltages
● S/C position and velocity
● S/C attitude and rotational rates
● Payload instrument statuses
It must be pointed out explicitly that the OBSW-DP variable names or IDs should
preferably be identical to the variable names in the OBSW code and to the
variable names used in telecommand or telemetry packets.
Application functions like the AOCS, which are treated in the following sections,
use such variables of the OBSW-DP, and it must be avoided that they base their
computations on invalid or outdated values of OBSW-DP variables. Therefore it is
essential that each variable can be flagged as outdated or as invalid if e.g. an
equipment does not respond or shows other symptoms detectable by the equipment
handler. The specific reaction to values being flagged as invalid or outdated is
up to the application which in the normal case needs the data as input.
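The validity and age flagging of data pool variables can be sketched as follows (a hedged Python sketch; the class and method names are assumptions, and a real OBSW-DP slot would additionally store the raw binary representation):

```python
# Sketch of an OBSW-DP slot carrying validity and age flags (names assumed).
class DpEntry:
    def __init__(self, max_age_s):
        self.value, self.t_update, self.valid = None, None, False
        self.max_age_s = max_age_s

    def update(self, value, now):
        self.value, self.t_update, self.valid = value, now, True

    def invalidate(self):
        # set by the equipment handler, e.g. when the equipment does not respond
        self.valid = False

    def usable(self, now):
        # applications must only consume values which are valid and not outdated
        return (self.valid and self.t_update is not None
                and now - self.t_update <= self.max_age_s)

q = DpEntry(max_age_s=1.0)
q.update([0.0, 0.0, 0.0, 1.0], now=100.0)   # e.g. a star tracker quaternion
assert q.usable(now=100.5)
assert not q.usable(now=102.0)              # outdated
q.update([0.0, 0.0, 0.0, 1.0], now=102.0); q.invalidate()
assert not q.usable(now=102.1)              # flagged invalid by the handler
```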
It also must be noted that besides the OBSW-DP, which contains the continuously
updated status and performance variables of the spacecraft, there exists a
persistent data memory area – the safeguard memory already mentioned in
chapter 4.2. It contains the S/C redundancy settings, equipment health status
parameters etc. All these are parsed by the OBSW at boot time for proper
configuration, and the area is cyclically updated during operation. This
“Spacecraft Configuration Vector”, (SCV), its content and its use are explained
in the operations chapter 13.2.

8.4 Application Layer

On top of the OBSW-DP reside the applications which control the spacecraft's
● payload instruments,
● AOCS,
● power subsystem and
● thermal control.
The applications read input data from the OBSW-DP and place computed output
back into other variables of the OBSW-DP. The applications include the according
controller numerics for AOCS, power and thermal control. Each application in
principle has access to any OBSW-DP variable. Some variables, such as the onboard
time, (OBT), from the system clock, are shared by all applications.

[Figure 8.6 adds the application layer – the AOCS, payload control, power control
and thermal control applications – on top of the OBSW data pool, the equipment
handlers (PL, STR, RWL, PCDU, thermal, …), the boot loader, RTOS and I/O line
drivers.]

Figure 8.6: OBSW Control application layer.



Each application can have a different update cycle time, which will be discussed
again in chapter 9 on the OBSW dynamic architecture. Each application also
internally encapsulates the handling of the according subsystem states. To
provide an example, the AOCS shall be used again:
The mode transitions triggered inside a subsystem control are handled inside its
control application. If a reaction wheel fails, this may trigger an AOCS subsystem
mode transition to AOCS Safe Mode. In Safe Mode the AOCS for example then
performs S/C attitude control only via magnetotorquers and thrusters and it switches
off all reaction wheels. Thus in Safe Mode (versus normal mode) the function of
attitude control is still performed, but the control is based on a completely different set
of measurement and actuation parameters in the OBSW-DP.
To what extent such failure-induced subsystem mode transitions automatically
induce top level S/C system mode transitions – or vice versa – is a topic to be
revisited later.
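The failure-induced mode transition described above can be sketched as a small state machine (a Python sketch for illustration; the mode names, actuator sets and method names are assumptions for this example):

```python
# Sketch of the wheel-failure-induced AOCS mode transition (names illustrative).
class AocsApp:
    def __init__(self):
        self.mode = "NORMAL"
        self.active_actuators = {"RWL1", "RWL2", "RWL3", "RWL4", "MTQ"}

    def on_equipment_failure(self, unit):
        if unit.startswith("RWL") and self.mode == "NORMAL":
            # a reaction wheel failure triggers the transition to AOCS Safe Mode:
            # attitude control continues on magnetotorquers and thrusters only,
            # all reaction wheels are switched off
            self.mode = "SAFE"
            self.active_actuators = {"MTQ", "THR"}

aocs = AocsApp()
aocs.on_equipment_failure("RWL3")
assert aocs.mode == "SAFE"
assert aocs.active_actuators == {"MTQ", "THR"}
```

In Safe Mode the attitude control function is still performed, but the controller reads and writes a completely different set of OBSW-DP variables.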

8.5 OBSW Interaction with Ground Control

S/C ground / space communication is based on an internationally standardized
packet transmission scheme which is used in the science and Earth observation
satellite domain as well as for telecommunication satellites. The standards for
this communication protocol are defined by the Consultative Committee for Space
Data Systems (CCSDS). The packets defined by the CCSDS standard serve for
encapsulation of
● the S/C telecommands uplinked from ground and
● the telemetry information which is transmitted from S/C to ground.
As already presented in chapter 4.4 ground / space telecommand packets have to be
preprocessed for uplink into “Command Link Transfer Units”, (CLTU). Telemetry
packets before downlink have to be preprocessed into “Channel Access Data Units”,
(CADU). The architecture of such TC and TM packets is now presented in brief. For
the details on such packet definitions, possible variations and tailoring
options, please refer to the CCSDS standard and the according documentation
([76] to [81]).
A TC packet – see figure 8.7 – consists of a 6 byte long packet header and the
packet data field with a maximum length of 242 bytes. The key fields of the packet
header contain the
● Application Process Identifier, (APID):
◊ The APID defines the routing or destination of the packet on board. Each
computer or packet terminal (which also may be an intelligent star tracker
etc.) has its own APID which allows routing of packets by the main OBSW.
◊ The sub-parameter Process ID additionally can identify a software process
running on the target terminal. By this means for example packets can
directly be targeted to the AOCS application or the Power Control
application inside the S/C OBSW on the main OBC. Further details on the
use of APIDs are given in chapter 13.4.
● Packet Sequence Control:

◊ Since CLTUs are uplinked one after the other, and since sporadic
transmission errors might require retries, the uplinked packets may arrive
in a changed sequence in the OBSW input buffer. The fields concerning
packet sequence control serve to uniquely identify the processing sequence
for uplinked packets.
The key fields of the packet data field are the
● Data Field Header,
● the Application Data Field itself – containing the TC's inside the packet
● and finally the packet error control field – containing a checksum.

The following fields of the data field header shall also be noted:
● the Acknowledge Flag – which indicates whether the ground requests a
reception acknowledge from the S/C – and
● the Service Type / Subtype Field:
◊ These fields define the function and the format of the TC – e.g. whether a
TC is a direct command to an onboard equipment, a S/C mode change
command or a time tagged command for payload operations.
◊ The various packet services will be treated in chapter 8.6.
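The 6 byte packet header described above has the fixed bit layout shown in figure 8.7 (3 bit version, 1 bit type, 1 bit data field header flag, 11 bit APID, 2 bit sequence flags, 14 bit sequence count, 16 bit data field length). A minimal parsing sketch in Python (illustrative only; flight and ground software implement this in C/C++):

```python
# Sketch of parsing the 6-byte CCSDS packet primary header.
def parse_primary_header(header: bytes) -> dict:
    assert len(header) == 6
    first = int.from_bytes(header[0:2], "big")
    seq = int.from_bytes(header[2:4], "big")
    length = int.from_bytes(header[4:6], "big")
    return {
        "version": (first >> 13) & 0x7,
        "type": (first >> 12) & 0x1,     # 1 for a telecommand packet
        "dfh_flag": (first >> 11) & 0x1, # data field header present
        "apid": first & 0x7FF,           # routing / destination on board
        "seq_flags": (seq >> 14) & 0x3,  # 11 bin = unsegmented
        "seq_count": seq & 0x3FFF,
        "data_length": length + 1,       # the field stores length minus one
    }

hdr = parse_primary_header(bytes([0x18, 0x2A, 0xC0, 0x01, 0x00, 0x0F]))
assert hdr["apid"] == 0x02A and hdr["seq_count"] == 1 and hdr["data_length"] == 16
```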

[Figure 8.7 shows the TC source packet layout: a 6 octet packet header (packet ID
with version number, type, data field header flag and APID consisting of process
ID and packet category; packet sequence control with sequence flags and sequence
count; packet data field length), followed by the packet data field of max. 242
octets (4 octet data field header with CCSDS secondary header flag, ack flags,
service type, service subtype and source ID; application data; 2 octet packet
error control CRC).]
Figure 8.7: Telecommand source packet. © CCSDS



Next the structure of space / ground telemetry packets shall be presented:

[Figure 8.8 shows the TM source packet layout: a 6 octet source packet header
(packet ID, packet sequence control, packet data field length), a 10 octet data
field header (error control flags, service type, service subtype and a 48 bit
CDS absolute time stamp), 2*N octets of source data and a 2 octet packet error
control field. Idle, high priority and time TM(9,2) packets have no data field
header and no packet error control field.]
Figure 8.8: Telemetry source packet. © CCSDS

Also here a 6 byte long packet header can be identified, followed by the packet
data field (up to 2048 bytes for the TM packets in the example of figure 8.11).
The fields of the packet header again comprise
● the Application Process Identifier, (APID) and
● the Packet Sequence Control
as for a TC packet. The key fields of the packet data field again are the
● Data Field Header,
● the Source Data Field itself – containing the downlinked telemetry
● and finally the packet error control field – containing a checksum.

The following fields of the data field header still need to be noted:
● Also in the TM packets the Service Type / Subtype field can be found:
◊ These fields define the function and the format of the TM – e.g. whether
a TM packet is a housekeeping packet, an event induced telemetry, etc.
◊ The various packet services for telemetry will also be treated in
chapter 8.6.
● Furthermore a TM packet data field header comprises
◊ TM time stamping information such as clock sync information and
◊ TM generation onboard time.

While figure 4.10 already showed the principle of multiple TC packets being packed
into a TC segment, the segment being wrapped into a so-called “Transfer Frame”
and the frame being encapsulated into a CLTU for transmission, the figure below
also shows the corresponding sequence for telemetry from space down to ground.

[Figure 8.9 shows the CCSDS communication layers for both directions. On the
telecommanding side, the application process, system management, packetization,
segmentation, transfer, coding and physical layers turn command directives into
packets, TC application data, segments, transfer frames and finally CLTUs for the
RF link. On the telemetry side, source packets generated by the onboard
application processes (AP 0..8) are multiplexed into transfer frames of virtual
channels (VC 0..2), the virtual channels are multiplexed into the master channel,
coded and RF modulated; on ground the stream is demodulated, decoded,
demultiplexed into virtual channels and source packets, and the packets are
distributed to one or more sink processes.]
Figure 8.9: CCSDS packet standard communication. © CCSDS

On board multiple types of TM packets are available in multiple so-called “Virtual


Channels”, (VC), for downlink at time of ground contact:
● There is online housekeeping telemetry which is produced live during ground
station flyover.
● There is stored housekeeping TM for downlink which was generated and recorded
since the last ground station contact.
● And furthermore there may be TM from events and anomalies which occurred
since the last ground contact.
Before generating downlink traffic the packets in these VC buffers must be
multiplexed into one downlink data stream according to their priority.
Each TM packet then is wrapped into a TM frame which is encapsulated into a CADU
for downlink. The segmentation layer does not exist for TM downlink.
For TC and TM packet definition examples including the visualization of packet fields,
packet integration into frames, segments and CADUs / CLTUs please refer to the two
following figures 8.10 and 8.11.
[Figure 8.10 shows a TC packet definition example: a CLTU (max. 320 byte)
consisting of the start sequence EB90 (hex), up to 37 codeblocks of 8 byte
(7 byte TC data plus 7 parity bits and 1 filler bit) and a tail sequence; the TC
transfer frame (max. 256 byte) with a 5 byte frame header, one TC segment
(segment header plus segment data field, max. 249 byte, holding one or more
packets depending on aggregation) and a 2 byte frame error control (CRC); and the
TC source packet (max. 248 byte) with a 6 byte packet header, a 4 byte data field
header (CCSDS secondary header flag, ack flags with 0001 = accepted, 0010 = start
exec, 0100 = progress, 1000 = executed, service type / subtype, source ID),
command data (max. 236 byte) and a 2 byte CRC. The TC virtual channels in the
example are VC0 = SW TC OBC core N, VC1 = HPC1 CCSDS processor N, VC2 = SW TC OBC
core R, VC3 = HPC1 CCSDS processor R.]

Figure 8.10: TC Packet definition example. © IRS, Uni Stuttgart

[Figure 8.11 shows a TM packet definition example: a CADU of 1279 byte consisting
of the 4 byte sync marker 1ACFFC1D and an attached Reed-Solomon codeblock
(8920 bits data space plus 1280 bits check symbols); the TM transfer frame of
1115 byte with a 6 byte frame header, a 1105 byte frame data field and a 4 byte
frame trailer containing the CLCW; and the TM source packet (max. 2048 byte) with
a 6 byte packet header, a 12 byte data field header (service type / subtype, sync
status and CDS day-segmented time with epoch 1.1.2000), telemetry data (max.
2028 byte) and a 2 byte CRC. The Siral science packets use an extended 8 byte CDS
time field with an additional offset counter.]

Figure 8.11: TM Packet definition example. © IRS, Uni Stuttgart
All this onboard TC unpacking – from CLTUs via frames and segments until the
packets are reconstructed – and, vice versa, the TM encapsulation from VC
multiplexing down to CADUs is performed by means of CCSDS processors and has
already been looked at in chapter 4.4.

The figure below shows the TC / TM handlers inside the OBSW static architecture
which interface to the CCSDS processor via dedicated interface drivers.

[Figure 8.12 adds the TM encoder and TC decoder blocks next to the equipment
handlers; via dedicated I/O drivers they connect the OBSW to the CCSDS processor
board.]

Figure 8.12: OBSW Interaction with ground control.

8.6 Service-based OBSW Architecture

In the previous chapter the format of TC and TM packets has been presented, which
however only influences architectural details of the TC and TM encoder / decoder
buffers between the CCSDS processor board and the OBSW itself. What has much
more influence on the OBSW architecture are the differences between the diverse
packet types w.r.t. content.
There may for example be normal housekeeping telemetry packets from the diverse
cited applications or from the equipment handlers, but there must also be event /
error telemetry. On the TC side there must be the possibility to command OBSW
controller applications like the AOCS, but also to directly command OBC connected
equipment via the according equipment handler, and there must be the possibility
to patch the onboard software in flight.
These – non exhaustive examples – already indicate the diversity of TC / TM to be
handled and the need for appropriate mechanisms on board. Furthermore this shows
that the TC identifiers and TM generators on board must be a mirror of the TC / TM
generation / evaluation on ground.
To avoid inventing new solutions in this area again for each spacecraft mission,
the European Space Agency has developed a standard on these topics, the so-called
“Packet Utilization Standard”, (PUS). The PUS is defined in the ECSS standard
ECSS-E-70-41A and defines a number of onboard services in the OBSW and the
according CCSDS packets for command / control. It is also used for German DLR
missions and for the latest French CNES missions.
The term “Packet Utilization Standard” is completely misleading for newcomers to
the topic because the PUS primarily does not define different types of TC and TM
packets, but software services which have to be provided by the OBSW. PUS Service
Type 1 – the “Telecommand verification service” – may serve as an example: it
requires that an onboard TC verification service be available which reports
successful / failed TC reception and execution to ground.
As a sideline PUS defines how the CCSDS packets have to be built and which
variables are mandatory for TC packets of a certain service respectively for TM
packets from a service.
For the different subtasks of a service there exist so-called “Subservice Types”, (ST),
as there are for Service 1
● a Subservice for TC acknowledge,
● and a Subservice for TC execution reporting.
The PUS standard reserves the service numbers 0 to 127 although currently only 16
numbers are used, which are listed in the table below:

Table 8.1: PUS services Source ECSS-E-70-41A

Service Type Service Name


1 Telecommand verification service
2 Device command distribution service
3 Housekeeping & diagnostic data reporting service
4 Parameter statistics reporting service
5 Event reporting service
6 Memory management service
7 Not used
8 Function management service
9 Time management service
10 Not used
11 Onboard operations scheduling service
12 Onboard monitoring service
13 Large data transfer service
14 Packet forwarding control service
15 Onboard storage and retrieval service
16 Not used
17 Test service
18 Onboard operations procedure service
19 Event-action service

The S/C developer is free to define further services of his own in the range of
128 to 255 according to the mission needs. The same applies to Subservices:
numbers 0 to 127 are reserved, 128 to 255 are free for mission specific use. For
each of the above listed services the OBSW needs a dedicated handler which
processes the service TCs and which generates the service TM5. Furthermore the
OBSW kernel must provide a mechanism to route TC / TM to / from the according
service handler.
Below the service definition tables for all the predefined services are treated
in brief, citing their most important features and Subservices. The column
“Service requests” lists the service subtype TCs which must be processed by the
service handler. The column “Service reports” lists the TM and service subtypes
which are to be provided by the according service handler. As an intuitive
example Service 3, “Housekeeping and Diagnostics Data Handling”, shall be used –
see table 8.4:
When the ground operator wants to define a new payload housekeeping TM packet
with a number of parameters from the OBSW-DP, he can submit an according
Service 3:1 command to the spacecraft. The service 3 handler thus is informed about
such a new requested housekeeping packet type. Then the ground can submit a
Service 3:5 command which enables the TM packet generation and which defines
the desired TM packet generation cycle time. From that moment on the Service 3
handler will cyclically generate the according Service 3:25 housekeeping (HK)
telemetry and send it to the on board TM storage (and in case of ground contact it is
transmitted down to Earth).
So this example covers both the commandable side of a handler and the TM
generation side, as well as how Subservices are to be understood and how packet
types including subtypes correspond 1:1 to the service features. All other
services can now be treated in analogy.
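The service routing mechanism and the Service 3 example above can be sketched as follows. This is a hedged Python sketch: the handler class, parameter names and dispatch structure are assumptions for illustration, not definitions from the PUS standard.

```python
# Sketch of PUS service routing: decoded TCs are dispatched to the service
# handlers by (service type, subtype). Handler internals are illustrative.
class HkService:                                   # PUS Service 3 (excerpt)
    def __init__(self): self.reports = {}
    def define_report(self, sid, params):          # corresponds to TC(3,1)
        self.reports[sid] = {"params": params, "enabled": False, "cycle_s": None}
    def enable_report(self, sid, cycle_s):         # corresponds to TC(3,5)
        self.reports[sid].update(enabled=True, cycle_s=cycle_s)

hk = HkService()
dispatch = {(3, 1): hk.define_report, (3, 5): hk.enable_report}

def route_tc(stype, subtype, *args):
    handler = dispatch.get((stype, subtype))
    if handler is None:
        raise KeyError(f"no handler for service {stype}:{subtype}")
    handler(*args)

route_tc(3, 1, 100, ["PL_VOLTAGE", "STR1_TEMP"])   # define a new HK packet
route_tc(3, 5, 100, 8.0)                           # enable it with an 8 s cycle
assert hk.reports[100]["enabled"] and hk.reports[100]["cycle_s"] == 8.0
```

From that moment on the Service 3 handler would cyclically emit the corresponding TM(3,25) housekeeping packets into the onboard TM storage.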

Table 8.2: Verification of command acceptance and execution. Source ECSS-E-70-41A

ST Service requests ST Service reports


Telecommand verification service -1
1 Telecommand Acceptance Report
Success
2 Telecommand Acceptance Report
Failure
3 Telecommand Execution Started
Report Success
4 Telecommand Execution Started
Report Failure
5 Telecommand Execution Progress
Report Success
6 Telecommand Execution Progress
Report Failure
7 Telecommand Execution Completed
Report Success
8 Telecommand Execution Completed
Report Failure

Service 1 provides no dedicated service request features. For all uplinked TCs which
are equipped with an acknowledge flag – see the turquoise field in the TC data field

5
Not all services cover both TC and TM.

header in figure 8.10 – the service handler provides the according TM acknowledge
packets. Please note that TC “acknowledge” does not mean “confirmation of receipt”,
but (depending on Subservice) confirmation of successful acceptance (which will fail
e.g. when the S/C or concerned equipment / subsystem is in wrong or failure mode),
TC execution start, progress and completion respectively.
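The mapping from the TC ack flags (figure 8.10: 0001 = accepted, 0010 = start exec, 0100 = progress, 1000 = executed) to the Service 1 report subtypes of table 8.2 can be sketched as follows; the function name and the simplified success/failure inputs are assumptions for this Python illustration:

```python
# Sketch of Service 1 verification reporting driven by the TC ack flags.
ACK_ACCEPT, ACK_START, ACK_PROGRESS, ACK_COMPLETE = 0b0001, 0b0010, 0b0100, 0b1000

def verify(tc_ack_flags, accepted, executed_ok):
    """Return the TM(1,x) (service, subtype) reports requested by the TC."""
    reports = []
    if tc_ack_flags & ACK_ACCEPT:
        reports.append((1, 1) if accepted else (1, 2))    # acceptance ok / failed
    if accepted and (tc_ack_flags & ACK_COMPLETE):
        reports.append((1, 7) if executed_ok else (1, 8)) # completion ok / failed
    return reports

assert verify(ACK_ACCEPT | ACK_COMPLETE, True, True) == [(1, 1), (1, 7)]
assert verify(ACK_ACCEPT, False, False) == [(1, 2)]       # rejected TC
```

The execution-started and progress subtypes (1,3) to (1,6) would be handled analogously at the corresponding stages of TC processing.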

Table 8.3: Hardware device commanding. Source ECSS-E-70-41A

ST Service requests ST Service reports


Device command distribution
service -2
1 Distribute On/Off Commands
2 Distribute Register Load Commands
3 Distribute CPDU Commands

Service 2 does not generate any TM packets. It provides the functionality to
command dedicated onboard equipment like a star tracker directly from ground,
bypassing applications like the AOCS. This service is used extensively during the
S/C “Assembly, Integration and Test”, (AIT), phase on ground, but also sometimes
in orbit.

Table 8.4: Housekeeping and diagnostics data handling. Source ECSS-E-70-41A

ST Service requests ST Service reports


Housekeeping and diagnostic data reporting
service -3
1 Define New Housekeeping Parameter
Report
2 Define New Diagnostic Parameter Report
3 Clear Housekeeping Parameter Report
Definitions
4 Clear Diagnostic Parameter Report
Definitions
5 Enable Housekeeping Parameter Report
Generation
6 Disable Housekeeping Parameter Report
Generation
7 Enable Diagnostic Parameter Report
Generation
8 Disable Diagnostic Parameter Report
Generation
9 Report Housekeeping Parameter Report 10 Housekeeping Parameter Report
Definitions Definitions Report
11 Report Diagnostic Parameter Report 12 Diagnostic Parameter Report
Definitions Definitions Report
13 Report Housekeeping Parameter Sampling- 15 Housekeeping Parameter Sampling-
Time Offsets Time Offsets Report
14 Report Diagnostic Parameter Sampling- 16 Diagnostic Parameter Sampling-
Time Offsets Time Offsets Report
17 Select Periodic Housekeeping Parameter
Report Generation Mode
18 Select Periodic Diagnostic Parameter
Report Generation Mode
19 Select Filtered Housekeeping Parameter
Report Generation Mode

20 Select Filtered Diagnostic Parameter


Report Generation Mode
21 Report Unfiltered Housekeeping 23 Unfiltered Housekeeping Parameters
Parameters Report
22 Report Unfiltered Diagnostic Parameters 24 Unfiltered Diagnostic Parameters
Report
25 Housekeeping Parameter Report
26 Diagnostic Parameter Report

The Service 3 for housekeeping and diagnostics data handling allows for in-flight
activation / deactivation of any defined diagnostic TM or HK TM as well as for in-flight
definition of new packets and for definition / change of packet generation cycle rates.

Table 8.5: Statistics for min / max values etc. Source ECSS-E-70-41A

ST Service requests ST Service reports


Parameter statistics reporting
service -4
1 Report Parameter Statistics 2 Parameter Statistics Report
3 Reset Parameter Statistics Reporting
4 Enable Periodic Parameter Statistics
Reporting
5 Disable Periodic Parameter Statistics
Reporting
6 Add Parameters to Parameter Statistics List
7 Delete Parameters from Parameter
Statistics List
8 Report Parameter Statistics List 9 Parameter Statistics List Report
10 Clear Parameter Statistics List

Service 4 provides means for statistical monitoring of dedicated OBSW-DP


parameters. This service is used only by a limited number of missions.

Table 8.6: Event reporting and system log. Source ECSS-E-70-41A

ST Service requests ST Service reports


Event reporting service -5
1 Normal/Progress Report
2 Error/Anomaly Report Low Severity
3 Error/Anomaly Report Medium
Severity
4 Error/Anomaly Report High Severity
5 Enable Event Report Generation
6 Disable Event Report Generation

Service 5 is again a very important one since via this service all report TM
packets are generated for events which happened on board – i.e. for any parameter
anomalies and out-of-bounds statuses.

Table 8.7: Software upload / dump. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Memory management service – 6
1   Load Memory using Base plus Offsets
2   Load Memory using Absolute Addresses
3   Dump Memory using Base plus Offsets            4   Memory Dump using Base plus Offsets Report
5   Dump Memory using Absolute Addresses           6   Memory Dump using Absolute Addresses Report
7   Check Memory using Base plus Offsets           8   Memory Check using Base plus Offsets Report
9   Check Memory using Absolute Addresses          10  Memory Check using Absolute Addresses Report

Service 6 serves for uploading OBSW patches or settings which are to be stored
directly in certain memory areas – be it in the OBC itself or in intelligent S/C
equipment like star trackers, GPS receivers etc. Memory dumps from such equipment
can also be downlinked to ground via this service.
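To illustrate the base-plus-offsets addressing scheme, the following sketch (purely illustrative – the function names and the request field layout are simplifying assumptions, not the ECSS packet definitions) shows how a Service 6 load request and its dump counterpart could act on a memory area:

```python
# Sketch of PUS Service 6, subtype 1: "Load Memory using Base plus Offsets".
# The request carries a base address and a list of (offset, data) blocks
# that are written relative to that base.

def load_memory_base_plus_offsets(memory, base, blocks):
    """Apply each (offset, data) block to a bytearray 'memory'."""
    for offset, data in blocks:
        addr = base + offset
        memory[addr:addr + len(data)] = data
    return memory

def dump_memory_base_plus_offsets(memory, base, blocks):
    """Subtype 3 counterpart: read back the same (offset, length) layout."""
    return [(offset, bytes(memory[base + offset:base + offset + length]))
            for offset, length in blocks]
```

The same pair of operations applies whether the target memory belongs to the OBC or to an intelligent equipment unit; only the addressed memory ID differs.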

Table 8.8: Specific functions. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Function management service – 8
1   Perform Function

Service 8 is provided to trigger onboard functions / sequences which are
preprogrammed in the OBSW, such as solar array or antenna deployment
sequences.

Table 8.9: Onboard time management. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Time management service – 9
Rate control sub-service
1   Change Time Report Generation Rate
Time reporting sub-service
                                                   2   Time Report

This service controls the OBSW time packet generation rate and is often enhanced
by private subservices for OBSW time management.

Table 8.10: Uploaded timeline and execution. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Onboard operations scheduling service – 11
1   Enable Release of Telecommands
2   Disable Release of Telecommands
3   Reset Command Schedule
4   Insert Telecommands in Command Schedule
5   Delete Telecommands
6   Delete Telecommands over Time Period
15  Time-Shift All Telecommands
7   Time-Shift Selected Telecommands
8   Time-Shift Telecommands over Time Period
16  Report Command Schedule in Detailed Form       10  Detailed Schedule Report
9   Report Subset of Command Schedule in Detailed Form              (10)
11  Report Subset of Command Schedule in Detailed Form over Time Period   (10)
17  Report Command Schedule in Summary Form        13  Summary Schedule Report
12  Report Subset of Command Schedule in Summary Form               (13)
14  Report Subset of Command Schedule in Summary Form over Time Period    (13)
18  Report Status of Command Schedule              19  Command Schedule Status Report

Service 11 provides all features to control the execution of time-tagged commands,
such as payload switch-on / switch-off sequences to perform dedicated Earth
observations.
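The core behavior of such a command schedule – insertion, release enabling / disabling and time-shifting of time-tagged telecommands – can be sketched as follows (an illustrative toy model, not flight code; all names are invented):

```python
# Toy model of the Service 11 command schedule: telecommands are stored
# with an onboard release time and only released while release is enabled.

class CommandSchedule:
    def __init__(self):
        self.entries = []            # list of (release_time, tc_packet)
        self.release_enabled = True  # toggled by subtypes 1 / 2

    def insert(self, release_time, tc):           # subtype 4
        self.entries.append((release_time, tc))
        self.entries.sort(key=lambda e: e[0])

    def time_shift_all(self, delta):              # subtype 15
        self.entries = [(t + delta, tc) for t, tc in self.entries]

    def release_due(self, now):
        """Called each OBSW cycle: pop and return all commands now due."""
        if not self.release_enabled:
            return []
        due = [tc for t, tc in self.entries if t <= now]
        self.entries = [(t, tc) for t, tc in self.entries if t > now]
        return due
```

The onboard scheduler simply calls `release_due()` once per cycle with the current onboard time and hands the returned packets to TC packet processing.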

Table 8.11: Onboard parameter monitoring and limit sensing. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Onboard monitoring service – 12
1   Enable Monitoring of Parameters
2   Disable Monitoring of Parameters
3   Change Maximum Reporting Delay
4   Clear Monitoring List
5   Add Parameters to Monitoring List
6   Delete Parameters from Monitoring List
7   Modify Parameter Checking Information
8   Report Current Monitoring List                 9   Current Monitoring List Report
10  Report Current Parameters Out-of-limit List    11  Current Parameters Out-of-limit List Report
                                                   12  Check Transition Report

With the onboard monitoring service 12 the S/C operator can define the monitoring of
selected parameters in the OBSW-DP, including limits and the Service 5 events to be
triggered in case of single, sporadic or permanent limit violations.
In flight, monitoring limits can be changed dynamically, new parameters can be
selected for monitoring, and obviously monitors can also be disabled again.
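The principle of such a limit monitor – raising an event only after a configurable number of consecutive violations, so that single spikes do not trigger actions – can be sketched like this (illustrative only; the confirmation-count mechanism is a common design choice and the names are assumptions):

```python
# Sketch of a Service 12 limit monitor: a parameter is checked each cycle
# and an event (to be reported via Service 5) is raised only after the
# limit has been violated for a configurable number of consecutive checks.

class LimitMonitor:
    def __init__(self, low, high, confirmations=3):
        self.low, self.high = low, high
        self.confirmations = confirmations   # filters single spikes
        self.violations = 0
        self.enabled = True

    def check(self, value):
        """Return an event descriptor on confirmed violation, else None."""
        if not self.enabled:
            return None
        if self.low <= value <= self.high:
            self.violations = 0              # back in limits: reset counter
            return None
        self.violations += 1
        if self.violations == self.confirmations:
            return {"event": "OUT_OF_LIMIT", "value": value}
        return None
```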

Table 8.12: Large data transfer. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Large data transfer service – 13
Data downlink operation
                                                   1   First Downlink Part Report
                                                   2   Intermediate Downlink Part Report
                                                   3   Last Downlink Part Report
                                                   4   Downlink Abort Report
5   Downlink Reception Acknowledgement
6   Repeat Parts                                   7   Repeated Part Report
8   Abort Downlink
Data uplink operation
9   Accept First Uplink Part
10  Accept Intermediate Uplink Part
11  Accept Last Uplink Part
12  Accept Repeated Part
13  Abort Reception of Uplinked Data
                                                   14  Uplink Reception Acknowledgement Report
                                                   15  Unsuccessfully Received Parts Report
                                                   16  Reception Abort Report

This service provides dedicated features for the up- or downlink of large data sets,
such as the upload of a complete new OBSW image to RAM before it is copied to
EEPROM, or star tracker star map patches.

Table 8.13: Configuration of realtime downlink during ground contact. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Packet forwarding control service – 14
1   Enable Forwarding of Telemetry Source Packets
2   Disable Forwarding of Telemetry Source Packets
3   Report Enabled Telemetry Source Packets        4   Enabled Telemetry Source Packets Report
5   Enable Forwarding of Housekeeping Packets
6   Disable Forwarding of Housekeeping Packets
7   Report Enabled Housekeeping Packets            8   Enabled Housekeeping Packets Report
9   Enable Forwarding of Diagnostic Packets
10  Disable Forwarding of Diagnostic Packets
11  Report Enabled Diagnostic Packets              12  Enabled Diagnostic Packets Report
13  Enable Forwarding of Event Report Packets
14  Disable Forwarding of Event Report Packets
15  Report Enabled Event Report Packets            16  Enabled Event Report Packets Report

Already in chapter 7, intelligent S/C onboard equipment like GPS receivers or star
trackers was cited which in reality includes its own computer, with processors often
comparable to the S/C's main OBC. For such equipment it is meanwhile quite
common that the units themselves are commandable via the PUS standard. In such a
case, telemetry packets generated by these units – be they housekeeping, event or
diagnostic packets – are to be routed into the TM data pool by the main S/C OBC
and downlinked during ground contact. Service 14 above allows the activation and
control of such packet forwarding by the main S/C OBC.

Table 8.14: Onboard data storage. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Onboard storage and retrieval service – 15
Packet selection sub-service
1   Enable Storage in Packet Stores
2   Disable Storage in Packet Stores
3   Add Packets to Storage Selection Definition
4   Remove Packets from Storage Selection Definition
5   Report Storage Selection Definition            6   Storage Selection Definition Report
Storage and retrieval sub-service
7   Downlink Packet Store Contents for Packet Range   8   Packet Store Contents Report
9   Downlink Packet Store Contents for Time Period    (8)
10  Delete Packet Stores Contents up to Specified Packets
11  Delete Packet Stores Contents up to Specified Storage Time
12  Report Catalogs for Selected Packet Stores     13  Packet Store Catalog Report

Already in chapter 4.4 it was indicated that there exist multiple telemetry streams
to be downlinked: the online telemetry and the telemetry coming from multiple
onboard buffers – so-called "packet stores". Several "Virtual Channels" with different
TM have to be multiplexed according to TM priority by the CCSDS processing during
downlink. The topic was revisited in chapter 8.5, figure 8.9. Service 15 controls
● the packet stores onboard,
● their activation / deactivation,
● the allocation of TM packets to the individual packet stores and
● the downlink from packet stores as well as the deletion of downlinked packets.
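The storage selection principle – each packet store records only the packets matching its selection definition, and only while its storage is enabled – can be sketched as follows (an illustrative model; the selection key used here, APID plus service type, is an assumption about what a storage selection definition contains):

```python
# Sketch of Service 15 packet stores: each store has a selection
# definition and only enabled stores record matching telemetry packets.

class PacketStore:
    def __init__(self, name):
        self.name = name
        self.selection = set()   # set of (apid, service_type) pairs
        self.enabled = False     # subtypes 1 / 2
        self.packets = []

def store_tm(stores, packet):
    """Route one TM packet into every enabled store that selects it."""
    key = (packet["apid"], packet["service"])
    for store in stores:
        if store.enabled and key in store.selection:
            store.packets.append(packet)
```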

Table 8.15: Link test. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Test service – 17
1   Perform Connection Test                        2   Connection Test Report

Service 17 is a rather simple service for connection tests. It provides the capability to
activate test functions implemented onboard and to report the results of such tests.
This service is mostly used during the early S/C AIT phase to verify whether the
OBSW can communicate with a connected S/C onboard equipment via the according
equipment handler. It is also used during quick health tests after environmental
campaigns to re-verify the S/C health status, in so-called "Abbreviated Function
Tests" (AFT). Service 17 is not used during normal S/C operations in orbit.

Table 8.16: Onboard control procedures. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Onboard operations procedure service – 18
1   Load Procedure
2   Delete Procedure
3   Start Procedure
4   Stop Procedure
5   Suspend Procedure
6   Resume Procedure
12  Abort Procedure
7   Communicate Parameters to a Procedure
8   Report List of Onboard Operations Procedures   9   Onboard Operations Procedures List Report
10  Report List of Active Onboard Operations Procedures   11  Active Onboard Operations Procedures List Report

Onboard functions were already mentioned in the presentation of Service 8, e.g.
functions for deployments. However, for diverse tasks on board, functions are
required which need more flexibility, either in their execution flow or in the function
triggering parameters. For such "Onboard Control Procedures" (OBCP), their loading
from ground, their execution control and their parameter settings, a dedicated
service – Service 18 – is reserved in the PUS. OBCPs themselves will be treated in
more detail later in chapter 9.4.

Table 8.17: Monitoring of FDIR events. Source ECSS-E-70-41A

ST  Service requests                               ST  Service reports

Event-action service – 19
1   Add Events to the Detection List
2   Delete Events from the Detection List
3   Clear the Event Detection List
4   Enable Actions
5   Disable Actions
6   Report the Event Detection List                7   Event Detection List Report

While onboard parameter monitoring can be controlled via Service 12, and in case of
limit violations the reporting of the generated events via Service 5, Service 19 is the
one which links onboard events to onboard actions and which activates / deactivates
the action triggering respectively.

With this list of onboard services to be implemented, the OBSW static architecture
diagram can be enhanced with the according service handlers as depicted in figure
8.13. Besides the service handlers themselves,
● a configurable parameter monitor,
● a central event manager,
● an OBCP manager interpreting the sequence of such procedures,
● an onboard memory manager handling the packet stores
● and a central scheduler for the execution of time-tagged commands
must be implemented. This makes it evident that a considerable part of the OBSW
must be implemented to provide onboard status visibility to ground and to control
all details on board. This part of the OBSW is usually called "onboard data
handling" (OBDH) software.

[Block diagram: the service handlers of tables 8.4 to 8.17 – TC verification, device
commanding, housekeeping, statistics, test, function management, packet forwarding /
retrieval, onboard monitoring, onboard scheduling, time management, memory
management, large data, OBCP, event reporting and event-action – attached to the
kernel root, the central managers (event manager, OBCP manager, onboard memory
manager, parameter monitor, onboard scheduler), the control applications and the
equipment handlers around the OBSW-DP, with TM encoder / TC decoder, boot
loader, RTOS and I/O line drivers at the bottom.]

Figure 8.13: Service based OBDH architecture.

8.7 Telecommand Routing and High Priority Commands

While telemetry forwarding / routing and control is performed via the Packet
Forwarding Control Service 14, the routing of PUS telecommands to PUS compatible
onboard equipment is managed by means of an “Application Process Identifier”,
(APID). The APID was already mentioned in chapter 8.5. The APID in the TC packet
defines the routing / destination of the packet on board (see also figure 8.7). Each
computer or packet terminal has its own APID which allows routing of packets by the
main OBSW. One computer or PUS equipment can even own multiple APIDs; in such a
case individual SW processes in the equipment are addressable individually. This TC
routing is done by the OBSW, which receives a packet from the transponder,
checks the APID and
● either identifies that the packet is directed to itself, or
● – in the not unusual case that the main applications in the OBSW have different
APIDs – identifies the targeted application process in the OBSW, or
● identifies the targeted equipment occurrence (e.g. star tracker 2) and
forwards the PUS packet to the equipment over the connecting data bus.
To be more precise: as can be seen in figures 8.7 and 8.8, the APID consists of two
parts, namely the "Process ID" and the "Packet Category". As indicated above, the
Process ID is used to route the packet inside the OBSW or to external units. The
Packet Category is especially relevant for telemetry, since the spacecraft designer
can define different Packet Categories and for each category the OBSW has to
provide a dedicated packet store. Examples can be
● standard housekeeping TM
● event TM
● High Priority Telemetry.
At ground station contact the different packets are downlinked according to the
allocated TM priority.
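The APID-based routing decision described above can be sketched as follows; the 7-bit Process ID / 4-bit Packet Category split of the 11-bit APID follows ECSS-E-70-41A, while the routing table entries are purely hypothetical:

```python
# Sketch of APID-based TC routing: the 11-bit APID from the CCSDS packet
# primary header is split into a Process ID (used for routing) and a
# Packet Category (used e.g. for packet store allocation in TM).

ROUTING_TABLE = {            # hypothetical Process-ID assignments
    0x10: "OBSW core",
    0x11: "AOCS application",
    0x20: "star tracker 2",
}

def split_apid(apid):
    process_id = (apid >> 4) & 0x7F   # upper 7 bits
    packet_category = apid & 0x0F     # lower 4 bits
    return process_id, packet_category

def route_tc(apid):
    pid, _ = split_apid(apid)
    return ROUTING_TABLE.get(pid, "unknown destination")
```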
In case a satellite is not using the PUS standard but only the underlying CCSDS –
like many telecommunication satellites and NASA S/C – a similar routing can be
achieved by a CCSDS-level "addressing". For purely CCSDS commanded S/C the TC
frame is unpacked and the first routing is performed on TC segment level. The
segment header (see also figure 8.10) contains a "Multiplexer Access Point
Identifier" (MAP-ID). The MAP-ID covers 6 bits and thus allows 64 addresses for TC
Virtual Channel selection on board. The MAP based routing is performed on CCSDS
level.
When the OBSW of the S/C core OBC itself is down due to a failure, command
routing via SW – e.g. for an emergency load switch-off by the PCDU commanded
from ground – does not work anymore. In such cases the "High Priority Commands"
(HPC) Level 1 can still be used, which were already cited in brief in chapter 4.5 and
for which the flow is depicted in figure 4.12. These HPC1 commands are routed
directly in hardware to the Command Pulse Decoding Unit, which triggers equipment
emergency switching via individual bi-level pulse command lines.
In European missions PUS is used as the command standard and MAP-ID based
Virtual Channel TC routing is performed for the HPC1 commands on segment level:
● Command segments with MAP-ID = 0 in the segment header contain HPC1
packets and are routed from the CCSDS processing directly to the CPDU; they
thus are processed entirely in hardware.
● Command segments with MAP-ID > 0 are handed over by the CCSDS
processor to the core OBC's onboard software. In the PUS standard any TCs
with MAP-ID > 0 are treated equally and different MAP-ID values are not used
for further equipment identification. Instead, the APID on packet level is used
for TC routing.
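The segment-level routing decision can be condensed into a few lines (illustrative; only the 6-bit MAP-ID field of the segment header byte is modeled):

```python
# Sketch of segment-level TC routing by MAP-ID: the lower 6 bits of the
# segment header byte select the destination, MAP-ID 0 being reserved
# for hardware-decoded HPC1 commands to the CPDU.

def route_segment(segment_header_byte):
    map_id = segment_header_byte & 0x3F    # MAP-ID occupies 6 bits
    if map_id == 0:
        return "CPDU"                      # HPC1: decoded purely in hardware
    return "OBSW"                          # all other MAP-IDs go to software
```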
For purely CCSDS commanded S/C the identification of a hardware HPC1 command
is already performed on telecommand frame level by a dedicated frame header, and
in this case the HPC itself is directly included in the command frame and is not
encapsulated in a segment. For these details the reader is pointed directly to
CCSDS 232.0-B-1 [80] versus ECSS-E-ST-50-04A [84].
Independent of the applied command standard, the OBSW itself can also trigger HPC
commands going to the CPDU. These are called level 2 high priority commands or
HPC2 commands.
For the OBSW static architecture no dedicated modules are to be foreseen here:
routing functions are performed either via the PUS packet APIDs or by the CCSDS
processor based on the MAP-ID. Figure 8.14 depicts the full scope of TC routing for
both standard TCs and HPC1 TCs using a fictional satellite example.
[Block diagram: TC packets arrive from the transponder at the CCSDS processor /
MAP interface; segments with MAP-ID 0 carry HPC1 commands which are decoded by
the CPDU in hardware, all other TCs are passed to the OBSW for TC packet
processing, the onboard scheduler (master timeline), the event / action manager and
the OBCP manager. TC packets are routed to PUS compatible equipment via APIDs,
while device commands go to non-PUS platform and payload equipment via the
equipment handlers; the OBC reconfiguration module is also shown.]

Figure 8.14: TC routing from receiver to equipment via MAP-IDs, APIDS, CPDU.

8.8 Telemetry Downlink and Multiplexing

For commercial and agency satellites TC uplink is typically performed in the S-band
frequency range (2.2 GHz). The same band is used for S/C housekeeping TM
downlink. Three types of housekeeping TM are to be distinguished for all S/C:

● Realtime Telemetry:
This is telemetry generated on board during an established ground link and
directly transmitted to ground.
● Playback Telemetry:
This is housekeeping TM which was generated on board while the S/C was
operated out of sight of the ground station and which was stored intermediately
for downlink at the next ground contact.
● High Priority Telemetry:
This includes all event / action service TM and the related TC execution
validation TM which is stored on board at the time ground contact starts and
which has to be downlinked with enhanced priority – for visibility of the events /
actions / recoveries which happened during the last flight period and for
eventual manual failure identification and recovery by ground operators.
These three types of TM are transmitted to ground in parallel via one RF link through
so-called Virtual Channels (VC). VC multiplexing from the different TM buffers into
the one RF link input is performed by the already cited CCSDS processor
(concerning the OBC HW see also chapter 4.4 and figure 4.11). So for the OBSW
static architecture no dedicated modules are to be foreseen here. The TM packets
just have to be marked with the according VC identifier when being stored in the
OBSW's housekeeping data memory packet stores (HK memory). All routing
functions are either covered by the PUS services 14 and 15 or handled by the
CCSDS processor.
For science data telemetry downlink usually a link in X-band or Ka-band is used to
allow for an accordingly high transmission performance. The figure below shows an
example of S/C platform HK TM and science TM being downlinked via different
Virtual Channels.
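A strongly simplified priority multiplexer over Virtual Channel queues could look like this (illustrative only – real CCSDS multiplexing additionally uses bandwidth allocation schemes so that low-priority channels are not starved):

```python
# Sketch of downlink multiplexing: frames from several Virtual Channels
# are merged into one RF link, higher-priority VCs being served first.

def multiplex(vc_queues, priority_order, n_frames):
    """Pop up to n_frames frames, draining higher-priority VCs first."""
    downlink = []
    for vc in priority_order:
        while vc_queues[vc] and len(downlink) < n_frames:
            downlink.append((vc, vc_queues[vc].pop(0)))
    return downlink
```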

Figure 8.15: Downlink via multiple Virtual Channels.



8.9 Service Interface Stub

The service interface (SIF) of modern OBCs has already been mentioned. Via this
interface the OBSW cyclically reports the following information as pairs of symbol
name and value:
● a preselected set of OBSW variables from each domain in the OBSW-DP,
● OBSW internal variables such as important memory pointers, register entries,
timing parameters, RTOS flags etc.,
● task scheduling parameters for selected applications, handlers etc.,
● data bus access flags etc.
By decoding the binary output stream of the SIF it is thus possible to check
● whether all threads in the SW are running properly and
● whether all SW parameters are within limits;
● furthermore, S/C control parameters can be logged directly from the OBSW.
The focus during work with the SIF is the check of OBSW health, not S/C monitoring
or control. For the latter the SIF is rather unhandy anyway, since all accessed
OBSW-DP variables are in raw data format and are not calibrated.
Unlike debug code instrumentation, the SIF stub module inside the OBSW is kept
included in flight – as was stated before – and therefore it can serve for health
checks via the S/C umbilical connector until shortly before launch at the launch site.
It is a fixed building block of the OBSW static architecture.
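Decoding the SIF output into symbol-name / raw-value pairs can be sketched as follows (illustrative; a real SIF stream is binary, the textual "name=value" form used here is an assumption for readability):

```python
# Sketch of decoding a SIF record set: cyclic pairs of symbol name and
# raw (uncalibrated) value, here assumed as newline-separated text.

def decode_sif(stream):
    """Return a dict of symbol name -> raw integer value."""
    values = {}
    for line in stream.strip().splitlines():
        name, raw = line.split("=")
        values[name] = int(raw)
    return values
```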

[Block diagram: the same service-based OBDH architecture as in figure 8.13, now
extended by the service interface handler (Serv. IF Hdlr.) next to the central
managers.]

Figure 8.16: Service interface stub.



8.10 Failure Detection, Isolation and Recovery

A key topic in OBSW functionality is "Failure Detection, Isolation and Recovery"
(FDIR). FDIR functionality is implemented on several levels inside an OBSW. In most
cases there exists a top-level FDIR module which is the top-level health monitor and
recovery action controller of the overall system (cf. figure 8.17).
But failure detection is also included on lower OBSW levels. Equipment handlers, for
example, monitor the proper data communication between OBC and S/C equipment
and inform the FDIR module in case of anomalies – e.g. when an equipment failed to
respond or an equipment mode transition failed. Similarly the application modules
for payload, power, thermal and AOCS include a failure detection layer. The AOCS
e.g. must detect equipment performance failures (like increasing RWL friction) or
logical failures (invalid data from sensors).
However, as can be seen from these examples, the lower OBSW layers mainly
provide functions for failure detection and only limited features for isolation and
recovery. For detected problems passed up to the main FDIR module, failure
isolation and recovery has to determine what to do, e.g.
● switch off certain equipment (like unnecessary loads in case of a power
failure),
● trigger a reconfiguration to the redundant unit of a failed equipment or
● in last consequence bring the S/C to another operational mode (Safe Mode
or similar).

[Block diagram: the architecture of figure 8.16 extended by the top-level system
FDIR module incl. reconfiguration, and by failure detection (FD) layers in the
payload, AOCS, power and thermal control applications.]

Figure 8.17: Failure detection, isolation and recovery.

The underlying basic concept for FDIR is always to handle failures on the lowest
possible level. E.g. in case of a bus transmission error of an AOCS equipment, the
bus controller in the RTOS first performs a bus command retry. If this was successful,
the retry may be logged in telemetry for ground information, but system operations
can proceed normally. In case the retry also fails, the equipment controller is
informed. The equipment controller may for example recheck the equipment mode
and reinitialize equipment commanding – possibly via the redundant data bus side. If
this also fails, control is passed to the next higher level instance, which might be e.g.
the AOCS control application, which tries to activate the equipment redundancy if
available. If this fails as well, the S/C FDIR main module performs a S/C mode
transition to Safe Mode and leaves the rest to ground intervention during the next
ground contact.
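The bottom-up escalation principle described above can be condensed into a small sketch (illustrative; the level names follow the bus-retry example in the text and each level's recovery attempt is reduced to a callable returning success or failure):

```python
# Sketch of the FDIR escalation principle: each level tries its own
# recovery and only hands the failure upward if that attempt fails;
# if every level fails, the S/C FDIR main module commands Safe Mode.

def fdir_escalate(recovery_levels):
    """Run recovery attempts bottom-up; return the name of the level
    that succeeded, or 'SAFE_MODE' if every level failed."""
    for name, attempt in recovery_levels:
        if attempt():
            return name
    return "SAFE_MODE"
```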
For FDIR implementation two basic concepts are to be distinguished – depending on
the flexibility requested during operations of the S/C:
● One concept is somewhat "hard-wired". In this implementation concept each
lower level module tries its recovery function and in case of failure directly
triggers a fixed function of the next higher FDIR level – in the extreme case the
escalation goes fully up the above described chain to S/C Safe Mode. However,
fixed functions have to be implemented for this, and in case of any changes the
OBSW or its OBCPs have to be patched. Moreover it is up to each FDIR level's
OBSW code to generate appropriate status telemetry for the ground to be able
to follow what happened.
● A more flexible approach is provided by the service concept of the PUS. In this
case an anomaly detected by a PUS monitor triggers an associated PUS
event (which as a side effect provides an event TM packet giving visibility
to the ground). To the event an action is bound, which either may induce a
recovery function itself (unit reconfiguration) or may induce an event on
the next higher level. These monitor ⇒ event ⇒ action chains are more difficult
to implement and require thorough testing, but they are reconfigurable during
flight purely via the instruments of PUS TCs.
Therefore this concept is followed wherever greater flexibility is required in
the FDIR dependency and reaction chains during flight.
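The reconfigurable monitor ⇒ event ⇒ action binding can be sketched as a simple lookup table (illustrative; the event IDs, the action and the rebinding interface are invented, standing in for the Service 19 mechanisms):

```python
# Sketch of the PUS-style event => action chain: an event raised by a
# monitor looks up its bound action in a table that can be rebound in
# flight via Service 19 telecommands.

EVENT_ACTIONS = {}        # event ID -> action callable, rebindable in flight

def bind_action(event_id, action):
    """Service 19 style: (re)bind an action to an event ID."""
    EVENT_ACTIONS[event_id] = action

def raise_event(event_id, log):
    log.append(event_id)  # side effect: event TM packet for the ground
    action = EVENT_ACTIONS.get(event_id)
    if action:
        return action()
    return None
```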
Concerning the FDIR concept and spacecraft operations implications please also
refer to chapter 13.16.

8.11 OBSW Kernel

Finally, figure 8.17 depicts as top layer the OBSW "kernel", where this term is not
firmly fixed. It shall represent all the OBSW glue logic which is necessary for
● control of the OBSW startup process sequence after boot loading, with
◊ initialization of all components,
◊ interlinking of all OBSW modules and
◊ startup of task operations,
● holding the OBSW internal HK data "file system" managed by the OB memory
manager,
● and the control of SW task scheduling, which will be described in detail in the
following chapter 9.
At SW startup the kernel also initializes all the different threads of the OBSW – for
the equipment handlers, the applications, the TC and TM handling etc. – before the
threads themselves are released to run cyclically, synchronized according to the
defined scheduling.
During system initialization the OBSW kernel parses the entries in the already
mentioned "Spacecraft Configuration Vector" (SCV) to properly activate the nominal
or redundant sides of the equipment and to consider equipment which was marked
as "non-healthy". More details on the use of the SCV and its full scope of content will
be provided later in the operations chapter 13.2.
The kernel also is the entity generating all the relevant log entries for later tracking
and evaluation from ground, namely
● boot reports for each OBSW boot,
● the system log for tracking the status and the activated / deactivated equipment,
● the High Priority Telemetry log and
● the reconfiguration log for any reconfigurations performed on board.
These log files usually reside in non-volatile safeguard memory areas. Therefore they
are not included in OBSW schematic figures like 8.17.
The kernel also is the ultimate instance to handle severe HW traps – as far as this is
still possible in such a case – which require a reconfiguration of OBC components
identified by the OBSW. For such cases of a "dying" OBSW image the kernel also
writes the so-called "death report" into the system log.
The implementation details of the individual OBSW kernel – sometimes also called
"OBSW Core Data Handling System" (Core DHS) – are to a large extent driven by
the developer's design methods, by features of the implementation language and
also by design guidelines and company and agency policies.

9 Onboard Software Dynamic Architecture

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012

For all those OBSW building blocks which were presented in the previous chapter,
the dynamic architecture has to be developed in a further design step. This comprises
the detailed elaboration and design of
● the internal scheduling of all RTOS threads which encapsulate the presented
building blocks,
● the channel acquisition scheduling,
● the FDIR handling,
● the processing of Onboard Control Procedures and
● the Service Interface data supply.
These topics are addressed in the following sections.

9.1 Internal Task Scheduling

The basic design paradigm for the dynamic architecture is that all building blocks of
the OBSW, as presented in figure 8.17, are executed cyclically – independent of
the S/C mode, the application submodes (e.g. AOCS submode) etc.
Most of these building blocks will be implemented as individual tasks / threads on the
RTOS. Task control is subject to the OBSW kernel and is to be designed to be
configurable through a tasking table, so that changes take effect after a simple
reboot.
Not all tasks need to be executed with the same cycle frequency: e.g. a thermal
control application can well be called 10-50 times less frequently than the AOCS
control application. Certain tasks may have to run at the same frequency, but
temporally staggered with respect to each other: e.g. AOCS sensor data acquisition
by equipment handlers, AOCS control algorithm computation and AOCS actuator
control via equipment handlers.
In former OBC generations a perfectly optimized OBSW tasking with respect to
CPU load management was in addition absolutely essential due to the CPU
performance limits. Therefore, up to the MIL-STD-1750 and 31750 CPU chip
generation in the 1990s, Ada was used without an underlying operating system.
With the PowerPC and SPARC chips (ERC32 and LEON) the restrictions on CPU
load became more relaxed and the use of an RTOS as OBSW baseline became
feasible, but an efficient tuning of the OBSW task scheduling is still necessary, since
this system comfort comes at the expense of part of the gained CPU performance.
Still the requirement remains that task interaction and data exchange between the
building blocks may not lead to conflicts or operational blocking – independent of
● the S/C mode or submode,
● potentially parallel running payload instrument operations,
● a potentially parallel ground contact and its data handling.
During OBSW development according scheduling tables are worked out for the
OBSW kernel's tasking design. An example taken from the Earth observation satellite
CryoSat (ERC32 CPU) is depicted in figure 9.1:

[Scheduling table: two rows of five 100 ms slots each (slots 1-10) making up the 1 s
cycle. Every slot contains Mil_Bus_Manager, OBCP_Interpreter,
Event_Action_Manager, TC_Manager, Housekeeping, Device_Commanding,
EEPROM_Manager and SW Watchdog; slot-specific tasks include MTL_Manager,
TM_Pkt_Interface, TM_FIFO_Monitor, Statistics, RM_Monitor, Thermal_Control,
AOCS.Handle_STR, AOCS.Mode_Handler, Time_Manager, Memory_Management,
System_Log_Manager and others. Memory Scrub runs as a background task.]

Figure 9.1: CryoSat OBSW scheduling table. © Astrium

The overall scheduling cycle of the OBSW in this example covers 1 second (1000 ms). This cycle is split into 10 subcycles of 100 ms duration each, which partly include the same execution steps and partly differing ones. E.g. a SW watchdog refresh is performed in each OBSW subcycle, and the same applies to the call of the TC manager and quite some others. The Master Timeline Manager (MTL), which is called “Onboard Scheduler” in figure 8.17, is called only in every second subcycle. The handler for the PUS statistics service is called only in subcycle 5, which means only once per second. The same applies to the thermal control application in subcycle 9.
It also has to be pointed out here that although a task manager is identified in multiple subcycles, this does not imply that the work performed by the manager per subcycle is the same. E.g. the data bus manager (a MIL-1553 bus manager in this case) is called in each subcycle, but it does not communicate with the same onboard equipment during each one. This subject of channel acquisition scheduling is a separate topic treated in the next section.

What can finally be identified when reading the columns of figure 9.1 from bottom to top is that each subcycle reserves spare CPU capacity of 25–30% as contingency for potential FDIR activities.
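Such a slot table can be sketched as a small data structure. The task names below follow figure 9.1; the slot assignments and the dispatch logic are simplified for illustration and do not reproduce the real CryoSat timing:

```python
# Sketch of a static cyclic scheduling table as in figure 9.1: a 1 s major
# cycle split into ten 100 ms slots. Task names follow figure 9.1; slot
# assignments are simplified for illustration.

EVERY_SLOT = ["Mil_Bus_Manager", "OBCP_Interpreter", "Event_Action_Manager",
              "TC_Manager", "Housekeeping", "Device_Commanding",
              "EEPROM_Manager", "SW_Watchdog"]

SLOT_SPECIFIC = {                           # tasks run only in selected slots
    1: ["MTL_Manager"], 3: ["MTL_Manager"], 7: ["MTL_Manager"],
    5: ["MTL_Manager", "Statistics"],       # PUS statistics: once per second
    9: ["MTL_Manager", "Thermal_Control"],  # thermal control: once per second
}

def tasks_for_slot(slot):
    """Task list dispatched in 100 ms slot number `slot` (1..10)."""
    return EVERY_SLOT + SLOT_SPECIFIC.get(slot, [])
```

Calling `tasks_for_slot(5)` corresponds to reading one column of figure 9.1 from bottom to top; the 25–30% spare CPU capacity per slot is simply the slot time not consumed by the listed tasks.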

9.2 Channel Acquisition Scheduling

A key topic to be properly engineered as part of the OBSW dynamic architecture is the interaction of the OBSW with the S/C equipment connected to the OBC via data bus or potentially in addition via further low level interfaces (discrete I/O lines etc.) – please also refer back to figure 8.5.
The most common implementation is that the OBC connects directly only to data-bus-compatible onboard equipment (except for some software controlled high priority lines, HPC2). Any equipment that provides only low level interfaces, such as analog or simple serial or parallel digital lines, is connected to the previously cited Remote I/O Unit (RIU), which itself connects to the OBC via data bus. Please also refer back to figure 8.2. In such a design the OBSW on the OBC core interfaces only to the data bus – independent of whichever equipment is to be controlled. This paradigm is also applied onboard CryoSat, from which the example in figure 9.1 was taken. On CryoSat the data bus is a MIL-STD-1553 bus.
The concept is even more fundamentally implemented in the “Flying Laptop” satellite of the University of Stuttgart, Germany, where the OBC core communicates exclusively via data bus (SpaceWire) with an I/O unit, and the latter couples to all the platform equipment and to the payload management computer. Figure 9.2 shows a cutaway of figure 4.3. In this implementation the OBSW is only interfacing to a data bus – here even without additional high priority lines. These bus interfaces then have to route the onboard equipment control TCs and the onboard equipment TM.

Figure 9.2: Example: OBC core purely coupled to onboard equipment via SpaceWire and I/O board. © IRS, Uni Stuttgart

BUS TYPE | SUBTYPE | CMD_ACQ | TRANS_ALIAS | AUTO RETRY | RT_NAME | DATA LENGTH | DURATION | START | END
(DURATION, START and END in microseconds of the 1 s OBSW cycle)
External Mgmt Y all 1 100 50 150
External Cyclic ACQ_PCDU_Nom PCDU_STATUS_ACQUISITION_REQUEST Y PCDU_N 32 150 150 300
External Cyclic ACQ_PCDU_Red PCDU_STATUS_ACQUISITION_REQUEST Y PCDU_R 32 150 300 450
External Cyclic ACQ_GPS_Nom GPS_STATUS_ACQUISITION_REQUEST Y GPS_N 32 100 450 550
External Cyclic ACQ_GPS_Red GPS_STATUS_ACQUISITION_REQUEST Y GPS_R 32 100 550 650
External Cyclic ACQ_GPS_Nom GPS_POSITION_ACQUISITION_REQUEST Y GPS_N 32 100 650 750
External Cyclic ACQ_GPS_Red GPS_POSITION_ACQUISITION_REQUEST Y GPS_R 32 100 750 850
External Cyclic READ_PCDU_Nom PCDU_STATUS_DATA_READ Y PCDU_N 32 200 850 1050
External Cyclic READ_PCDU_Red PCDU_STATUS_DATA_READ Y PCDU_R 32 200 1050 1250
External Cyclic ACQ_FOGA_STATUS FOG_STATUS_DATA_ACQUISITON_REQUEST Y FOGA 32 110 1250 1360
External Cyclic ACQ_FOGB_STATUS FOG_STATUS_DATA_ACQUISITON_REQUEST Y FOGB 32 110 1360 1470
External Cyclic ACQ_FOGC_STATUS FOG_STATUS_DATA_ACQUISITON_REQUEST Y FOGC 32 110 1470 1580
External Cyclic READ_GPS_Nom GPS_STATUS_DATA_READ Y GPS_N 64 170 1580 1750
External Cyclic READ_GPS_Red GPS_STATUS_DATA_READ Y GPS_R 64 170 1750 1920
External Cyclic READ_FOGA FOG_STATUS_DATA_READ Y FOGA 64 150 1920 2070
External Cyclic READ_FOGB FOG_STATUS_DATA_READ Y FOGB 64 150 2070 2220
External Cyclic READ_FOGC FOG_STATUS_DATA_READ Y FOGC 64 150 2220 2370
External Cyclic ACQ_FOGA_INERTAL_DATA FOG_INERTIAL_DATA_ACQUISITON_REQUEST Y FOGA 32 100 2370 2470
External Cyclic ACQ_FOGB_INERTAL_DATA FOG_INERTIAL_DATA_ACQUISITON_REQUEST Y FOGB 32 100 2470 2570
External Cyclic ACQ_FOGB_INERTAL_DATA FOG_INERTIAL_DATA_ACQUISITON_REQUEST Y FOGB 32 100 2570 2670
External Cyclic READ_GPS_Nom GPS_POSITION_DATA_READ Y GPS_N 128 250 2670 2920
External Cyclic READ_GPS_Red GPS_POSITION_DATA_READ Y GPS_R 128 250 2920 3170
External Cyclic READ_FOGA FOG_INERTIAL_DATA_READ Y FOGA 128 210 3170 3380
External Cyclic READ_FOGB FOG_INERTIAL_DATA_READ Y FOGB 128 210 3380 3590
External Cyclic READ_FOGC FOG_INERTIAL_DATA_READ Y FOGC 128 210 3590 3800
External Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
External Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC PCDU_CMD CMD_PWL_Lines_PCDU_Nom N PCDU_N 256 650 510000 510650
External ASYNC PCDU_CMD CMD_PWL_Lines_PCDU_Red N PCDU_R 256 650 510650 511300
External ASYNC FOGA_CMD N FOGA 128 450 511300 511750
External ASYNC FOGB_CMD N FOGB 128 450 511750 512200
External ASYNC FOGC_CMD N FOGC 128 450 512200 512650
External ASYNC GPS_CMD CMD_MODE_GPS_Nom N GPS_N 256 800 512650 513450
External ASYNC GPS_CMD CMD_MODE_GPS_Red N GPS_R 256 800 513450 514250
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX 950000
OBC Int. Cyclic ACQ_RIU_Nom_HK RIU_Nom_HOUSEKEEPING_ACQ_REQUEST Y RIU_N 32 100 50 150
OBC Int. Cyclic ACQ_RIU_Red_HK RIU_Red_HOUSEKEEPING_ACQ_REQUEST Y RIU_R 32 100 150 250
OBC Int. Cyclic READ_RIU_Nom_HK RIU_Nom_HOUSEKEEPING_DATA_READ Y RIU_N 512 500 250 750
OBC Int. Cyclic READ_RIU_Red_HK RIU_Red_HOUSEKEEPING_DATA_READ Y RIU_R 512 500 750 1250
OBC Int. Cyclic ACQ_RIU_Nom_SADM_Pos. RIU_Nom_SADM_POSITION_ACQ_REQUEST Y RIU_N 32 100 1250 1350
OBC Int. Cyclic ACQ_RIU_Red_SADM_Pos. RIU_Red_SADM_POSITION_ACQ_REQUEST Y RIU_R 32 100 1350 1450
OBC Int. Cyclic READ_RIU_Nom_SADM_Pos. RIU_Nom_SADM_POSITION_DATA_READ Y RIU_N 256 300 1450 1750
OBC Int. Cyclic READ_RIU_Red_SADM_Pos. RIU_Red_SADM_POSITION_DATA_READ Y RIU_R 256 300 1750 2050
OBC Int. Cyclic ACQ_RIU_Nom_FSS_1 RIU_Nom_FSS_1_ACQ_REQUEST Y RIU_N 32 100 2050 2150
OBC Int. Cyclic ACQ_RIU_Nom_FSS_2 RIU_Nom_FSS_2_ACQ_REQUEST Y RIU_R 32 100 2150 2250
OBC Int. Cyclic ACQ_RIU_Red_FSS_1 RIU_Red_FSS_1_ACQ_REQUEST Y RIU_N 32 100 2250 2350
OBC Int. Cyclic ACQ_RIU_Red_FSS_2 RIU_Red_FSS_2_ACQ_REQUEST Y RIU_R 32 100 2350 2450
OBC Int. Cyclic READ_RIU_Nom_FSS_1 RIU_Nom_FSS_1_DATA_READ Y RIU_N 128 250 2450 2700
OBC Int. Cyclic READ_RIU_Nom_FSS_2 RIU_Nom_FSS_2_DATA_READ Y RIU_R 128 250 2700 2950
OBC Int. Cyclic ACQ_RIU_Nom_ES_1 RIU_Nom_ES_1_ACQ_REQUEST Y RIU_N 32 100 2950 3050
OBC Int. Cyclic ACQ_RIU_Nom_ES_2 RIU_Nom_ES_2_ACQ_REQUEST Y RIU_R 32 100 3050 3150
OBC Int. Cyclic READ_RIU_Red_FSS_1 RIU_Red_FSS_1_DATA_READ Y RIU_N 128 250 3150 3400
OBC Int. Cyclic READ_RIU_Red_FSS_2 RIU_Red_FSS_2_DATA_READ Y RIU_R 128 250 3400 3650
OBC Int. Cyclic ACQ_RIU_Red_ES_1 RIU_Red_ES_1_ACQ_REQUEST Y RIU_N 32 100 3650 3750
OBC Int. Cyclic ACQ_RIU_Red_ES_2 RIU_Red_ES_2_ACQ_REQUEST Y RIU_R 32 100 3750 3850
OBC Int. Cyclic READ_RIU_Nom_ES_1 RIU_Nom_ES_1_DATA_READ Y RIU_N 64 180 3850 4030
OBC Int. Cyclic READ_RIU_Nom_ES_2 RIU_Nom_ES_2_DATA_READ Y RIU_R 64 180 4030 4210
OBC Int. Cyclic READ_RIU_Red_ES_1 RIU_Red_ES_1_DATA_READ Y RIU_N 64 180 4210 4390
OBC Int. Cyclic READ_RIU_Red_ES_2 RIU_Red_ES_2_DATA_READ Y RIU_R 64 180 4390 4570
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. ASYNC RIU_CMD CMD_MODE_RIU_Nom N RIU_N 128 450 500000 500450
OBC Int. ASYNC RIU_CMD CMD_SADM_Pos._RIU_Nom N RIU_N 256 450 500450 500900
OBC Int. ASYNC RIU_CMD CMD_MODE_RIU_Red N RIU_R 128 450 500900 501350
OBC Int. ASYNC RIU_CMD CMD_SADM_Pos._RIU_Red N RIU_R 256 450 501350 501800
OBC Int. ASYNC RIU_CMD XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. ASYNC RIU_CMD XXX XXX XXX XXX XXX XXX XXX 980000

Legend:
ES – Earth Sensor
FOG – Fiberoptic Gyroscope (Instances A, B, C)
FSS – Fine Sun Sensor
GPS – Global Positioning System Sensor
PCDU – Power Control and Distribution Unit
SADM – Solar Array Drive Motor
Nom – Nominal Unit
Red – Redundant Unit
XXX – Dummy entries since the equipment list is not exhaustive here.

Figure 9.3: Channel acquisition scheduling table.

Figure 9.3 depicts a channel acquisition table for such onboard TCs from the OBSW to the equipment and for onboard TM from the equipment back to the OBSW. The example depicts a fictional satellite for which the OBSW controls some onboard equipment via an external platform data bus and some equipment indirectly via an internal bus-coupled I/O unit (see the OBC presented in figures 4.2 and 4.5). A number of basic concepts shall be explained with the aid of this table.
First of all it shall be explained that all the bus access calls listed in the table may be performed exclusively by the OBSW's equipment handlers – please also refer to figure 8.17. In an ideal OBSW design neither a Control Application nor any other component shall be granted direct bus access.
Next, it can be identified that the internal bus to the I/O unit and the external bus can be accessed in parallel (please refer to the start / end times for equipment access calls in the “External” rows of the table versus the “OBC Int.” rows). So these two buses really can be operated fully in parallel.
The next concept is that of cyclic and asynchronous bus accesses. Cyclic accesses typically are permanently recurring TM acquisitions from the equipment by the OBSW. Such data (like the PCDU status data acquisition) is permanently polled, except in modes where the equipment (like a payload) is entirely switched off. Asynchronous accesses to the bus occur only when the corresponding equipment is commanded. If no commands are in the queue, the bus access interval is not used. The PCDU command slot shall serve as an example: if some power line is to be switched onboard the S/C (e.g. for power-up of a payload), a PCDU command has to be executed, but this can occur only in the slot reserved for control of the nominal PCDU, which lies between microsecond 510000 and 510650 of the overall 1 second OBSW cycle, or in the time slot 510650–511300 for the redundant PCDU.
The last concept to be understood is that of double access calls to the bus for equipment control. E.g. the Fiber-optic Gyro (FOG) inertial data acquisition is a cyclic data acquisition. It is not time efficient to “call” an equipment item, wait for it to compute the response and block the bus until the requested result data are returned to the OBSW. Instead the OBSW submits an initial call to the targeted equipment carrying the TM acquisition command – and it just gets a command receipt confirmation from the equipment's remote terminal on the bus⁶. The OBSW then performs interactions with other equipment in between, while the previously commanded equipment prepares the TM data in parallel. After a certain fixed time interval from the initial TM acquisition request command, the OBSW can be sure that the initially interfaced equipment has meanwhile computed all relevant TM data and has stored it in the corresponding remote terminal registers. Thus at this point in time the OBSW polls the TM from the equipment's RT.
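The fixed slot timing of such a table can be checked mechanically. The following sketch (assuming Python, and using a four-entry excerpt of the external-bus rows of figure 9.3) verifies that each slot's end time equals start plus duration, that slots on one bus do not overlap, and that everything fits into the 1 s cycle:

```python
# Sketch: consistency check of a channel acquisition table (excerpt of the
# external-bus rows of figure 9.3). Each entry: (name, start, end, duration),
# all times in microseconds within the fixed 1 s (1_000_000 us) OBSW cycle.

CYCLE_US = 1_000_000

external_bus = [
    ("ACQ_PCDU_Nom",  150,  300, 150),
    ("ACQ_PCDU_Red",  300,  450, 150),
    ("ACQ_GPS_Nom",   450,  550, 100),
    ("READ_PCDU_Nom", 850, 1050, 200),
]

def check_schedule(entries):
    """Verify end = start + duration, no overlap, and cycle containment."""
    prev_end = 0
    for name, start, end, dur in sorted(entries, key=lambda e: e[1]):
        assert end == start + dur, f"{name}: inconsistent duration"
        assert start >= prev_end, f"{name}: overlaps previous slot"
        assert end <= CYCLE_US, f"{name}: exceeds 1 s cycle"
        prev_end = end
    return True
```

A second, independent list of the same form would be checked for the internal bus to the I/O unit, since the two buses run in parallel.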
The key concept behind such an acquisition table is that it stays exactly the same, with all the command / acquisition timing values unchanged, independent of
● in which mode the S/C is,
● in which submode any OBSW Control Application is (AOCS, thermal, power,
payload control)
● or even whether the S/C is in normal operation or in severe FDIR conditions.
The bus access timing table is independent of the above mentioned conditions and thus has to be properly engineered to suit the needs of all S/C operational modes / cases and to consider all the equipment's timing constraints between data acquisition requests and telemetry data availability in the RT.

⁶ The time slots usually are defined wide enough to allow for one bus acquisition retry still within the slot in case the first bus access by the OBC failed.

9.3 FDIR Handling

Concerning FDIR handling, two basic cases have to be distinguished, namely failures detected by software and failures detected directly by hardware.
The handling of SW detected failures in the dynamic OBSW architecture is rather intuitive and shall be explained with the aid of an example:
● An Earth observation satellite has to perform an image acquisition according to a loaded timeline and beforehand has to switch from AOCS coarse pointing mode to fine pointing mode, which implies activation of the star trackers.
● In such cases the OBSW Scheduler (cf. figure 8.17) will trigger the AOCS application to activate fine pointing mode.
● The AOCS application will trigger the Power Application to power the star trackers (STR) and the STR equipment handler to activate the STRs – which includes informing the STR handler about the telemetry expected after STR boot-up.
● It shall be assumed that one STR (STR1) fails completely.
● In such a case the STR handler will detect missing TM from STR1.
● The STR handler will inform the AOCS application about the failure via OBSW-DP flags and STR TM entries, and the AOCS application will react initially by canceling fine pointing mode and by informing the System FDIR and reconfiguration handler.
● It is up to the specific OBSW implementation which level (AOCS FDIR or System FDIR) will give the STR equipment handler the clearance for further recovery actions, like switching to the alternative bus side, activating a spare STR or other activities.
What can be identified here is that the entire FDIR processing is performed entirely within the frame of the normal bottom-to-top information path and the normal scheduling of the static architecture's building blocks. No special interrupts are raised, no dedicated recovery threads are started, nor are other mechanisms applied which would jeopardize the entire OBSW tasking stability.
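The STR example above can be sketched as plain flag propagation through the onboard data pool within the normal scheduling order. All component, flag and mode names below are illustrative, not taken from a real OBSW:

```python
# Sketch of the bottom-up FDIR information path from the STR example above.
# Component and flag names are illustrative, not from a real OBSW.

datapool = {"STR1_TM_VALID": True, "STR1_FAILED": False,
            "AOCS_MODE": "FINE_POINTING", "SYS_FDIR_ALERT": None}

def str_handler_cycle(tm_received):
    # Equipment handler: detect missing telemetry, raise a data pool flag.
    datapool["STR1_TM_VALID"] = tm_received
    if not tm_received:
        datapool["STR1_FAILED"] = True

def aocs_application_cycle():
    # AOCS application: react to the flag in its next scheduled cycle.
    if datapool["STR1_FAILED"] and datapool["AOCS_MODE"] == "FINE_POINTING":
        datapool["AOCS_MODE"] = "COARSE_POINTING"    # cancel fine pointing
        datapool["SYS_FDIR_ALERT"] = "STR1_FAILURE"  # inform System FDIR

# One normal scheduling cycle: handler first, then application - no
# interrupts or dedicated recovery threads are needed.
str_handler_cycle(tm_received=False)
aocs_application_cycle()
```

The point of the sketch is the ordering: detection, reaction and escalation all happen inside regularly scheduled cycles, exactly as in the slot table of figure 9.1.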

For hardware detected failures the situation has to be differentiated further:
● A relatively straightforward case is the handling of a non-responsive data bus terminal. If e.g. the MIL-STD-1553 bus terminal of an STR (to continue the example above) does not reply to a bus controller's message, the hardware of the bus controller already identifies this and will automatically initiate a transmission retry. If this retry works – i.e. the terminal responded – the problem is completely transparent to all upper OBSW control layers, and only a retry reporting flag is handed over by the bus controller to the equipment handler which triggered the access. Such retry flags may be used for monitoring via the PUS statistics service 4.
● In case the retry failed, the equipment handler is informed about the failure, and corresponding failure flags and the RT number are recorded in OBSW-DP entries. In such cases the situation becomes exactly the purely SW managed STR FDIR chain from bottom via AOCS to System FDIR as described above.
More complicated are hardware detected errors which are induced by OBC hardware components themselves. In fact only a very limited number of them can be recovered or partly handled by the OBSW at all:
● A typical problem detected by hardware mechanisms are memory failures in OBC PROM or RAM due to “Single Event Upsets” (SEU) or due to damage by high energy particles. As already mentioned in chapter 4.2, modern memory chips provide hardware based “Error Detection and Correction” (EDAC). The memory EDAC checksum electronics include corresponding signal lines to the OBC processor's “Line Control Block” bus (LCB bus). Modern processors like the LEON include on-chip handling of such EDAC BCH checksums. They provide autocorrection of single bit failures fully transparent to the running OBSW (except for the loss of a single CPU clock cycle), and they can detect double failures. In such double failure cases the corresponding address information is placed in special CPU registers which are cyclically monitored by the OBSW watchdog functions and which are thus accessible to the OBSW. From this point onwards the problem has to be handled in software. However, since a memory chip has failed, first of all the OBSW itself is prone to crash when accessing this memory address, and secondly the problem can only be handled by OBC HW reconfiguration. Current onboard realtime operating systems do not provide memory virtualization, so the RTOS cannot be advised to blank out the bad blocks.
● A further group of hardware detected failures are HW traps provided by the OBC processor, like e.g. the “Uncorrectable register file SEU error” of the LEON. This type of failure is induced by errors yet another “step closer” to the OBSW, since they appear directly inside the processor – like the register SEU example here. When such an error is detected by the processor hardware, the OBSW in most cases already has computed some wrong data, and the probability of a coordinated error recovery is low. Therefore the maximum achievable in such situations is storage of the alert in a non-volatile safeguard memory for later ground diagnosis and the triggering of an OBC reconfiguration – independent of the S/C operational mode. Such functions will directly be part of the OBSW System FDIR module.
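The single-error-correction / double-error-detection behavior described above can be illustrated with a minimal Hamming(7,4) code plus an overall parity bit. Real OBC memory EDACs use wider BCH codes implemented in hardware; this is only a conceptual sketch of the principle:

```python
# Conceptual sketch of SECDED behaviour (single error correction, double
# error detection) using a Hamming(7,4) code plus an overall parity bit.
# Real OBC memory EDAC uses wider BCH codes in hardware.

def encode(d1, d2, d3, d4):
    p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]
    return word + [sum(word) % 2]        # append overall parity bit

def decode(code):
    word, p0 = list(code[:7]), code[7]
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of a single error
    parity_ok = (sum(word) % 2) == p0
    if syndrome == 0 and parity_ok:
        status = "ok"
    elif not parity_ok:                  # odd number of flipped bits
        if syndrome:
            word[syndrome - 1] ^= 1      # autocorrect the single flipped bit
        status = "corrected"             # transparent to the running OBSW
    else:                                # even error count: uncorrectable,
        status = "double_error"          # to be handled by SW / OBC reconfig.
    return [word[2], word[4], word[5], word[6]], status
```

A single flipped bit is corrected transparently; two flipped bits yield the "double_error" status that, in a real OBC, ends up in the CPU registers monitored by the OBSW watchdog functions.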

9.4 Onboard Control Procedures

“Onboard Control Procedures” (OBCP) have already been mentioned in conjunction with PUS service 18 in chapter 8.6. OBCPs are “command scripts” assembled from S/C PUS service commands. They allow for executing sequences of commands on board – including if / then / else constructs and loops – while avoiding fixed programmed code sequences inside the OBSW. Typical applications are “scripts” for equipment reconfiguration including verification – e.g. switch-over from a nominal sensor to the redundant one.
● OBCPs can be uploaded to the S/C from ground,
● and thus can be changed during flight,
● without changing the OBSW's compiled binary code – i.e. without patching the
OBSW.
For OBCP processing, an execution engine inside the OBSW is required. In older S/C missions such OBCP implementations have been based on high level language interpreters, which however is slow, and the OBCP code is not optimized with respect to compactness.
Newer implementations (e.g. for the ESA missions GAIA and BepiColombo) prefer bytecode interpreters, which require OBCPs to be precompiled similar to Java code. The latest research approaches even investigate the application of real Java for OBCP interpreters in onboard software – see [122]. The benefit of this implementation technique is an improved execution performance, or vice versa less CPU load, and in addition a more compact OBCP code on board (reduced memory requirements).
The following basic characteristics apply for OBCPs and their execution:
● An OBCP processing engine in most cases allows multiple OBCPs to be
executed in parallel.
● Since OBCPs are pretested on ground and since each command is checked
for validity before execution on board, they are rather safe w.r.t. OBCP
software bugs.
● But OBCPs definitely are not tested with the same quality and according to the
full scope of the development standards as the rest of the OBSW (cf.
chapter 11). For example they usually do not undergo an independent SW
verification.
OBCP implementation languages typically provide the following functions and
structures:
● Onboard TC submission to connected equipment.
● Simple types:
boolean, signed / unsigned integers, floating point, double precision.
● Arrays / vectors of above types.
● Arithmetic and logic operators:
+, -, *, /, %, &, |, etc.
● Execution control statements and loop constructs:
if … then … else, for … , do ..., while …
● Procedures and functions for structuring code into “subroutines” or similar.
● PUS parameter monitor control.
● Tracking functions for event occurrence.
● Event trigger functionality.
● Dedicated OBCP onboard data pool parameters which can be modified by S/C
TC, and which can be observed via S/C TM.
● OBCP management functions in the OBCP handler (cf. figure 8.17) such as: load, start, stop, suspend etc. These functions can be applied to running OBCPs via PUS or can be used for management of one OBCP by another.
● Onboard TC submission to OBC connected equipment.
● Onboard equipment TM packet reception.
● TM packet variable evaluation.
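A miniature sketch may help to make the OBCP idea concrete. The toy "language", the engine, the TC names and the telemetry pool below are all hypothetical; real OBCP languages and execution engines (PUS service 18) are considerably richer:

```python
# Toy sketch of an OBCP-style command sequence with a conditional, executed
# step by step by a miniature engine. Language, TC names and telemetry pool
# are hypothetical; real OBCP engines (PUS service 18) are far richer.

def run_obcp(script, telemetry, issued_tcs):
    """Execute a list of OBCP steps against a telemetry dictionary."""
    for step in script:
        kind = step[0]
        if kind == "send_tc":                 # onboard TC submission
            issued_tcs.append(step[1])
        elif kind == "if":                    # if <tm_param> == <value>
            _, param, value, then_branch, else_branch = step
            branch = then_branch if telemetry[param] == value else else_branch
            run_obcp(branch, telemetry, issued_tcs)
    return issued_tcs

# Example script: switch over from nominal to redundant star tracker.
switch_str = [
    ("send_tc", "STR_NOM_OFF"),
    ("if", "STR_NOM_STATUS", "OFF",
        [("send_tc", "STR_RED_ON")],           # then: activate redundant unit
        [("send_tc", "RAISE_EVENT_STR_FDIR")]  # else: raise an event
    ),
]

tcs = run_obcp(switch_str, {"STR_NOM_STATUS": "OFF"}, [])
```

The script itself is pure data: it could be uploaded and replaced from ground without touching the engine, which is exactly the property that distinguishes OBCPs from compiled-in OBSW code.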

9.5 Service Interface Data Supply

It was already explained that via the Service Interface a set of variables of the following scope is supplied to an external IF connector of the satellite:
● a set of OBSW variables,
● OBSW internal variables like timing parameters, RTOS flags etc.
● and task / thread scheduling parameters.
All these parameters are available in the OBSW-DP. The only open topic is to what extent the parameters can be selected online for export to the SIF, or to what extent the parameter preselection is precompiled into the OBSW.
Usually an OBSW has a basic scheduling frequency at which the main elements (Applications, Handlers) run synchronously, or they run at frequencies which are integral multiples or fractions thereof. E.g. the base frequency can be 10 Hz; an AOCS App. then runs at 10 Hz, while the Thermal App. runs at 1/10 Hz, i.e. 100 times slower. The SIF handler typically is scheduled at the main cycle frequency of the OBSW or at a slightly lower integral fraction to limit the induced CPU load. The data rate to the SIF is relatively high. This allows each entire SIF output data set to represent a consistent insight into the OBSW status at a point in time.
Concerning the OBSW dynamic architecture the SIF handler is only a topic with
respect to the induced CPU load. Otherwise it is a simple straightforward process not
interacting with other OBSW blocks.
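The base-rate / divisor scheme described above can be sketched as follows; the element names and divisor values are illustrative (only the 10 Hz base and the 1/10 Hz Thermal App. are taken from the text):

```python
# Sketch: deciding which OBSW elements run in a given base-rate tick.
# Base scheduling frequency 10 Hz; each element runs every n-th tick
# (an integral fraction of the base rate). Names are illustrative.

BASE_HZ = 10
DIVISORS = {             # element -> run every n-th base tick
    "AOCS_App":    1,    # 10 Hz
    "SIF_Handler": 1,    # scheduled at the main cycle frequency
    "Thermal_App": 100,  # 1/10 Hz, i.e. 100 times slower
}

def due_elements(tick):
    """Elements scheduled in base tick number `tick` (0, 1, 2, ...)."""
    return [name for name, n in DIVISORS.items() if tick % n == 0]
```

Because the SIF handler runs at (or near) the base rate, every exported data set is captured within one base cycle, which is what makes each SIF snapshot a consistent view of the OBSW state.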

10 Onboard Software Development

OBC controlling AEOLUS © Astrium


Onboard software development is a very complex task, which is by far not only difficult due to code implementation challenges but also implies a lot of spacecraft systems engineering effort beforehand. The entire OBSW development comprises the steps:
● Software functional analysis
● Software requirements definition
● Software design
● Software implementation and coding
● Software verification and testing
Each of these topics is worth being addressed separately and is treated in an individual section below.

10.1 Onboard Software Functional Analysis

Chapter 2 already sketched out how the S/C design and the corresponding OBC and OBSW IF design evolve together (cf. tables 2.2 and 2.3). During S/C development in Phase B a detailed definition of all onboard functions has to be worked out, and it must be elaborated which functions will be implemented in SW and which ones in HW. A so-called “Function Tree” for the OBSW has to be established. During OBSW Functional Analysis it thus has to be considered
● which functions within the S/C OBSW are required and
● how these SW functions are allocated to the diverse S/C operational modes.
In the first part of Phase C, when the detailed onboard equipment type and supplier selection is made and when the functional and interface documentation of all this equipment is provided by the suppliers, the scope of this OBSW Function Tree has to be refined with all the details on equipment control protocols, necessary equipment mode switching and equipment FDIR functions to be implemented in the core OBC's onboard software. At the beginning of the OBSW design activities the Function Tree then finally comprises all
● commandable functions of the OBSW kernel,
● functions for processing TCs,
● functions for generating TM,
● controller application functions (AOCS, power, thermal, payload),
● equipment IF handler functions,
● surveillance / control relevant functions,
● error / failure diagnostic / failure handling / recovery functions, (FDIR).
Furthermore for the S/C subsystem level, it has to be considered that within each S/C
operational mode the subsystems (e.g. payload) themselves can be operated in
different subsystem modes and may require different active functions for their control.
Figure 10.1 below depicts an excerpt from such a Function Tree as an example.

In addition the Function Tree has to reflect all control and data handling interaction
functionality between subsystem functions and the S/C command and control – such
as management of boot sequences for intelligent payloads, time / position / velocity
synchronization functions of AOCS algorithms with GPS / Galileo / GLONASS
receivers on board etc.

Figure 10.1: Function Tree example (extract). © Astrium GmbH

The synchronization of S/C operational modes, subsystem mode control and equipment control (see also figure 10.2) represents a large part of the overall functions set.
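A Function Tree with mode allocation can be represented as a simple nested structure. The branch, function and mode names below are hypothetical and do not reproduce figure 10.1:

```python
# Illustrative excerpt of an OBSW Function Tree as a nested structure, with
# each leaf function allocated to the S/C operational modes in which it is
# active. Branch, function and mode names are hypothetical.

FUNCTION_TREE = {
    "TC_Processing": {"Decode_TC": ["ALL"]},
    "AOCS": {
        "Coarse_Pointing": ["SAFE", "NOMINAL"],
        "Fine_Pointing":   ["NOMINAL"],
    },
    "FDIR": {"STR_Recovery": ["NOMINAL"]},
}

def functions_for_mode(tree, mode):
    """Collect all leaf functions active in a given S/C operational mode."""
    result = []
    for branch, leaves in tree.items():
        for func, modes in leaves.items():
            if mode in modes or "ALL" in modes:
                result.append(f"{branch}.{func}")
    return result
```

Querying the tree per mode directly yields the allocation of SW functions to S/C operational modes that the functional analysis has to establish.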
[The remainder of chapter 10 – the subsequent sections on software requirements definition and software design, including figures 10.2 through 10.21 – is not legible in this extraction.]
<

 


 %
 3  


+   %
@ % 

 


  2+9
   
      @       
 
  
 
  
 
   
 
8
 /.-- %
    %%
      
  
         @



 AC%


 
<
 

  
         
        
       


%


3+ 

+ 

"C9

 

2
A
+ 


' 

A
9
A8 >

9
A8!'# '$
"/3
' 
@ 8
' 
@ 
8
(
' 
8!
%#

8
/.--? 

 =T10V
 
  
%

  
 

 2C
 

+  
  

 

@%


 


 %AC%


8

 

T10V 

   % 


6 
 
<

3


WW


 

-!9!  
 
 $
+E$F

( 
  
       
 
  
 
     
  
   
    


      
   %


  +
  
  

  %  
   

 
  Y8 
>


A Y
>


+
 /E/

!8>A#8  6 


 



 %



 

  

 
AC

@


 ! %@ @ #


<




K
<



 <

  
(
  
<




@

+ 9(*

8
/.-F?8 
>


A =8>A

+ 8>A
  

   
   
         A 
  
    (  


            
  

     


 
 
 
  

'=

  A
  
   

 <
  
 

 
   
         %
        
    )

  
   

%



          
   
     
  
  


 

%

+


  Y(
 
 9Y
  






 
 
;



  @<
  %

 


  

A! 
 #<


 8>A  ) %
  

  (     


    
   %


    8>A          


 
      J      %      
            
+ )  
 <

 
 
 
  AC3'
8     
  <
           
  @  <
   

   
 

   

T10V
/E- %

(   8>A%


  

  
  

  

! 





6 
 

  

#

-!9!#  


 
*EF

     %         


     8>A 
         


3 Y>


8

 Y!>8#+ AC
 
 

   3%


  
=
/.-D  8>A >8
<

 



  
 
A 

 
 A;
 
 )  
 AC 
+ AC 
 '$A 
  
   
  
 >8;A      
 

  +
 
    
       A;           
 
   +   >8 
             Y 
     9Y



 % 

AC


@

 ! %@ @ #

<


A

K

 <


<

 

<



+@+

@
8

+ 9(*

8
/.-D?>


8

 =>8

8      >8   


    

    AC    +   
  A  @
   @      
 
     >8        

    
 
 
      

   
       AC    %


          


  (            8  
    


    %           
          

  %


8>A
>


+
 /EF

+
  
<

  
 8>A)

 ) 

   

 AC3' >8 AC
 A 

  
   %
  


%    
   
   
        +
 

     
     
  
  %       
 
  
 
  )   %
  <
               
  




% 
 = %


K%
/H

 


   
3A<
=

=    %           
           AC  %


  + 

 
 %    

 


 
     %

J
 
  
 /.-D      3

         
<

+  Y
 Y  

 


 
    


    '             %    
      >8   


   <    >8 )
 
 
 
 
 
   ?
 + 
  


AC%   
  
%
 

                   
    AC  
   

   

    

 %; 
 %
 


AC


@

 ! %@ @ #

<


A

K

 <


<

 

<



+@+

@
8

', ',(  2  ,(

+ 9(*

8
/.-E?  
  

/ED %

+   
 
     
 

   %
          
  
              
  

= %
 
   + AC

 

   

  
 
6

 



 + Y+ @+ 8Y!+@+3
8 #

 
 /.-F  /.-E
  %

 


   
%  A
         +                 
+@+38    
                    
%    %
  
  
       A  ;   
 
+ %
%A%
+@+38
  
8


  @


     
  %
                       
  

  


    

    YY 
%   
( 
    
%
 



  +   '$       
  %  
       
 
8    
  
 


%
    
  
 
     
  
      + 


< 


%
   
  
+       
  
  AC       %
         

    
         
  %




   
 
 

  

 

    %

G9    

  
  )       6 ;     @   
     
            AC     

 
+% 
 



8
/.-H?>8
 %3  
 ,(

>


+
 /EE

+ 
  
 
AC C
 

 
      )                       




 %


  


 +
 
       
    AC     
       AC  

  
    
  +  
  %   

 
   
     A



 

   
 

AC
%

 





8
/.-I?% ),(

(    )
  
         %
    3 
Y%
 Y!8#

 
  
 '$


 C

 
 ! TEEV#+ AC


  
    

     
      
  %
   

    



 
$ 


  
%
  ) 
 
 
 
+ 
  
  
 %
  )3
/EH %

8
/.-0?%
  )
A ,(


-!9!5 C**,E,$F

+  
  


7
 9:8

 
 
    )  
  +   
  )       




 )


 %

 +
  
 Y + Y!+A#
+ 
)

 @ 


 +

 
  


 %   
AC 
        A       
  
  
   
             

      

    



  
( % 
      
        %
  
  >8 
   
 
 A

 A 
 
 
+ >8;A  

 

 8'&(
 ( 
%


>8


   )

    



  @
+A

% 


   AC

 
 
  8'&(( 
=

 

=  A
+   
    
        

          2C  @  C



 )  &
3>6 
+    A)
 
  

     +@+38 % 
 

' AC A 
+ %
93+3/EEF @ 
 AC
>


+
 /EI

       
   
    <
 
 
           
%+   
  
 %


8
/.-1?&
3>+A
)
,(


(  


  AC     
 
 
 7
 9: 

+ A 

    






 
;<
6 

 
   >8(

   
 

 

 
 

  A


* )  
 AC



   
    

  
<
+ 

  

@

 ! %@ @ #

<

A 
A

K

@
 <



<

 

' +@+ <

83 83 
 

@

+ 9(*

8
/.F.?8
7
 9:



 6  <
A
 @3
 
%
  
        
  (    

         A  

<
 

   

/.F/
/E0 %

@


 ! %@ @ #


<




K
A <


<

 

' +@+ <

83 83 
 

@

+ 9(*

8
/.F/?+A
  A
 

+
 

  %  @3
  A
 

 
 
  AC 
  
  
 
/.F.  
  
   



 
     %
    

             
 !
#%
 
  

  + 

 <
 


/.F/% 
 
 
  
A@

  

 
 
  
  
 
     
     
   +    


 /.F-   A /= )=  
 

 
 


% 
   



8
/.F-? /+A,(

>


+
 /E1

           )         


     
      
 
  =  
38 =  
 )

/.FF>


 ?
 + +@+38) 
=
 

 +  
   A  
A@3!"$#
 + 
  
    A+

8 

    
%
  
   

 + % 
   
  


 




        
           
;  
 
      



      %    
  
    
  +
         
 
  % 

<



 
 


  


   @  


8
/.FF? !(6 #,(

/H. %

A 
 +A 


 )  A



Y
 9Y
  3 +

     3  
   
       
     
  AC
 
 

 
  
 % 
%

  %>8
 
  
7( 
+
7!(+# 


-!9!8 /


 (E/(F

+ 7
 9: 

  
   
) 
  Y2
 9Y 

       

     
       7  
  8 

:! 8#

<


@

 ! %@ @ #


K

A <


<

 

' +@+ <

83 83 
 

@

+ 9(*

8
/.FD?  

 8

+  8%%  
+A 



  
      

              

<
          %       
 +    



            )  
 
  
  


<
 8)

<
 <

                 &'   
%  +
  



  


 7
   <
:! #' 
                             
;
 
   

<
 
  

 
>


+
 /H/

  

    A  

   
       
    


<

 


  

       
      78:  +   
  

          %



     8 
      
&
3>  
  +                   !'
'
' #



 



8
/.FE?&
3>%

 8

     %  
   
       
    
38
  %

  


      
  

  
   
                A  =
  ) =
%


     


+  
      
  
 

   

   

<
   
  


 

9     %
 
3

     =      
      ;  
      


  
  

  A
     A                 
     


 %
 AC 
 


  
 
 A   <
 

      %  

  @  <
  
  
  
   
  

 

 %
 
 
  
<

 
8 

 
 @





 
 %

   
 %



 
/H- %

8
/.FH? /8 ,(

() 8 

 
  



%
 
 /.FH 


  /6 (  


  8 

 
 


 
 

8
/.FI?
 
<
8


 ,(
&2
>


+
 /HF

-!9!9    ,1




+   


AC%



%%
 < 
 
%+ 

)<  
  C


+


 
)
  AC @3
6 = 

%'$%
 
          6   

 C   
  =  
        

 
   %  
%
%  C    =  
     
  

      

        

 
   =         

 C


'"A(3/
+   
  

=
 
2C
  %8"=


= 


  
 
8>A   

 AC 



+ 
 
  %
AC 
?
 A
 



J
 

 "+8 
= 
 
%
 @%
 
= <
 AC3'


 

 +@+ 
%
!'$#
 +
 ) 
 

 A'
2
 (

 
  
!'' #
+ ) 
 8" 
?
 A2C 
=  ( 

   

 <
 

<
 
 " 

<

   
 
= (8" 
 
  % 

 
@ 


8
 
 %    
 !
# 
 



 /F/D
OBSW Development Process and Standards 165

11 OBSW Development Process and Standards

Development Standards © ECSS

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012

11.1 Software Engineering Standards – Overview

The goal of software development processes and of software coding and development
standards is to achieve a sound software design quality with respect to
maintainability, and a high operational reliability of the SW under all nominal and system
failure conditions. For satellite OBSW it must additionally be considered that the
satellite design typically is only single failure tolerant, and that the satellite cannot be
contacted again without a running OBSW – except for the limited scope controllable via
High Priority Commands.
Such high software quality can be achieved by
● guidelines for good design and coding practice,
● requirements towards a thorough test concept comprising unit, integration and system level tests,
● extensive testing enforcing e.g. full branch coverage and node coverage testing,
● and in addition an independent software verification and validation (ISVV) by an external partner who has not been involved in the SW implementation.
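The branch coverage notion from the bullet above can be illustrated with a small sketch – hypothetical toy code, not taken from any flight software: every decision in the code under test must be driven through each of its outcomes by at least one test vector.

```python
def threshold_check(value, limit):
    """Toy monitoring check with one decision, i.e. two branches."""
    if value > limit:
        return "OUT_OF_LIMIT"   # True branch
    return "OK"                 # False branch

# Full branch coverage needs at least one vector per branch outcome.
test_vectors = [(5, 10),    # value <= limit -> exercises the False branch
                (15, 10)]   # value >  limit -> exercises the True branch
covered = {threshold_check(v, l) for v, l in test_vectors}
```

With only the first vector the True branch would remain untested, which is exactly the kind of gap that enforced branch coverage testing is meant to expose.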
To guarantee consistently high quality in SW engineering, SW technical design, SW
implementation, SW test and verification, SW documentation and maintenance,
dedicated software development standards apply for S/C OBSW development.
Such software standards prescribe diverse development guidelines for software in a space project – here to be interpreted for the OBSW, respectively for all software elements in the OBC or in other S/C equipment. Prescribed in such standards are usually the development approach, the development phases, the review milestones and the documentation to be delivered for each of the milestones. Several software standard families exist:

ECSS Standards:

Figure 11.1: Family of ECSS standards. © ECSS

For European space projects there exists an entire suite of standards for spacecraft development – not only dedicated to OBSW development – the so-called ECSS standards. These standards are elaborated and published by the
European Cooperation for Space Standardization, (ECSS). This commission includes
members from the European Space Agency, diverse national agencies and industrial
partners. Relevant for software development and thus for OBSW are especially the
standards:
● ECSS-E-ST-40 Software engineering, and
● ECSS-Q-ST-80 Software product assurance.
Please also refer to figure 11.1. The ECSS standards are a family of cross-referencing documents which is very exhaustive but also sometimes cumbersome to read. The completely revised ECSS standard set, available since the end of 2010, has updated all major parts and has considerably improved the precision of the standards. More details on the ECSS standards are included in chapter 11.3 below, which explains in more detail the content and the intention of such a SW engineering standard using the example of the ECSS-E-ST-40C.

Aeronautical Software Standards (Aerospace) – DO178B:

DO178B defines the guidelines for aeronautics software. It was developed by the Radio Technical Commission for Aeronautics, Inc. (RTCA) and was accepted as certification standard for aeronautics software by the US Federal Aviation Administration, FAA (see Advisory Circular AC20-115B). De facto it has meanwhile become a worldwide applied standard for aeronautic software and its development.

● DO178B primarily treats the software development itself. Within the development process diverse accompanying quality and test documents are to be worked out. So DO178B to a certain extent is the counterpart to both ECSS-E-ST-40 and ECSS-Q-ST-80.
● DO178B is applicable in the space business whenever systems also interact with aeronautics, such as
◊ “quasi-airplanes”, like Space Shuttles, or commercial spaceships like “Spaceship One”, and
◊ aeronautics support systems like GPS or Galileo (especially their payload software and their ground segment software).

Standards for general Software – ANSI / IEEE:


In the space business generic software standards are only applicable for support tools where a tool problem or failure would not induce a disturbance of the spacecraft itself. Such equipment can for example be certain ground support equipment, OBSW test equipment etc. The IEEE standards for software development are:
● ANSI/IEEE-729 Glossary of Software Engineering Technology
● ANSI/IEEE-1058 Software Project Management Plan
● ANSI/IEEE-830 Software Requirements Specification
● ANSI/IEEE-828 Software Configuration Management Plan
● ANSI/IEEE-1012 Software Verification and Validation Plan

● ANSI/IEEE-1016 Software Design Description


● ANSI/IEEE-730 Software Quality Assurance Plan
● ANSI/IEEE-1028 Software Reviews and Audits
● ANSI/IEEE-829 Software Test Documentation

Software Standards for dedicated Space Projects:


For dedicated large-scale space projects sometimes separate software standards are defined. In most cases they are derivatives of specific standards or combinations of diverse national standards. Examples are
● the Columbus Software Development Standard, (CSDS) and
● the Galileo Software Standard (GSWS).
The Galileo Software Standard (GSWS) for the European satellite based navigation
system e.g. comprises all the following domains of
● software engineering,
● software quality assurance, and
● software configuration management
in “a single book”, which makes it simpler to read and to understand than the ECSS counterparts, although with respect to the requirements they impose on OBSW software they are rather comparable. The GSWS is a closed and complete pure software standard; however, it to a large extent neglects the topics of hardware / software integration.

Figure 11.2: Galileo Software Standard as a closed single book standard. © ESNIS

The Galileo Software Standard comprises a common requirements set for all
software development, integration and test phases in the frame of the Galileo
navigation system program. Furthermore operations and maintenance topics are
treated as well as the full scope of software product assurance topics. GSWS is a
common standard for the:
● Space Segment, (SS), which encompasses all elements on board the Galileo
navigation satellites.
● Ground Control Segment, (GCS), comprising all components inside the
ground stations for control and housekeeping of the 30 satellites.
● Ground Mission Segment, (GMS), comprising all components inside the
operator stations by which the Galileo payloads of the satellites are operated.
This includes signal generation, security codes handling, cyclic code updates,
leap time corrections of the atomic clocks aboard etc.

● Test User Segment, (TUS), comprising all elements for test of Galileo
receivers and car navigation systems under realistic conditions before full in-orbit availability of the spacecraft.
Further reading and Internet pages concerning software development standards are provided in the corresponding subsection of this book's reference annex.

11.2 Software Classification According to Criticality

The requirements towards software development, testing and documentation, as well as towards formal acceptance, which are prescribed by a software standard, usually depend on the criticality of the SW for the space mission. OBSW for safety critical systems is ranked with the highest criticality level, such as control software for ECLS systems, manned spaceship control software or navigation software used for airplane guidance such as GPS / Galileo navigation payload software. Software for ground equipment, such as OBSW test equipment like an SVF, has a lower criticality ranking, which implies for example that less extensive testing is required. The following table summarizes the criticality level definition according to the ECSS standard.
Table 11.1: Software criticality levels. © ECSS-Q-ST-80C, Annex D

Level A: Software that if not executed, or if not correctly executed, or whose anomalous behavior can cause or contribute to a system failure resulting in: Catastrophic consequences (loss of mission etc.)
Level B: Software that if not executed, or if not correctly executed, or whose anomalous behavior can cause or contribute to a system failure resulting in: Critical consequences (endangering mission)
Level C: Software that if not executed, or if not correctly executed, or whose anomalous behavior can cause or contribute to a system failure resulting in: Major consequences
Level D: Software that if not executed, or if not correctly executed, or whose anomalous behavior can cause or contribute to a system failure resulting in: Minor or negligible consequences

A similar classification is available in the DO178B called "Certification levels" and in


the Galileo SW Standard, called “Development Assurance Levels” – ranging from
DAL A to DAL E.
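As a sketch, the classification of table 11.1 can be captured in a small data structure; the `needs_isvv` rule below is an illustrative assumption for demonstration only, not a provision of the ECSS, DO178B or Galileo standards.

```python
from enum import Enum

class Criticality(Enum):
    """ECSS SW criticality levels with their consequence category."""
    A = "Catastrophic consequences (loss of mission etc.)"
    B = "Critical consequences (endangering mission)"
    C = "Major consequences"
    D = "Minor or negligible consequences"

def needs_isvv(level):
    # Assumed project rule (hypothetical): independent SW verification
    # and validation only for the two highest criticality classes.
    return level in (Criticality.A, Criticality.B)
```

Such a mapping makes it explicit that the verification effort demanded by a standard scales with the consequence class, not with the size of the software.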

11.3 Software Standard Application Example

The most important characteristics of an OBSW development in accordance with a software standard shall now be explained taking the ECSS standard as example, since it is the one most commonly applied for European space projects. The development phases for a SW and the corresponding intermediate reviews are depicted in the figure below:

SRR = Software Requirements Review    IRR = Integration Readiness Review
PDR = Preliminary Design Review       SW-QR = Software Qualification Review
DDR = Detailed Design Review          SW-AR = Software Acceptance Review

Figure 11.3: Software development process and review milestones. © ESA

For the stepwise approach, the required review milestones and the required documentation, as well as for document structures and content and for the product assurance, each software standard has its own “Engineering Requirements”. Some software standards replace the IRR by a “Test Readiness Review”, (TRR).
For the S/C system engineer the problem always exists that such software standards are written by authors only having in mind the pure OBSW and the topics relevant for it. General hardware / software integration problems, electronics and electrics topics and S/C design problems also affect the OBSW; they are mostly not in the focus of these standards and have to be managed by an OBSW system engineer “translating” consequences from system design and equipment unit design into OBSW functional requirements. The same applies for any design changes that arise throughout the entire S/C development.
As already indicated the SW standards also focus on the SW development process
with its milestones, documentation and the like. The ECSS comprises the following
main sections – please also refer to figure 11.4:

● SW related system requirements – which focus on how to derive OBSW


requirements from the S/C requirements
● SW requirements and architecture engineering
● SW design and implementation
● SW validation
● SW delivery and acceptance
● SW verification
● SW operation
● SW maintenance and
● The entire SW management process

5.2 Software related system requirement process: 5.2.2 Software related system requirements analysis; 5.2.3 Software related system verification; 5.2.4 Software related system integration and control; 5.2.5 System requirement review
5.3 Software management process: 5.3.2 Software life cycle management; 5.3.3 Joint review process; 5.3.4 Software project review description; 5.3.5 Software technical reviews description; 5.3.6 Review phasing; 5.3.7 Interface management; 5.3.8 Technical budget and margin management
5.4 Software requirements and architecture engineering process: 5.4.2 Software requirements analysis; 5.4.3 Software architectural design; 5.4.4 PDR
5.5 Software design & implementation engineering process: 5.5.2 Design of software items; 5.5.3 Coding and testing; 5.5.4 Integration
5.6 Software validation process: 5.6.2 Validation process implementation; 5.6.3 Validation w.r.t. the technical specification; 5.6.4 Validation w.r.t. the requirements baseline
5.7 Software delivery and acceptance process: 5.7.2 Software delivery and installation; 5.7.3 Software acceptance
5.8 Software verification process: 5.8.2 Verification process implementation; 5.8.3 Verification activities
5.9 Software operation process: 5.9.2 Process implementation; 5.9.3 Operational testing; 5.9.4 Software operation support; 5.9.5 User support
5.10 Software maintenance process: 5.10.2 Process implementation; 5.10.3 Problem and modification analysis; 5.10.4 Modification implementation; 5.10.5 Conducting maintenance reviews; 5.10.6 Software migration; 5.10.7 Software retirement

Figure 11.4: SW requirements grouping according to development subprocesses. © ECSS-E-ST-40C

As can already be identified from the topics treated in the ECSS-E-ST-40 above, the requirements in such SW standards are largely focused on the production and maintenance process of the software and do not contain technical SW requirements or even requirements inducing design implications.

Software Engineering Requirements of a Software Standard:


The figure below shows an example requirement from the SW design and
implementation process treated in the ECSS.

Figure 11.5: SW development requirement example. © ECSS-E-ST-40C

This example depicts the requirement number, title, text and expected output documents at dedicated review milestones. At the start of a project the developer must provide a compliance matrix stating to what extent one intends to be fully, partly or non-compliant to all of these engineering requirements. All deviations must be justified.
At project end one must provide a compatibility matrix stating the achieved compliance and which documents, review minutes, product assurance reports etc. prove the compliance. An example for such an engineering requirement is given below.

[Figure content: the software engineering process requirements and the software technical requirements drive the document chain from the system specification via the SW requirements documents (URD, SRD) and the SW design document (DD) to the SW code.]

Figure 11.6: Software and process requirements driving development process.
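The compliance matrix described above can be sketched as a simple mapping; the clause numbers, statuses and justifications below are purely hypothetical examples.

```python
# Hypothetical compliance matrix: requirement id -> (status, justification)
compliance = {
    "ECSS-E-ST-40C 5.5.2": ("full", None),
    "ECSS-E-ST-40C 5.5.3": ("partial",
                            "unit tests of auto-generated code replaced "
                            "by model-level analysis"),
    "ECSS-E-ST-40C 5.6.3": ("non-compliant", None),
}

def unjustified_deviations(matrix):
    """All deviations must be justified; return those that are not."""
    return sorted(req for req, (status, justification) in matrix.items()
                  if status != "full" and not justification)
```

An empty result of `unjustified_deviations` would correspond to a matrix in which every deviation carries a justification, which is what the standard demands.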

Technical Requirements towards an OBSW:

The complementary side is the set of technical user requirements towards the OBSW under development. The system architecture and the detailed design and code have to be developed so that the later "product" OBSW fulfills these technical requirements, which have to be verified during the integration and test phase. An example for a technical requirement is given below.

9.9.9.9 S/C commanding to Safe Mode

The OBSW shall allow commanding the S/C into Safe Mode (after release from launcher), independent from the current S/C configuration and commanded subsystem configuration.
EXPECTED OUTPUT: The following outputs are expected:
a. Software design document and code [DDF, SDD; PDR];
b. Software validation specification [DJF, SVS; CDR];
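What such a requirement implies for the mode handling logic can be sketched as follows – a hypothetical toy model, not the actual OBSW design: the Safe Mode transition is accepted unconditionally, while other transitions may depend on the current configuration.

```python
class ModeManager:
    """Toy mode handler: Safe Mode reachable from any configuration."""

    # Assumed nominal transitions (illustrative); all others are rejected.
    ALLOWED = {"NOMINAL": {"SCIENCE"}, "SCIENCE": {"NOMINAL"}}

    def __init__(self):
        self.mode = "NOMINAL"

    def command_mode(self, target):
        if target == "SAFE":        # no precondition on current mode
            self.mode = "SAFE"
            return True
        if target in self.ALLOWED.get(self.mode, set()):
            self.mode = target
            return True
        return False                # transition rejected
```

The point of the sketch is the asymmetry: every other transition is checked against the current mode, whereas the Safe Mode command always succeeds, mirroring the "independent from current configuration" clause.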

Engineering process requirements and technical requirements together form the development baseline for the elements of the OBSW to be developed. From the engineering requirements and the technical requirements results the set of all documents which are to be written during the overall OBSW development in the spacecraft project. The SW development standards include detailed so-called "Document Requirements Lists", (DRL), prescribing which document has to be available as draft or final issue at which milestone in the development process.
The table below shows the according list, cited again from ECSS-E-ST-40C as an example. The documents are grouped into so-called "files", which are document groups related to a certain topic. E.g. "RB" – for Requirements Baseline – contains all documents related to the definition of SW user requirements. "TS" contains all documents related to the definition of the SW technical specification. Further details on this ECSS example can be taken from [100].

Table 11.2: ECSS-E-ST-40 and ECSS-Q-ST-80 Document requirements list (DRL). © ECSS-E-ST-40C

Related file | DRL item (e.g. plan, document, file, report, form, matrix) | Milestone checkmarks (columns SRR, PDR, CDR, QR, AR, ORR)

RB Software system specification (SSS) ✓
Interface requirements document (IRD) ✓
Safety and dependability analysis results for lower level suppliers ✓
TS Software requirements specification (SRS) ✓
Software interface control document (ICD) ✓ ✓
DDF Software design document (SDD) ✓ ✓
Software configuration file (SCF) ✓ ✓ ✓ ✓ ✓
Software release document (SRelD) ✓ ✓
Software user manual (SUM) ✓ ✓ ✓
Software source code and media labels ✓
Software product and media labels ✓ ✓ ✓
Training material ✓
DJF Software verification plan (SVerP) ✓
Software validation plan (SValP) ✓
Independent software verification & validation plan ✓ ✓
Software integration test plan (SUITP) ✓ ✓
Software unit test plan (SUITP) ✓
Software validation specification (SVS) with respect to TS ✓
Software validation specification (SVS) with respect to RB ✓ ✓
Acceptance test plan ✓ ✓
Software unit test report ✓
Software integration test report ✓
Software validation report with respect to TS ✓
Software validation report with respect to RB ✓ ✓
Acceptance test report ✓
Installation report ✓
Software verification report (SVR) ✓ ✓ ✓ ✓ ✓ ✓
Independent software verification & validation report ✓ ✓ ✓ ✓ ✓
Software reuse file (SRF) ✓ ✓ ✓
Software problems reports and nonconformance reports ✓ ✓ ✓ ✓ ✓ ✓
Joint review reports ✓ ✓ ✓ ✓ ✓
Justification of selection of operational ground equipment and support services ✓ ✓
MGT Software development plan (SDP) ✓ ✓
Software review plan (SRevP) ✓ ✓
Software configuration management plan ✓ ✓
Training plan ✓
Interface management procedures ✓
Identification of NRB SW and members ✓
Procurement data ✓ ✓
MF Maintenance plan ✓ ✓ ✓
Maintenance records ✓ ✓ ✓
SPR and NCR – Modification analysis report – Problem analysis report – Modification documentation – Baseline for change – Joint review reports
Migration plan and notification
Retirement plan and notification
OP Software operation support plan ✓
Operational testing results ✓
SPR and NCR – User’s request record – Post operation review report ✓
PAF Software product assurance plan (SPAP) ✓ ✓ ✓ ✓ ✓ ✓
Software product assurance requirements for suppliers ✓
Audit plan and schedule ✓
Review and inspection plans or procedures
Procedures and standards ✓
Modelling and design standards ✓ ✓
Coding standards and description of tools ✓
Software problem reporting procedure ✓
Software dependability and safety analysis report – Criticality classification of software components ✓ ✓ ✓ ✓
Software product assurance report
Software product assurance milestone report (SPAMR) ✓ ✓ ✓ ✓ ✓ ✓
Statement of compliance with test plans and procedures ✓ ✓ ✓ ✓
Records of training and experience
(Preliminary) alert information
Results of pre-award audits and assessments, and of procurement sources
Software process assessment plan
Software process assessment records
Review and inspection reports
Receiving inspection report ✓ ✓ ✓ ✓
Input to product assurance plan for systems operation ✓

NOTE: Shaded boxes are the contributions of ECSS-Q-ST-80.
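A DRL of this kind is essentially tabular data and can be queried mechanically; in the sketch below the milestone assignments are illustrative assumptions, not the normative column values of the standard.

```python
# Hypothetical DRL excerpt: document -> milestones at which it is due
drl = {
    "Software design document (SDD)": {"PDR", "CDR"},
    "Software user manual (SUM)": {"CDR", "QR", "AR"},
    "Software verification report (SVR)": {"SRR", "PDR", "CDR", "QR",
                                           "AR", "ORR"},
}

def due_at(milestone):
    """List the documents that must be available at a given milestone."""
    return sorted(doc for doc, milestones in drl.items()
                  if milestone in milestones)
```

A review data package for a given milestone can then be assembled by iterating over `due_at(milestone)`, which is essentially what the DRL prescribes on paper.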

In addition, such SW standards include so-called "Document Requirements Definition", (DRD), tables per document type, which are prescribed lists of the required document content / chapters.
The classic "V-Model" SW development process as depicted in figure 11.3 is the most commonly used approach for OBSW. In other standards this approach is also depicted as a linear "staircase" from upper left to lower right and is called the "waterfall approach" there. The ECSS standards more or less focus only on this single development approach. Other development methods such as the circular approach or similar are not covered in ECSS, while in contrast for example the Galileo SW Standard covers a variety of approaches.
Failure Is Not an Option
"Gene" Kranz

Part IV

Satellite Operations
Mission Types and Operations Goals 179

12 Mission Types and Operations Goals

Goldstone antenna © NASA


S/C Operations is the domain of controlling a S/C from ground "to perform its work" in orbit – or, for deep space probes, during target approaches – under nominal and failure recovery conditions respectively. For enabling this – besides a suitable ground infrastructure – a suitable operations concept has to be engineered and the corresponding functionality has to be designed and implemented on board.
The ECSS-E-ST-70 and its subparts detail – concerning the above topics – which tasks of operations engineering are to be performed in which of the S/C development phases.

Table 12.1: Operations engineering tasks during S/C development phases.

Phase A: Operations Engineering to provide operations principles, mission scenarios and functional requirements.
System Engineering to provide the preliminary spacecraft / payload / orbit / mission architecture, system performances and configuration implementing the mission scenarios and considering functional requirements.

Phase B: Operations Engineering to provide consolidated operations and functional requirements.
System Engineering to provide a consolidated S/C architecture, performances and configuration implementing the mission scenarios and functional requirements.

Phase C/D: System Engineering to provide requirements and constraints to be implemented in operations plan and procedures.
Operations Engineering to provide a S/C control policy and its implementation (via TC/TM definitions etc.) associated to engineering requirements and constraints.

Phase E: Engineering support to Launch Site Operations and Mission Execution.
Spacecraft manufacturer support to the "Launch and Early Orbit Phase", (LEOP).

These tasks of operations engineering are closely integrated into the spacecraft and mission analysis and design process as it was sketched out in chapter 2. These tasks will be treated in more detail in chapter 13. The explanations however will not be allocated closely to the S/C development phases, but will instead be structured into the engineering tasks under the responsibility of the S/C manufacturer, the tasks later under responsibility of the operations center, and finally the launch and LEOP activities of both partners together.
Depending on the basic type of S/C mission the operations goals differ slightly and so
do the applied operations infrastructures. A short classification is given below.

Figure 12.1: Sentinel-2, a typical optical Earth observation satellite. © ESA

The goals of S/C Operations for LEO satellites are:


● To perform S/C platform and payload calibration.
● To perform S/C platform and payload control and monitoring.
● To upload operational timelines and to download mission product data.
● If needed to perform orbit correction maneuvers.
● To continuously maintain payload performance.
● To perform recovery activities in case of failure.
● And finally to de-orbit the satellite at end of life.
This is performed via a network of ground stations – potentially different stations for
platform control and science data downlink.

Figure 12.2: Artist view of a geostationary satellite. © ESA

The goals of S/C Operations for GEO satellites are:


● To perform S/C platform and payload calibration.
● To perform S/C platform and payload control and monitoring.
● If needed to perform orbit / position correction maneuvers.
● To continuously maintain payload performance.
● To perform recovery activities in case of failure.
● And finally to re-orbit the satellite to a disposal orbit above GEO at end of life.
All this is typically performed via a single ground station due to the spacecraft's
permanent visibility.

Figure 12.3: Mars Express and its planet approach trajectory. © ESA

The goals of S/C Operations for deep space probes are:


● To perform S/C platform and payload calibration.
● To perform S/C platform and payload control and monitoring.
● To upload operational timelines and to download mission product data.
● To upload OBSW updates for the different flight phases.
● To command and control trajectory changes and swing-by maneuvers.
● To continuously maintain payload performance.
● To perform recovery activities in case of failure.
These tasks are performed via a deep space network of ground stations.

13 The Spacecraft Operability Concept

MetOp groundtrack © ESA

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012

The spacecraft operability concept, which will be explained in this chapter, covers
diverse engineering topics such as the S/C modes, system autonomy and the like.
They already have to be considered during S/C system conceptualization and have
to be refined subsequently over the development phases – as was already
expressed. And obviously these topics have to be treated during OBSW requirements
definition, OBSW design and testing. However, since these topics do not become
fully visible before the start of S/C operations tests, and since they are only
partly covered by OBC hardware or OBSW, they are treated in this part IV of the
book in a consolidated manner. The main operability concepts to be worked out and
to be defined for S/C operations are:
● The spacecraft commandability
● The spacecraft configuration handling
● The system operability concept
● The PUS tailoring
● The onboard process IDs definition
● The spacecraft mode concept
● The downlink concept
● The mission timelines
● Spacecraft operational sequences
● Spacecraft authentication
● The spacecraft observability from ground
● The spacecraft science data management
● The onboard synchronization functions and science and housekeeping data
timestamping – called “datation”
● The data downlink
● The redundancy concept
● The satellite onboard autonomy
● The spacecraft FDIR concept
● The satellite's operational constraints
● Flight Procedures and their testing
Two important documents related to these topics are generated during the S/C
engineering from phase A to D. One is the “Spacecraft Operations Concept
Document”, (SOCD), developed during phase B and the other one is the “Space
Segment User Manual”, (SSUM), which is also called the “Flight Operations Manual”,
(FOM). This is a multi-volume document which, in its final issue, is provided by the
S/C manufacturer to the S/C operations center crew. It comprises sections on:
● Mission phases and purposes
● System design summary
● System-level autonomy
● System-level configurations
● System-level budgets
● Satellite or ground station interface specifications
● System level operations
● System-level modes
● Mission timelines
● System-level failure analysis
● Platform subsystem descriptions
● Payload definitions

The key thing to understand is that the user manual is not produced as a final
sum-up at the end of the S/C AIT phase, but is already prepared in a first issue
during engineering phase C. Already at CDR a first issue needs to be – and is –
available (some volumes may still be drafts). The AIT team – when starting system
AIT activities after CDR – depends heavily on similar information as the ground
control team later, e.g. concerning S/C and subsystem operation. Therefore the AIT
team already uses this SSUM extensively during AIT – and contributes to SSUM
improvement.

13.1 Spacecraft Commandability Concept

The spacecraft commandability is achieved by the design of a complete “network” of possibilities to command the spacecraft
● through the nominal path via PUS TC packets,
● via High Priority Commands of class 1 to CPDU,
◊ being routed accordingly either via MAP-ID
◊ or via TC virtual channel selection.

The TC routing from receiver to equipment via MAP-IDs, APIDs and the CPDU has
already been treated in chapter 8 – particularly in 8.7 – and the corresponding
visualizations are covered by figures 8.10 and 8.14; it shall therefore not be
repeated here.
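The routing decision can be sketched in a small example – purely illustrative, assuming the MAP-ID 0 allocation for the CPDU used later in table 13.1; real missions fix these IDs per project:

```python
# Illustrative sketch only: route an uplinked TC segment either to the CPDU
# (high-priority command path) or to the OBSW PUS packet handler, based on
# the MAP-ID carried in the TC segment header.

CPDU_MAP_ID = 0  # assumed allocation, cf. table 13.1

def route_tc_segment(map_id: int) -> str:
    """Return the onboard destination of a TC segment by its MAP-ID."""
    if map_id == CPDU_MAP_ID:
        return "CPDU"          # hardware-decoded high priority command
    return "OBSW-PUS-handler"  # nominal path: PUS TC packet to the OBSW

print(route_tc_segment(0))   # CPDU
print(route_tc_segment(1))   # OBSW-PUS-handler
```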
The scope of commandability covers both the setting of parameters on board as well
as the startup or shutdown of onboard functions as they are defined during OBSW
design in the function tree – please refer back to figure 10.3. The engineering task in
this field is to design the S/C commanding in such a way that, whatever single failure
arises in whatever unit, the system can still be recovered.
The commandability concept is completed by definition of commands to the satellite
Authentication Unit. The authentication topic is treated in chapter 13.14.

13.2 Spacecraft Configuration Handling Concept

The next operationally relevant topic to be conceptualized during S/C engineering is
the spacecraft configuration handling. Configuration handling is relevant for proper
onboard computer booting and onboard equipment initialization, for proper control of
fallback to a safe configuration in case of failures, and for proper recovery from Safe
Mode back to operational conditions.
Configuration control is done by means of a so-called “Spacecraft Configuration
Vector”, (SCV), a vector containing central system setting variables. Important to note
is that
● the SCV is not to be confused with the OBSW-DP
● and the SCV is located in a non-volatile memory (safeguard memory)
since the settings stored in the SCV must be persistent across OBC power resets. In
practical realization the SCV might also consist of multiple subvectors.

The first part of information stored in the SCV is the
● actual S/C configuration with its sub-entries:
◊ Nominal Settings,
◊ Safe Mode Settings and
◊ Health Status Parameters.

The first part specifies the nominal equipment to be used on board. If not
advised otherwise (e.g. via contradicting health info or via a ground command) the
PCDU and OBSW will always power and boot / initialize the onboard equipment
listed as “Nominal” – in most cases all A sides of redundant units.
The Safe Mode vector part specifies which units are to be used in case of S/C fallback
to Safe Mode – in most cases all B sides of redundant units.
The third part contains the vector of units identified as healthy. In case a unit is
flagged as non-healthy, any attempt to activate it (e.g. due to a mode change) would
trigger a rejection and an FDIR case. This is to prevent the S/C being commanded to
modes (e.g. by a loaded timeline) for which a required sensor or payload has
previously been ruled out as non-healthy. The health vector content can only be
reset / changed from ground.

The second part of information in the SCV is the
● actual S/C status with its sub-entries:
◊ Powered Equipment,
◊ Equipment TM Acquisition and
◊ Equipment Operational Status.

Here the first part lists which equipment is powered, which is important status
information for TM. Furthermore, when the OBSW intends to take an equipment (e.g.
an instrument) into operation, it first must advise the PCDU to supply power to the
unit, then it must check power availability, and only then can it start TM acquisition
from the unit and begin controlling the equipment.
This leads to the second part of the vector. It lists the units from which cyclic
housekeeping TM acquisition is running – which does not yet mean they are all
operational. E.g. a star tracker may be powered and booted and TM may be cyclically
acquired, but it is not yet operationally used by the AOCS control algorithm.
And consequently the third part of the S/C status in the SCV lists the equipment
which currently is in operational use.

The SCV – particularly the status part – is continuously updated during operations
and the configuration part is directly affected in case an equipment was identified to
have failed. An “equipment” in the context of the SCV can also be a data bus.
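The two-part structure of the SCV and the power-up sequence described above can be sketched as follows – a minimal illustration, not flight code; all unit names and the simplified activation logic are assumptions:

```python
# Minimal sketch of a Spacecraft Configuration Vector with the two information
# parts described in the text, plus the sequence for taking a unit into
# operation. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class SCV:
    # configuration part (persistent in safeguard memory)
    nominal:   dict = field(default_factory=dict)  # unit -> side, e.g. "STR-1": "A"
    safe_mode: dict = field(default_factory=dict)  # unit -> side for Safe Mode
    healthy:   dict = field(default_factory=dict)  # unit -> bool, reset only from ground
    # status part (continuously updated during operations)
    powered:     set = field(default_factory=set)
    tm_acquired: set = field(default_factory=set)
    operational: set = field(default_factory=set)

def activate_unit(scv: SCV, unit: str) -> str:
    """Take a unit into operation following the sequence in the text."""
    if not scv.healthy.get(unit, False):
        return "REFUSED -> FDIR"        # non-healthy unit must not be activated
    scv.powered.add(unit)               # 1) advise PCDU to supply power
    scv.tm_acquired.add(unit)           # 2) power confirmed -> start cyclic TM
    scv.operational.add(unit)           # 3) finally use the unit operationally
    return "OPERATIONAL"

scv = SCV(nominal={"STR-1": "A"}, healthy={"STR-1": True, "GPS-B": False})
print(activate_unit(scv, "STR-1"))  # OPERATIONAL
print(activate_unit(scv, "GPS-B"))  # REFUSED -> FDIR
```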
An important aspect still has to be mentioned concerning the SCV with respect to its
updates. The updating of entries always has to be performed by a safeguarded SW
algorithm to avoid that, in case the OBSW fails just at the time of an SCV update,
corrupted or contradictory entries are included in the SCV. E.g. for each entry a write
flag is set before writing the update, which is cleared again after a successful write.
If the OBSW fails during the write, the write flag remains set; after reboot the write
flag for the SCV entry is still visible, indicating to the rebooted SW that this entry is
presumably invalid and has to be reverified by additional measures (querying the
power status via TM from the PCDU or other).
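A minimal sketch of this write-flag mechanism, under the assumption of a simple key-value safeguard memory (illustrative only):

```python
# Sketch of the write-flag mechanism described above: before an SCV entry is
# updated, a per-entry write flag is set; it is cleared only after the write
# completed. A flag still set after reboot marks the entry as suspect.

scv_memory = {"STR-1_power": "ON"}
write_flags = {}

def safeguarded_write(key, value, fail_during_write=False):
    write_flags[key] = True          # 1) set write flag
    if fail_during_write:
        return                       # simulated OBSW crash mid-update
    scv_memory[key] = value          # 2) perform the update
    write_flags[key] = False         # 3) clear flag after successful write

def entries_to_reverify():
    """After reboot: entries whose write flag is still set are suspect."""
    return [k for k, flagged in write_flags.items() if flagged]

safeguarded_write("STR-1_power", "OFF")
safeguarded_write("GPS-A_power", "ON", fail_during_write=True)
print(entries_to_reverify())  # ['GPS-A_power']
```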
Since the management of such safeguard memory parameters and of the SCV is
not part of the standard PUS services, dedicated private services have to be defined
for these tasks, in most cases during the S/C engineering phase. This leads directly
to the next topic in the context of operability – PUS tailoring.

13.3 PUS Tailoring Concept

The Packet Utilization Standard, the services it defines and its openness to
mission-dependent tailoring have already been mentioned in chapter 8.6. However,
in practically each space mission, subservices of the standard set can be identified
which are not needed, and other services and subservices can be identified which
are not covered in the standard repertoire. Such additionally requested services can
be introduced by the spacecraft supplier as mission-specific ones in the numbering
scheme between 128 and 255.
PUS tailoring is an essential task throughout the entire spacecraft development and it
covers multiple aspects:
● On the one hand the services from the standard set have to be selected and
tailored for the spacecraft platform control.
● In case where additional services are identified for platform control, they have
to be defined on top. Examples are services for the SCV vector management
as explained in the previous chapter, or a function monitoring service. While
the standard PUS includes service 8 for function commanding, it does not by
default include one for monitoring since function implementations on board
can be too different between missions.
● A further aspect of PUS tailoring comes into play during S/C
development phase C when the onboard equipment is selected. It was already
mentioned that modern high-end platform equipment such as GPS / Galileo
receivers, star trackers and the like are themselves commandable via PUS TC
and TM. When selecting such equipment from a supplier, the S/C platform
overall PUS however must comprise at least all those services from the
equipment which are intended to be used. By this effect the overall PUS
service set for the spacecraft – which has to be reflected in the ground
segment's satellite TC / TM database – becomes a superset of the original
platform services plus the deltas induced by the selected equipment.
● A further driver for services can become the combined handling of payload
science data plus platform geolocation and / or attitude data via the science
data downlink (X-band or the like). Such additional platform information – often
called ancillary or complementary data – is downlinked together with the
science data via dedicated service packets to later ease mission product
generation on ground.
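The bookkeeping behind this superset can be illustrated with a short sketch; all service numbers below are assumed examples, 128–255 being the mission-specific range mentioned above:

```python
# Sketch of PUS tailoring bookkeeping: the spacecraft's overall service set is
# the tailored platform set plus the deltas induced by selected intelligent
# equipment. All service numbers here are illustrative assumptions.

platform_services = {1, 3, 5, 8, 11, 12}    # tailored from the standard set
scv_management_service = 130                 # assumed mission-specific service
star_tracker_services = {3, 5, 140}          # services used from a PUS STR
gps_receiver_services = {3, 141}             # services used from a PUS GNSS unit

overall = platform_services | {scv_management_service} \
          | star_tracker_services | gps_receiver_services

mission_specific = {s for s in overall if 128 <= s <= 255}
print(sorted(overall))           # superset for the ground TC/TM database
print(sorted(mission_specific))  # [130, 140, 141]
```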

From this it becomes obvious that the PUS command set which the flight operator
later finds in the FOM is only the final outcome of a detailed engineering process
covering the services, the TC and TM packets and the variables and parameters
included in all the service and subservice packets.

13.4 Onboard Process ID Concept

For addressing the diverse onboard software processes – either inside the OBSW or
in other intelligent units – unique process IDs have to be allocated for the entire
spacecraft to allow a proper routing of uplinked TCs and to allow for each TM packet
the identification of the submitting unit and software process. For placement of the
Process IDs in the TC and TM packet headers the reader is referred back to figures
8.10 and 8.11. A Process ID allocation table for a fictional spacecraft is depicted in
the table below.

Table 13.1: System Processes and Process IDs

PRID (hex) | Unit           | Application                              | Functions
00         | –              | TIME (only TM)                           | Time Management
02         | OBC-HW         | OBC High Priority TC Functions           | High Priority Commanding to CPDU (MAP-ID = 0)
03         | OBC-HW         | OBC High Priority TM Functions           | High Priority TM
06         | OBC-HW         | Authentication Function                  | Commands to Authentication Unit (if available on board)

OBC OBSW PRIDs:
0A         | OBC            | OBSW Data Management                     | OBSW DMS Kernel functions
0B         | OBC            | OBSW AOCS Application                    | AOCS TC and TM
0C         | OBC            | OBSW Power Control Application           | Power TC and TM
0D         | OBC            | OBSW Thermal Control Application         | Thermal TC and TM
0E         | OBC            | OBSW System Control Application          | S/C modes control and TC and TM
0F         | OBC            | OBSW Non-PUS Payload Control Application | Instrument TC and TM

OBC OBSW external PRIDs (examples):
20         | STR-1          | STR-1 Application                        | STR TC and TM
21         | STR-2          | STR-2 Application                        | STR TC and TM
22         | STR-3          | STR-3 Application                        | STR TC and TM
30         | GPS-A          | GPS-A Application                        | GPS TC and TM
31         | GPS-B          | GPS-B Application                        | GPS TC and TM
40         | PL             | PUS-compatible payload instrument        | Instrument TC and TM

Ground:
60-77      | EGSE           | reserved                                 | –
78-7E      | Ground Segment | reserved                                 | –

Others:
7F         | –              | Idle Packets                             | –
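A small lookup along the lines of table 13.1 illustrates how the submitting unit and software process of a TM packet are identified; only a few entries of the fictional allocation are reproduced, and the assumed 7-bit PID position inside the 11-bit APID follows the common ESA PUS convention (cf. figures 8.10 and 8.11):

```python
# Illustrative PRID lookup (subset of table 13.1) plus PID extraction from the
# APID field of a packet header; values and convention are assumptions.

PRID_TABLE = {
    0x00: ("TIME", "Time Management"),
    0x0A: ("OBC OBSW", "Data Management"),
    0x0B: ("OBC OBSW", "AOCS Application"),
    0x20: ("STR-1", "STR-1 Application"),
    0x7F: ("-", "Idle Packets"),
}

def pid_from_apid(apid: int) -> int:
    """The upper 7 bits of the 11-bit APID carry the process ID (ESA convention)."""
    return (apid >> 4) & 0x7F

def tm_source(prid: int) -> str:
    unit, application = PRID_TABLE.get(prid, ("unknown", "unknown"))
    return f"PRID 0x{prid:02X}: {unit} / {application}"

print(tm_source(pid_from_apid(0x0B3)))  # PRID 0x0B: OBC OBSW / AOCS Application
```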

13.5 Task Scheduling and Channel Acquisition Concept

The OBSW internal task scheduling with allocation of dedicated time slices for the
diverse application processes, and the data bus channel acquisition scheduling,
have already been treated in chapters 9.1 and 9.2. The technical implementation
background can be found there.
The primary driver for onboard task scheduling is usually to achieve the appropriate
controller performance. E.g. the frequency for AOCS application process scheduling
and accordingly the frequency for AOCS sensor data acquisitions and AOCS actuator
control is primarily driven by the required AOCS control precision.
Indirectly, however, these settings also have implications for operations. For
example they automatically prescribe the maximum application TM packet generation
rate and thus the resolution of TM that is available for FDIR debugging cases.
Therefore the task scheduling and channel acquisition are usually designed during
early phase C of spacecraft development and are later revisited during verification of
the FDIR operability concept – which is treated in detail later in chapter 13.16.
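The operational implication can be illustrated numerically; the scheduling frequency below is an assumed figure, not a design value from this book:

```python
# Numeric sketch: the scheduling frequency of an application process bounds
# the rate and thus the time resolution of the housekeeping TM it can
# generate. Figures are assumed for illustration.

aocs_rate_hz = 8          # assumed AOCS application scheduling frequency
packets_per_cycle = 1     # at most one HK packet per activation

max_tm_rate_hz = aocs_rate_hz * packets_per_cycle
tm_resolution_s = 1.0 / max_tm_rate_hz

print(max_tm_rate_hz)    # 8 -> best achievable AOCS TM generation rate
print(tm_resolution_s)   # 0.125 -> finest TM time resolution for FDIR analysis
```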

13.6 The Spacecraft Mode Concept

13.6.1 Operational Phases

The first topic for engineering of the spacecraft mode concept is the freezing of the
spacecraft's operational phases to which later the spacecraft modes and subsystem
operational modes will be allocated. The operational phases for an Earth observation
satellite e.g. are the:

Pre-launch Phase:
This phase covers the final launch preparation activities. It still belongs to the
satellite test program. The principal activities comprise:
● The activation of the S/C with external power supply
● The final check-out for flight
● Switching the power subsystem to satellite internal power supply via battery
● Configuration of the S/C and the instruments into launch configuration

Launch and Early Orbit Phase:
This phase – duration being a few orbits – covers
● The switch-over of the S/C power to internal battery and power bus after
disconnection of the umbilical power supply
● The launch and ascent up to the separation of the S/C from the launcher
● The initial S/C operational sequences for deployments, rotational rate
damping, attitude and initial mode acquisition, and possible first attitude and
orbit correction maneuvers
● Some customers consider the AOCS mode switching from thruster controlled
mode to RWL controlled mode to be part of the LEOP phase. By others this is
already considered as part of the commissioning phase.

Commissioning Phase:
This phase – with a duration of approximately 10-15 days for the platform and
several months for the payloads – is targeted
● towards verification of the proper platform performance (e.g. pointing and
geolocation accuracies),
● to testing of all platform modes – particularly of AOCS and
● towards taking all payloads into operation,
◊ as well as performing their in-flight calibration
◊ and performance verification
◊ together with the payload ground segment.

Nominal Operations Phase:
This is the phase for mission product generation
● according to the timelines uplinked to the satellite,
● with constant monitoring and handling of potential platform anomalies via the
FOC,

● and the science data downlinks to the PGS.


● The phase also comprises regular orbit correction maneuvers, payload or
platform equipment re-calibrations, OBSW patches etc.
The End-of-Life Disposal Phase:
● It comprises the controlled de-orbiting for Earth observation satellites and the
re-orbiting to a higher disposal parking orbit for telecommunication satellites.
● For deep space probes normally no activities are foreseen during this phase.

13.6.2 System and Subsystem Modes

During the diverse operational phases the S/C can be in different system operational
modes. Obviously not all system modes are relevant for all operational phases. In
addition during one single satellite system mode the S/C subsystems may be
switched to diverse subsystem modes. One of the basic decisions during the
engineering phase concerning the operations concept is the selection between a
closed or open S/C mode concept.
The mode concept starts with defined modes of subsystems. AOCS modes for
example can include
● an AOCS Safe Mode (only ES, SS and MTQs active) and
● one or more nominal modes such as a fine pointing mode with the
above-mentioned components active plus GPS, STR, RWLs etc.
Please also refer to figure 13.1 depicting a S/C and an AOCS mode diagram. Further
submodes can be defined depending on the use of the nominal or redundant
equipment chain. Similar modes are definable for each subsystem.
In this example (simplified from CryoSat 1) the following dependencies between S/C
and AOCS modes exist:
● In S/C Off & pre-launch mode, AOCS mode is OFF.
● In S/C launch mode AOCS is in standby mode.
● In S/C separation mode AOCS is in rate damping mode.
● In S/C nominal mode AOCS can be in coarse or fine pointing mode.
● In S/C orbit maneuver AOCS is in orbit control mode.
● For S/C Safe Mode the redundancy settings of key equipment are listed in the
Safe Mode box.
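These dependencies can be captured in a simple mapping – an illustrative sketch of the simplified CryoSat-1 example above, not the actual onboard implementation:

```python
# Sketch of the S/C-mode-to-AOCS-mode dependencies listed in the text
# (simplified from the CryoSat-1 example); purely illustrative data structure.

SC_TO_AOCS_MODE = {
    "Off / Pre-Launch": ["OFF"],
    "Launch":           ["Standby"],
    "Separation":       ["Rate Damping"],
    "Nominal":          ["Coarse Pointing", "Fine Pointing"],
    "Orbit Maneuver":   ["Orbit Control"],
}

def allowed_aocs_modes(sc_mode: str) -> list:
    """Return the AOCS modes permitted in a given S/C system mode."""
    return SC_TO_AOCS_MODE.get(sc_mode, [])

print(allowed_aocs_modes("Nominal"))  # ['Coarse Pointing', 'Fine Pointing']
```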
In a S/C featuring a so-called “closed mode concept”, subsystem modes are
constrained to the main S/C operational modes. E.g. when commanding the S/C to
Safe Mode, the AOCS and all other subsystems will automatically transit to their
subsystem Safe Modes, and e.g. for AOCS equipment it would not be possible to
manually activate and command GPS or STR from the ground. S/C with a “closed
mode concept” need a dedicated FDIR / Safe Mode for service, where “everything” is
commandable, but which requires authenticated access.
By contrast, in a S/C implementing a so-called “open mode concept” the entire S/C is
also commandable to an overall target mode via a single TC – e.g. the S/C including all
subsystems to Safe Mode – however in each mode ground retains full controllability
of all S/C equipment without bypassing main system control. In addition, redundant
versus nominal equipment activation is still freely selectable.

Figure 13.1: S/C operational modes vs. subsystem modes. © Astrium GmbH

The selection of a mode concept in most cases depends on the mission type and
the decisions of the platform operators in the ground control center:
● While the open mode concept gives more flexibility, it requires more caution
since there are fewer mechanisms in the system control which block switching
to non-optimal configurations or which block reconfiguration of essential
equipment during payload operations.
● Closed mode concepts are better suited for missions with a higher level of
onboard autonomy, since the autonomy master control typically is
implemented as some type of state machine or rule chainer and it will not be
able to handle the high number of free switching permutations which an open
mode concept allows.

◊ As an example, for Galileo the navigation payloads are operated via a payload
OPS center and the platform via experts in a dedicated platform FOC.
Therefore an open mode concept was chosen.
◊ Classic Earth observation satellites like ESA Sentinel 2 (cf. figure 12.1)
also rely on an open mode concept. Sentinel 2 in addition offers a full
command / control symmetry – i.e. the S/C could also be booted on the
launch pad in a configuration applying all equipment and buses on the
redundant side or in any nominal / redundant mix. Systems like this are
very flexible during FDIR operations but require a significant effort during
ground testing.
◊ Closed mode concepts are often chosen for military satellites due to their
implemented higher level of onboard autonomy, which partly processes
mission product requests fully automatically.
If the S/C is launched with a booted OBC – which is the case for most commercial and
agency missions – the initial transfer of the satellite from OFF to pre-launch mode
and then to launch mode is performed via AIT control procedures. During launch the
S/C in launch mode is passive, which means
● the OBC tracks / records S/C power and thermal conditions
● the OBC tracks / records position and attitude as far as possible
● but obviously until separation from launcher all actuation activities of the
AOCS are disabled.
This state sometimes is also called “standby mode” instead of “launch mode”.
After release from the launcher the satellite has to deploy antennas and – if not
body-mounted – the solar panels; it has to stabilize its attitude, reduce rotational
rates and acquire the initial attitude with the solar array pointing to the Sun,
antennas pointing to Earth etc. During this “initial acquisition mode” or “rate damping
mode” further orbit correction maneuvers may be required. After successful
finalization of the initial acquisition mode the S/C is ready to be made operational,
i.e. for its operational modes, which highly depend on the mission. In figure 13.1
above, “coarse pointing” and “fine pointing mode” are cited as nominal operational
modes. In addition a nominal mode for orbit correction maneuvers is to be foreseen.

Table 13.2: Spacecraft modes versus operational phases.

S/C Mode          | Pre-Launch | LEOP | Commissioning | Nominal Operations | End of Life
Nominal
Off               |     X      |      |               |                    |
Standby / Launch  |            |  X   |               |                    |
Rate Damping      |            |  X   |       X       |                    |
Coarse Pointing   |            |  X   |       X       |         X          |      X
Fine Pointing     |            |      |       X       |         X          |
Orbit Correction  |            |      |               |         X          |      X
Non-nominal
Safe Mode         |            |  X   |       X       |         X          |      X
Failure Modes     |     X      |  X   |       X       |         X          |      X

For all modes after launcher separation the FDIR functions described in the OBSW
section will be used for failure handling and, depending on the problem, can either
keep the S/C in operational condition or will trigger a dedicated “Safe Mode” or other
failure modes.
Already in the SOCD a first description of each subsystem mode is elaborated, which
is then finalized up to the satellite SSUM. Such subsystem mode descriptions comprise:
● Which subsystem control loops are executed
● Which parameters and states are initialized
● Which measurements can be processed – for AOCS e.g. attitude and position
sensors
● Which actuators can be commanded
● Which propagation algorithms are running – such as AOCS position and
attitude forward propagation algorithms
● And all these including subsystem operational constraints – such as for AOCS
the maximum duration permitted for a rate damping.
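The content items above can be collected into one record per subsystem mode; a sketch with illustrative field names and values (the duration limit is an assumption):

```python
# Sketch of the content items a subsystem mode description has to cover,
# collected into one record; field names and values are illustrative.

from dataclasses import dataclass, field

@dataclass
class SubsystemModeDescription:
    control_loops: list = field(default_factory=list)
    init_parameters: dict = field(default_factory=dict)
    usable_sensors: list = field(default_factory=list)
    commandable_actuators: list = field(default_factory=list)
    propagation_algorithms: list = field(default_factory=list)
    constraints: dict = field(default_factory=dict)

aocs_rate_damping = SubsystemModeDescription(
    control_loops=["rate damping"],
    usable_sensors=["gyros", "magnetometers"],
    commandable_actuators=["MTQs"],
    constraints={"max_duration_s": 5400},  # assumed permitted duration
)
print(aocs_rate_damping.constraints["max_duration_s"])  # 5400
```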

13.6.3 Equipment States versus Satellite Modes

Finally during operations engineering the allocation of equipment states versus the
diverse satellite modes shall be worked out. Here the reader is referred back to
figure 2.7. As chapter 2.3 explained, such tables are already worked out during S/C
engineering phase B and they go down to equipment realization level (e.g. MTQ1,
MTQ2, MTQ3), not only to type level (MTQ).

13.7 Mission Timelines

The mission timelines which will be treated subsequently here are the
● LEOP timeline,
● Commissioning timeline and
● Nominal operations timelines.
The definition of these timelines – particularly of the LEOP timeline already in S/C
development phase B – requires the identification of the envisaged launch vehicle,
since especially the LEOP timeline is driven by the launcher characteristics to a very
large extent.
Timelines for deep space probe flight phases are highly dependent on the mission
characteristics, on the celestial body constellations, resulting launch windows, swing-
by maneuver constraints and the like and therefore exceed the scope of this
introductory book.

13.7.1 LEOP Timeline

The next steps for S/C operational concept definition are the selection of a launch
vehicle, the launch site and the launch setup (single launch, double launch, piggy-back
launch, trajectory injection with an S/C-coupled kick-stage), and the launch sequence
with separation and the launch timeline with ground station visibilities and contact
times resulting from these.
The satellite operations concept has to be closely aligned with this Launch and
Early Orbit Phase, (LEOP), of which the first parts are largely driven by the launch
vehicle itself. This comprises the exact definition of the time sequence and of the orbit
position and flight vector definitions for the
● launch / lift-off itself,
● ascent phase,
● separation of S/C from the Launch Vehicle,
● potential OBSW auto boot and S-band receivers auto-activation in case of
cold start conditions (in many cases required for piggy-back launched S/C),
● OBC / OBSW properly taking over S/C control,
● auto activation of the S-band transmitter,
● start of AOCS control for rate damping and attitude acquisition respectively,
● auto start of deployments (antennas, solar array),
● execution of potential automated attitude correction maneuvers,
● ground contact acquisition at first ground station visibility,
● and the verification of orbit correctness and command of additional orbit
correction maneuvers from ground respectively.
During the LEOP flight phase these goals are then monitored by the FOC – based
on the accordingly defined telemetry packets. For a qualitative representation
of a launch sequence please refer back to figure 2.10. The figure below depicts an
example with timings, altitude specifications etc.

[Figure: launch vehicle ascent sequence with event times, altitudes, velocities and velocity heads from lift-off (t = 0) via maximum velocity head, stage I/II separation, fairing jettison, upper-stage burns and S/C separation up to the upper-stage drift pulse. Legend: t – time from launch, h – altitude above the common Earth ellipsoid, V – relative velocity, q – velocity head, Нп / На – perigee / apogee altitude.]
Figure 13.2: Launch Phase – Launch Vehicle Operations. © Astrium GmbH



This figure 13.2 depicts the time sequence for ascent up to shortly after S/C
separation from launcher. It is furthermore essential to plan the S/C ground station
visibilities for the nominal LEOP ground stations and for additional stations which
could reach the S/C in emergency cases up to finalization of S/C attitude and orbit
acquisition. Such ground station visibility plans are usually depicted in strip chart form
as shown below and are worked out already during S/C engineering phase. They
cover the time window from launch up to end of the LEOP phase i.e. covering several
days.

[Figure: strip chart “Contacts with candidate LEOP ground stations” – visibility windows of stations such as Carnicobar, Cuiaba, Goldstone, Kiruna, Kourou, Mauritius, Okinawa, Perth, Poker Flats, Redu, Santiago, Svalbard, Villafranca, Wallops and Troll over the time from launch (0–24 hours).]

Figure 13.3: Ground station visibility events during the first orbits. © ESA/ESOC

Preferably the S/C separation from launcher is performed within the visibility range of
a ground station. In this case the OBC / OBSW can activate the S-band transmitter
directly after launcher separation and ground can start acquiring telemetry and can
observe the S/C during processing of its initial procedures.

13.7.2 Commissioning Phase Timeline

During its Commissioning Phase, a satellite is incrementally configured up from the
relatively simple operational state at the end of the LEOP phase into a fully operational
state including the payload instruments. The characterization of the platform
(especially w.r.t. orbit position, pointing accuracies and the like) as well as the
characterization of the payload instruments are the objectives of this phase. Both
require the ground segment to be involved in this characterization process, since on
the one hand the FOC has to participate in S/C position measurements and on the
other hand the PGS has to provide the capability to deliver user data products from
the payload instrument science data downlinked via X-band or other. The main tasks
to be performed during this phase are:
● To take all S/C platform equipment into operation – e.g. to activate AOCS
sensors and actuators which have not yet been used during end of LEOP or
during coarse pointing mode such as GPS, star trackers etc.
● To verify that the AOCS performance suits the needs of mission product
generation.
● To measure orbit positioning accuracies from ground via lasers and onboard
retroreflectors and to cross verify against AOCS reported position data.
● To take payloads into operations and to acquire first raw images for getting
operational profiles such as onboard thermal data sets to be used for
quantitative calibration.
● To calibrate payloads by flyover of fully characterized targets to quantitatively
calibrate onboard sensors.
● To support geophysical verification of science data products.
As already mentioned above, this phase is quite extensive and the entire payload
characterization usually takes several months.

13.7.3 Nominal Operations Phase Timeline

During the nominal operations phase, timelines are uplinked cyclically to the S/C for
execution by the onboard scheduler inside the OBSW – cf. figure 8.17. These
sequences of time-tagged commands cover the following scope of activities:
● Payload commanding to observe the desired ground targets with the
requested payload instrument settings. This can imply both time tagged
payload operation as well as position tagged payload operation.
● If required corresponding AOCS maneuvers for the observation are included in
the timeline such as slew maneuvers, roll-over maneuvers for bidirectional
reflectance measurements of a target etc.
● Secondly X-band downlinks will be scheduled according to onboard science
data storage resources and downlink bandwidths of the available stations.
● The S-band transmitter is usually commanded to off, out of reach of FOC
ground stations.
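The release mechanism behind such a timeline can be sketched as follows. This is a minimal illustration in the spirit of PUS Service 11 time-tagged command release, not flight code; the class name and the command mnemonics are invented for the example.

```python
import heapq

class TimeTaggedScheduler:
    """Minimal sketch of an onboard scheduler releasing time-tagged commands.

    Commands are kept in a min-heap ordered by release time (OBT seconds),
    mirroring the idea of a PUS Service 11 schedule. All names are illustrative.
    """

    def __init__(self):
        self._queue = []   # heap of (release_time, seq, command)
        self._seq = 0      # tie-breaker preserving uplink order

    def uplink(self, release_time, command):
        heapq.heappush(self._queue, (release_time, self._seq, command))
        self._seq += 1

    def tick(self, obt_now):
        """Release all commands whose time tag has elapsed at obt_now."""
        released = []
        while self._queue and self._queue[0][0] <= obt_now:
            _, _, command = heapq.heappop(self._queue)
            released.append(command)
        return released

sched = TimeTaggedScheduler()
sched.uplink(100.0, "PAYLOAD_MODE_OBSERVE")
sched.uplink(180.0, "XBAND_TX_ON")
sched.uplink(120.0, "SLEW_TO_TARGET")
# At OBT 130 only the first two time tags have elapsed, released in time order:
print(sched.tick(130.0))   # ['PAYLOAD_MODE_OBSERVE', 'SLEW_TO_TARGET']
```

A real scheduler additionally supports sub-schedules, enabling/disabling and time-shifting of entries, which the sketch omits.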
The payload operations timelines vary depending on whether the measurement execution has to be performed automatically without X-band ground station visibility and mission product data are to be stored on board, or whether science data can be downlinked directly during payload operations due to an X-band station visibility or the visibility of a relay satellite – presuming the S/C is equipped with relay communication equipment (typically an onboard laser communication terminal).
Orbit correction maneuvers can in principle also be commanded via an uplinked timeline and executed automatically, but in most cases it is preferred to perform them under direct ground station visibility conditions. The same applies for RWL unloading operations etc.
Generation of these operational timelines is the daily work of the Flight Operations Center once the satellite is in operation. The timelines have to reflect the
● user requests for mission products (potentially competing requests),
● the available onboard resources (particularly mass memory and power),
● the orbit conditions which drive Sun and eclipse phase constraints,
● cyclically required orbit correction maneuvers and
● cyclically necessary system re-calibrations.
During S/C engineering, and particularly during OBSW design and the tailoring of all the TC and TM sets and the PUS services, the entire scope of functions and features required for performing these tasks has to be considered.

13.8 Operational Sequences Concept

For simplicity, an important operational concept has so far been deliberately left aside – both during the OBSW discussion and during the discussion of the operational phases. It concerns the operational sequences, i.e. the sequential functions performed by the spacecraft automatically in the relevant flight phases. The most common autosequences, which can be found in all Earth observation satellites, are the:
● System initialization sequence
● LEOP Autosequence
● System reconfiguration sequence (OBC reboot)
● Unit switch-on sequences
● Unit reconfiguration sequences
● Nominal instrument operations sequence
● Transition to Safe Mode sequence
● Recovery sequence from Safe Mode to a nominal operations mode
● Orbit control maneuver, (OCM), sequence.
It depends on the S/C OBSW design whether orbit control maneuvers are implemented as timelines of time-tagged commands, as OBCPs or as parametrized functions – i.e. as an OCM sequence.
● The system initialization sequence serves for initial boot of the power unit and
OBC and for read-out of the "Spacecraft Configuration Vector", (SCV), by the
OBSW.
● The LEOP Autosequence serves for autocontrol of the S/C during launch,
separation and all initial flight steps until ground can take over at first contact
or even beyond with ground just monitoring onboard activities. In most cases
this sequence is implemented as OBCP which allows updating it until short
before launch in case of launch window or ground station changes etc.
● The system reconfiguration sequence is applied when the OBC or one of its subcomponents has failed. The OBC or the according subunits have to be switched over to the redundant side – which, except for the hot-redundant CCSDS receiver side, requires an OBC reboot. The sequence sets the according SCV entries and triggers the PCDU to shut down and re-power the OBC components accordingly. The alarm pattern which caused the failure leading to the reconfiguration must be traceable by ground in the system log.
● Unit switch-on sequences are applied for taking equipment into operation which was not used in the previous S/C mode – e.g. for activating a GPS receiver and star trackers before transiting from a low-level AOCS coarse pointing mode to a fine pointing mode.
● Unit reconfiguration sequences serve to switch over onboard equipment to redundant units, such as switching an internally redundant GPS receiver from the nominal to the redundant side. Such unit reconfiguration sequences also have to be available for switching tetrahedron assemblies of RWLs from 4-unit to 3-unit mode. The same applies for gyroscope tetrahedron assemblies.
● The Nominal Instrument Operations sequence is used by the time-tagged
commands uplinked for payload instrument control and the generation of
mission product data. The instrument control is achieved by uplink of a
command sequence setting the desired payload operational parameters and
by uplink of a time-tagged or position-tagged function command for triggering
the desired instrument mode transitions sequence to perform the observation.
● The transition sequence to Safe Mode serves for a controlled transition of the S/C to Safe Mode, which in most cases also implies switchover of all equipment to the – presumably healthy – redundant side.
● The recovery sequence from Safe Mode to a Nominal Operations Mode obviously serves for the inverse, the switch-back of the S/C to a healthy operational mode (which may leave the originally troublesome unit that caused the Safe Mode entry on the redundant path).
The operational sequences can be implemented either as “Onboard Control Procedures”, (OBCP), as described in chapter 8.6, or as fixed functions implemented in the OBSW kernel. Both designs have their pros and cons. It is important that in both cases these operational sequences are designed and implemented during the S/C engineering phase – and obviously they have to be documented for the operators, first in the “Spacecraft Operations Concept Document”, (SOCD), and later in detail in the “Space Segment User Manual”, (SSUM), also called “Flight Operations Manual”, (FOM). The explanation in the SOCD and FOM is usually based on classic program flow charts which may include if/then forks, do/while loops etc.
The nominal path of the operational sequences – without forks and loops – can also be depicted in a simplified graphical visualization with time as x-axis and the S/C configuration vector elements as y-axis. The switch-down or switch-up flow can then be visualized as a bar chart, as shown in figure 13.4 for a LEOP Autosequence. This visualization shows better what is involved in the sequence and how the flow is controlled.

Figure 13.4: Operational Sequence for LEOP. © Astrium

An important aspect is that these operational sequences must fit seamlessly together
and must allow varying start conditions. For example:
● The System Initialization sequence is applied on ground when booting the S/C
for launch.
● Thereafter the LEOP Autosequence takes over until stabilized coarse pointing
mode is reached.
The end state of the initialization sequence thus must fit seamlessly to the start of the
LEOP Autosequence – including all equipment states, parameter limits, the data bus
acquisition sequence etc.
On the other hand, in case the S/C had to be rebooted entirely in orbit and has to be recovered by High Priority Commands,
● the system initialization sequence is again the initial one, switching on the core power supply and the OBC,
● but the detailed nominal or redundant equipment side selections and the boot-up of the data buses, elementary AOCS components and others according to the Spacecraft Configuration Vector settings are taken over by the recovery sequence.
So in the second case the initialization sequence and the recovery sequence must fit together.
Covering all sequence interfaces during the design phase and considering all equipment and their states is a non-trivial piece of engineering work, which provides the input for the OBSW on the one side and for OBCP design or ground command sequence design on the other. During S/C operation in orbit the ground controllers must have a good understanding of the detailed effects triggered by these command sequences.
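A simple way to picture this interface requirement is a state comparison between the end state of one sequence and the start conditions of its successor. The sketch below uses hypothetical equipment state names; a real sequence interface check covers far more (parameter limits, bus acquisition order etc.).

```python
def interfaces_consistent(end_state, start_conditions):
    """Check that a sequence's end state satisfies the successor sequence's
    start conditions. States are simple name->value dicts; an empty result
    means a seamless fit. All state names are illustrative."""
    return {name: (end_state.get(name), required)
            for name, required in start_conditions.items()
            if end_state.get(name) != required}

# Hypothetical end state of the system initialization sequence ...
init_end = {"PCDU": "ON", "OBC": "NOMINAL", "SBAND_RX": "ON", "BUS_1553": "ACTIVE"}
# ... and hypothetical start conditions of the LEOP Autosequence:
leop_start = {"PCDU": "ON", "OBC": "NOMINAL", "SBAND_RX": "ON"}

print(interfaces_consistent(init_end, leop_start))  # {} -> seamless fit
```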

13.9 System Authentication Concept

Commercial and advanced agency satellites are protected against unauthorized TC access from the ground. The complexity and the achieved security of the concept also have to be designed during the spacecraft engineering phase and drive the functionality to be managed by the operations team later. The concept details vary somewhat from mission to mission, but the basics can be summed up as follows:
● A satellite providing authentication is equipped with an authentication unit
which in most cases is integrated into or directly behind the CCSDS
preprocessing boards inside the OBC.
● Similar to the CCSDS decoder, the authentication unit usually also runs as hot-redundant equipment.
● The authentication unit works with encryption keys. In most cases an AES encryption (cf. [116]) with a key size of 128 bit is used.
● The encryption information – called the signature – is derived from the used encryption key and is usually added at TC segment level into the tail sequence of each TC segment.
● An encryption key is always valid for a certain “session”. Depending on the baseline concept, a “session” may be a time window, which leads to time-based key validity (as e.g. for Galileo), or the key may be valid until another one is activated (the most commonly applied scheme), or a key may be valid only for a single ground station flyover.
● The master key for the entire mission duration is burned into a PROM chip
cartridge of the OBC flight model. After burning the key is no longer alterable.
● Session keys are burned into an EEPROM which is usually integrated into the same chip cartridge as the master key. The master key is required for the TC activating a session key – similar to a transaction number for an electronic banking transaction. Further session keys can be uplinked from ground via TC, provided the command is authenticated with the master key.
● For tests of authentication at spacecraft manufacturer premises one freely
accessible authentication key is provided to the AIT team.
● The authentication unit is designed electronically in a way that the PROM keys cannot be read out via any telecommands and that the OBSW is not able to access the PROM for key readout. The complete key handling and TC validation checking is performed in electronic circuitry (FPGAs or ASICs) without visibility by the OBSW.
● Commands to the authentication unit, e.g. for key uploads, are also identified via a dedicated MAP-ID (similar to the HPC-1 commands to the CPDU) and they entirely bypass the OBSW.
● AES encryption mechanisms allow the generation of varying signatures for multiple submissions of the same command. This means they provide protection against malicious TC resubmissions: if an attacker listened to a successful TC and could read out a complete TC segment including the encryption tail sequence, a one-to-one resubmission would be detected as a failure on board.
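The signature and anti-replay behavior can be illustrated with a small sketch. Since a real authentication unit implements its AES-based scheme in hardware, the example below substitutes an HMAC as a stand-in MAC and models the replay protection with a logical authentication counter; the frame layout and all names are assumptions for illustration only.

```python
import hmac
import hashlib

def sign_tc(session_key: bytes, lac: int, tc_segment: bytes) -> bytes:
    """Append counter and signature to a TC segment.

    Stand-in sketch: HMAC-SHA256 replaces the AES-based scheme of a real
    authentication unit; 'lac' models the anti-replay counter that makes
    every (re)transmission of the same command carry a different signature.
    """
    lac_bytes = lac.to_bytes(4, "big")
    sig = hmac.new(session_key, lac_bytes + tc_segment, hashlib.sha256).digest()
    return tc_segment + lac_bytes + sig

def verify_tc(session_key: bytes, expected_lac: int, frame: bytes) -> bool:
    """Reject frames with bad signatures and any replayed (stale) counter."""
    tc_segment, lac_bytes, sig = frame[:-36], frame[-36:-32], frame[-32:]
    if int.from_bytes(lac_bytes, "big") < expected_lac:   # replay of old frame
        return False
    good = hmac.new(session_key, lac_bytes + tc_segment, hashlib.sha256).digest()
    return hmac.compare_digest(sig, good)

key = b"\x01" * 16
frame = sign_tc(key, lac=7, tc_segment=b"SWITCH_HEATER_ON")
print(verify_tc(key, expected_lac=7, frame=frame))   # True
print(verify_tc(key, expected_lac=8, frame=frame))   # False: replay detected
```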

13.10 Spacecraft Observability Concept

The spacecraft observability concept is the complement to the commandability concept which was presented in chapter 13.1. The observability concept covers all aspects to be engineered concerning the monitoring of the spacecraft's housekeeping status, whether it concerns the overall system, subsystem level or equipment level. Like the commandability concept this includes diverse functionality, and it must ensure full S/C observability in nominal and failure conditions. The following aspects and functions contribute to the S/C observability:
● “High Priority Telemetry”, (HPTM), provision to ground
◊ by the OBSW,
◊ from non-OBC units routed via OBC and OBSW,
◊ or provided by pure hardware units like the CCSDS processor, authentication unit etc.
● Provision of OBSW generated housekeeping TM packets with parameter content according to defined parameter lists – so-called “Structure IDs”, (SID).
◊ A SID describes a list of parameters to be included in a TM packet, including the “Parameter Type Codes”, (PTC), to be used (boolean, integer, real, string, time etc.) and the calibration ID to be used (curve type linear, logarithmic etc. and curve characterization points).
◊ For S/C equipment generating TM which is not evaluated on board but just routed via the OBSW to ground, separate TM SIDs are to be foreseen.
◊ The SID numbering scheme is usually closely aligned with the Process ID numbering explained in chapter 13.4. In addition, SID number ranges are defined for both nominal housekeeping packets and diagnostic packets such as event TM or HPTM.
◊ The combination of Process ID and SID allows a unique identification of the packet structure by the ground segment software.
● The functionality of cyclic generation of these TM packets from the OBSW
internal data pool variables and the storage of external equipment provided
housekeeping TM packets.
● Providing limit monitoring of OBSW-DP variables and equipment variables.
● Providing logging mechanisms for events which happen on board, such as limit violations, equipment failures, bus failures, OBC HW failures etc.
◊ Events shall be logged in a safeguard memory which is non-volatile, in order to be able to reconstruct the failure source from the event history in case the OBSW fails and the OBC is rebooted or reconfigured to the redundant side.
◊ The TM log shall at least comprise the entries from the following PUS services: TC acceptance failure TM(1,2), TC execution failure TM(1,8), Anomaly Report Medium Severity TM(5,3) and Anomaly Report High Severity TM(5,4).
● Providing a reconfiguration log identifying reconfigured equipment and status
after OBC / OBSW reboot.
● Providing equipment health status parameters of all equipment stored in the SCV (i.e. OBC configuration elements and onboard equipment like AOCS, power, thermal, payload instrument units etc.), the latter being manageable from ground. Preferably, processor module status TM and reconfiguration module TM should be accessible from ground via “High Priority Telemetry”, (HPTM), in case the OBSW is not running.
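How a SID drives housekeeping packet generation from the data pool can be sketched as follows; the SID layout, the parameter names and the 4-byte header format are invented for illustration and do not reproduce a real PUS housekeeping packet structure.

```python
import struct

# Hypothetical SID definition: ordered (parameter name, struct format code) pairs
SID_DEFINITIONS = {
    42: [("BATT_V", "f"), ("OBC_TEMP", "f"), ("MODE", "B")],
}

def build_hk_packet(apid: int, sid: int, datapool: dict) -> bytes:
    """Serialize the data-pool values listed by a SID into a HK packet.

    The (Process ID / APID, SID) pair in the header is what lets the ground
    segment software uniquely decode the packet layout.
    """
    packet = struct.pack(">HH", apid, sid)                 # invented 4-byte header
    for name, fmt in SID_DEFINITIONS[sid]:
        packet += struct.pack(">" + fmt, datapool[name])   # big-endian parameters
    return packet

pool = {"BATT_V": 28.1, "OBC_TEMP": 34.5, "MODE": 3}
pkt = build_hk_packet(apid=0x64, sid=42, datapool=pool)
print(len(pkt))  # 4-byte header + 2 floats + 1 byte = 13
```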

The relevant TM packets, logs and the like may not all be used during each S/C operational phase, or may not all be available during all phases (e.g. some AOCS equipment may be off during the LEOP phase or in Safe Mode). Therefore the overall concept must be engineered to provide sufficient observability during all phases and all operational modes.
A complex additional perspective is the monitoring and ground observability of commanded or automated state transitions on board. TM packets are not only to be defined for flight variable value tracking – like the AOCS position vector – but dedicated switch packets are also to be engineered to track the successful status or mode switching of diverse equipment.
Even more complex is the topic of tracking the status of running OBCPs or OBSW
functions. Such functions can be fixed preprogrammed sequences of the OBSW or
can be Onboard Control Procedures triggered by TC, a timeline, an event or similar.
This topic is called “functional sequence monitoring”.

Cyclic Parameter Monitoring


Cyclic parameter monitoring is usually implemented through the use of PUS Service 12:
● Limit violations are included in a dedicated report.
● Events are linked to parameter monitors so that a limit violation can trigger an event.
● Events at least trigger an event telemetry packet and an entry into the safeguard memory.
● Actions can be bound to an event via an event-action table. The action can cover a broad spectrum, from triggering a further command via triggering a subsystem or even system mode change up to a reconfiguration, or combinations thereof.
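The chain from limit monitor via event to a bound action can be sketched as below; the parameter names, event IDs and table contents are hypothetical, and a real PUS Service 12 implementation is considerably richer (delta checks, confirmation counts etc.).

```python
def check_limits(value, low, high):
    """Return an event ID on limit violation, else None (PUS 12 style sketch)."""
    if value < low:
        return "EVT_LOW_LIMIT"
    if value > high:
        return "EVT_HIGH_LIMIT"
    return None

# Hypothetical event-action table binding events to recovery commands
EVENT_ACTION_TABLE = {
    "EVT_HIGH_LIMIT": "TC_SWITCH_TO_REDUNDANT_UNIT",
    "EVT_LOW_LIMIT":  "TC_ENTER_SAFE_MODE",
}

safeguard_log = []   # stand-in for the non-volatile event log

def monitor(param_name, value, low, high):
    """Run one monitoring cycle: log the event and return the bound action."""
    event = check_limits(value, low, high)
    if event is None:
        return None
    safeguard_log.append((param_name, event))
    return EVENT_ACTION_TABLE.get(event)

# Battery voltage above its high limit triggers the bound reconfiguration TC:
print(monitor("BATT_V", 33.0, low=24.0, high=32.0))  # TC_SWITCH_TO_REDUNDANT_UNIT
```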

Functional Sequence Monitoring


The functional sequence monitoring itself, however, must be implemented in the code of the OBSW function or OBCP so that it provides intermediate “milestone” status information to trace proper execution.

● By this means standard monitors and events can also be bound to the execution progress of a functional sequence – for example to step counters.
● Other functional sequence or system parameters may be linked to such intermediate status data – such as the AOCS controller during the LEOP autosequence waiting for the solar array deployment signal and then for the solar array locked state signal before starting rate damping control.
● Since the execution engines for OBCPs and the binary code of fixed coded sequences are spacecraft specific, no standard PUS service is available for this purpose. Therefore, in addition to the execution engine for the functional sequence, the OBSW has to provide a dedicated PUS service handler which can collect the intermediate transitions of the function's state, evaluate them and report them in packet format.
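Such milestone instrumentation can be sketched as follows; the sequence steps and labels are invented and merely illustrate how a step counter becomes observable to standard monitors.

```python
milestones = []   # stand-in for milestone status TM visible to ground monitors

def report_milestone(step: int, label: str):
    """Emit an intermediate status point that a monitor can be bound to."""
    milestones.append((step, label))

def leop_autosequence_sketch(solar_array_deployed: bool):
    """Sketch of a functional sequence instrumented with milestones.

    Steps and names are illustrative, not an actual LEOP implementation:
    the sequence waits for solar array deployment before rate damping starts.
    """
    report_milestone(1, "SEQUENCE_STARTED")
    report_milestone(2, "RATE_DAMPING_ARMED")
    if not solar_array_deployed:
        report_milestone(3, "WAITING_SA_DEPLOY")
        return "SUSPENDED"
    report_milestone(4, "SA_LOCKED_CONFIRMED")
    report_milestone(5, "RATE_DAMPING_STARTED")
    return "RUNNING"

print(leop_autosequence_sketch(solar_array_deployed=True))  # RUNNING
print([step for step, _ in milestones])   # observable step trace: [1, 2, 4, 5]
```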
The proper execution of all defined onboard functions – please again refer back to
figure 10.3 – must be observable after finalization of the observability concept
engineering task.

13.11 Synchronization and Datation Concept

The synchronization and datation concept concerns the onboard time generation,
distribution and the coherent synchronization of all OBSW processes and the
onboard equipment which requires timing information or time progress information.
The key elements for this concept have already been mentioned in previous chapters, such as
● the physical quartz-based oscillator clock module(s) on the processor board(s) (see chapter 4.1),
● the onboard system clock time, (OBT), in the OBSW (see chapter 8.4) and
● a GPS or Galileo atomic clock time reference from an according receiver, which provides a more exact time and better stability concerning clock drift compared to the quartz-based clock modules.
Availability of time information on board is an essential function for OBSW task control and for the time stamping of S/C TM packets – housekeeping TM, event TM, HPTM and science data packets.
For some operational modes, such as Safe Mode, the standard oscillator-based OBC clock precision may be sufficient, while certain operational modes with payload instrument data generation require a very high timing precision for proper geolocation, which in most cases calls for a GPS or Galileo atomic clock reference.
● At OBC boot-up – be it on the launchpad for a hot operations launch, during LEOP for a cold start launch, or in orbit after any reboot of the OBC – at first only the internal oscillator clock is available.
● It starts as a counter running up, and the absolute time must be set by TC.
● The clock module of an OBC in most cases implements at least 2 hot-redundant oscillators – in most cases even 3 of them, with a synchronization to each other and, in the case of 3 oscillators, a 3:2 voting mechanism to detect and rule out a damaged oscillator.
● The clock module then distributes the time information to the OBC's operating system, and thus it becomes accessible to all OBSW threads.
● If there are further equipment units on board which require absolute time information (such as payloads or star trackers), such timing information must be made available to these units via onboard TCs over the data bus.
● As soon as GPS / Galileo precision time becomes available on board via a booted receiver, it must first be verified that the time information provides a certain quality before the OBSW is switched over to the GPS or Galileo signal source by default.
● When the GPS / Galileo time signal is accepted, the OBSW is clocked in sync with this signal by the following mechanism:
◊ The GPS / Galileo time signal arrives at certain time events – e.g. as a “pulse per second”, (PPS), strobe – once per second.
◊ The “Numerically Controllable Oscillators”, (NCO), on the CPU boards freely perform a large number of oscillations between two such GPS / Galileo time events.
◊ The electronics controlling these oscillators then compute the deviation between the GPS / Galileo Δt and the one theoretically resulting from the number of NCO oscillations, and re-adjust the oscillator division factors to align with the progression of time in the GPS / Galileo signal.
◊ The mechanism is comparable to adjusting the internal clock of a Linux operating system – based on the PC motherboard quartz – via an NTP time signal.
◊ It also must be assured that, as soon as the GPS / Galileo timing precision becomes weak again – for whatever reason – the system automatically switches back to the standard oscillator controlled mode, which might imply falling back from higher measurement modes to idle or Safe Mode.
◊ During the operations phase, ground must always be informed via according TM packets about the time source currently being applied.
● The datation concept further implies the time stamping of science data
packets with distributed OBT. Usually this time stamping is performed either
directly in the payload instrument or in the MMFU when storing the data.
● A further topic is the system synchronization on board:
◊ Synchronization of processes inside the OBSW is already provided by OBT
availability – from whatever source – to the operating system.
◊ Synchronization of external equipment is performed by distribution of a
time clock strobe – usually in the form of a “Pulse Per Second”, (PPS),
signal. The PPS signal is generated by the same source as the OBT
itself – i.e. from the processor board quartz oscillators – independent of
whether they are clock master or GPS / Galileo slave at that time.
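The division factor re-adjustment against the PPS strobe can be illustrated numerically; the oscillator frequency and the simple proportional correction below are assumptions for the sketch, while real clock disciplining electronics additionally filter and rate-limit the correction.

```python
def readjust_division_factor(nco_counts: int, nominal_counts: int, factor: float):
    """Sketch of disciplining an onboard oscillator against a GPS/Galileo PPS.

    Between two PPS strobes (exactly 1 s apart) the NCO produced 'nco_counts'
    oscillations instead of the expected 'nominal_counts'. The division factor
    is scaled so that the corrected clock reproduces the PPS interval; the
    drift is additionally returned in parts per million.
    """
    drift_ppm = (nco_counts - nominal_counts) / nominal_counts * 1e6
    new_factor = factor * nco_counts / nominal_counts
    return new_factor, drift_ppm

# Assumed 10 MHz oscillator running 50 counts fast over one PPS interval:
factor, drift = readjust_division_factor(10_000_050, 10_000_000, factor=1.0)
print(round(drift, 1))   # 5.0 ppm fast -> division factor scaled up accordingly
```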

13.12 Science Data Management Concept

The science data management concept can only be treated on a somewhat generic level here, since space missions differ too much with respect to payload instruments, data stores on board, applied ground link types etc. In the overall operational concept the following aspects must be elaborated during the engineering phase for both space and ground segment together:
● Which payload instrument data are generated on board,
◊ from how many payloads,
◊ in separate measurement modes or in combined observation modes?
● Which data formats are generated, such as
◊ A/D converted streams,
◊ CCD sensor photo-like readouts,
◊ video scenes,
◊ radar patterns?
● Where are the data stored,
◊ in a separate MMFU or
◊ in a combined HK TM plus science data storage?
● Are science data to be stored separately, or are certain complementary platform data (see chapter 13.3) to be stored in combination for later processing on ground?
● Which are the science data downlink facilities – or in more detail,
◊ which downlink bands are available (X-band, Ka-band etc.),
◊ for which ground stations,
◊ plus which potential relay satellite links?

And obviously the topic of datation, as treated in the previous chapter, is also a key issue for the science mission product:
● concerning time stamping at the time of generation for storage on board,
● and concerning TM packet stamping for downlink data stream generation on board and packet sequence re-assembly on ground in cases where multiple links are used in parallel or changing links are used.

13.13 Uplink and Downlink Concept

Links and Channels:

Data links between the satellite and ground are classically the platform control links:
● S-band uplink for the transmission of
◊ telecommands
● S-band downlink for the transmission of
◊ high priority telemetry
◊ real-time telemetry
◊ playback telemetry – housekeeping data stored in the onboard mass memory.
The real-time telemetry downlink is usually managed via PUS Service 14 – “Packet Forwarding Control Service”.
The playback housekeeping telemetry downlink is usually managed via PUS Service 15 – “Onboard Storage and Retrieval Service”.

The payload mission product data links between the satellite and ground are the:
● X-band Mission Data Downlink for the transmission of
◊ payload mission product data,
◊ system complementary data.
Some missions furthermore also offer the feature of platform housekeeping data being transmitted via an:
● X-band Satellite Housekeeping Data Downlink.

In the frame of the engineering task for these links the according Virtual Channels must be allocated. The configuration of the data processing in the FOC and PGS then must be designed in a complementary fashion.
An example for the allocation of Virtual Channels to S-band and X-band data for a fictional satellite is already provided in figure 8.15.

Spacecraft Link Establishment:

For link establishment and constant carrier lock, a compensation of the Doppler effect is essential. When a S/C passes exactly above its ground station (G/S), its velocity vector is exactly orthogonal to the nadir direction – see case A in figure 13.5. In such conditions the G/S can communicate with the S/C directly via the ideal carrier frequency, without any Doppler shift influence.
In practice, when a spacecraft becomes visible for a G/S over the horizon, the S/C velocity vector has a significant component towards the G/S, so any radio signals from G/S to S/C (and vice versa) show a Doppler shift to higher frequencies. Conversely, when the S/C has passed over the G/S and is successively departing, signals show a Doppler shift to lower frequencies. This has to be taken into account when trying to contact a S/C.
The receivers both on board and on the ground must be able to
● detect the Doppler-shifted carrier frequency and lock to it, and
● keep the communication running even while the carrier frequency changes during the S/C flyover due to the Doppler shift.

Today this is usually achieved by the use of lock-in amplifiers which detect the phase and then keep the phase synced to the detected carrier frequency by a phase-locked loop oscillator.
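The magnitude of the effect can be estimated with the first-order Doppler formula; the carrier frequency and radial velocity below are illustrative values for a LEO S-band pass, not mission data.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_carrier_hz: float, range_rate_mps: float) -> float:
    """First-order Doppler shift; range_rate > 0 means the S/C recedes.

    Returns the received-minus-transmitted frequency offset in Hz:
    an approaching S/C (negative range rate) raises the received frequency.
    """
    return -f_carrier_hz * range_rate_mps / C

# Assumed S-band carrier at 2.1 GHz, S/C approaching with 7 km/s radial velocity:
shift = doppler_shift(2.1e9, range_rate_mps=-7000.0)
print(round(shift))  # ~49 kHz above the nominal carrier
```

This tens-of-kilohertz offset is what the lock-in circuitry has to find and track across the pass.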

Figure 13.5: Ranging and Receiver Locking

The Doppler frequency shift can, however, also be used for spacecraft position detection in case the position is not yet or no longer precisely known. For example, for small satellites launched as piggy-back payloads, the exactly achieved orbit altitude and position are often known with limited precision only before the first ground contact. Furthermore, after a cold start boot in orbit, no precise position information is yet available from GPS / Galileo receivers. Similar situations exist for professional satellites after severe failures and later recovery of the system in orbit.
To detect position via the measured Doppler effect, a method called “ranging” can be applied:
● When a S/C is expected to come into sight, the G/S transmits a carrier wave towards its assumed position, and this carrier is slowly swept up and down around the ideal frequency – varying in a sawtooth profile.
● When the S/C transponder identifies this signal, it transmits back a carrier on the S/C → G/S frequency with the same Doppler shift it has identified.
● The signal arriving back on ground thus shows the
◊ sweep
◊ plus the uplink Doppler effect
◊ plus the downlink Doppler effect.



● By tracking this double Doppler effect the G/S infrastructure can compute the S/C velocity component towards the G/S and can “measure” the distance (ranging).
● As soon as this up/down loop is closed, the G/S mediates the link signal carrier to the target frequency (the sawtooth variation stops), and so does the S/C transmitter to ground.
● As soon as the S/C receiver then has a stable carrier without losses and sweep, it “locks” the receiver to be 'in service' and sends a carrier lock signal via dedicated lines to the OBC CCSDS processor.
● The latter then automatically transmits a carrier lock TM packet down to the G/S, which thus identifies the S/C as being ready to accept telecommands.
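From the closed loop, the velocity component follows from the doubled Doppler shift; the sketch below uses the idealized two-way relation and neglects the coherent turnaround ratio of a real transponder, with illustrative numbers.

```python
C = 299_792_458.0  # speed of light, m/s

def range_rate_from_two_way_doppler(f_carrier_hz: float,
                                    two_way_shift_hz: float) -> float:
    """Recover the radial velocity component from the doubled Doppler shift
    seen on a coherently turned-around carrier (idealized sketch: the
    transponder turnaround ratio of a real system is neglected).

    A positive two-way shift means the S/C approaches, giving a negative
    range rate (distance decreasing).
    """
    return -two_way_shift_hz * C / (2.0 * f_carrier_hz)

# A two-way shift of +98 kHz on an assumed 2.1 GHz carrier:
v = range_rate_from_two_way_doppler(2.1e9, 98_000.0)
print(round(v, 1))   # ~ -7 km/s: the S/C is closing in on the G/S
```

Integrating such range-rate measurements over the sweep profile is what lets the G/S infrastructure reconstruct distance and refine the orbit estimate.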

13.14 Autonomy Concept

13.14.1 Definitions and Classifications

For satellite spacecraft it can be stated that practically every one of today's implementations already features some onboard autonomy functionality. The same applies to transfer vehicles like the ATV with their autonomous docking functions etc. For satellites the autonomy is largely focused on performing operations during orbit periods without ground contact. The S/C must be able to manage the already mentioned failure cases without ground station contact, even if this implies payload instruments being switched off. At least a transition into a stable Safe Mode has to be guaranteed. But there exist even higher levels of autonomy which can be achieved – and which have to be tested in case they are part of the design. However, before discussing these, some terminology definitions shall be introduced to define functionality and features – also because the term “autonomy” is sometimes used very imprecisely in diverse contexts.

Table 13.3: Key terminology definitions concerning autonomy.

Autonomy: A system feature based on diverse functionalities and technologies. Autonomy can be implemented on the basis of automatic functions and / or autonomous functions.

Automatic functions: They run directly and straightforwardly and are initiated according to a master schedule or by a control program (OBCP executor), and they are checked against a schedule (operations timeline).

Autonomous functions: They implement the decision making procedure and a reaction to anomalies (events) which occur during the execution of an automatic function.

Event: Events are triggered either through anomalies (violation of limits, error flag activations) or through the occurrence of a predefined status modification (e.g. position reached, attitude reached).

Autonomous spacecraft system: Such systems involve both the spacecraft (e.g. satellite) plus the ground segment and are distinguished by a wide independence from permanent human intervention. The distribution of intelligent functions for autonomy between space segment and ground segment is not prescribed.

Autonomous spacecraft: A spacecraft which is characterized by being largely independent from ground support and ground contact. Its intelligent functions achieving the autonomy are implemented on board and serve to achieve essential parts of the mission objectives without ground intervention.

In addition to these definitions, diverse levels of autonomy can be distinguished. While in the 1990s the definition of such levels was still handled very inconsistently from project to project – and even the space agencies differed in their individually applied classifications – today the ECSS-E-ST-70C standard [107] comprises a rather concise classification. The ECSS distinguishes between the autonomy levels for mission execution (E-levels), the levels for data storage (D-levels) and those for “onboard fault management”, i.e. autonomy in FDIR (F-levels). The three classifications are cited in the tables below.

Table 13.4: Mission execution autonomy levels according to ECSS-E-ST-70C.

E1 (low) – Mission execution under ground control; limited on-board capability for safety issues.
Functions: Real-time control from ground for nominal operations; execution of time-tagged commands for safety issues.

E2 – Execution of pre-planned, ground-defined mission operations on-board.
Functions: Real-time control from ground for nominal operations; execution of time-tagged commands for safety issues.

E3 – Execution of adaptive mission operations on-board.
Functions: Event-based autonomous operations; execution of on-board operations control procedures.

E4 (high) – Execution of goal-oriented mission operations on-board.
Functions: Goal-oriented mission re-planning.

Table 13.5: Data management autonomy levels according to ECSS-E-ST-70C.

D1 (low) – Storage on-board of essential mission data following a ground outage or a failure situation.
Functions: Storage and retrieval of event reports; storage management.

D2 (high) – Storage on-board of all mission data, i.e. the space segment is independent from the availability of the ground segment.
Functions: As D1, plus storage and retrieval of all mission data.

Table 13.6: FDIR autonomy levels according to ECSS-E-ST-70C.

F1 (low): Establish a safe space segment configuration following an on-board failure.
  Functions: Identify anomalies and report to ground segment; reconfigure on-board systems to isolate failed equipment or functions; place space segment in a safe state.
F2 (high): Re-establish nominal mission operations following an on-board failure.
  Functions: As F1, plus reconfigure to a nominal operational configuration; resume execution of nominal operations; resume generation of mission products.
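For illustration, such level classifications can be captured in software as simple enumerations. The sketch below is a non-normative illustration of the E-levels only; the helper function and its decision rule are invented examples, not part of the standard:

```python
from enum import Enum

class ExecutionAutonomy(Enum):
    """ECSS-E-ST-70C mission execution autonomy levels (E-levels)."""
    E1 = "Mission execution under ground control; limited on-board safety capability"
    E2 = "Execution of pre-planned, ground-defined mission operations on-board"
    E3 = "Execution of adaptive mission operations on-board"
    E4 = "Execution of goal-oriented mission operations on-board"

def requires_onboard_planner(level: ExecutionAutonomy) -> bool:
    # Only E4 implies goal-oriented re-planning, i.e. a mission planner on board.
    return level is ExecutionAutonomy.E4
```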

Autonomy is a key system level technology for spacecraft, and it can have two basically different characteristics:

Table 13.7: Autonomy classification w.r.t. application.

"Enabling Technology" – enables a certain mission:
● Enables survival of the spacecraft without radio contact
● Enables spacecraft maneuvers without radio contact
● Enables mission product generation without radio contact
Typical applications: interplanetary missions, rovers, transfer vehicles, military satellites.

"Process Improvement Technology" – simplifies the mission or reduces its cost. This can be achieved by autonomy allowing:
● Single-shift spacecraft operations in the control center
● Mission product generation without radio contact
● Data recording and processing focused on user requests
Typical applications: Earth observation spacecraft, telecom satellites, navigation missions, deep space probes.

13.14.2 Implementations of Autonomy and their Focus

High level of autonomy – “Enabling Technology”

Basic characteristics of this type of spacecraft system – such as space probes, landers, rovers and transfer vehicles – are:
● The level of autonomy can typically be designed anywhere between
◊ simple execution of macro command sequences ("sophisticated OBCPs") and
◊ an intelligent mission planner available on board.

● The technical implementation inside the onboard software is focused towards:


◊ Modular onboard software concepts
◊ OBSW based on real operating systems
◊ Multi-CPU board architectures, potentially separate payload OBCs
◊ Higher control software layers being independent from the target hardware

These autonomous system architectures have to remain maintainable and have to be verified and validated very thoroughly. For this purpose spacecraft testbenches capable of simulating detailed scenarios are necessary. They need to provide error injection mechanisms on different levels of the simulated spacecraft and must provide the ability to model complex error symptoms through parallel manipulation of multiple failure-symptom-relevant parameters, in order to test the onboard software's error identification mechanisms. The autonomous functions of an OBSW of this type are typically verified on SVF testbenches as discussed in chapter 10.5.2.
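Such error injection – manipulating several symptom-relevant parameters in parallel so that the OBSW sees a consistent failure picture – can be sketched as follows. The simulator interface and all parameter names are invented for illustration only:

```python
# Sketch of multi-parameter failure symptom injection into a simulated S/C.
# The Simulator class and all parameter names are hypothetical.

class Simulator:
    def __init__(self):
        self.params = {}

    def set_param(self, name, value):
        self.params[name] = value

def inject_battery_cell_failure(sim: Simulator):
    """Model one failed battery cell: several telemetry symptoms must change
    together, otherwise the OBSW failure identification would only see an
    implausible single-parameter glitch instead of a realistic failure case."""
    sim.set_param("batt.voltage_V", 24.1)        # drop from a nominal ~28 V
    sim.set_param("batt.cell3.temp_C", 61.0)     # overtemperature of the failed cell
    sim.set_param("batt.charge_current_A", 0.0)  # charge regulator cuts the current
```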

Moderate level of autonomy – “Process Improvement Technology”


Basic characteristics of this type of spacecraft system are:
● System control for the mission product generation of a satellite is supported by so-called "user requests":
◊ This means – taking the example of an Earth observation satellite again – that a mission product customer no longer specifies when the onboard instrument shall be switched on and with which settings,
◊ but instead specifies the geometrical observation target, the spectral or other mission product characteristics desired, and the delivery date of the mission product.

● Supported by a mixture of archive data sources in the ground segment from previous observations and by identification of still missing information, an intelligent mission planning system can generate the command sequence for onboard execution by the satellite
◊ to observe the mission product parts not available in the archive,
◊ to downlink the data,
◊ to merge on ground the latest observation data with archive data and
◊ finally to deliver the requested mission product to the customer.

● Such an infrastructure opens the door towards a semi-automatic single shift


operation of spacecraft platform and payload control.
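A strongly simplified sketch of this archive-aware planning step is given below. All data structures and command names are invented for illustration; a real mission planning system of course works on orbit geometry, visibility windows and resource budgets:

```python
# Hypothetical sketch: derive observation commands only for those parts of a
# user request that are not already covered by the ground segment archive.

def plan_observations(requested_targets, archive_targets):
    """Return a command sequence observing only the targets still missing
    in the archive, followed by a downlink command."""
    missing = [t for t in requested_targets if t not in archive_targets]
    commands = [{"cmd": "OBSERVE", "target": t} for t in missing]
    commands.append({"cmd": "DOWNLINK_DATA"})
    return commands
```

For example, if targets "A", "B" and "C" are requested and "B" is already archived, only "A" and "C" are commanded for observation.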

Only precise testing of the overall scenario can prove the "process improvement". Required here are again system environments to simulate detailed scenarios. However, besides pure spacecraft simulation, in this case the functionality of the ground segment elements – including the user-request-based mission planning – must be included in such verification scenarios.

13.14.3 Autonomy Implementation Conclusions

Autonomy as enabling technology is typically found in advanced space missions with very limited ground contact or long signal delay times, such as deep space missions. For such S/C, autonomy is less a topic for the ground system than for the OBSW.
In the extreme example of the New Horizons probe – which is presented in the annex of this book – the OBSW during the planetary approach must permanently process in parallel:
● Payload timeline updates (to automatically activate payloads w/o ground intervention),
● AOCS measurements (since quantitative parameters such as gravitational field data in the target vicinity are not known with sufficient precision),
● and it must monitor the health status of the S/C including the corresponding dynamic reactions (FDIR).
Therefore for such highly autonomous S/C the OBSW in most cases comprises a central control module which is tightly coupled to the subsystem controllers, the parameter monitors and the event manager – please refer to figure 13.6. Furthermore it closely interacts with or even comprises the onboard scheduler:

[Figure content: OBSW architecture with the kernel root comprising the event manager, autonomous system controller, OBCP manager, onboard memory manager, parameter monitor and onboard scheduler; the PUS service handlers; the subsystem control applications (payload, AOCS, power, thermal) with their equipment handlers; the TM encoder / TC decoder; and the underlying boot loader, RTOS and I/O line drivers.]
Figure 13.6: Autonomous system and subsystem control integrated with monitoring,
event handling and scheduling.

Autonomy as process improvement technology is in most cases targeted at the optimization of operations-related tasks in the S/C ground segment and focuses on
● ground segment infrastructure,
● mission product handling infrastructure and
● mission timeline generation / optimization software systems.
Mission ground infrastructure will be treated in more detail in chapters 14 and 15. An example of a system testbed for autonomy as process improvement is also presented in the annex of this book – the "Autonomy Testbed".

13.15 Redundancy Concept

The redundancy concept, also elaborated during the S/C engineering phase, has an essential influence on the operability of the spacecraft in nominal and failure cases. During development phases B and C the redundancy types for each equipment and the major subunits must be frozen. It must be defined – in line with the features of the equipment to be procured – which units
● are internally redundant
● or externally redundant,
and furthermore it must be distinguished between
● hot redundant equipment,
● cold redundant units

● and specific redundancies such as the 4/3 redundancy of RWLs in a tetrahedron assembly.
Below an example of equipment redundancies for a fictional satellite is depicted:

Table 13.8: Equipment redundancies example.

Onboard Data Hdlg.
● OBC
  ◊ Processor Module: 2 units, cold redundant
  ◊ CCSDS Processor: 2 units, hot redundant
  ◊ Reconfiguration Unit: 2 units, hot redundant, master / backup
  ◊ Safeguard Memory: 2 units, hot redundant
  ◊ HK TM Mass Memory: 2 units, cold redundant
  ◊ Authentication Unit (if available): 2 units, hot redundant
● RIU / OBC I/O Boards
  ◊ I/O Boards: 2 units, cold redundant
● Data Bus
  ◊ SpaceWire: 2 buses, hot standby + internal redundancy
  ◊ CAN: 1 bus, internally redundant
● S-band Transmitter
  ◊ Modulator: 2 units, cold redundant
  ◊ Amplifier: 2 units, cold redundant
Power
● PCDU
  ◊ Controller: 2 units, hot redundant
  ◊ LCL Banks: 2 banks, cold redundant
● Solar Array Drive
  ◊ Drive Motor: 2 coils, cold redundant, hot for position lock
AOCS
● Earth Sensor: 1 unit, internally redundant
● Sun Sensor: 1 unit, internally redundant
● Star Tracker: 3 units, 2 out of 3 operational
● GPS / Galileo Receiver: 1 unit, internally redundant
● FOG
  ◊ Coil Assembly: 4 coils in tetrahedron -> 3 out of 4 redundancy
  ◊ Electronics: 1 unit, internally redundant
● Magnetometer: 3 units, non redundant
● Magnetotorquer: 3 units, internally redundant
● Reaction Wheel
  ◊ Wheel Assembly: 4 wheels in tetrahedron -> 3 out of 4 redundancy
  ◊ Wheel Drive Electronics: 1 unit, internally redundant
● Reaction Control System
  ◊ Thruster: 2 RCS branches -> entire branch switchover
  ◊ Latch Valve: 2 RCS branches -> entire branch switchover
  ◊ Pressure Transducer: 2 RCS branches -> entire branch switchover
Thermal
● Heaters: not redundant
● Thermistors: internal thermistor triples, majority voting in OBSW
Payload Instruments
● Sensor 1: 1 unit, non redundant
● Sensor 2: 1 unit, internally redundant
Payload Data Hdlg.
● Payload Data Processor: 2 units, cold redundant
● MMFU: 1 unit, internally redundant
● X-band Transmitter
  ◊ Modulator: 2 units, cold redundant
  ◊ Amplifier: 2 units, cold redundant
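The thermistor triples with majority voting in the OBSW, as listed in the table, can be illustrated by a minimal voting sketch. This is an illustration only, not flight code; the tolerance value is an invented example:

```python
def vote_thermistor_triple(t1, t2, t3, tolerance=2.0):
    """Majority voting over a thermistor triple: return the median reading
    (robust against one outlier) and flag any reading deviating from it
    by more than the tolerance as suspect/failed."""
    mid = sorted([t1, t2, t3])[1]
    failed = [r for r in (t1, t2, t3) if abs(r - mid) > tolerance]
    return mid, failed
```

With readings of 20.1 °C, 20.3 °C and 55.0 °C, the voter delivers 20.3 °C and flags the 55.0 °C reading as failed.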

Depending on the redundancy design, the corresponding TM and TC for each redundant unit must be made available for operations on ground. Units separate from each other – even when operated in cold redundancy – must be commandable individually, and telemetry must be uniquely identifiable as coming from the nominal or the redundant source.
In case of PUS-commanded intelligent units, internally redundant (cold redundant) units may be addressable by the same APID from ground. In contrast, re-using the example above where 2 star trackers out of 3 are used at a time, these 3 star trackers are individual units and each of them requires a separate TM and TC set with a separate APID.
A further topic is the coupling between units or subunits respectively. This concerns
whether the OBC processor module A is only coupled to safeguard memory A or
whether both A and B units are cross coupled. Such design decisions later essentially
drive system commandability from ground. While in the above example of the OBC
processor module and the safeguard memory the choice for full cross coupling will be
obvious, such decisions are less trivial e.g. for payload sensor coupling to payload
data handling chain equipment, MMFU and the like.
The design of the system redundancies and cross couplings has a direct and strong influence on the spacecraft commandability concept as presented in chapter 13.1, and even more on the spacecraft observability concept, see chapter 13.10.
Concerning the redundancies available on board and their operational preselection, the basic principle of "health overrules redundancy preselection" is explained by means of an example:
● In the above table, 2 operational STR occurrences are needed during operations – to be selected out of the 3 available ones.
● Assume operation is performed with STRs 1 and 2.
● In case of a necessary reconfiguration – e.g. STR1 to be deactivated and STR3 to be taken into operation – the SCV health entry information overrides the redundancy reconfiguration.
● If STR3 were marked "non-healthy" in the SCV, this reconfiguration approach would be rejected. See also chapter 13.2.
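The "health overrules redundancy preselection" principle can be sketched as a guard in the reconfiguration logic. The SCV representation as a simple dictionary and the unit names are invented for illustration:

```python
def select_redundancy(scv_health: dict, requested_unit: str) -> str:
    """Accept a reconfiguration request only if the target unit is marked
    healthy in the spacecraft configuration vector (SCV)."""
    if not scv_health.get(requested_unit, False):
        raise ValueError(requested_unit + " marked non-healthy in SCV; "
                         "reconfiguration rejected")
    return requested_unit

# Illustrative SCV excerpt: STR3 is marked non-healthy.
scv = {"STR1": True, "STR2": True, "STR3": False}
```

A request to activate STR2 is accepted, while a request to activate STR3 is rejected because its health entry overrides the reconfiguration.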

13.16 FDIR Concept

"Failure Detection, Isolation and Recovery" (FDIR) was already explained as a key functionality of the OBSW. Obviously not all failures are subject to onboard identification, and not all failures are subject to onboard recovery. The FDIR concept to be worked out for the spacecraft during the engineering phase follows some basic requirements and principles, implements a certain failure hierarchy – specifying furthermore on which level a failure is to be fixed – and finally implements a consistent approach for the functionality transferring the spacecraft to Safe Mode and for recovering from there. A properly defined Safe Mode with full S/C observability is essential for FDIR operations. The Safe Mode must also assure a proper balance of the S/C's produced and consumed resources (mainly power), since the diagnosis of failures plus recovery in most cases will not be possible within one ground contact (in particular not for polar orbiting Earth observation satellites).

13.16.1 FDIR Requirements

Typical requirements for FDIR design at the beginning of the S/C system engineering
phase request that:
● A clear hierarchy is to be defined specifying which type of failure is to be identified and managed on which FDIR level.
● The S/C must be able to reach its Safe Mode autonomously.
● The Safe Mode, if triggered, shall not limit ground in any way w.r.t. spacecraft
observability and commandability.
● Ground may also be allowed to submit commands which are blocked for the OBSW, or which are not allowed in that sequence for the OBSW.
● Ground must be able to perform a detailed status analysis and failure event
history analysis for unique failure identification.
● Ground may alter operational limits to avoid future Safe Modes – e.g. in cases
of failures triggered by equipment degradation.
● Obviously – but not trivial to realize – the transition to Safe Mode itself shall
not endanger the S/C, i.e. for example shall not require potentially hazardous
commands or command sequences.
● Also in Safe Mode the OBC shall be running and shall allow for OBSW patch and dump as well as memory patch and dump functions.
● For all failures anticipated during S/C engineering it must be assured that they can clearly be distinguished by their symptom sets.

13.16.2 FDIR Approach

The FDIR approach is based on sequences of failure detection in onboard TM or in corresponding variables in the OBSW-DP and, as a result, on onboard and ground TC actions for isolation and recovery. These are not necessarily unique, due to the engineered redundancies and the unit-internal and external cross couplings. For each potential failure these chains of failure detection and resulting failure handling – at least failure isolation, preferably also including recovery – must be elaborated. Such a design is typically achieved by following the design guidelines cited below:
● Failure detection must be based on parameter monitoring both on unit and on system level and, as a complement, on functional monitoring. This implies that the onboard monitoring must permanently check whether parameters are within appropriate ranges, whether all relevant processes are running, whether mode transitions are properly performed etc.
● Usually the FDIR concept provides both basic approaches:
◊ Fail Operational – where redundant equipment can directly be called into operation without risking failure escalation (e.g. in case of a heater failure, a thermistor failure, or an X-band modulator or amplifier failure).
◊ Fail to Safe Mode – which transfers the S/C to Safe Mode.
● For the Fail Operational case the failure isolation will be performed by
removing the failed equipment from the operational functional chain by
reconfiguration to the redundant one. The failed unit then is listed in the SCV
as non-healthy unless reset by ground intervention.
● Onboard reconfigurations are based on OBSW functions or dedicated OBCPs: the settings in the SCV are changed and the according recovery function / OBCP is triggered.
● The Safe Mode must be properly defined. Safe Mode is usually the mode
operating the S/C with equipment that has the maximum redundancy and
consumes the minimum amount of resources. Besides the Safe Mode there
may exist other safeguarding S/C configurations which are subject to the
individual S/C design.
● By which means Safe Mode can be triggered – OBSW functions, limit violations, HW alarms etc. – has to be carefully engineered. OBSW-triggered Safe Mode must be armed against accidental function triggering (arm-and-fire principle).
● The transition to Safe Mode usually clears all HW interfaces and SW functions. In most cases this is achieved by switching over the entire S/C HW to its redundant side – which then automatically makes use of the redundant set of physical units, interconnections and cabling. In addition, through the OBC reconfiguration and resulting reboot, loaded timelines, running OBCPs, functions etc. are all cleared. This prevents the OBSW from resuming interrupted functions or timelines during or after the FDIR process.
● Each OBC processor board keeps its own OBSW image in NV-RAM. One OBC processor running one image keeps the S/C stable in Safe Mode. PUS Service 6 is applied for OBSW patching, and the function management Service 8 is used for triggering reconfiguration functions or OBCPs respectively, which reconfigure to the other processor with the patched OBSW image or which reboot the same OBC processor with the patched image.
● Obviously there are some additional constraints; for example, Safe Mode triggering during the LEOP phase may not trigger deployments or AOCS actuator control before stage separation is reached. This is usually inhibited by electrical switches and not in SW.
The overall FDIR concept in summary is closely tied to previously treated concept
design steps such as the commandability concept, the observability concept, the S/C
mode concept and the S/C redundancy concept.
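The permanent parameter monitoring feeding either a fail-operational reaction or a transition to Safe Mode can be sketched as follows. All limits, parameter names and reaction labels are invented for illustration:

```python
# Minimal sketch of onboard limit monitoring feeding the FDIR decision
# between "fail operational" and "fail to Safe Mode".
# Limits and parameter names are illustrative only.

LIMITS = {
    # name: (low limit, high limit, engineered FDIR reaction)
    "heater_A.current_A": (0.1, 2.0, "FAIL_OPERATIONAL"),  # -> redundant heater
    "bus_voltage_V":      (24.0, 32.0, "FAIL_TO_SAFE"),    # essential parameter
}

def check_parameter(name, value):
    """Return None if the value is within limits, otherwise an FDIR event
    tuple naming the parameter and the engineered reaction."""
    low, high, reaction = LIMITS[name]
    if low <= value <= high:
        return None
    return (name, reaction)
```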

13.16.3 FDIR and Safeguarding Hierarchy

It was already indicated that an FDIR concept usually follows a hierarchical approach. Figure 13.7 below depicts such an approach – again for a fictional S/C.

Level 4 – Handled by Ground: major overall system failures (communication failures, deployment failures, etc.)
Level 3 – Handled by OBC HW Reconfiguration Unit: hardware induced alarms (multiple EDAC alarms, S/C power failures, etc.)
Level 2 – Handled by S/C System SW: system malfunctions (attitude computation inconsistencies, S/C power failures, etc.)
Level 1 – Handled by Subsystem SW: subsystem malfunctions (subsystem equipment failures, subsystem intercommunication failures, etc.)
Level 0 – Unit internal handling: unit internal malfunctions, either internally recoverable (EDAC error or similar) or requiring instant reaction (short current protection), and recoverable data bus malfunctions (MIL-bus retries, etc.)

Figure 13.7: FDIR and safeguarding hierarchy example.

● The lowest level comprises the handling of failures entirely on unit level, either
because it is feasible – such as EDAC error handling – or because the
equipment by default provides this feature, or because a certain FDIR function
on lowest level is extremely time critical – such as reaction to short currents or
overvoltage. This level also comprises data bus failures invoked by
electromagnetic effects and the like.
● The next higher levels 1 and 2 cover failures handled on OBSW level, either on subsystem control level or on the upper system level. Examples are also indicated in the figure. On these levels above the equipment, monitors are available for limit checks of unit parameters, but also for abstract verifications on subsystem level, such as a plausibility check of the GPS-provided position against the internal solution from the orbit propagator functions.
● Level 3 then comprises failures which need hardware reconfigurations via the
OBC's reconfiguration unit. These include the monitoring and reaction to HW
alarms and the like.

● And finally, level 4 comprises the failures that cannot be handled on board the S/C at all without ground intervention.
Each FDIR handling function can escalate a failure to the next higher level in case the problem cannot be isolated or recovered on its own level. E.g. many system level failures – such as power failures or OBC watchdog failures – may lead to hardware alarms triggering reconfigurations on level 3.
Vice versa, failure recovery is always performed from a higher to the next lower level. E.g. in case of a 2-out-of-3 redundancy for star trackers as in the example of table 13.8: if star tracker 3 so far is off and star tracker 2 reports failures or shows failure symptoms, the AOCS subsystem FDIR level can reconfigure the S/C to use STRs 1 and 3 for further operation.
Again it must be remembered that a simple equipment reconfiguration to its redundant occurrence – triggered on whatever FDIR level – while keeping the rest of the S/C on the nominal side can only be applied with restrictions. Depending on the root cause, this approach might kill the redundant unit too. Therefore this method is avoided in all severe FDIR cases, and the entire S/C is reconfigured to Safe Mode, which – as was cited – usually reconfigures the entire S/C including buses and power lines to the redundant side.
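The escalation rule – each level hands a failure upward if it cannot isolate or recover it on its own – can be sketched as follows. The failure names and handler structure are invented for illustration; a real OBSW implements this within its monitoring and reconfiguration functions:

```python
# Hypothetical sketch of hierarchical FDIR escalation (levels 0-4).
# Each handler returns True if the failure is resolved on its level;
# unresolved failures escalate to the next higher level.

def make_handler(resolvable_failures):
    def handler(failure):
        return failure in resolvable_failures
    return handler

HANDLERS = [
    make_handler({"edac_single_bit"}),         # level 0: unit internal handling
    make_handler({"equipment_timeout"}),       # level 1: subsystem SW
    make_handler({"attitude_inconsistency"}),  # level 2: system SW
    make_handler({"obc_watchdog"}),            # level 3: reconfiguration unit
]

def handle_failure(failure):
    """Return the FDIR level on which the failure is resolved."""
    for level, handler in enumerate(HANDLERS):
        if handler(failure):
            return level
    return 4  # level 4: ground intervention required
```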

13.16.4 Safe Mode Implementation

Having explained the FDIR hierarchy, the Safe Mode shall be described in a bit more detail. Since the transition of the S/C to Safe Mode via the above cited hierarchical FDIR approach interrupts all onboard functions and thus all mission product generation, the cases for Safe Mode triggering shall be limited as far as possible. The need for automated Safe Mode triggering is also driven by how fast ground is able to identify failure symptoms and to trigger isolation and recovery activities. The possibilities in this area for a permanently visible geostationary satellite differ significantly from those for a polar orbiting LEO spacecraft.
The guidelines for a Safe Mode configuration are as follows:
● OBC will preferably operate on the redundant side – including OBC HK mass
memory unit and safeguard memory for SCV and including CCSDS
processing unit.
● OBSW is operational in Safe Mode, controlling the S/C in a way to assure attitude stability and sufficient power generation by solar array pointing. The OBSW in particular will also perform S/C limit monitoring with dedicated Safe Mode settings.
● The main data bus on board will be operating on the redundant side.
● The OBC I/O unit, (RIU), will be operating on the redundant side.
● The Power Control and Distribution Unit, (PCDU), will at least be operated on its redundant controller side. PCDU LCL bank redundancy switching is usually only applied in case of failures in the PCDU itself. Power bus voltage monitoring is performed by the PCDU applying dedicated Safe Mode limits.
● AOCS will operate on the redundant side – including reaction control system.
● Unnecessary equipment is not used due to the interrupted mission product generation: the "Payload Data Handling and Transmission", (PDHT), subsystem (X-band transmitter, MMFU) and the payload instrument(s) will be switched off or down to a safe, low resource consuming configuration.
● S-band receivers will – if not affected themselves by the failure – remain hot
redundant.
● S-band Transmitter will – if not affected itself by the failure – remain on the
nominal side.

Transition to Satellite Safe Mode:


Safe Mode can be induced from ground via the following mechanisms:
● By execution of a dedicated High Priority Command for Safe Mode
● By execution of a dedicated Safe Mode TC function or OBCP – representing a
critical command and requiring an Arm-And-Fire mechanism – which triggers a
dedicated alarm to the OBC reconfiguration module
Safe Mode can be induced on board at least by the following mechanisms:
● Failures detected by the AOCS
● Failures detected by essential system monitors
● System undervoltage detection (via PCDU logic)
● Failures during repeated OBC reconfiguration sequences of the S/C
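The arm-and-fire protection for a ground-commanded Safe Mode trigger can be sketched as a two-step command guard. This is an illustrative sketch only; the arming window duration and the interface are invented:

```python
# Hypothetical sketch of an arm-and-fire guard: the critical "fire" command
# is only accepted if a separate "arm" command preceded it within a window.

class ArmAndFire:
    def __init__(self, arm_window_s=10.0):
        self.arm_window_s = arm_window_s
        self.armed_at = None

    def arm(self, now_s):
        self.armed_at = now_s

    def fire(self, now_s):
        """Return True (i.e. execute the critical action) only if armed
        and still within the arming window; one-shot in either case."""
        if self.armed_at is None or now_s - self.armed_at > self.arm_window_s:
            self.armed_at = None
            return False  # rejected: not armed, or arming expired
        self.armed_at = None  # disarm after firing
        return True
```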

Recovery from Safe Mode:

A key principle of Safe Mode is that its recovery requires ground interaction. No auto-recovery from Safe Mode is foreseen, in contrast to other potential safeguarding modes of a specific mission. For recovery from Safe Mode, the following steps are required as a minimum in most cases:
● Configuration of the spacecraft SCV for nominal operations after completion of
failure diagnosis
● In case OBSW patches were applied: selection of the OBSW boot image
● Reboot of the desired OBC redundancy with the selected / patched OBSW
image and loading of the SCV
● Wait until OBSW has applied SCV and has switched all redundancies to
desired settings

● Perform all S/C system mode transitions to a nominal mode, including the AOCS subsystem to a nominal AOCS mode
● Preparation of nominal S/C operations by resource reconditioning, loading of a new mission timeline etc.

13.17 Satellite Operations Constraints

While the previous chapters treated the satellite operations functional design and the functional behavior, here the topic of operational constraints shall briefly be tackled. In general all operational constraints are highly S/C design specific. They can be broken down into
● S/C platform operational constraints and
● payload instrument operational constraints.
For both classes, operational constraints arising from
● resource limits or
● functional dependencies
can be identified.
The resource limit constraints are intuitive to understand. Optical payloads for example may only be operated in sunlight conditions. In eclipse phases their operation – except for dark image calibrations – makes no sense. On the other hand, the overall payload operational time between two ground station passes may be limited due to the limited amount of science data storage resources on board. Multiple payloads might also compete here for the memory resource. As another example, Synthetic Aperture Radar instruments or radar scatterometers – especially when operated in eclipse phase – are typical payloads with operational constraints due to their high power consumption.
Constraints due to functional unit interdependencies might be, for example, that due to the common use of the A/D input converters of the MMFU two payloads may not be operated in parallel. Or – even if this is not a desirable case – there might be constraints preventing the MMFU from performing science data recording and playback data streaming via X-band to the PGS in parallel. Another type of common operational constraint is that during certain S/C AOCS modes – like target rollover bidirectional measurements or, for some S/C, even spin stabilized Safe Mode – no X-band downlink is possible due to antenna pointing angle limits or even the rotating solar array's interference with the necessary antenna pointing direction.
Operational constraints increase as soon as a data routing "equipment" like a data bus or the OBC I/O unit (RIU) has a failure. The details of the remaining operational flexibility are then highly dependent on the engineered redundancy concept.
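Such constraints are typically checked by the mission planning system before an observation timeline is uplinked. A minimal sketch follows; the data structures, constraint names and numbers are invented for illustration:

```python
# Hypothetical sketch of operational constraint checks in mission planning.

def check_observation(slot, free_memory_gbit, power_margin_w):
    """Return the list of violated constraints for one planned observation slot."""
    violations = []
    if slot["instrument"] == "optical" and slot["in_eclipse"]:
        violations.append("optical payload requires sunlight")
    if slot["data_volume_gbit"] > free_memory_gbit:
        violations.append("onboard science memory exhausted")
    if slot["power_w"] > power_margin_w:
        violations.append("power margin exceeded")
    return violations
```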

13.18 Flight Procedures and Testing

A spacecraft usually has different data links for platform control and for science data downlink (in some missions these can even be served by different ground segments; an example of such a configuration is the European satellite navigation system Galileo). The flight control systems and the data processing systems for platform and payload differ to a certain extent. Common for both – platform and payload control – is that for a standard satellite all commanding is performed via one TC link. Also all S/C housekeeping telemetry – for both platform and payload – is downlinked via a common S-band TM link to the platform control station, the "Flight Operations Center", (FOC). This allows full operational observability of the system's status, health and resources.
Payload science data are downlinked to the "Payload Ground Segment", (PGS). In some cases a copy of the platform HK data is also downlinked to the PGS. In such a case, however, the platform data usually serve to cross-verify payload timestamping and geolocation parameters as well as to cross-verify proper platform health during the entire mission product generation, in order to avoid science measurement misinterpretations. Such complementary or ancillary data have already been mentioned.

Figure 13.8: Connection of S/C ground and space segment. © ECSS

The CCSDS standard protocol for telecommand and telemetry transmission was already treated. In ESA ECSS-compliant missions the transmitted information is encoded in PUS-conformal TC and TM packets respectively.
● A TM packet contains a set of onboard variable values in its packet body; the packet header includes the submitting unit / process as well as the packet generation time. This already requires
◊ the definition of which packets exist (e.g. for each equipment, each S/C subsystem and finally on system level),
◊ the definition of which packet comprises which variables, in which data format and with which calibration characteristics,
◊ and the definition of the packet generation frequency (which is a configurable parameter for the OBSW).
● In the TC direction, TC packets are routed on board by APID and are identified by packet type, and with this they trigger the according functions in the targeted equipment (OBC / OBSW or other equipment). This already requires
◊ the definition of which command packets exist (e.g. for each equipment, each S/C subsystem controller in the OBSW and finally on OBSW system level),
◊ and the definition of which command packet needs additional control parameters, in which data format and with which calibration characteristics.
All these details are stored in the ground segment in a so-called “Satellite Reference
Database”, (SRDB). All these TCs with their command parameters and the TM
packets with their onboard variable data from the OBSW data pool form the lowest
information level of S/C command.
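On the lowest level, both TM and TC travel as CCSDS space packets, whose 6-byte primary header carries the APID used for onboard routing. A minimal parser sketch of this primary header (field layout per the CCSDS Space Packet standard; the function itself is illustrative, not from any flight software):

```python
import struct

def parse_ccsds_primary_header(raw: bytes):
    """Decode the 6-byte CCSDS space packet primary header: version,
    packet type (0 = TM, 1 = TC), secondary header flag, APID,
    sequence flags, sequence count and packet data length."""
    word1, word2, length = struct.unpack(">HHH", raw[:6])
    return {
        "version":      (word1 >> 13) & 0x7,
        "type":         (word1 >> 12) & 0x1,
        "sec_hdr_flag": (word1 >> 11) & 0x1,
        "apid":         word1 & 0x7FF,      # 11-bit APID for onboard routing
        "seq_flags":    (word2 >> 14) & 0x3,
        "seq_count":    word2 & 0x3FFF,
        "data_length":  length,             # number of data field octets minus 1
    }
```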


Figure 13.9: Satellite Reference Database in ground segment.

However it is very cumbersome to command via low level commands transitions like the satellite's switch from LEOP mode after launcher separation to a nominal mode, with lots of onboard units to be activated and their telemetry to be checked. To ease command and control of the S/C for the ground staff, two layers of abstraction are introduced.
● On board OBSW functions are introduced which can be triggered / activated
from ground via the already cited PUS Service 8 (Function management
service).
An example could be a function for activation of a payload from ground where
the OBSW executes the detailed steps from power supply switch via payload
controller boot control, initial PL onboard data bus TM verification, power
consumption control etc.

● Flight Procedures are another means of raising the level of commanding. Flight Procedures are somewhat the complement of OBCPs: while an OBCP is a sort of "command script" executed on board, a Flight Procedure is a "command script" implemented in the ground control system.
An example could be a flight procedure which submits the function commands for the AOCS to switch from idle mode to fine pointing, for the data handling subsystem (MMFU etc.) to prepare for science data recording, and for the payload to switch on – all in preparation of a payload instrument measurement on board.
Flight Procedures can comprise low level commands to S/C units, higher level
commands to S/C subsystems and system level commands, and they can trigger
onboard functions and OBCPs. Any command defined in the SRDB (and thus
implemented in the OBSW) can be included in a Flight Procedure. Both individual
commands and entire flight procedures can be commanded from a S/C ground
control console.
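To make the packet level of the function management service tangible, the following sketch assembles a simplified PUS Service 8 “perform function” telecommand (TC[8,1]). The secondary header layout and the function ID width are illustrative assumptions; a real TC additionally carries acknowledgement flags, a source ID and a CRC:

```python
import struct

# Simplified sketch of a PUS Service 8 "perform function" telecommand.
# Field widths of the PUS secondary header and the function ID are
# illustrative assumptions, not the exact mission-specific layout.

def build_tc_8_1(apid: int, function_id: int, args: bytes = b"") -> bytes:
    # CCSDS primary header (6 bytes): version=0, type=TC, sec. hdr flag=1
    packet_id = (0 << 13) | (1 << 12) | (1 << 11) | (apid & 0x7FF)
    seq_ctrl = (3 << 14) | 0              # standalone packet, count 0
    # PUS secondary header (sketch): version byte, service type 8, subtype 1
    sec_hdr = struct.pack(">BBB", 0x10, 8, 1)
    app_data = sec_hdr + struct.pack(">I", function_id) + args
    length = len(app_data) - 1            # CCSDS: length field = len - 1
    return struct.pack(">HHH", packet_id, seq_ctrl, length) + app_data

pkt = build_tc_8_1(apid=0x64, function_id=42)
print(pkt.hex())
```

The OBSW dispatches on the function ID and then executes the detailed unit-level steps onboard, as described above.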
An example of a ground control system – here SCOS 2000 from ESA / ESOC – is
shown in figures 13.10 and 13.11. They show both TC/TM log windows as well as
graphic parameter displays, so-called synoptic displays.

Figure 13.10: Command log of S/C (here during OBSW test on SVF).
© IRS, Universität Stuttgart
Flight Procedures and Testing 229

Figure 13.11: Command log of S/C (here during OBSW test on SVF).
© IRS, Universität Stuttgart

Flight procedures allow the definition of
● absolute and relative time tags for the individual commands,
● command flow IF / THEN branching according to “return values” received via
TM back from the S/C during procedure execution – provided ground contact
exists – and
● DO / WHILE loop constructs – as far as supported by the procedure execution
engine in the ground control system.
The execution, i.e. the sequential submission of such Flight Procedure command
sequences and the corresponding branching in IF / THEN cases, requires more than
just a simple playlist of commands. It requires the commands to be embedded into a
script language and the execution of such scripts by the ground control console. Thus
a corresponding procedure execution engine has to be coupled to (or must be
integrated into) the ground control system. For SCOS several script languages and
execution engines are available, one for the older TCL language (cf. [115]) and one
for the newer PLUTO language, which is standardized by the ECSS (cf. [114]) and is
applied in modern ESA missions.
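Independent of the concrete language (TCL or PLUTO), the core of such a procedure execution engine can be sketched as follows. The step format, command mnemonics and TM interface are hypothetical, not SCOS / PLUTO syntax:

```python
import time

# Toy sketch of a flight-procedure execution engine with relative time
# tags and IF/THEN branching on TM return values. All mnemonics and the
# TM interface are invented for illustration.

class ProcedureEngine:
    def __init__(self, send_tc, get_tm):
        self.send_tc = send_tc      # callable: uplink one TC
        self.get_tm = get_tm        # callable: read a TM parameter

    def run(self, steps):
        log = []
        for step in steps:
            delay = step.get("rel_time", 0)
            if delay:
                time.sleep(delay)   # relative time tag between commands
            cond = step.get("if")
            if cond and not cond(self.get_tm):
                log.append((step["tc"], "skipped"))
                continue            # IF/THEN branching on TM values
            self.send_tc(step["tc"])
            log.append((step["tc"], "sent"))
        return log

# Usage with a fake spacecraft: heater ON only if the battery is cold
tm = {"TBAT001": -12.0}
sent = []
engine = ProcedureEngine(send_tc=sent.append, get_tm=tm.get)
procedure = [
    {"tc": "SWITCH_ON_RW1"},
    {"tc": "HEATER_ON", "if": lambda get: get("TBAT001") < 0.0},
    {"tc": "HEATER_OFF", "if": lambda get: get("TBAT001") >= 0.0},
]
log = engine.run(procedure)
print(log)  # HEATER_OFF is skipped since the battery is cold
```

Real engines additionally handle absolute time tags, loops, operator confirmation steps and loss-of-contact situations.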
To avoid writing such procedures in a text editor and inducing errors via typos etc.,
Flight Procedures are nowadays defined by means of flowchart editors, such as the
one depicted in figure 13.12. These editors provide different views of the task
flow and allow the operator to select commands and the corresponding parameters as
they are defined in the SRDB.

Figure 13.12: Definition and test of flight procedures via the MOIS flowchart editor.
© IRS, Universität Stuttgart

As already indicated, there are Flight Procedures defined for S/C system level
control, for subsystem control and for equipment control. An example structure could
be as follows:
● System Level procedures
● Subsystem level procedures:
◊ Data Handling Subsystem procedures
◊ Electrical Power Subsystem procedures
◊ Attitude and Orbit Control Subsystem procedures
◊ Reaction Control Subsystem procedures
◊ S-band Subsystem procedures
◊ Thermal Control Subsystem procedures
◊ Payload Data Handling and Transmission Subsystem procedures

● Equipment control procedures (including data bus control)
◊ Data Management procedures (TM packet enable / disable etc.)
◊ Generic PUS procedures (TM packet activation / deactivation etc.)
◊ Platform equipment procedures:
► On-Board Computer and RIU procedures
► AOCS Sensor and Actuator Procedures (dedicated ones for each
equipment type)
► Mass Memory Formatting Unit Procedures
◊ Payload Procedures (dedicated ones for each payload type)
Each of these procedure sets comprises
● nominal operations procedures,
● contingency case procedures and
● procedures triggering OBCPs from ground – e.g. for reconfiguration.
For platform specific operations there exist procedures for dedicated mission phases,
namely the
● Pre-Launch Phase procedures,
● LEOP Phase procedures,
● Commissioning Phase procedures and
● End of Mission procedures.
Flight Procedures and the entire operational S/C command sequences first have to be
tested at the S/C manufacturer's premises in the frame of the Functional Verification
campaign. This covers the first level of tests of the OBC / OBSW / CPDU as receiver
of the Flight Procedure commands with the transmitting “ground station”. However, in
this context the S/C is still commanded via the checkout control console – also
called Core EGSE – and not yet via the FOC.

Figure 13.13: SVT test constellation.

To assure full compatibility with both “Flight Operations Center”, (FOC) and the
“Payload Ground Segment”, (PGS), multiple so-called “System Validation Tests”,
(SVT), are carried out during the subsequent integration of the spacecraft. In this
context, “system” refers to the entire assembly, space plus ground segment. SVTs are
tests conducted by the agency, which is connected via a high performance data link
(DSL or similar) and via the S-band and X-band SCOEs to the S/C physically
located in the manufacturer's integration hall. During SVTs the spacecraft
commanding is performed via the same Flight Procedures and low level TCs as later
used for the S/C in orbit. TM is also acquired by FOC and PGS and is evaluated by the
mission ground segment accordingly.

14 Mission Operations Infrastructure

GOCE operations © ESA

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012

14.1 The Flight Operations Infrastructure

The mission operations infrastructure shall be explained by directing the reader
first to figure 13.8 of the S/C ground segment infrastructure, which depicts the
interconnections of FOC, PGS, ground communications system and the antenna
ground stations. The key element in the FOC is the “Mission Control System“, (MCS), for
the S/C platform. An example of such a system – the ESA SCOS 2000 – was already
presented in figures 13.10 and 13.11.
The PGS is targeted at the download of payload data from the S/C, and it hosts the
infrastructure and team for mission product data processing across the diverse levels.
The PGS – “PDGS” in ESOC terminology – is furthermore responsible for mission
product archiving and mission product distribution to customers or into the public
domain.

Figure 14.1: Mission operations infrastructure and ground communications system.


Example: CryoSat-2 Mission. © ESA / ESOC

As input to mission operations the PGS usually collects the user requests from the
S/C users – in the Earth observation and science domain called “principal
investigators” – and the PGS prepares the initial mission planning and hands it over
to the FOC for integration into the overall satellite mission timeline. The PGS usually
has no command uplink to the spacecraft.

Via the ground communications system the FOC is connected to the S-band antenna
ground stations, and the PGS is connected to the X-band science data link antenna
ground stations. The antenna ground stations are positioned at “strategically”
important points all over the Earth to achieve optimum S/C visibility. No space agency
owns antenna stations at all important positions on the globe. Therefore – especially
to support the LEOP phase of a new S/C, but occasionally also during commissioning,
normal or FDIR phases – the space agency may procure the use of other agencies'
or commercially operated stations for a limited period.

Figure 14.2: Ground station network example.


© ESA / ESOC

The ground station visibility ranges as a function of S/C altitude and link budget are
already known at the S/C design phase. With the orbit analysis performed during the
S/C engineering phase, the station visibilities for a full orbit repeat cycle are
computed – see figure 2.3 and figure 14.3.
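The visibility geometry behind such computations can be sketched with a few lines of Python. The spherical-Earth assumption, the mean Earth radius and the example altitude and elevation values are simplifications for illustration:

```python
import math

# Rough sketch of ground station visibility geometry: for a circular
# orbit at altitude h, a station sees the S/C whenever the Earth central
# angle to the sub-satellite point is below lambda_max. Spherical Earth
# and the example numbers are simplifying assumptions.

R_E = 6371.0  # mean Earth radius, km

def max_central_angle(alt_km: float, min_elev_deg: float) -> float:
    """Earth central angle (deg) of the station's visibility circle."""
    eps = math.radians(min_elev_deg)
    lam = math.acos(R_E / (R_E + alt_km) * math.cos(eps)) - eps
    return math.degrees(lam)

def max_pass_duration(alt_km: float, min_elev_deg: float) -> float:
    """Upper bound on pass duration (s) for a pass through zenith."""
    lam = math.radians(max_central_angle(alt_km, min_elev_deg))
    mu = 398600.4418                      # Earth's GM, km^3/s^2
    period = 2 * math.pi * math.sqrt((R_E + alt_km) ** 3 / mu)
    return period * (2 * lam) / (2 * math.pi)

# LEO example: 700 km altitude, 5 deg minimum elevation
print(round(max_central_angle(700.0, 5.0), 1), "deg")
print(round(max_pass_duration(700.0, 5.0) / 60.0, 1), "min")
```

This yields the familiar order of magnitude for LEO missions: a visibility circle of roughly 20 degrees central angle and passes of only some 10 minutes, which is why station scheduling is so important.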
Based on this information the FOC can activate the antenna ground stations
accordingly for each S/C contact which is especially important during the LEOP
phase to be able to properly track all S/C activities such as deployments, equipment
activations, mode transitions and the like.

Figure 14.3: Ground station visibilities per orbit. © ESA / ESOC

Unlike S/C operations in AIT or OBSW testing in a simulation infrastructure, it is not
possible to monitor and control an entire S/C via the single or dual screen setup of a
Mission Control System as depicted in figures 13.10 and 13.11. Operational Mission
Control Systems like SCOS are scalable, and the TM data streams from the S/C can
be routed to multiple workstations – each of them handling the data for a specific
functional domain, like AOCS, power, thermal and payload instruments.

Figure 14.4: Flight Operations Center control room example. © ESA / ESOC

The more challenging the mission, the more sophisticated the FOC infrastructure is
designed. For a standard Earth observation or science satellite there will typically be
one to two user workplaces, with 2-3 screens each, per main functional domain, i.e.
for:
● Overall system control
● Data handling control
● AOCS control
● Power control
● Thermal control
● One for the entire payload data handling chain (PLs, MMFU, X-band)
● One per payload instrument 9
The following figure depicts the functional-domain-driven workplace allocation in the
main control room, (MCR), by the example of an ESA / ESOC mission – CryoSat-2.
There are workplaces which provide overview and key parameter visibility to the
spacecraft operations manager and the flight operations director, and furthermore
workplaces monitoring detailed information for the individual subsystem controllers.

Figure 14.5: Mission control room and workplaces – schematic. © ESA / ESOC

9
Payload operations are normally not yet part of the LEOP phase, but the corresponding
operations workplaces are mentioned here already.

The workplaces are in detail:


● Overall mission control (Flight Operations Director)
● Overall S/C system control (Spacecraft Operations Manager)
● Subsystems operations engineers:
◊ AOCS
◊ Data handling
◊ RF subsystems (S-band and if applicable X-band)
◊ Power
◊ Thermal
◊ Payload(s) as far as applicable during LEOP 10
● Mission Control System – the data systems manager, who monitors the
performance of all SCOS servers and clients
● Ground Stations and interface to ECC (Ground Operations Manager)
● Spacecraft Controller, SPACON, for anything beyond the control scope of an
individual subsystems operations engineer
● Analyst
The ground communications system routes all downlinked TM from the antenna
stations into the database of the mission control system. This step already includes
certain TM sequence and consistency checks on frame level and higher. The MCS
software decommutates the relevant subset of TM parameters for the individual users
according to their operational domains and forwards the data cyclically to the
workstations. The subsystem operations engineers can configure their workstation
displays to visualize practically any TM parameter received from the S/C – even
outside their dedicated operational domain.
Vice versa, S/C command and control is handled by building command sequences on
a command stack and uplinking them. For S/C commanding only a subset of the FOC
workstations is used. The workplaces of the individual subsystem operations engineers
have command and control access, and so does the Spacecraft Controller, (SPACON).
The other workstations serve purely for monitoring.
command packets, subsequently released from the command stack, are generated
by the MCS using the relevant information in the satellite database, and then are
forwarded via the Network Interface System to the selected antenna ground station
for transmission to the satellite.
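The decommutation and routing steps described above can be illustrated as follows. Parameter names, packet offsets and the domain assignment are invented for the example; a real MCS takes these definitions from the satellite database:

```python
import struct

# Illustrative sketch of TM decommutation: parameters are extracted from
# a packet body by (offset, format) as defined in the satellite database,
# then routed to their operational domain workstations. All names and
# offsets are invented for this example.

PARAM_DEFS = {
    "TBAT001": {"offset": 0, "fmt": ">h", "domain": "power"},
    "WHEEL1_SPEED": {"offset": 2, "fmt": ">H", "domain": "aocs"},
    "OBC_MODE": {"offset": 4, "fmt": ">B", "domain": "data_handling"},
}

def decommutate(body: bytes) -> dict:
    """Extract all defined raw parameter values from a TM packet body."""
    values = {}
    for name, d in PARAM_DEFS.items():
        (raw,) = struct.unpack_from(d["fmt"], body, d["offset"])
        values[name] = raw
    return values

def route_by_domain(values: dict) -> dict:
    """Group extracted parameters per operational domain."""
    streams = {}
    for name, raw in values.items():
        streams.setdefault(PARAM_DEFS[name]["domain"], {})[name] = raw
    return streams

body = struct.pack(">hHB", -120, 3500, 2)   # fake TM packet body
streams = route_by_domain(decommutate(body))
print(streams)
```

In the real system the raw values would additionally be calibrated to engineering units before display, as described for the SRDB.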
The entire workstation infrastructure including TM/TC database, servers etc. is
redundant, as indicated by the “backup” workstations marked in figure 14.5. The
same applies to the overall network infrastructure – see the “red / green” network
connections for the diverse stations in the same figure. This guarantees S/C
operability even in the case that an entire MCS branch fails.
Especially during the LEOP phase the flight operations team is reinforced by
experts from industry and also from the agency project team, who have
been responsible for the procurement of the satellite and launcher. The Project
Representative, who is normally located in the Main Control Room next to the Flight
Operations Director, provides the authority of the agency project management.
The industry team and the specialists from the agency project team are usually
located in a so-called “Project Support Room”, (PSR), and have visibility and
parameter read access to the operations being performed by the flight control team in
the mission control room.

10
In the example of CryoSat, for the payloads only the navigation solution receiver DORIS was part of the LEOP
phase. Therefore a corresponding DORIS workstation can be found in figure 14.5. CryoSat still used the DORIS
radiopositioning system (cf. [123]) instead of GPS.

Figure 14.6: Project Support Room with S/C Supplier Workstations. © ESA / ESOC

Figure 14.6 shows the workstations placed in the Project Support Room, of which the
following shall be cited – again as an example from the CryoSat-2 mission:
● A dedicated workstation for the star tracker supplier since for this mission it
was a new and mission critical element.
● An AOCS analysis workstation for e.g. computation of data for orbit correction
maneuvers.
● A workstation for the satellite geodesy system “Doppler Orbitography and
Radiopositioning Integrated by Satellite”, (DORIS) – cf. [123].
● A dedicated workstation for OBSW runs, testing etc.
This assisting expert team monitors S/C health in parallel to the Subsystem
Operations Engineers and provides expertise in case of any unforeseen deviation
from expected behavior of the S/C. Anomaly situation treatment is performed under
the management guidance of the Flight Operations Director. Any failure detection or
isolation and recovery activities are to be signed off by quality assurance before
command submission.
During the platform commissioning phase, and even later during the payload
commissioning phase, the level of support is successively reduced, but key members
of the agency project team and the industry team remain on-site at the MCC until the
S/C is declared ready for nominal operations.

14.2 Support Infrastructure

Besides the FOC / PGS control and monitoring infrastructure, the ground
communications system and the antenna stations, the ground infrastructure comprises
a significant number of additional tools which are not directly involved in daily S/C
command and control. Of these, the three most important shall be cited:

Spacecraft Simulator:

Figure 14.7: The system simulation environment SIMSAT by ESOC. © ESA / ESOC

The first is the system simulation infrastructure. The simulator is either a system
functionally comparable to the SVF treated in chapter 10.5.2 or even a direct
derivative of it. It is already used prior to launch for validation of flight procedures –
as was explained in chapter 13.18 – and for training of the mission control team. After
launch it serves for validation of operational conditions, for simulation and debugging of
failure conditions of the satellite, for symptom analysis and for pretest of recovery
activities. Furthermore it serves for verification of OBSW patches before uplink.

Flight Dynamics Infrastructure:


The next element to be mentioned is the Flight Dynamics Infrastructure. The flight
dynamics team has to perform continuous orbit monitoring and monitoring of specific
AOCS equipment and corresponding equipment parameters. Orbit position tracking is
performed via S/C TM from its position receivers (GPS / Galileo / GLONASS – or, in
the depicted example case of CryoSat-2, the DORIS system). In addition this can
be supported by tracking via laser retroreflectors from ground. The determined orbit
is continuously compared to reference data, and via appropriate tools an orbit
propagation into the future is performed, taking actual space weather information into
account. By these means the times at which orbit correction maneuvers are needed
can be predicted, and corresponding time slots can be reserved in the mission
planning. The detailed quantitative design of the individual orbit correction maneuvers
is also elaborated by the Flight Dynamics Team. Flight dynamics also considers all
continuous parameter changes in the S/C over lifetime, such as the change of center of
gravity and the change of mass due to fuel consumption. In addition flight dynamics has
to handle all parameters which result from performance degradation over the mission
lifetime – like RWL bearing friction. For these tasks simulation based
infrastructures are also used – in most cases implemented on the basis of MATLAB /
Simulink or Embedded MATLAB.
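The core of such an orbit prediction tool is a numerical propagator. A minimal two-body sketch (plain Python, RK4 integration) is shown below; real flight dynamics tools add perturbations such as atmospheric drag, J2 and space weather effects on top of this, and the circular-orbit example values are illustrative:

```python
import math

# Minimal two-body orbit propagator (RK4) as a sketch of the kind of
# tool the flight dynamics team uses; perturbations are omitted.

MU = 398600.4418  # Earth's GM, km^3/s^2

def accel(r):
    """Two-body gravitational acceleration for position vector r (km)."""
    d = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    k = -MU / d**3
    return [k * x for x in r]

def rk4_step(r, v, dt):
    """One RK4 integration step of size dt (s) for state (r, v)."""
    def deriv(r, v):
        return v, accel(r)
    k1r, k1v = deriv(r, v)
    k2r, k2v = deriv([r[i] + 0.5*dt*k1r[i] for i in range(3)],
                     [v[i] + 0.5*dt*k1v[i] for i in range(3)])
    k3r, k3v = deriv([r[i] + 0.5*dt*k2r[i] for i in range(3)],
                     [v[i] + 0.5*dt*k2v[i] for i in range(3)])
    k4r, k4v = deriv([r[i] + dt*k3r[i] for i in range(3)],
                     [v[i] + dt*k3v[i] for i in range(3)])
    r = [r[i] + dt/6*(k1r[i] + 2*k2r[i] + 2*k3r[i] + k4r[i]) for i in range(3)]
    v = [v[i] + dt/6*(k1v[i] + 2*k2v[i] + 2*k3v[i] + k4v[i]) for i in range(3)]
    return r, v

# Circular 700 km orbit: propagate 10 minutes in 10 s steps
r, v = [7071.0, 0.0, 0.0], [0.0, math.sqrt(MU / 7071.0), 0.0]
for _ in range(60):
    r, v = rk4_step(r, v, 10.0)
radius = math.sqrt(sum(x * x for x in r))
print(round(radius, 3))  # stays very close to 7071 km
```

For a circular orbit the radius must be conserved, which makes the example a convenient self-check of the integrator.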

Mission Planning Facility:


The final type of infrastructure to mention is the mission planning system. If the
mission is not targeted at a continuous measurement – like GOCE was for the Earth
gravity field measurement – dedicated target observations are the normal case for an
Earth observation satellite. The planning for mission segments – typically a segment
between two ground visibilities – is in the first place driven by so-called “user
requests” or observation requests which the PGS receives. Such a request
includes a time window for the observation, the desired payload, payload operations
parameters like the observation spectral band, and target coordinates – or even a target
area. Such user request files from the PGS, plus flight dynamics information, ground
station visibility information and dedicated operational steps foreseen by the flight
control team, are combined into mission timelines, i.e. TC command sequences for
uplink to the satellite. Performing this for multiple user requests from different users,
which easily start to compete for resources, is not a trivial task and requires dedicated
SW infrastructure.
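A strongly simplified sketch of this timeline generation step is shown below, using a greedy earliest-deadline-first allocation of observation slots. Real planning systems additionally handle power, data volume, attitude and station contact constraints; the request format and values are invented:

```python
# Toy sketch of a mission planning step: allocate non-overlapping
# observation slots to competing user requests, earliest deadline first.
# Resource constraints beyond "one observation at a time" are omitted.

def plan_timeline(requests):
    """requests: dicts with 'id', 'window' (start, end) and 'duration'."""
    timeline = []
    busy_until = 0
    # Serve the request whose window closes first, then the next, etc.
    for req in sorted(requests, key=lambda r: r["window"][1]):
        start = max(req["window"][0], busy_until)
        if start + req["duration"] <= req["window"][1]:
            timeline.append((req["id"], start, start + req["duration"]))
            busy_until = start + req["duration"]
    return timeline

# Three competing user requests (times in seconds of a planning segment)
requests = [
    {"id": "OBS-A", "window": (0, 300), "duration": 200},
    {"id": "OBS-B", "window": (100, 500), "duration": 150},
    {"id": "OBS-C", "window": (0, 250), "duration": 200},  # competes with A
]
print(plan_timeline(requests))
```

In this example OBS-C and OBS-A compete for the same segment; the greedy rule schedules OBS-C and OBS-B and rejects OBS-A, illustrating why operational planners need conflict reporting and priority handling.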
As was already explained earlier, modern satellites like the ESA GMES program S/C
support both time-tagged as well as position-tagged commanding, since via
onboard GPS / Galileo receivers they are always informed about the current position
and velocity vector, and via orbit propagator functions in the OBSW they can predict
their position. This type of “position-tagging” is extremely useful for payload
operations which are geo-located, or for science data downlink operations tagged to
dedicated ground stations.
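A possible position-tag trigger can be sketched as a great-circle distance check between the predicted sub-satellite point and the target coordinates. The threshold, the trigger logic and the ground track samples are illustrative assumptions, not an actual OBSW mechanism:

```python
import math

# Sketch of position-tagged command release: a command fires when the
# sub-satellite point comes within a given great-circle distance of the
# target coordinates. Threshold and sample values are illustrative.

def great_circle_deg(lat1, lon1, lat2, lon2):
    """Central angle in degrees between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cosc = (math.sin(p1) * math.sin(p2) +
            math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosc))))

def position_tag_due(subsat, target, radius_deg=2.0):
    """True when the sub-satellite point is within radius_deg of target."""
    return great_circle_deg(*subsat, *target) <= radius_deg

# Predicted ground track samples approaching a target at 48.7 N, 9.1 E
target = (48.7, 9.1)
track = [(40.0, 5.0), (45.0, 7.0), (48.5, 9.0), (52.0, 11.0)]
fired = [pt for pt in track if position_tag_due(pt, target)]
print(fired)  # only the sample closest to the target triggers
```

The onboard orbit propagator supplies the predicted sub-satellite points; the same check works for tagging a science data downlink to a dedicated ground station.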

15 Bringing a Satellite into Operation

Ariane V164 © ESA / Arianespace


15.1 Mission Operations Preparation

For the mission operations team it is an essential task to familiarize itself with the
ground segment infrastructure, the Mission Control System, its control consoles,
databases etc. The ground operations team must be in a position to exercise all
nominal and contingency operations for the LEOP phase, the commissioning phase
and the routine operations phase. Satellites are usually operated in two shifts per day.
The A or prime team is the one which has already participated in the System
Validation Tests and in the verification of the S/C Flight Procedures. This team handles
the critical operations sequences. The B or secondary team is trained up subsequently.
It can comprise less experienced operations engineers or operations experts
from other missions.
Training for a two shift team plus backup personnel may consist of:
● Classroom training and facility familiarization.
● Training and simulation sessions performed before launch:
◊ S/C Operations controlling the real spacecraft (e.g. in SVT) or the S/C
simulator.
◊ The first simulations are “nominal” to allow all team members to become
familiar with the sequence of operations to be performed.
◊ A series of simulations of the critical phases with an increasing level of
complexity for all teams follow.
◊ Anomalies on the simulated satellite, ground segment facilities, launcher
and ground stations are injected in increasing numbers and levels of
difficulty, culminating in parallel failures of different systems.
◊ Shift handover, both in nominal situations and also in the case where
anomalies have prevented one team from completing all of the planned
operations.
◊ Routine operations over several days are trained – with simulated S/C – to
allow the spacecraft controllers and subsystem operations engineers to
validate the systems and procedures to be used after the LEOP phase.
The ground segment infrastructure and the antenna station network are
included – partly as simulations – via so-called Mission Readiness Tests, to
validate the ground stations using an already flying satellite as the target.
● Participation and training of all external partners.
● Verification of event sequences (uninterrupted).
● Usually two launch rehearsals, one or two of them performed with:
◊ Full included FOC
◊ Potential antenna stations
◊ A simulated S/C to achieve first acquisition operations to be performed
following the countdown activities
◊ And the launch site interface – personnel, data lines from launch site to
FOC, Go / No-Go flags transmission, launcher and AIT.

Detailed complete system simulators, resulting from the simulator infrastructures
implemented for OBSW testing – as described in chapter 10.5.2 – can finally be
applied to support spacecraft operations. The SVF configuration is the most
appropriate setup. It can be modified such that the control console is replaced by the
flight operations system installed in the FOC. The SVF simulator's interfaces and the
data protocols between the simulator and the control console are already
implemented to be compatible with the Mission Control System of the FOC. The
resulting simulator setup in the ground station can be used for
● training of the spacecraft operations staff, and for
● tests of OBSW patches and bug fixes on the simulator before they are
uplinked to the real spacecraft.
The acceptance of such simulators originating from spacecraft system development
varies largely from space agency to space agency. Some use the system simulators
with the argument that such a simulator has already passed a comprehensive
verification process and has a very good validation quality – for example, DLR / GSOC
applied the S/C supplier's simulators from the engineering and AIT phase for satellite
operations in the TerraSAR-X project, and Eumetsat applied the MeteoSat simulator
from industry, especially for the 2nd MeteoSat Generation, (MSG).
Some agencies do not accept system simulators from the S/C development cycle
because of their philosophy to use only tools for operations support which are
developed independently of the S/C AIT. This approach minimizes the risk of
inheriting development process errors, since such errors can then be spotted during
operation. The “European Space Operations Centre”, (ESOC), for example has developed
its own system simulation infrastructure called SIMSAT, which was already presented
in figure 14.7. For technical details please also refer to [32].

Figure 15.1: System simulator for spacecraft operations support.


The second major topic for simulators in mission operations preparation was already
mentioned as a side topic. It must be assured that the technical infrastructure of both
FOC and PGS is 100% compatible with the real S/C, not only with a simulation. For
this purpose – and not so much for reasons of crew familiarization – the so-called
“System Validation Tests”, (SVT), are performed during S/C integration phase D.
“System” here refers to the overall mission system, i.e. including both the ground
segment and the space segment.
● As cited in chapter 13.18, during the SVTs the S/C – positioned in the clean
room at the manufacturer's premises – is commanded remotely from the FOC.
● Multiple such tests are performed with increasing functional test scope – the
ECSS standards require four of them, SVT 0 to SVT 3.
● In the later ones payloads are also operated and payload science data are
recorded as far as possible under clean room conditions – significant
limitations will exist, for example, for radar instruments.
● Payload science data playback from the MMFU via the X-band link (excluding
the RF part) is then streamed to the PGS for verification of the compatibility of
the PGS tools with the X-band data stream and formats.

15.2 Launch and LEOP Activities

A number of activities are to be carried out during the so-called “Launch and Early
Orbit Phase”, (LEOP). For these activities the S/C prime contractor supports the
operations team as defined in the catalog of its phase E tasks. Days before launch
all control stations in the FOC are rechecked and the flight operators prepare for the
launch date. Shift plans are frozen and the last organizational topics are clarified.
Although the details of the LEOP phase activities differ highly from mission
to mission – especially in the Earth observation and science domain – the general
activities include the following:
● Final pre-launch check of the ground systems including FOC, antenna ground
stations and communication links.
● The launcher fairing being closed and the S/C being connected to ground via
the umbilical connector.
● Final pre-flight check of S/C at the launch site.
● Final pre-flight check of the launcher Go / No-Go signals.
● In case of a launch with running S/C OBC, continuous monitoring of the proper
auto-boot of the S/C OBC and OBSW into launch mode is performed during
the early phase of the countdown.
● During the ascent phase of the launch no signals are available.
● In the case launcher separation happens during ground station visibility, LEOP
tasks comprise monitoring of the execution of the post-separation
configuration operations, performed autonomously by the satellite in the frame
of the LEOP Autosequence which are:
◊ In case of cold launch (OBC off) first of all a proper boot of OBC / OBSW.
◊ Ground connection establishment with the OBSW – automatically
transmitted TM from S/C.

◊ Establishing command link with the satellite and starting the orbit
determination via radiometric data (ranging).
◊ Performing auto-deployments and initiating ground controlled
deployments for antennas and solar arrays, respectively.
◊ Control of attitude stabilization, or its monitoring in the case of
autosequence-based attitude acquisition.
◊ Via the last two steps verifying that the satellite configuration is as
expected after launcher separation w.r.t. approximate orbit, attitude and
correct deployments.
● In case of launcher separation out of ground station visibility, the first step in
the vicinity of a station is a TC based ground contact establishment and
commanding the downlink of the LEOP autosequence TM packet history for
verification of proper S/C status.
● Further steps – which for some missions are already part of the
commissioning – comprise the commanding required for the transition of the
satellite into the higher operational modes needed for payload activation and
for commissioning operations, for example the switch-on of AOCS units, power
subsystem equipment and thermal subsystems needed for payload operations.
● Furthermore, the detailed verification of the correct orbit and the preparation of
potentially necessary orbit acquisition maneuvers are in most cases still counted
as part of the LEOP phase.
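The LEOP autosequence logic outlined in these steps can be sketched as a simple ordered step list with a recorded event history for later downlink. The step names and the fall-back to safe mode on failure are illustrative, not an actual mission's sequence:

```python
# Schematic sketch of a LEOP autosequence: after separation the OBSW
# works through a fixed step list and records an event history that
# ground can downlink at the first contact. Step names are invented.

class LeopAutosequence:
    STEPS = [
        "obc_boot",
        "transmitter_on",
        "deploy_solar_arrays",
        "rate_damping",
        "sun_acquisition",
    ]

    def __init__(self):
        self.history = []   # TM packet history for later downlink

    def execute(self, step_ok):
        """step_ok: callable deciding success of each step (from sensors)."""
        for step in self.STEPS:
            ok = step_ok(step)
            self.history.append((step, "OK" if ok else "FAILED"))
            if not ok:
                self.history.append(("safe_mode", "ENTERED"))
                break       # stop the sequence, fall back to safe mode
        return self.history

# Nominal run: every step succeeds
seq = LeopAutosequence()
seq.execute(lambda step: True)

# Off-nominal run: deployment fails, sequence stops in safe mode
seq2 = LeopAutosequence()
seq2.execute(lambda step: step != "deploy_solar_arrays")
print(seq.history)
print(seq2.history)
```

The recorded history corresponds to the TM packet history mentioned above, which ground commands for downlink when separation happened out of station visibility.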

Figure 15.2: Mission operations © ESA / ESOC



For a simpler Earth observation satellite these LEOP activities all together sum up to
two to three days. For more complex missions or satellites with specific orbits – like
e.g. the Hubble Space Telescope – these tasks can consume a few weeks all in all.
The same applies to constellations like TerraSAR-X / TanDEM-X or navigation
constellations like GPS / Galileo / GLONASS.
All these activities, from the countdown tasks and the activities performed only seconds
after launcher separation to those performed days after launch, are precisely planned
beforehand. Each shift performs its scheduled operations.
The plans take into account the availability of ground station visibilities, plus any
constraints coming from the different support facilities.

Figure 15.3: Post separation activities. © ESA / ESOC

Then on day 0, during countdown, the launch resource criteria – also called Go /
No-Go criteria – are checked step by step, namely the data links to the antenna
stations and to the launch site, the telemetry channel from the launcher to the FOC
etc. Please also refer to figure 15.4.
Finally the S/C is launched, and after upper stage separation it starts executing the
key parts of its LEOP autosequence. At the first successful ground contact essential
telemetry is downlinked and the operators get first visibility of the status. A S/C
telemetry monitoring desktop example is provided in figure 15.5.

Figure 15.4: Launch Go / No-Go criteria. © ESA/ESOC

Figure 15.5: S/C telemetry monitoring desktop – example: CryoSat-2. © ESA/ESOC



15.3 Platform and Payload Commissioning Activities

If all goals of the LEOP phase plan have been successfully achieved the Flight
Operations Director will declare the LEOP completed and the Commissioning Phase
can start.
The key task of the commissioning phase is the step-by-step taking into operation of
the so-far unused platform equipment and of all payload instruments, the verification
of all operational modes, and the performance of all calibration and performance
characterization tasks for both the platform and the payload.
The distribution of S/C platform and payload commissioning tasks between the LEOP
phase and a dedicated S/C commissioning phase is highly mission specific. On the
one hand, the LEOP phase might not even cover all AOCS modes and use all AOCS
equipment. An example is TerraSAR-X, where the reaction wheels were first activated
during platform commissioning – not yet during the LEOP phase. On the other hand,
the LEOP already might include initial payload switch-on and checkout and X-band
data downlink.
For payload instrument commissioning the detailed tasks are again highly dependent
on the instrument characteristics and mission type and have to be analyzed
individually per mission. Payload calibration methods are:
● Calibration via flyover of reference targets and comparison of received to
expected results. This is a typical method for Earth observation satellites.
● Radio signal quality measurements. This method is essential for telecom
satellites.
● Pointing to reference targets and calibration of the sensor with target
characteristics previously acquired from earlier missions etc. This method is
typically applied for space telescopes and the like.
● Platform characterization may imply previously performed specific platform
equipment operations, such as STR characterizations or GPS geolocation
characterization.
● The commissioning phase in many cases includes the calibration /
characterization of ground processing facilities in the PGS for higher level
mission product data from the raw measurements.
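The first of these calibration methods can be illustrated numerically: from measured raw values over reference targets and the known expected values, a linear gain / offset correction is fitted by least squares. All numbers and the linear-response assumption are invented for the example:

```python
# Sketch of calibration via reference targets: compare measured raw
# values with known expected values and fit a linear gain/offset
# correction by least squares. Values and the assumed linear response
# are invented for illustration.

def fit_gain_offset(raw, expected):
    """Least-squares fit of expected = gain * raw + offset."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(expected) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, expected))
    sxx = sum((x - mx) ** 2 for x in raw)
    gain = sxy / sxx
    return gain, my - gain * mx

# Reference-target flyovers: instrument raw counts vs. known values
raw = [100.0, 200.0, 300.0, 400.0]
expected = [12.0, 22.0, 32.0, 42.0]      # lies on 0.1 * raw + 2.0
gain, offset = fit_gain_offset(raw, expected)
print(round(gain, 3), round(offset, 3))  # ≈ 0.1 and 2.0
```

In practice the fitted coefficients would be uplinked as updated calibration characteristics (or applied in the PGS ground processing), and more elaborate models than a straight line are common.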
Similarly to the LEOP, the S/C commissioning phase is planned in detail before
launch, but the planning is generally at a higher level; the activities are usually not
time critical and are subject to change depending on the satellite performance
and operations during the LEOP phase. The commissioning phase may last from
several weeks to a number of months, depending on the S/C type, orbit, number and
type of payloads etc.
An example of such a commissioning phase planning is given in figure 15.6
below.
Platform and Payload Commissioning Activities 251

Figure 15.6: S/C system commissioning schedule – Example: CryoSat-2. © ESA/ESOC

After platform and payload commissioning the S/C supplier's tasks are completed
and the normal operations phase with continuous mission product generation starts
under the sole responsibility of the operations team.

Figure 15.7: Kiruna antenna station facilities. © ESA



Annex: Autonomy Implementation Examples

New Horizons © NASA



Autonomous onboard SW / HW Components

In October 2001 ESA launched the first satellite of the PROBA series – “Project for
Onboard Autonomy”. With these satellites, new technologies aiming at higher levels
of onboard autonomy and higher automation levels in satellite operations were
tested.
PROBA 1 served for in-flight testing of the following technologies (cf. [108]):
● First in orbit use of Europe's 32bit
space application microprocessor –
the ERC32 chip set.
● First use of a digital signal processor (DSP) as an instrument control
computer (ICU).
Figure A1: PROBA 1. © ESA
● First in orbit application of a newly
designed “autonomous” star sensor.
● Use of onboard GPS for the first time.
● And the following innovations in the onboard software:
◊ ESA for the first time flying an OBSW coded in C instead of Ada.
◊ ESA for the first time flying an OBSW based on an operating system
instead of a purely Ada-coded OBSW implementation (VxWorks was applied
here).
◊ The GNU C compiler for the ERC32 target was finally validated by flying a
GNU-C-compiled OBSW running on the ERC32.
The achieved new onboard functionalities were:
● For the first time an ESA satellite determining its position in orbit by
means of GPS.
● Attitude determination through an active star sensor automatically identifying
star constellations.
● Autonomous prediction of navigation events (target flyover, station flyover).
● A limited onboard “mission planning” functionality based thereupon.

Improvement Technology – Optimizing the Mission Product

This example (cf. [109]) depicts a combined ground / space architecture from the ESA
study “Autonomy Testing”, in which the design of a potential onboard mission planning
function for payload operation was analyzed.
The idea behind this is that users “only” need to transmit their observation requests
(“user requests”) to the combined system consisting of space segment (simulated
satellite) and ground segment (simplified ground station). The customer requesting a
mission product defines
● by which payload,
● in which operating mode,
● with which settings,
● which target area they want to have observed,
● and in which time window.
It was analyzed to what extent it would make sense to implement parts of the mission
planning and of the overall system timeline generation (ground + space) on board the
spacecraft in order to shorten mission prediction response times. In such a case the
satellite constantly has to collect customer requests from the various sequentially
visible ground stations and is equipped with an intelligent mission planning system.
This system generates a detailed timeline comprising all commands for all involved
platform subsystems – mainly the AOCS – and the involved payload(s).
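The request structure and the onboard planning step described above can be pictured in a short sketch. This is illustrative only – the field names mirror the customer-defined items from the list above, while the first-come-first-served conflict rule is an assumption made here, not the algorithm of the actual study:

```python
# Sketch of onboard mission planning from user requests (illustrative only;
# the request fields mirror the list in the text, the scheduling rule is assumed).
from dataclasses import dataclass

@dataclass
class UserRequest:
    payload: str        # by which payload
    mode: str           # in which operating mode
    settings: dict      # with which settings
    target: str         # which target area to observe
    window: tuple       # (start, end) observation time window

def generate_timeline(requests):
    """Greedy planner: accept requests in order of window start,
    reject any whose window overlaps an already accepted observation."""
    timeline, rejected = [], []
    for req in sorted(requests, key=lambda r: r.window[0]):
        overlap = any(req.window[0] < a.window[1] and a.window[0] < req.window[1]
                      for a in timeline)
        if overlap:
            rejected.append(req)      # planning conflict on board
        else:
            timeline.append(req)      # becomes commands for AOCS + payload
    return timeline, rejected

reqs = [
    UserRequest("camera", "hires", {"gain": 2}, "area-A", (0, 10)),
    UserRequest("camera", "hires", {"gain": 1}, "area-B", (5, 15)),  # overlaps A
    UserRequest("sar", "strip", {}, "area-C", (20, 30)),
]
plan, conflicts = generate_timeline(reqs)
```

A real onboard planner would additionally expand each accepted request into the detailed platform and payload command sequence.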

Figure A2: Onboard autonomy test infrastructure: "Autonomy Testbed". © Astrium GmbH

The annotations in figure A2 summarize the setup:
● Autonomous on-board architecture: System Supervisor (from the DLR MARCO
study), with the level of autonomy scalable from simple macro-command
execution via onboard control procedure processing up to onboard timeline
execution; TINA Timeline Generator, providing onboard generation of directly
executable mission timelines from user requests and platform service requests.
● Test infrastructure – simulated satellite and space environment: SSVF
simulator; spacecraft model and environment models derived from SSVF.
● Ground segment / checkout system: SSVF/CGS configuration; TINA console
for user-request definitions.

The prototype from the ESA “Autonomy Testing” study consisted of:
● A Core EGSE acting as a simplified ground station.
● A satellite simulator.
● An onboard computer board as a simplified single board computer.
● An onboard software with a macrocommand interface (somewhat like OBCPs)
running on this board.
● A mission planning algorithm which created an activity timeline – including all
macrocommands to the onboard software – from the cited user requests.
The onboard software executed the spacecraft macrocommands in the generated
mission timeline and thus controlled the simulated satellite. In this autonomy testbed
complex scenarios were tested, comprising:
● Nominal operational cases in which user requests were uplinked and
processed, and the results were downlinked at the next ground station contact.
● Furthermore, scenarios which led to planning conflicts on board and in which
the user requests could only be partially satisfied within the operating period.
● And finally, scenarios during which manually injected equipment failures
occurred, where first a suitable error recovery needed to be identified
and performed – followed by a replanning of the activities, since after
error recovery the satellite had already missed some of the observation
targets. See also figure A4.
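The replanning in the last scenario class essentially means discarding the observations whose time windows elapsed during the error recovery and rescheduling the remainder. A minimal illustration, with all names and structures invented for this sketch:

```python
# Illustrative replanning after an error recovery (names and logic assumed):
# observations whose time window closed while the spacecraft was recovering
# are reported as missed; the rest are kept in the replanned timeline.

def replan_after_recovery(timeline, recovery_end_time):
    missed    = [obs for obs in timeline if obs["end"] <= recovery_end_time]
    replanned = [obs for obs in timeline if obs["end"] > recovery_end_time]
    return replanned, missed

timeline = [
    {"target": "area-A", "start": 0,  "end": 10},
    {"target": "area-B", "start": 12, "end": 20},
    {"target": "area-C", "start": 25, "end": 35},
]
# Failure at t=8, recovery complete at t=22: area-A and area-B are lost.
new_plan, missed = replan_after_recovery(timeline, recovery_end_time=22)
```

The missed user requests would then be reported in the downlinked status at the next ground station contact, as in the scenarios above.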

Figure A3: Autonomy testbed setup. © Astrium GmbH

Such mission planning algorithms impose high requirements on
● the onboard software (which needs to intercept any potentially erroneous
commands that might be created by the mission planning tool),
● and on the spacecraft simulation infrastructure, which has to reflect the overall
scenario, including payload operations, sufficiently realistically.

Figure A4: Autonomous recovery scenario on board. © Astrium GmbH
(The figure depicts timelines TL1 to TL4 with their sub-timelines STL1...N being
uplinked per orbit set and executed on board. After a failure is diagnosed, the failed
queue is recovered and continuation timelines are generated and executed, while the
downlinked status reports the user requests missed from STL2 during recovery. On
ground, the subsequent timelines are prepared and validated in parallel.)



Enabling Technology – Autonomous OBSW for Deep Space Probes

Figure A5: New Horizons Probe. © NASA

In spring 2006 NASA launched the deep space probe “New Horizons” to explore the
trans-Neptunian objects Pluto and Charon. It probably represents the highest level of
onboard autonomy ever flown to date.
The onboard software of New Horizons is based on a case-based decision algorithm
and a rule-chaining algorithm. In place of the onboard control procedures used in
conventional satellites, structures applying Artificial Intelligence techniques are
implemented here to control the nominal approach maneuvers as well as the error
recovery. Cases are implemented on the lower processing level to identify abstract
symptoms from parameter measurements, and above these cases a rule network is
implemented for situation analysis and system control.
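This two-layer structure – cases deriving abstract symptoms from measurements, and a forward-chaining rule network on top – can be illustrated generically. The symptoms, rules and macro names below are invented for this sketch and are not taken from the New Horizons OBSW:

```python
# Generic two-layer sketch (invented names, not the New Horizons code):
# a lower "case" layer turns parameter measurements into abstract symptoms,
# the upper layer is a forward-chaining rule network over those symptoms.

def derive_symptoms(params):
    """Case layer: identify abstract symptoms from parameter measurements."""
    symptoms = set()
    if params["thruster_temp"] > 150.0:
        symptoms.add("THRUSTER_OVERTEMP")
    if params["rate_error"] > 0.5:
        symptoms.add("ATTITUDE_RATE_HIGH")
    return symptoms

RULES = [
    # (conditions that must all hold, fact/macro added when the rule fires)
    ({"THRUSTER_OVERTEMP"}, "THRUSTER_FAULT"),
    ({"THRUSTER_FAULT"}, "SWITCH_TO_REDUNDANT_THRUSTER"),
    ({"THRUSTER_FAULT", "ATTITUDE_RATE_HIGH"}, "ENTER_SAFE_MODE"),
]

def forward_chain(facts):
    """Rule layer: fire rules until no rule adds a new fact (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain(derive_symptoms({"thruster_temp": 160.0, "rate_error": 0.7}))
```

Depending on which conditions hold, the chain either ends in a local recovery action or escalates to Safe Mode, mirroring the behavior described above.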
The following figure provides a sketch of a small extract from the overall rule network
– here for the handling of an error during Pluto approach. Depending on the detailed
conditions, the failure can either be handled or results in the space probe going to
Safe Mode. The rule network implements a forward-chaining method for processing.
For an explanation of the figure below please also refer to [110] and [111]:
● The Rxxx identifiers represent rules.
● The Myyy identifiers represent macros which are executed by the activated
rules.
● All spacecraft commands initiated by rules are encapsulated in such macros.
● The transition times for the rule / macro execution are depicted as well (some
cover several days due to spacecraft coast or approach phases).

● For the rules / macros the onboard processor executing them is shown (in this
extract from the rule network, P3 and P5 are cited).
● The rule identification contains further information (for details see [110]):
◊ The rule priority.
◊ The rule persistence.
◊ The method by which the inference system handles a rule result that is
obviously outdated.
◊ The state during the loading of the rule into memory (active / inactive).
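The listed rule attributes can be pictured as a small per-rule record which the inference engine consults when deciding which rule results may still be acted upon. The field names paraphrase the attributes above; the values and the selection logic are invented for this illustration:

```python
# Sketch of per-rule metadata as listed in the text (field names paraphrase
# the attributes from [110]; values and selection logic are invented here).
from dataclasses import dataclass

@dataclass
class Rule:
    ident: str          # Rxxx identifier
    priority: int       # rule priority (lower value = more urgent, assumed)
    persistence: float  # seconds a rule result stays valid
    on_stale: str       # handling of an obviously outdated result, e.g. "drop"
    active: bool        # state when the rule is loaded into memory

def select_actionable(rules, results, now):
    """Return identifiers of active rules whose result is still valid,
    most urgent first; stale or inactive rules are skipped."""
    ok = [r for r in rules
          if r.active and r.ident in results
          and now - results[r.ident] <= r.persistence]
    return [r.ident for r in sorted(ok, key=lambda r: r.priority)]

rules = [
    Rule("R101", priority=2, persistence=5.0, on_stale="drop",   active=True),
    Rule("R102", priority=1, persistence=1.0, on_stale="reeval", active=True),
    Rule("R103", priority=0, persistence=5.0, on_stale="drop",   active=False),
]
# results: rule identifier -> timestamp when its result was produced
actionable = select_actionable(
    rules, {"R101": 10.0, "R102": 10.0, "R103": 10.0}, now=12.0)
```

Here R102's result has exceeded its persistence and R103 was loaded inactive, so only R101 remains actionable.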

Figure A6: Extract of a rule-based mode-transition network of an OBSW (from [110]). © NASA

Tranquility Base here, the Eagle has landed.

Neil Armstrong
July 20, 1969, 20h 17m 43s UTC

References

References on Missions driving OBC / OBSW Technology

General:
[1] Tomayko, James:
Computers in Spaceflight: The NASA Experience
http://www.hq.nasa.gov/office/pao/history/computers/Part1-intro.html

NASA Mercury Program:


[2] http://www.nasa.gov/mission_pages/mercury/missions/program-toc.html

[3] http://www-pao.ksc.nasa.gov/kscpao/history/mercury/mr-3/mr-3.htm

NASA Gemini Program:


[4] N.N.:
On the Shoulders of Titans:
A History of Project Gemini – (NASA report SP-4203)

[5] http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19780012208_1978012208.pdf

[6] N.N.:
Project Gemini – A Chronology (NASA report SP-4002)
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19690027123_1969027123.pdf

[7] Leitenberger, Bernd:


Das Gemini-Programm. Technik und Geschichte,
Books on Demand, 2008
ISBN 3-8370-2968-9

NASA Apollo Program:


[8] Apollo Mission:
http://spaceflight.nasa.gov/history/apollo/index.html

[9] Apollo Guidance Computer:


http://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/apollo/public/visual3.htm
and
http://de.wikipedia.org/wiki/Apollo_Guidance_Computer

[10] Tomayko, James:


The Apollo guidance computer: Hardware.
In: Computers in Spaceflight: The NASA Experience. NASA

[11] Tomayko, James:


The Apollo guidance computer: Software.
In: Computers in Spaceflight: The NASA Experience. NASA

NASA Space Shuttle Program:


[12] N.N.:
IBM and the space shuttle,
http://www-03.ibm.com/ibm/history/exhibits/space/space_shuttle.html

[13] Space Shuttle Onboard Computer:


http://en.wikipedia.org/wiki/IBM_AP-101

[14] Tomayko, James:


Computers in Spaceflight: The NASA Experience,
http://www.hq.nasa.gov/office/pao/History/computers/Ch4-2.html

NASA Mariner Program:


[15] Dunne, James A.; Burgess, Eric:
NASA History Office: The Voyage of Mariner 10
http://history.nasa.gov/SP-424/sp424.htm

[16] Tomayko, James:


Computers in Spaceflight: The NASA Experience,
Appendix IV – Mariner Mars 1969 Flight Program
http://www.hq.nasa.gov/office/pao/History/computers/Appendix-IV.html

[17] Hooke, A.J.:


In Flight Utilization of the Mariner 10 Spacecraft Computer,
J. Br. Interplanetary Society, 29, 277 (April 1976).

NASA Voyager Program:


[18] N.N:
JPL News & Features
Engineers Diagnosing Voyager 2 Data System
http://www.jpl.nasa.gov/news/news.cfm?release=2010-151

NASA Galileo Mission:


[19] N.N:
NASA: Solar System Exploration: Galileo
JPL: Galileo Project Home
http://solarsystem.nasa.gov/galileo/

[20] Tomayko, James:


Computers in Spaceflight: The NASA Experience,
Chapter Six: Distributed Computing On Board Voyager and Galileo
http://history.nasa.gov/computers/Ch6-3.html

[21] Thomas, J. S.:


A command and data subsystem for deep space exploration based on
the RCA 1802 microprocessor in a distributed configuration
Jet Propulsion Laboratory, 1980
Document ID: 19810003139
Accession Number: 81N11647

References on Microprocessors for Space

CDP1802:
[22] N.N.:
CDP1802 datasheet,
http://homepage.mac.com/ruske/cosmacelf/cdp1802.pdf

[23] N.N.:
RCA 1800 Microprocessor
User Manual for the CDP1802 COSMAC Microprocessor

Am2900:
[24] N.N.:
The Am2900 Family Data Book
http://www.bitsavers.org/pdf/amd/_dataBooks/1979_AMD_2900family.pdf

MIL-STD-1750 compatibles:
[25] N.N.:
MIL-STD-1750 A
http://www.xgc.com/manuals/m1750-ada/m1750/book1.html

[26] N.N.:
Dynex Semiconductor MA31750 Processor (Datasheet)
http://www.dynexsemi.com/assets/SOS/Datasheets/DNX_MA31750M_N_Feb06_2.pdf

[27] N.N.:
UT1750AR RadHard RISC Microprocessor Data Sheet
http://aeroflex.com/ams/pagesproduct/datasheets/ut1750micro.pdf

RS/6000 – RAD6000:
[28] http://en.wikipedia.org/wiki/IBM_POWER

[29] RAD6000™ Space Computers


http://www.baesystems.com/BAEProd/groups/public/documents/bae_publication/bae_pdf_eis_sfrwre.pdf

MIPS R3000 (Mongoose V):


[30] N.N:
Synova Inc.
http://www.synova.com/proc/processors.html

ERC32 and LEON:


[31] N.N.:
Sun SPARC:
http://en.wikipedia.org/wiki/Sun_SPARC

[32] ESA microelectronics ERC32 website:


http://www.esa.int/TEC/Microelectronics/SEM2XKV681F_0.html

[33] N.N.:
SPARC Series Processors ERC32 Documentation,
http://klabs.org/DEI/Processor/sparc/ERC32/ERC32_docs.htm

[34] N.N.:
LEON2 and 3 VHDL Code (under LGPL),
http://www.gaisler.com/

[35] N.N.:
LEON Processors,
http://www.gaisler.com/cms/index.php?option=com_content&task=section&id=4&Itemid=33

[36] LEON 3 Single Board Computers:


http://www.gaisler.com/cms/index.php?option=com_content&task=view&id=189&Itemid=120
and
http://www.gaisler.com/cms/index.php?option=com_content&task=view&id=315&Itemid=212

[37] Koebel, Franck; Coldefy, Jean-François:


SCOC3: a space computer on a chip
An example of successful development of a highly integrated innovative
ASIC,
Microelectronics Presentation Days
ESA/ESTEC, March 2010,
Noordwijk, Netherlands

[38] Poupat, Jean-Luc; Lefèvre, Aurélien; Koebel, Franck:


OSCAR: A compact, powerful and versatile On Board Computer based
on LEON3 Core
Data Systems in Aerospace,
DASIA 2011 Conference,
17 - 20 May, 2011, San Anton, Malta

Diverse:
[39] Weigand, Roland:
ESA Microprocessor Development
Status and Roadmap
Data Systems in Aerospace,
DASIA 2011 Conference,
17 - 20 May, 2011, San Anton, Malta

References on Programming Languages

HAL/S:
[40] Highlevel Assembler Language / Shuttle – HAL/S:
http://en.wikipedia.org/wiki/HAL/S

[41] NASA Office of Logic Design:


http://klabs.org/DEI/Processor/shuttle/
● HAL/S Compiler System Specification
● HAL/S Language Specification
● HAL/S Programmer's Guide

● HAL/S-FC User's Manual


● Programming in HAL/S

JOVIAL:
[42] JOVIAL (Jules Own Version of the International Algorithmic Language):
http://en.wikipedia.org/wiki/JOVIAL

[43] N.N.:
MIL-STD-1589C, MILITARY STANDARD: JOVIAL (J73)
United States Department of Defense. 6 JUL 1984
http://www.everyspec.com/MIL-STD/MIL-STD+(1500+-+1599)/MIL-STD-1589C_14577/

Ada:
[44] Ada:
http://en.wikipedia.org/wiki/Ada_(programming_language)

[45] Barnes, John:


Programming in Ada 2005
Addison-Wesley Longman, Amsterdam, 2006
ISBN 978-0-321-34078-8

C:
[46] Kernighan, Brian W.; Ritchie, Dennis M.:
C Programming Language,
Prentice Hall,
2nd edition, 1988
ISBN: 978-0131103627

C++:
[47] Stroustrup, Bjarne:
The C++ Programming Language
Addison Wesley, Reading, Massachusetts,
2nd Edition, 1993,
ISBN: 0-201-53992-6

[48] Eckel, Bruce:


Using C++, Covers C++ Version 2.0,
Osborne McGraw-Hill, Berkeley, 1989,
ISBN: 0-07-881522-3

[49] Ellis, Margaret A.; Stroustrup, Bjarne:


The Annotated C++ Reference Manual
Addison Wesley, Reading, Massachusetts, 1990,
ISBN: 8131709892

Assembler to C:
[50] Patt, Yale; Patel, Sanjay:
Introduction to Computing Systems: From bits & gates to C & beyond,
McGraw-Hill,
2nd edition, 2003
ISBN: 978-0072467505

References on Realtime Operating Systems

VxWorks:
[51] http://www.windriver.com/products/vxworks

RTEMS:
[52] OAR Corporation:
http://www.rtems.com

References on Data Buses and other Interfaces

MIL-STD-1553B:
[53] MIL-STD-1553B:
Digital Time Division Command/Response Multiplex Data Bus.
United States Department of Defense, September 1987.
http://www.sae.org/technical/standards/AS15531

[54] N.N.:
MIL-STD-1553 Tutorial and Reference from Alta Data Technologies
http://www.altadt.com/support/tutorials/mil-std-1553-tutorial/

SpaceWire:
[55] ECSS SpaceWire Standard Homepage:
ECSS-E-ST-50-12C – SpaceWire – Links, nodes, routers and networks
http://www.ecss.nl/forums/ecss/_templates/default.htm?target=http://www.ecss.nl/forums/ecss/dispatch.cgi/standards/docProfile/100654/d20080802144344/No/t100654.htm

Subpages:
ECSS-E-ST-50-53C SpaceWire – CCSDS packet transfer protocol
ECSS-E-ST-50-52C SpaceWire – Remote memory access protocol
ECSS-E-ST-50-51C SpaceWire protocol identification

[56] ESA SpaceWire Homepage:


http://spacewire.esa.int/content/Home/HomeIntro.php

[57] http://en.wikipedia.org/wiki/SpaceWire

Controller Area Network, CAN:


[58] ISO 11898:
http://www.iso.org/iso/search.htm?qt=Controller+Area+Network&searchSubmit=Search&sort=rel&type=simple&published=true

[59] Davis, Robert I.; Burns, Alan; Bril, Reinder J.; Lukkien, Johan J.:
Controller Area Network (CAN) schedulability analysis: Refuted, revisited
and revised,
Real-Time Systems
Volume 35, Number 3, 239-272, DOI: 10.1007/s11241-007-9012-7,
http://www.springerlink.com/content/8n32720737877071/

[60] http://en.wikipedia.org/wiki/Controller_area_network

OSI Network Model:


[61] Open Systems Interconnection model (OSI model):
http://en.wikipedia.org/wiki/OSI_model

References on OBC Debug and Service Interfaces

JTAG / ICD:
[62] http://en.wikipedia.org/wiki/JTAG

[63] http://en.wikipedia.org/wiki/In-circuit_debugger

Service Interface:
[64] Wiegand, M.; Schmidt, G.; Hahn, M.:
Next Generation Avionics System for Satellite Application,
Proceedings of DASIA 2003 (ESA-SP-532) pp. 38 ff, 2-6 June, 2003,
Prague, Czech Republic
http://articles.adsabs.harvard.edu//full/2003ESASP.532E..38W/0000038.001.html

References on Onboard Equipment Development

Technology Readiness Level:


[65] Mankins, John C.:
TECHNOLOGY READINESS LEVELS, A White Paper,
April 6, 1995, Advanced Concepts Office, Office of Space Access and
Technology, NASA

[66] http://en.wikipedia.org/wiki/Technology_readiness_level

References on Technologies for persistent Memory

Flash Memory Technology:


[67] http://en.wikipedia.org/wiki/Flash_memory

[68] http://en.wikipedia.org/wiki/Solid-state_drive

Magnetoresistive Memory Technology:


[69] http://en.wikipedia.org/wiki/MRAM

[70] http://www.everspin.com/products.html

References on Solid State Recorders

Solid State Recorders:


[71] http://www.astrium.eads.net/node.php?articleid=4966
[72] http://sbir.nasa.gov/SBIR/successes/ss/5-004text.html

References on Command / Control Standards

[73] Wertz, James R.; Larson, Wiley J. (Eds.):


Space Mission Analysis and Design,
Springer, Microcosm Press, 3rd edition, 2008
ISBN: 978-1-881883-10-4

[74] ECSS-E-70-01A – Ground systems and operations – Part 1: Principles


and requirements

[75] ECSS-E-70-01A – Ground systems and operations – Part 2: Document


requirements definitions
Annex D: Space segment user manual (SSUM)

[76] CCSDS 131.0-B-1 TM Synchronization and Channel Coding

[77] CCSDS 132.0-B-1 TM Space Data Link Protocol

[78] CCSDS 133.0-B-1 Space Packet Protocol

[79] CCSDS 231.0-B-1 TC Synchronization and Channel Coding

[80] CCSDS 232.0-B-1 TC Space Data Link Protocol

[81] CCSDS 232.1-B-1 Communications Operation Procedure-1


CCSDS 732.0-B-2 AOS Space Data Link Protocol

[82] ECSS-E-ST-50-01A Space engineering – Space data links – Telemetry


synchronization and channel coding

[83] ECSS-E-ST-50-03A Space engineering – Space data links – Telemetry


transfer frame protocol

[84] ECSS-E-ST-50-04A Space data links – Telecommand protocols,


synchronization and channel coding
[85] ECSS-E-ST-50-05A Space engineering – Radio frequency and
modulation

[86] ECSS-E-70-41A Space engineering – Ground Systems and Operations –


Telemetry and telecommand packet utilization

References on SADT / IDEF0 based Software Design

[87] Marca, D.; McGowan, C.:


Structured Analysis and Design Technique,
McGraw-Hill, 1987
ISBN: 0-07-040235-3

[88] N.N.:
Overview of IDEF0:
http://www.idef.com/idef0.htm

References on HOOD Software Design

[89] http://www.esa.int/TEC/Software_engineering_and_standardisation/TECKLAUXBQE_0.html

[90] Rosen J-P.:


HOOD: An Industrial Approach for Software Design,
Edited by: HOOD User Group
ISBN: 2-9600151-0-X

[91] Selic, Bran; Gullekson, Garth; Ward, Paul T.:


Realtime Object Oriented Modeling,
Wiley & Sons, 1994
ISBN: 978-0471599173

[92] Burns, A.; Wellings, A.:


Hard Real-Time HOOD: A structured Design Method for Hard Real-Time
Ada Systems
Elsevier Science Ltd, 1995
ISBN: 978-0444821645

References on UML Software Design

[93] Booch, Grady; Rumbaugh, James; Jacobson, Ivar:


The Unified Modelling Language User Guide,
Addison Wesley Longman, Reading, Massachusetts, 1999
ISBN: 0-201-57168-4

[94] Rumbaugh, James; Jacobson, Ivar; Booch, Grady:


The Unified Modeling Language Reference Manual,
Addison Wesley Longman, 1999,
ISBN: 020130998X

[95] Si Alhir, Sinan:


Learning UML
O'Reilly, 2003,
ISBN: 0-596-00344-7

[96] N.N.:
OpenAmeos – The OpenSource UML Tool,
http://www.openameos.org/

[97] Fowler, Martin:

UML Distilled: A Brief Guide to the Standard Object Modeling Language,
3rd Edition, Addison-Wesley Longman, Amsterdam, 2003
ISBN: 978-0321193681

References on Simulation and Verification Testbeds

[98] Eickhoff, Jens:


Simulating Spacecraft Systems,
Springer Verlag GmbH, 2009,
ISBN: 978-3-642-01275-4

[99] Eisenmann, Harald; Cazenave, Claude:


SimTG: Successful Harmonization of Simulation Infrastructures,
10th International Workshop on Simulation for European Space
Programmes,
SESP 2008, October 7th - 9th 2008, ESA/ESTEC,
Noordwijk, Netherlands

References on Software Development Standards

ECSS Standards:
[100] ECSS-E-ST-40C Space Engineering – Software
[101] ECSS-Q-ST-80C Space product assurance – Software product
assurance
DO-178B:
[102] RTCA/EUROCAE:
Software Considerations in Airborne Systems and Equipment
Certification, DO-178B/ED-12B, December 1992

[103] http://www.rtca.org/downloads/ListofAvailable_Docs_WEB_NOV_2005.htm

Galileo Software Standard:


[104] Montalto, Gaetano:
The Galileo Software Standard as tailored from ECSS E40B/Q80,
European Satellite Navigation Industries SPA,
BSSC Workshop on the Usage of ECSS Software Standards For Space
Projects,
http://www.estec.esa.nl/wmwww/EME/Bssc/BSSCWorkshopProgrammev5.htm

NASA Standards (Top level view):


[105] http://sw-eng.larc.nasa.gov/process/documents/wddocs/LaRC_Local_Version_of_SWG_Matrix.doc

MIL Standards:
[106] MIL-STD-2167A,
Military Standard, Defense System Software Development,
Department of Defense, Washington, D.C., February 29, 1988.

References on Onboard Autonomy

[107] ECSS-E-ST-70-11C, Space segment operability

[108] http://www.esa.int/SPECIALS/Proba_web_site/SEMHHH77ESD_0.html

[109] Eickhoff, Jens:


System Autonomy Testbed
Product Flyer, Dornier Satellitensysteme GmbH,
Friedrichshafen, Germany, 1997

[110] Moore, Robert C.:


Autonomous Safeing and Fault Protection for the New Horizons Mission
to Pluto,
The Johns Hopkins University Applied Physics Laboratory, Laurel,
Maryland, USA,
57th International Astronautical Congress, Valencia, Spain,
October 2-6, 2006

[111] http://www.nasa.gov/mission_pages/newhorizons/main/index.html

References on Flight Operations

[112] ECSS-E-ST-70-11C Space Engineering – Space segment operability

[113] ECSS-E-ST-70-31C Space Engineering – Ground systems and


operations –
Monitoring and control data definition

[114] ECSS-E-ST-70-32C Space Engineering – Test and operations procedure


language

[115] http://en.wikipedia.org/wiki/Tcl

[116] http://en.wikipedia.org/wiki/Advanced_Encryption_Standard

References Diverse

[117] N.N.:
Radiation Resistant Computers:
http://science.nasa.gov/science-news/science-at-nasa/2005/18nov_eaftc/

[118] N.N.:
AMBA on chip bus architecture:
http://www.arm.com/products/system-ip/amba/amba-open-specifications.php

[119] Eickhoff, Jens; Stevenson, Dave; Habinc, Sandi; Röser, Hans-Peter:


University Satellite featuring latest OBC Core & Payload Data Processing
Technologies,
Data Systems in Aerospace,
DASIA 2010 Conference,
Budapest, Hungary, June, 2010

[120] Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sandi A.; Witt,
Rouven; Röser, Hans-Peter:
Common board design for the OBC I/O unit and the OBC CCSDS unit of
the Stuttgart University Satellite "Flying Laptop"
Data Systems in Aerospace,
DASIA 2011 Conference,
17 - 20 May, 2011, San Anton, Malta

[121] Fritz, Michael; Röser, Hans-Peter; Eickhoff, Jens; Reid, Simon:


Low Cost Control and Simulation Environment for the
“Flying Laptop“, a University Microsatellite
Spaceops 2010 Conference, Huntsville, Alabama, USA, 25 - 30 April
2010

[122] Rivard, Fred; Prochazka, Marek; Pareaud, Thomas:


Java for On-board Software,
Data Systems in Aerospace,
DASIA 2011 Conference,
17 - 20 May, 2011, San Anton, Malta

[123] Seeber, G.:


Satellite Geodesy
De Gruyter, Berlin, 2003, 2nd Edition

[124] Kranz, Gene:


Failure Is Not an Option: Mission Control from Mercury to Apollo 13 and
Beyond,
Simon and Schuster, 2000
ISBN 978-0-7432-0079-0

Index

A
Actel ProASIC .......... 47
Actel RT-AX .......... 47
Ada .......... 33, 42, 46, 120, 135, 136, 254
Aeolus .......... 45
ALGOL .......... 135
Algorithm in the Loop .......... 149, 151
AMBA bus .......... 47
AMD 2900 .......... 40
Analog sensor equipment .......... 56
Analog spacecraft control .......... 23
Antenna effects .......... 73
Antenna ground station .......... 235
AOCS mode .......... 193
AP-101 .......... 32
Apollo program .......... 24, 29
Application Process Identifier .......... 95, 111, 187
Application Specific Integrated Circuit .......... 44
ARINC 825 .......... 62
ARM .......... 46, 47, 54
ASIC .......... 78, 156
Assembler .......... 26, 30, 33, 37, 135
Assembly, Integration and Testing .......... 160
ATLAS .......... 41
Attitude acquisition .......... 197
Attitude and Articulation Control Subsystem .......... 36, 38
Attitude and Orbit Control System .......... 56
ATV .......... 211
Authentication .......... 203
Autocode .......... 148
Autonomy .......... 163
Autonomy testbed .......... 256

B
Ball Grid Array .......... 72
Bepi Colombo .......... 61, 127
Bit failure .......... 126
Bitslice arithmetic logical unit .......... 40
Boot .......... 246
Boot loader .......... 91
Boot memory .......... 54, 56
Boot report .......... 118
Breadboard Model .......... 77
Built-in self test .......... 37
Bus controller .......... 54, 59

C
C .......... 42, 136, 254
C++ .......... 136
Calibration .......... 250
CAN bus .......... 47
CANaerospace .......... 62
Cassini .......... 41
CCSDS .......... 62, 154
CCSDS packet .......... 102
CCSDS processor .......... 63, 101, 114
CCSDS standard .......... 62, 95
Channel Access Data Unit .......... 63, 95
Channel acquisition table .......... 123, 124
CISC .......... 43
Classroom training .......... 244
Clock module .......... 207
Clock strobe .......... 207
Closed-loop .......... 160
CMOS memory .......... 36
Code inspection .......... 135
Code instrumentation .......... 67
Columbus Software Development Standard .......... 168
Command and Data Subsystem .......... 38
Command Link Transfer Unit .......... 62, 95
Command Pulse Decoding Unit .......... 64, 112, 187
Commissioning phase .......... 250
Commissioning Phase .......... 192
Compact PCI .......... 47
Consultative Committee for Space Data Systems .......... 62, 95
Control and Data Management Unit .......... 6
Control console .......... 152, 153
Controller Area Network .......... 61
Controller in the Loop .......... 149, 156, 157, 160
Controller network .......... 58
Core Data Handling System .......... 118
Critical Design Review .......... 8
CryoSat .......... 45, 52, 120, 158, 193, 237
Current free encoding .......... 59

D
Data bus .......... 54, 58
Data downlink .......... 209
Data management autonomy .......... 213
Data pools .......... 13
De-orbiting .......... 10
Death-report .......... 118
Debug interface .......... 54
Debug support unit .......... 66
Debugger .......... 155
Deep Space Network .......... 34
Deep space probe .......... 183
Deployment .......... 197, 247
Diagnostic packet .......... 204
Digital Command Sequencer .......... 34
Digital Equipment PDP 11 .......... 40
Digital signal processor .......... 254
Direct Memory Access Controller .......... 42
DO178B .......... 167
Docking maneuver .......... 24
Document Requirements Definition .......... 176
Documents Requirements List .......... 173
Doppler effect .......... 209
Dynamic RAM .......... 57

E
ECSS standards .......... 167
EDAC memory .......... 57
Electrical Functional Model .......... 160
Electrically Erasable PROM .......... 56
Electromagnetic compatibility .......... 5
Elegant Breadboard .......... 77
EMC tightness .......... 73
Enabling technology .......... 213
Encryption .......... 203
End-of-Life Disposal Phase .......... 193
Engineering Model .......... 15, 77
Engineering Qualification Model .......... 76
Environmental Control and Life Support System .......... 24
Envisat .......... 41, 42
Equipment handler .......... 92
Event history .......... 204
Event TM packet .......... 117
Event-action-table .......... 205

F
Fail Operational .......... 221
Fail to Safe Mode .......... 221
Failure Detection, Isolation and Recovery .......... 4, 5, 17, 116, 219
FDIR and safeguarding hierarchy .......... 222
FDIR autonomy .......... 213
FDIR concept .......... 220
FDIR function .......... 163
Field Programmable Gate Array .......... 44
Flash EEPROM .......... 56
Flash memory .......... 83
FlatSat .......... 161, 162
Flight Acceptance Review .......... 8
Flight Data Subsystem .......... 36
Flight Dynamics Infrastructure .......... 241
Flight Model .......... 15, 76
Flight Operations Center .......... 180
Flight operations director .......... 237
Flight Operations Director .......... 238
Flight Operations Manual .......... 186, 201
Flight Procedure .......... 228
FORTRAN .......... 42, 135
FPGA .......... 78, 79, 156
FPGA board .......... 77
Function Tree .......... 130, 131
Functional domain .......... 236, 237
Functional requirements .......... 180
Functional sequence monitoring .......... 205
Functional Verification Bench .......... 150

G
GAIA .......... 127
Galileo Navigation System .......... 45, 52, 54, 62, 156, 161, 168
Equipment health status.......................205 Galileo Software Standard....................168
Equipment operational modes................12 Gemini Digital Computer...................25, 30
Equipment states..................................196 Gemini program......................................24
ERC32................................45, 46, 52, 120 GEO satellite.........................................182
Error Detection and Correction.......57, 126 GIOVE-A.................................................62
ERS-1/2............................................42, 76 GNU Ada compiler..................................42
ESA Space Operations Center.............245 Go / No-Go criteria................................249
Ethernet..................................................47 GOCE.....................................................45
European Cooperation for Space GPS......................................................254
Standardization..................................167 GR UT699...............................................48
Event.............................................205, 206 GRACE...................................................76
Ground communications system..........238 L
Ground segment infrastructure.............234 Launch and Early Orbit Phase....................
Ground station visibility plan.................198 ........................9, 16, 180, 192, 197, 246
Launcher separation.............................197
H Launchpad..............................................67
Hardware / software compatibility tests 156 LEO satellite..........................................181
Hardware alarm....................................223 LEON........................................46, 54, 120
Hardware in the Loop...................150, 160 LEON3FT................................................46
Hardware verification..............................80 LEOP Autosequence............200, 201, 246
Harel state machine..............................145 Limit violation........................................205
Hierarchic Object-Oriented Design.......138 Line Control Block.................................126
High energetic particle............................22 Lock-in amplifier....................................210
High Priority Command............................... Logging mechanism..............................204
.............................64, 112, 166, 202, 224
High Priority Telemetry......63, 66, 114, 205 M
High Priority Telemetry log....................118 MA31750.................................................41
Highlevel Assembler Language / Shuttle Magnetic core memory...............26, 30, 33
.........................................33, 40, 42, 135 Magnetic tape.............................27, 37, 58
Housekeeping.......................................204 Magnetoresistive Random Access
Housekeeping data memory...........57, 114 Memory..........................................56, 57
Housekeeping packet...........................204 Man Machine Interface...........................27
HPTM log..............................................118 Manufacturing.........................................80
Huygens..................................................41 MAP-ID..................................112, 187, 203
HW Trap........................................118, 126 Mariner missions.....................................34
Mars Express..........................................41
I Mars Reconnaissance Orbiter................44
I/O Board.................................................52 Mass Memory and Formatting Unit............
IBM System 360......................................32 .............................................6, 54, 57, 82
IDEF0....................................................136 Master Timeline Manager.....................121
IEEE 1355...............................................60 Mechanical loads....................................72
IEEE 802.3..............................................59 Memory failure......................................126
IEEE standards.....................................167 Mercury program....................................23
In circuit debugger..................................66 Metamodel....................................140, 147
Instrument operations sequence..........200 MeteoSat...............................................245
Integral....................................................41 MetOp.........................................41, 42, 76
Integrated circuit...............................30, 37 Microprocessor.......................................54
Integrated circuits...................................36 MIL-STD-1553..........58, 92, 122, 125, 156
Intel 80386..............................................46 MIL-STD-1750..............41, 42, 45, 46, 120
Intel 80486..............................................49 MIL-STD-1815........................................41
Intel 80x86..............................................43 MIPS.................................................46, 54
Interface driver........................................91 Mission analysis........................................9
Internal memory......................................54 Mission Control System........234, 236, 244
Interrupt.................................................125 Mission execution autonomy................212
IP-Core..............................................77, 79 Mission lifetime.....................................241
Mission planning...................241, 254, 256
J Mission planning tool............................256
Java......................................................127 Mission scenarios.................................180
Joint Test Action Group..........................66
JOVIAL............................................42, 135 Mode concept...............................192, 193
JTAG interface........................................66 Monitor..................................................206
Monitoring.....................................204, 221 Oscillator...............................................206
Motorola 68xxx........................................46 OSI layer model..........................59, 60, 92
Multiplexer Access Point Identifier........112
Multitasking.............................................30 P
Packet Category....................................112
N Packet store..........................................109
Navigation receiver.........................82, 131 Packet structure....................................204
New Horizons........................................258 Packet Utilization Standard........5, 92, 102
NOAA-18.................................................42 Parameter monitoring...........................205
Nominal operations orbit.........................10 Parameter Type Code...........................204
Nominal Operations Phase...................192 Payload commissioning................198, 251
Non Return to Zero.................................62 Payload Data Handling and Transmission
NV RAM................................................221 ...........................................................224
NV ROM..................................................56 Payload Ground Segment....................226
Payload management computer...............6
O Payload Management Computer............42
OBCP....................................................205 Performance characterization...............250
Onboard autonomy...............................211 Phase locked loop oscillator.................210
Onboard computer....................4, 6, 22, 52 Pioneer..............................................34, 37
Onboard computer housing....................72 Platform commissioning........................198
Onboard computer mechanical design...72 Playback Telemetry.........................63, 114
Onboard computer models.....................76 Position-tagged command....................241
Onboard computer simulation model....155 Power Control and Distribution Unit.....223
Onboard Control Procedure........................ Power supply....................................54, 68
...................................110, 120, 201, 206 PowerPC.......................43, 44, 46, 54, 120
Onboard data handling..........................111 Pre-launch Phase.................................192
Onboard software...............................4, 88 Preliminary Design Review.......................8
Onboard Software Data Pool..........93, 187 Preliminary Requirements Review...........8
Onboard software dump...............106, 220 Printed circuit board..........................69, 72
Onboard software dynamic architecture PROBA-1........................................45, 163
...........................................................120 Process ID............................112, 190, 204
Onboard software function.......................... Process improvement technology........214
Product tree.............................................11 Program / erase cycle.............................83
Onboard software kernel...............118, 120 Program / erase cycle.............................83
Onboard software patch............................. Programmable Read Only Memory........56
....................37, 106, 193, 220, 224, 241 Project for Onboard Autonomy.............254
Onboard software requirements...........132 Proto Flight Model...................................76
Onboard software static architecture......90 PSLV.......................................................41
Onboard software tests.........................135 PUS event.............................................117
Onboard synchronization......................206 PUS monitor..........................................117
Onboard time..........................................94 PUS services........................................102
Operating system....................................30
Operational constraints.........................225 Q
Operations concept...............................180 Qualification Review.................................8
Operations Interface Requirements
Document..............................................4 R
Operations plan....................................180 RAD6000..........................................43, 44
Operations procedure...........................180 RAD750..................................................44
Orbit analysis........................................235 Radiation.............................................5, 22
Orbit control maneuver...................10, 200 Radiation hard circuitry...........................78
Radio Technical Commission for Service Type.........................................102
Aeronautics........................................167 Shift handover.......................................244
Random Access Memory........................56 Shift plan...............................................246
Ranging.........................................210, 247 Shock loads............................................72
RCA (CDP) 1802 microprocessor...........39 Shuttle Data Processing System............32
Re-orbiting..............................................10 SIMSAT.................................................245
Read-only core rope memory.................30 Simulation session................................244
Realtime Operating System..............44, 91 Simulator interface card........................158
Realtime Telemetry.........................63, 114 Simulator telemetry packet...................154
Reconfiguration.....................................221 Simulator-Frontend.......................159, 161
Reconfiguration log.......................118, 205 Sine vibration..........................................72
Reconfiguration unit..........................54, 65 Single Event Upset.........................57, 126
Recovery sequence..............................200 SkyLab....................................................32
Recovery thread...................................125 SMOS.....................................................45
Redundancy........................................5, 65 Software coding....................................147
Redundancy concept............................216 Software design....................................135
Redundancy design..............................219 Software development standard...........166
Remote Interface Unit.......................52, 54 Software engineering............................167
Remote terminals....................................59 Software functional analysis.................130
Rendezvous maneuver...........................24 Software in the Loop.....................149, 152
Requirements analysis.........................135 Software product assurance.................167
Review milestones................................170 Software requirements definition..........132
Review of design..................................135 Software verification and testing...........148
RISC.................................................42, 43 Software Verification Facility.......................
RMAP protocol........................................92 .............................................27, 152, 245
Rosetta....................................................41 Solid State Recorder...............................82
Router.....................................................58 Space Segment User Manual.......186, 201
RS/6000..................................................43 Space Shuttle..........................................32
RTEMS....................................................46 Space Transportation System................32
Rule network.........................................258 Spacecraft commandability..................187
Spacecraft Communication and Command
S Subystem.............................................35
S/C configuration..................................188 Spacecraft configuration handling........187
S/C control..............................................90 Spacecraft Configuration Vector.....57, 187
S/C mode................................10, 193, 250 Spacecraft Controller On a Chip.............48
S/C status.............................................188 Spacecraft observability........................204
Safe Mode................................................... Spacecraft operations...........................245
...........................117, 206, 211, 220, 223 Spacecraft operations concept.................9
Safe Mode recovery..............................224 Spacecraft Operations Concept Document
Safeguard memory..................................... ...................................................186, 201
...............................54, 57, 126, 187, 204 Spacecraft operations manager...........237
Satellite operations...................................4 Spacecraft Operations Manager...........238
Satellite Reference Database...............227 Spacecraft State Vector........................202
Satellite Requirements Specification........4 SpaceWire............................47, 58, 60, 92
Scheduling cycle...................................121 SpaceWire routing..................................61
Science data management...................208 SPARC............................................46, 120
Science data memory.............................57 Special Checkout Equipment...............160
SCOC3....................................................48 Sputnik....................................................23
Sentinel 1 to 4.........................................45 State machine.......................................145
Service Interface.......54, 67, 115, 128, 155 Static RAM..............................................56
Structure ID...........................................204 TSC695...................................................45
Structured Analysis and Design Technique TTL circuits..............................................36
...........................................................136
Subservice Type...................................102 U
SuperH..............................................46, 54 Umbilical connector.................67, 115, 246
Surface Mounted Device........................72 UML activity diagram............................144
SWARM............................................45, 76 UML class diagram...............................141
System 4 Pi.............................................32 UML communication diagram...............146
System initialization sequence.............200 UML component diagram.....................142
System log............................................118 UML composite structure diagram........141
System on Chip................................46, 92 UML deployment diagram.....................142
System reconfiguration sequence........200 UML object diagram..............................143
System Requirements Document.............4 UML package diagram..........................143
System Requirements Review.................8 UML profiling.........................................147
System simulation.................................240 UML sequence diagram........................146
System Testbench.................................156 UML state machine diagram.................145
System Validation Test..................231, 246 UML timing diagram..............................147
UML use case diagram.........................144
T Unified Modeling Language..................140
TanDEM-X.......................................45, 248 Unit reconfiguration sequence..............200
Task.......................................................120 Unit switch-on sequence.......................200
Task control...........................................120 US Federal Aviation Administration......167
Task scheduling....................................191 User request.................................214, 241
Technology Readiness Level..................76 User requests........................................255
Telecommand frame.............................112 UT699...............................................46, 47
Telecommand processing.......................89
Telemetry generation..............................89 V
Temperature cycles.................................72 Venus Express........................................41
TerraSAR-X.............................45, 245, 248 Viking Mars landers................................35
Test Readiness Review........................170 Virtual Channel................98, 109, 112, 114
Thermal control equipment.....................69 Vostok.....................................................23
Thread...................................................120 Voyager missions........................34, 35, 38
Time-tagged command.........................241 VxWorks..........................................44, 254
Timeline.................................................255
TM/TC-Frontend...................154, 156, 159 W
Trajectory................................................10 Wear prevention techniques...................83
Trajectory injection................................197 Wear problem.........................................83
Transfer orbit...........................................10 Work memory..........................................56
Transition to Safe Mode........................224
Transition to Safe Mode sequence.......200 X
Transponder interface.............................54 XML.......................................................154
TSC21020...............................................49 XMM........................................................41
