Jens Eickhoff
Onboard Computers,
Onboard Software and
Satellite Operations
An Introduction
Prof. Dr.-Ing. Jens Eickhoff
Institute of Space Systems (IRS),
University of Stuttgart,
Germany
DOI 10.1007/978-3-642-25170-2
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation,
reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of
this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and
permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement,
that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
September 2011
List of Abbreviations
Part I Context
1 Introduction
1.1 Design Aspects
1.2 Onboard Computers and Data Links
2 Mission / Spacecraft Analysis and Design
2.1 Phases and Tasks in Spacecraft Development
2.2 Phase A – Mission Analysis
2.3 Phase B – Spacecraft Design Definition
2.4 Phase C – Spacecraft Design Refinement
2.5 Phase D – Spacecraft Flight Model Production
2.5.1 Launcher Selection
2.5.2 Launch and Early Orbit Phase Engineering
2.5.3 Onboard Software and Hardware Design Freeze
Technical Abbreviations
AES Advanced Encryption Standard
AFT Abbreviated Function Test
AGC Apollo Guidance Computer
AIT Assembly, Integration and Test
AOCS Attitude and Orbit Control System
APID Application Process ID
ASIC Application Specific Integrated Circuit
ATV Automated Transfer Vehicle
BC Bus Controller
BGA Ball Grid Array
BIOS Basic Input / Output System
CADU Channel Access Data Unit
CAN Controller Area Network
CASE Computer-Aided Software Engineering
CDMU Control and Data Management Unit
CDR Critical Design Review
CISC Complex Instruction Set Computer
CLTU Command Link Transfer Unit
CM Apollo Command Module
CPU Central Processing Unit
DDF Design Definition File
DHS Data Handling System
DJF Design Justification File
DLR Deutsches Zentrum für Luft- und Raumfahrt
DMA Direct Memory Access
DMAC Direct Memory Access Controller
DORIS Doppler Orbitography and Radiopositioning Integrated by Satellite
DPS Shuttle Data Processing System
DRD Document Requirement Definition
EBB Elegant Breadboard
ECC ESTRACK Control Center
EDAC Error Detection and Correction
EEPROM Electrically Erasable Programmable Read Only Memory
EFM Electrical Functional Model
EM Engineering Model
EMC Electromagnetic Compatibility
EQM Engineering Qualification Model
ESA European Space Agency
Part I
Context
1 Introduction
J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012
Although the payloads of a satellite, such as radar or optical instruments, are the
principal performance drivers for a spacecraft, the platform control functionality plays a
significant role in mission efficiency. Considering key characteristics like the payload
data geolocation precision required by today's Earth observation missions, the
requirements on the satellite platform control functionality are continuously
increasing. The same trend can be observed for specific missions like Earth gravity
field measurements, for deep space missions and for recent concepts for Earth
observation from geostationary orbit positions.
The platform control functionality is centrally driven by the functionality included in the
onboard software (OBSW) and by the operational flexibility from ground, which in turn
is based on onboard software functions and features. The performance of the onboard
software itself is driven, and limited, by the performance of the available
onboard computer (OBC) hardware. Thus the chain of spacecraft operations from
ground, complemented by the OBSW controlling platform and payload
equipment via the OBC hardware, is the key system engineering challenge.
Concerning the onboard computers, the software and the operations concept, a
number of aspects have to be taken into consideration. Compared to standard
industrial embedded controllers or automotive controllers, onboard computers have
to provide
● significant failure robustness, only achievable by internal redundancy,
● electromagnetic compatibility (EMC) with the space environment conditions,
● and in addition radiation robustness against high-energy particles.
● The latter cannot be achieved by the standard highly integrated circuit (IC)
designs used in today’s PC microprocessors. Space application processors
require a lower circuit integration density and further manufacturing specifics.
● This in turn results in lower achievable processor clock frequencies (20-
66 MHz are typical values).
● Furthermore, onboard computers today still have to serve a large number of
different types of interfaces, such as:
◊ serial or LVDS interfaces on the transponder side,
◊ analog and data bus interfaces on the platform and payload equipment side.
● And finally, these interface connections at least partly need to be
redundant.
Similar dedicated constraints affect the onboard software of a satellite. The OBSW
needs to be
● a realtime control software,
● allowing both interactive spacecraft remote control and automated/
autonomous control.
● The onboard software concept today is typically a service-based architecture
covering several control and input/output (I/O) levels:
◊ data I/O handlers and data bus protocols,
◊ control routines for payloads, AOCS, thermal and power subsystems,
◊ up to Failure Detection, Isolation and Recovery (FDIR) routines.
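The layering above can be sketched as a minimal Python illustration. All class names, channel names and threshold values are invented for the example; real OBSW is typically written in C/C++ or Ada on a realtime operating system:

```python
# Minimal sketch of a service-based OBSW layering: I/O handlers at the
# bottom, subsystem control routines in the middle, FDIR on top.
# All names and values are hypothetical illustrations.

class DataIoHandler:
    """Lowest level: raw data I/O and data bus protocol handling."""
    def read(self, channel):
        # In real OBSW this would drive e.g. a MIL-STD-1553 transfer.
        return {"channel": channel, "raw": 0x2A}

class ThermalControl:
    """Mid level: a subsystem control routine polling its sensors."""
    def __init__(self, io):
        self.io = io
    def step(self):
        sample = self.io.read("thermistor_1")
        # Invented threshold: switch the heater below 0x30 raw counts.
        return "heater_on" if sample["raw"] < 0x30 else "heater_off"

class Fdir:
    """Top level: Failure Detection, Isolation and Recovery plausibility check."""
    def check(self, command):
        return command in ("heater_on", "heater_off")

io = DataIoHandler()
thermal = ThermalControl(io)
fdir = Fdir()

cmd = thermal.step()
print(cmd)  # -> heater_on
```

In flight software each level would run as its own task set under the realtime OS scheduler; the point here is only the strict layering of the calls.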
2 Mission / Spacecraft Analysis and Design
The following figure shows the phase breakdown of spacecraft development, together
with the main tasks to be performed within each phase. Figure 2.2 additionally
depicts the prescribed review milestones according to ECSS-M-30A.
[Figure 2.2: Spacecraft development phases 0/A through F with their main tasks (mission and payload concept analysis, equipment and performance verification, EGSE development, test procedure development and validation, unit and subsystem tests, payload calibration and troubleshooting support) and the review milestones MDR, PRR, SRR, PDR, CDR, QR and FAR across mission requirements definition, design definition, verification & qualification, production, launch, operation and deorbiting.]
Mission analysis is already performed in the early phases 0/A of a project. These
analysis phases yield the requirements on the space and ground segments of the
mission, which are further refined in phase B up to the PDR review.
The system design – also concerning OBCs, OBSW and operations concept – starts
after SRR. Thus over phases A-C up to CDR the following elements must be defined:
● S/C payloads and their functions
● S/C orbit / trajectories / maneuvers
● S/C operational modes
● Required S/C AOCS and platform subsystems
● Used onboard equipment and according design
● Ground / space link equipment
● Onboard functions for system and equipment monitoring and control
● Autonomous functions, e.g. for the “Launch and Early Orbit Phase” (LEOP)
timeline execution
● FDIR functions, Safe Mode handling etc.
● Test functions
● Identification of functions to be realized in hardware or in software, respectively
All these are essential drivers for OBC and OBSW design, the spacecraft's top level
and subsystem design as well as for the spacecraft operations concept.
Next follows the conceptual requirements definition and technology selection for the
main functional components such as
● AOCS subsystem sensors / actuators,
● power subsystem equipment,
● thermal subsystem equipment,
● data handling subsystem equipment.
And finally come the first definitions on
● elementary PL modes,
● elementary S/C modes,
● plus non-functional design data such as budgets (mass, power).
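Such non-functional budgets are in essence margin bookkeeping. The sketch below aggregates a mass budget with per-equipment maturity margins plus a system-level margin; all equipment names, masses and margin percentages are invented for illustration, and real projects apply their agency's own margin philosophy:

```python
# Hedged sketch of a mass budget with maturity margins.
# Entries and percentages are invented examples, not mission data.

equipment = [
    # (name, mass_kg, maturity_margin): low for off-the-shelf, high for new designs
    ("OBC",          12.0, 0.05),
    ("Star tracker",  3.5, 0.10),
    ("Payload",      85.0, 0.20),
]

nominal = sum(m for _, m, _ in equipment)
with_margins = sum(m * (1.0 + mar) for _, m, mar in equipment)
system_level = with_margins * 1.20   # assumed 20% system margin on top

print(round(nominal, 2), round(with_margins, 2), round(system_level, 2))
```

The same pattern applies to the power budget, with per-mode consumption instead of a single mass figure.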
The following is the first of four consecutive figures sketching out, from top to
bottom, the growing level of design detail reached in each development phase.
Phase B serves as the first complete design definition on system level. This includes a
number of detailed analyses in various fields. Without claiming completeness, the
most prominent ones shall be cited together with their subtasks. One is the
refinement of the orbit definition, which includes
● nominal operations orbit,
● transfer orbits / trajectories including LEOP trajectories,
● orbit control maneuvers and
● de-orbiting / re-orbiting after end of life.
Closely associated with the orbits, maneuvers and trajectories is the definition of the
spacecraft's operational modes in nominal and failure conditions. The figure below
depicts an example of a spacecraft level mode diagram, including the notation of
equipment operational modes versus spacecraft modes.
Figure 2.7: Equipment operational modes versus spacecraft modes. © Astrium GmbH
With this information becoming available, a first definition of variable sets – so-called
data pools – for the OBSW can be established, namely the definition of
● variables to be managed via spacecraft telecommands and telemetry,
● equipment onboard command and telemetry parameters,
● and the complementary set of data bus interface variables to be managed.
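Conceptually, such a data pool is a named set of onboard variables grouped by their management path, as in the list above. A minimal Python sketch; all variable names and groupings are hypothetical, not from any real mission database:

```python
# Sketch of an OBSW "data pool": onboard variables grouped by how they
# are managed. All entry names are invented examples.

data_pool = {
    "tc_tm":     {"sc_mode": "NOMINAL", "battery_v": 28.1},  # ground TC/TM variables
    "equipment": {"strk_quat": (0.0, 0.0, 0.0, 1.0)},        # unit command/TM parameters
    "bus_if":    {"mil1553_rt5_hk": b"\x00\x2a"},            # data bus interface variables
}

def get(pool, group, name):
    """Uniform accessor the control routines would use."""
    return pool[group][name]

print(get(data_pool, "tc_tm", "battery_v"))  # -> 28.1
```

In real OBSW the pool is generated from the mission database, so that ground systems and flight software share one consistent variable definition.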
In phase B of the S/C development the OBSW architectural design already starts, and
the subsequent stages are incrementally defined, as OBSW is usually developed in a
stepwise approach. Of the large number of design refinements performed in
the next phase C, only those shall be followed further which concern the onboard
computers, the software and the S/C operations from ground.
2.4 Phase C – Spacecraft Design Refinement
The first step in phase C is the freeze of the product tree and the completion of the
supplier selection for onboard equipment. These final decisions then allow
● the completion of interface definitions between onboard equipment (hardware,
signal types / levels and data protocols),
● the design consolidation for interfaces between OBC and onboard equipment,
◊ either implemented via data buses or
◊ as low level line interfaces via a so-called “Remote Interface Unit” (RIU)
connected to the core OBC.1
● Furthermore, the design for so-called “High Priority Command” (HPC)
interfaces can be finalized. Such HPC lines are commandable from ground
even when the OBSW has problems or is down for emergency reconfiguration.
● And with the consolidation of the electrical and data handling design via the RIU,
the onboard software variable sets (“data pools”) can finally be refined for
◊ ground/space TC/TM,
◊ the core OBC,
◊ data handled via the RIU and
◊ TC/TM data of onboard equipment like sensors, actuators and
instruments.
1
Such an RIU in most cases is connected to the OBC via a data bus and provides all required types of low level
interfaces (analog, serial, bi-level, pulse) for the control of simple equipment like heaters, simple sensors etc.
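The operational point of the HPC lines described above can be shown with a small routing sketch: HPCs are decoded in hardware and therefore remain commandable even when the OBSW is down, while ordinary telecommands need a running OBSW. All command names are invented:

```python
# Sketch of HPC vs. ordinary telecommand routing. Command names are
# hypothetical; real HPCs are identified by their packet routing, not
# by string matching.

HPC_IDS = {"RECONFIGURE_OBC", "SWITCH_TO_REDUNDANT_PM"}

def route_telecommand(cmd, obsw_running):
    """Route a telecommand: HPCs bypass the OBSW entirely."""
    if cmd in HPC_IDS:
        return "hw_decoder"       # dedicated line, decoded in hardware
    if obsw_running:
        return "obsw_tc_handler"  # normal software TC processing
    return "rejected"             # OBSW down and not an HPC

# An HPC still works while the OBSW is being reconfigured:
print(route_telecommand("RECONFIGURE_OBC", obsw_running=False))  # -> hw_decoder
```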
During phase C, significant design input for the OBSW is thus consolidated, and
during this phase the OBSW development is advanced to detailed design, coding
and the verification of first versions. The detailed roadmap is project specific.
In phase C the design of the spacecraft was completed and Engineering Models of
the diverse onboard equipment (including instruments and payloads) were
developed and qualified. Phase D thereafter is devoted to the production of the S/C
Flight Model. At the beginning of this phase the S/C prime contractor procures the
flight models of all required equipment as well as the spacecraft structure and flight
harness. During the assembly, integration and test (AIT) program these are
subsequently assembled.
Another important step at the beginning of phase D, after project CDR, is the final
selection of the launcher, since for most conventional Earth observation and
science satellite missions multiple launcher options exist. During the previous design
phases the S/C design has deliberately been kept compatible with the 2-3
most likely carriers. The primary selection of potential launchers, performed
during phase B, already evaluates parameters like
● mass to orbit,
● suitability for the target orbit, depending on inclination, escape velocity and
launcher upper stage reignition requirements,
● overall launcher ΔV.
The final selection in phase D then is mainly driven by launch slot availability, cost
and the qualification status of new launcher types. The following figure shows an
example launch site. [Figure: Plesetsk launch site, 62.70° N, 40.35° E.]
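The primary selection parameters listed above lend themselves to a simple feasibility-and-ranking sketch; all launcher names, capabilities and relative costs below are invented for illustration:

```python
# Toy launcher trade: filter by mass-to-orbit capability and supported
# target inclination, then rank by relative cost. All data is invented.

candidates = [
    # name, mass_to_orbit_kg, supported_inclinations_deg, relative_cost
    ("Launcher A", 1500, (97.4, 98.7), 1.0),
    ("Launcher B",  900, (97.4,),      0.6),
    ("Launcher C", 4000, (51.6, 97.4), 2.5),
]

def feasible(sat_mass_kg, target_inc_deg):
    """Keep launchers that can lift the mass to the target inclination."""
    return [c for c in candidates
            if c[1] >= sat_mass_kg and target_inc_deg in c[2]]

ranked = sorted(feasible(1200, 97.4), key=lambda c: c[3])
print([c[0] for c in ranked])  # -> ['Launcher A', 'Launcher C']
```

A real trade also weighs slot availability and qualification status, which are schedule-driven rather than numeric and therefore decide the final pick in phase D.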
Launch and early orbit phase engineering implies the detailed development of the
automated sequences on board the satellite, starting from separation detection. These include
● the OBC taking over control of the S/C after deployment by the launcher's
upper stage,
● automatic position and attitude / rotational rate determination,
● automated rate damping,
● automatic deployment of antennas and solar panels,
● up to the establishment of ground station contact.
Such sequences are subject to tests in the S/C assembly phase prior to launch and will
be treated in more detail in Part IV of this book.
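The LEOP sequence above is essentially a linear chain of steps, each gated by onboard status checks. A toy sketch; the step names follow the list above, while the success checks stand in for real sensor evaluations:

```python
# Sketch of the LEOP automated sequence as a linear state machine:
# each step must report success before the next one starts.

LEOP_STEPS = [
    "separation_detected",
    "attitude_and_rate_determined",
    "rates_damped",
    "appendages_deployed",           # antennas and solar panels
    "ground_contact_established",
]

def run_leop(step_ok):
    """step_ok(name) -> bool; returns the steps completed, in order."""
    done = []
    for step in LEOP_STEPS:
        if not step_ok(step):
            break                    # in real OBSW, FDIR would take over here
        done.append(step)
    return done

# Nominal case: every step succeeds.
print(len(run_leop(lambda s: True)))  # -> 5
```

A failure mid-sequence, e.g. during rate damping, stops the timeline after the completed steps, which is exactly where FDIR and Safe Mode handling enter.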
Part II
Onboard Computers
Historic Introduction to Onboard Computers
[The body of this chapter, a historic survey of onboard computers, did not survive text extraction; only scattered fragments remain (among them references to MIL-STD-1750 processors, the ERC32 and the LEON3-FT) and it is omitted here.]
Onboard Computer Main Elements
Figure 1.2 and in more detail figure 4.3 show the embedding of an OBC into the
overall spacecraft avionics system. The OBC is connected to the transponders for
interfacing with the ground, has bus interfaces connecting it to intelligent spacecraft
control units and to the “Remote Interface Unit”, (RIU) – in figure 4.3 called “I/O
Board” – and finally the OBC has interfaces to dedicated payload computers or
controllers. The RIU couples the OBC to all spacecraft equipment which does not
provide a high level data bus interface. The following figures show examples of real
OBCs to give an impression of state-of-the-art machines (both ERC32 based). The
CryoSat OBC in figure 4.1, cabled in the test bench, is an engineering model.
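The topology described above, bus-capable equipment connected to the OBC directly and simple equipment reached via the RIU, can be captured in a small sketch; the equipment entries and link types are illustrative assumptions, not a real avionics database:

```python
# Sketch of the avionics topology: which equipment talks to the OBC over
# a data bus, and which is reached via the RIU's low level lines.
# All entries are invented examples.

avionics = {
    "transponder":  {"link": "serial/LVDS",    "via": "OBC"},
    "star_tracker": {"link": "MIL-STD-1553",   "via": "OBC"},
    "payload_ctrl": {"link": "data bus",       "via": "OBC"},
    "heater_bank":  {"link": "pulse/bi-level", "via": "RIU"},
    "thermistors":  {"link": "analog",         "via": "RIU"},
}

via_riu = sorted(k for k, v in avionics.items() if v["via"] == "RIU")
print(via_riu)  # -> ['heater_bank', 'thermistors']
```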
[The remainder of this chapter on the onboard computer main elements, including its data bus interfaces (legible fragments reference the MIL-STD-1553 bus), did not survive text extraction and is omitted here.]
5 OBC Mechanical Design
The mechanical design of an OBC at first glance seems a rather simple task
compared to the electronics. However, it has to fulfill a number of non-trivial
requirements and must not be underestimated. Not only the interconnection between
OBC PCBs and OBC housing but also the entire assembly of the OBC has to
withstand
● sine vibration and shock loads during launch and
● permanent temperature cycles (and resulting mechanical implications) in orbit.
First of all, the chips are mounted onto the boards with appropriate solder connections.
The most common designs are “Surface Mounted Device” (SMD) or
“Ball Grid Array” (BGA) assemblies.
Since an OBC, as discussed in the previous chapters, consists of multiple printed
circuit boards, the mechanical chassis architecture obviously has to follow this
concept. Therefore each OBC circuit board is first mounted into an aluminum
frame. An example is given in figure 5.1. Such a frame has to hold the PCB and its
connectors. The connectors typically are attached to the
board by short flexible wiring to avoid mechanical loads on the soldering points
when the board minimally vibrates due to launcher induced mechanical loads.
Besides the outer mounting points of the board in the frame there may exist further
intermediate fixation points, which however may not interfere with electronic
components nor with circuits inside the multilayer PCB.
The entire group of several such frames has to be assembled into an overall OBC
housing which is additionally equipped with mounting stands allowing bolting to the
S/C structure. Therefore the overall chassis design becomes rather complex. Figures
6.2 and 6.3 show an OBC flight model chassis from front and rear side.
Another important aspect of the OBC design is to assure tightness with respect to
electromagnetic emission. The chips in today’s OBCs are clocked at rather high
frequencies, and so are modern bus interfaces, e.g. the already cited SpaceWire.
Even if it is obvious that data bus cabling from OBC to S/C equipment is shielded, it
has to be considered that the signal lines inside the OBC from connector to the PCB
are single wires (see figure above), even if they are short. In principle they
induce electromagnetic emission effects. This is the reason why in the example of
figure 5.1 above the individual wiring groups from connector to PCB are placed in
dedicated frame “subcompartments”. Similar antenna effects can be induced by
longer lines between chips on a PCB, which should be avoided, and obviously also
by wiring between PCBs inside the OBC.
Even if the electronics inside the OBC do not affect each other, the overall OBC must
be electromagnetically tight against effects induced from the external
environment (such as solar bursts) which would otherwise directly affect the
aforementioned wiring from connectors to boards or the cross-PCB lines.
For those reasons the final chassis, assembled from the multiple PCB frames, must
be a closed metal-on-metal construction, which is usually achieved by mounting the
outer plates with closely spaced screws, as nicely illustrated in the
figure below and in figure 6.3 in the next chapter.
6 OBC Development
OBC models differ from mission to mission. In the telecom satellite domain the
highest level of standardization can be achieved, since such spacecraft are
true series products concerning their platform, equipped with more or fewer
transponders and positioned at different geostationary longitudes.
The contrary is true for Earth observation and science spacecraft. Here normally only
a single “Proto Flight Model” (PFM) is built, or a mini series is constructed, such as
ERS-1/2, MetOp 1-3, GRACE (constellation of 2 S/C) or SWARM (constellation of 3
S/C). For each mission the OBCs have to be adapted to a certain extent:
● At least the RIUs differ significantly between missions due to other
instrumentation on board and in most cases due to highly differing AOCS.
● But also OBC cores differ w.r.t. required authentication functions, performance
requirements, memory equipment, high priority line instrumentation (HPCs) as
well as firmware and boot SW setups.
Therefore for each science S/C, and for each mini or large S/C series, at least one OBC has to be
built. However, for an OBC which represents a design adaptation of a previous one,
the final “Flight Model” (FM) usually cannot be built directly. If a new OBC
furthermore has to implement entirely new technologies for the first time, like a new
data bus type (such as SpaceWire) or even a new microprocessor generation, a
complete set of prototypes has to be implemented and tested prior to the FM.
In 1995 NASA released a 9-level classification for the definition of technology maturity
(cf. [65]) which, slightly adapted, is also applied by ESA to its projects (cf. [66]).
The “Technology Readiness Levels” are defined as follows, together with the
corresponding intermediate flight hardware prototypes to be implemented:
Table 6.1: TRL levels, their definition and OBC models (key features only).
TRL TRL Definition OBC Model
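The table body is not reproduced here. The nine TRL definitions below are abridged from NASA's standard wording; the mapping to OBC model types on the right is an assumption for illustration only, reusing the model names from the abbreviation list (EBB, EM, EQM, FM), and is not the book's actual table content:

```python
# Standard NASA TRL scale (abridged wording). The OBC model mapping is an
# assumed illustration, NOT the book's table content.

TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Proof-of-concept of critical function/characteristic",
    4: "Component/breadboard validation in laboratory environment",
    5: "Component/breadboard validation in relevant environment",
    6: "System/subsystem prototype demonstration in relevant environment",
    7: "System prototype demonstration in a space environment",
    8: "Actual system flight qualified through test and demonstration",
    9: "Actual system flight proven through successful mission operations",
}

OBC_MODEL_GUESS = {4: "EBB", 5: "EM", 6: "EQM", 8: "FM"}  # assumed mapping

for level in sorted(TRL):
    print(level, TRL[level], OBC_MODEL_GUESS.get(level, "-"))
```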
The three most important steps which a spacecraft engineer on prime contractor side
might be confronted with shall be briefly discussed here:
Development boards:
These boards serve for basic OBSW development tasks like
● adaptation of the operating system code,
● bus or other interface driver development,
● boot software development,
● algorithm performance verifications / optimizations
etc.
Depending on the supplier they are offered either as FPGA boards with IP core only or with the real target ASIC processor chip. Examples of such boards were already presented in figures 3.30 and 3.31.
Development boards in most cases are equipped with additional RAM and interfaces
– such as a PCI bus interface – which ease code debugging and integration of the
board into a development computer. Figure 3.31 for example shows an OBSW
development board with a real target processor, diverse I/O interfaces and
connectors for piggy-back boards for memory extension PCBs etc.
3) In rare cases between TRL 6 and 7 an additional “Qualification Model”, (QM), is built.
J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012
Special Onboard Computers
Inside a satellite the central OBC is usually not the only computer. Besides
computers in instruments there are typical AOCS equipment components which
include considerable computational power. The most obvious components are
navigation receivers for GPS, Galileo and / or GLONASS. Another class of equipment
requiring significant CPU performance are star trackers. Modern star trackers are
equipped with their own ERC32 or even LEON processor for fast star map
identification and quaternion computation. These units however are very specific
electronic equipment.
A further class of electronic components, the “Mass Memory and Formatting Units”, (MMFU), also called “Solid State Recorders”, (SSR), are essentially OBCs with
● extremely large storage memory areas,
● very powerful data input channels from the payload side for science data storage
● and fast data output channels to the science data transponders for downlink to ground via X-band or Ka-band.
Memory is organized in memory banks, and management is performed by SW such that even the failure of an entire bank does not lead to immediate data loss (cf. [71]).
Some recorders provide integrated data compression units, some suppliers offer them as separate external units.
Figure 7.1: TerraSAR-X Solid State Recorder and memory board. © Astrium GmbH
SSRs are by standard built on SDRAM technology, which requires cyclic memory refresh; thus at least their memory boards may not be power-cycled between science data acquisition and downlink to ground. Therefore appropriate power buffering electronics are required for the case where the S/C encounters a power bus undervoltage condition or similar.
The latest recorder generations are based on non-volatile flash memory technology (cf. [67]). Flash memory is a storage technology that retains its content at power-off. It is used in the popular USB sticks as well as in camera and mobile phone storage cards. It is a specific type of EEPROM that is programmed and erased block-wise. Since stored data is non-volatile and since the devices are very compact and have no moving parts (in contrast to the ancient tape drives on missions like Voyager), flash memory is almost ideal for use in space. Due to the memory persistence such SSRs are robust against onboard power undervoltage situations.
However, flash memory sustains only a limited number of program / erase cycles, (P/E cycles), before wear begins. High quality ground based systems today reach about 1 million P/E cycles and an unlimited number of read accesses. To mitigate wear problems, flash memory storage devices – such as SSD disks for commercial computers or SSRs for space – today comprise software which monitors the P/E cycles per block and statistically balances P/E accesses over the overall memory bank.
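The statistical balancing of P/E cycles described above can be sketched in a few lines. This is a simplified model, not a real SSR implementation; all names, the block count and the P/E limit are illustrative:

```python
# Minimal sketch of P/E-cycle based wear levelling: every programme/erase
# targets the least-worn healthy block, so wear spreads statistically over
# the whole bank, and worn-out blocks are retired by bad-block management.

class FlashBank:
    def __init__(self, n_blocks, pe_limit=1_000_000):
        self.pe_count = [0] * n_blocks     # programme/erase cycles per block
        self.bad_blocks = set()            # blocks mapped out as unusable
        self.pe_limit = pe_limit

    def allocate_block(self):
        """Pick the healthy block with the fewest P/E cycles so far."""
        candidates = [b for b in range(len(self.pe_count))
                      if b not in self.bad_blocks]
        if not candidates:
            raise RuntimeError("no healthy flash blocks left")
        return min(candidates, key=lambda b: self.pe_count[b])

    def program_erase(self, block):
        self.pe_count[block] += 1
        if self.pe_count[block] >= self.pe_limit:
            self.bad_blocks.add(block)     # retire the worn-out block

bank = FlashBank(n_blocks=8)
for _ in range(80):                        # 80 writes spread over 8 blocks
    bank.program_erase(bank.allocate_block())
print(max(bank.pe_count) - min(bank.pe_count))   # → 0, wear is balanced
```

Real SSR firmware additionally has to persist the per-block counters themselves and to cache incoming data streams, which is exactly why these units need considerable CPU power.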
These wear prevention techniques – P/E cycle management by SW, plus bad block identification and management via SW, plus data stream caching on the input side and distribution to the various memory banks – however require considerable CPU power. Therefore, as an example, the Astrium “Integrated Solid State Recorder”, (ISSR), is built on the basis of multiple LEON2 processors. So it becomes obvious that these systems are real computers and much more than “external solid state disk drives”.
Figure 7.2: Integrated Solid State Recorder. © Astrium GmbH
Perfection is achieved
only on the point of collapse.
C.N. Parkinson
Part III
Onboard Software
Onboard Software Static Architecture
Already in the introduction to chapter 7 it was explained that more S/C onboard units than just the central OBC are in fact computers. And obviously they contain and need onboard software for operation. Besides the computers cited in chapter 7, quite a significant number of microcontrollers are hidden in intelligent sensor and actuator units which include their own embedded software. Typical components providing functions realized in software are
● obviously the S/C platform central OBCs,
● instrument / payload control computers and payload data processors (image
compression units etc.),
● the aforementioned Memory Management and Formatting Units,
● Power Control and Distribution Units,
● and complex AOCS sensors and actuators such as
◊ the previously mentioned star trackers,
◊ GPS / Galileo / GLONASS receivers,
◊ other position sensors such as DORIS receivers,
◊ as well as fiber-optic gyros and
◊ intelligently controlled reaction wheels.
An example of onboard equipment driven by software is given below, based on the small university satellite already cited in figure 4.3.
(Figure: onboard software functions of this example satellite. The OBSW comprises ground TC/TM handling – receiving and depacketizing TCs, handling TC queues / procedures, generating S/C HK TM and submitting S/C TM –, failure detection, isolation and recovery – parameter monitoring, event handling, S/C failure handling –, control algorithms – controlling S/C modes, equipment modes and mode switching – and a service handling / data architecture layer on top of the operating system, connected to the S/C equipment via the RIU, serial protocol interfaces, analog TM/TC, data buses and high priority TC lines. The equipment side functions comprise measure / control, mode switching, status surveillance, reading onboard TCs and generating equipment TM.)
(Figure: OBSW layer structure – the application layer with system control, AOCS control, platform control, payload control and ground TC/TM handling resides above the BIOS / realtime operating system and hardware IF layer.)
The OBSW static architecture can be broken down to the following main elements:
● Operating system and drivers layer
● OBSW data pool
● Application layer
● OBSW interaction with ground control
● Service-based ground / space architecture
● Telecommand routing
● Telemetry downlink and channel multiplexing
● High priority command handling
● Service interface stub (SIF)
● Failure detection, isolation and recovery
The following figure is the first of a series depicting step by step the static architecture of an OBSW, from the lowest level (closest to processor and electronics) up to increasingly high-level, system control oriented functional blocks. Figure 8.4 depicts the operating system level.
Above the RTOS and IF drivers reside the equipment handlers. They perform the command writing from the OBSW via the IF drivers to the connected S/C equipment (e.g. to AOCS actuators, payloads, the PCDU). Furthermore they perform cyclic or on-request onboard equipment telemetry acquisition – which is not to be confused with space-to-ground telemetry.
(Figure: equipment handlers placing acquired STR1 / STR2 quaternions into the OBSW data pool, OBSW-DP.)
The implementation concept for the equipment handlers may be one handler per equipment type, e.g. one handler for a set of four reaction wheels, (RWL), one for a set of three magnetotorquers, (MTQ), etc. In such a case the handler has to serve all equipment instances (e.g. 4 RWLs at a time). The alternative is one handler instance of an equipment specific class: one class of handlers for RWLs, one for MTQs etc., and the 4 RWLs then each are served by one instance of the corresponding handler class.
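The two alternatives can be contrasted in a short sketch. The class names, the wheel count and the torque interface are purely illustrative, not taken from a real OBSW:

```python
# Alternative 1: one handler object serves all units of an equipment type.
class ReactionWheelSetHandler:
    def __init__(self, n_wheels):
        self.torques = [0.0] * n_wheels        # commanded torque per wheel

    def command_torque(self, wheel_idx, torque_nm):
        self.torques[wheel_idx] = torque_nm

# Alternative 2: one handler class, one instance per physical unit.
class ReactionWheelHandler:
    def __init__(self, unit_id):
        self.unit_id = unit_id
        self.torque = 0.0

    def command_torque(self, torque_nm):
        self.torque = torque_nm

# One handler for the whole set of four RWLs ...
set_handler = ReactionWheelSetHandler(n_wheels=4)
set_handler.command_torque(2, 0.05)

# ... versus four instances of one RWL handler class.
per_unit = [ReactionWheelHandler(i) for i in range(4)]
per_unit[2].command_torque(0.05)
```

Which variant is preferable depends e.g. on whether redundancy switching and failure flagging are easier to express per unit or per equipment set.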
Equipment handlers on the lower end are connected to the signal line drivers, to which they have to supply the equipment command data in a driver compatible format – which is not necessarily the format used on the outgoing physical signal line, data connection or data bus. The handlers pick the 'to be commanded' parameter values from a central data pool in the OBSW, the “Onboard Software Data Pool”, (OBSW-DP). Vice versa, when acquiring onboard telemetry from connected equipment, the equipment handler has to pick the data from the according SW interface of the RTOS IF driver and has to place them into the corresponding variable slots in the OBSW-DP. In both communication directions data format conversions are usually necessary. And – as will be treated later – certain data consistency checks on acquired TM are also to be applied during cyclic operation.
It should be noted that one equipment handler usually has to access multiple
physical signal line drivers. Using again the example of a modern reaction wheel
such an equipment handler will have to command wheel torque and to acquire wheel
speed telemetry typically via a data bus such as MIL bus. The wheel temperature
however will be acquired via an analog thermistor line. The wheel drive electronics in
most cases will have additional discrete status command lines besides the MIL bus.
As a result the data bus is a typical interface which is shared by multiple equipment
handlers – all those controlling bus connected equipment. The equipment handler to
I/O-line driver ratio is an N-to-M relation and access conflicts have to be avoided by a
well designed time sliced access approach. Therefore the equipment handlers will
again be addressed later when discussing the OBSW dynamic architecture in
chapter 9.
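A minimal sketch of such a time-sliced access scheme might look as follows. The slot layout, cycle length and handler names are invented for illustration; a real OBSW derives the schedule from the dynamic architecture treated in chapter 9:

```python
# Illustrative time-sliced access scheme for a data bus shared by several
# equipment handlers: each handler owns fixed slots in a static cyclic
# schedule, so the N-to-M handler/driver access can never conflict.

CYCLE_SLOTS = 8   # minor-cycle slots on the shared data bus

# Static slot allocation: handler name per slot (None = spare slot).
schedule = ["RWL", "STR", "RWL", "GPS", "RWL", "STR", "PCDU", None]

def handler_for_slot(slot_counter):
    """Return which handler may access the bus in the given slot."""
    return schedule[slot_counter % CYCLE_SLOTS]

# Over 16 slots each handler gets exactly its allocated, conflict-free share.
accesses = [handler_for_slot(t) for t in range(16)]
print(accesses[:8])
```

The design choice here is a purely static schedule; it wastes spare slots but makes bus access timing fully deterministic and verifiable.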
The cited OBSW data pool can best be understood as a “vector” containing – in binary format, not in engineering units – all operational S/C variables which are processed inside the OBSW, such as (non-exhaustively):
● System time
● Sensor data like star tracker quaternions
● Actuator data like RWL speed data or commanded RWL torque data
● Equipment temperatures
● Currents and voltages
● S/C position and velocity
● S/C attitude and rotational rates
● Payload instrument statuses
It must be pointed out explicitly that the OBSW-DP variable names or IDs preferably should be identical to the variable names in the OBSW code and to the variable names used in telecommand or telemetry packets.
Application functions like the AOCS, which are treated in the following chapter, use such variables of the OBSW-DP, and it must be avoided that they base their computations on invalid or outdated values of OBSW-DP variables. Therefore it is essential that each variable can be flagged as outdated or as invalid, e.g. if an equipment does not respond or shows other symptoms detectable by the equipment handler. The specific reaction to values being flagged as invalid or outdated is subject to the application which in the normal case needs the data as input.
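A minimal sketch of such a data pool entry with validity and age flagging is given below. The field names, the age limit and the quaternion variable are illustrative, not taken from any real OBSW:

```python
# Sketch of an OBSW-DP entry: each variable carries a validity flag and an
# update timestamp, so applications can reject invalid or outdated values.

class DataPoolEntry:
    def __init__(self, value, max_age_s=1.0):
        self.value = value
        self.valid = True
        self.timestamp = 0.0        # onboard time of last update
        self.max_age_s = max_age_s  # tolerated age before 'outdated'

    def update(self, value, now):
        self.value, self.timestamp, self.valid = value, now, True

    def is_usable(self, now):
        """Applications must not use invalid or outdated values."""
        return self.valid and (now - self.timestamp) <= self.max_age_s

dp = {"STR1_quaternion": DataPoolEntry((1.0, 0.0, 0.0, 0.0))}
dp["STR1_quaternion"].update((0.7, 0.1, 0.1, 0.7), now=10.0)
print(dp["STR1_quaternion"].is_usable(now=10.5))   # → True
print(dp["STR1_quaternion"].is_usable(now=12.0))   # → False (outdated)
```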
It also must be noted here that besides the OBSW-DP, containing the continuously updated status and performance variables of the spacecraft, there exists a persistent data memory area – the safeguard memory already mentioned in chapter 4.2. It contains the S/C redundancy settings, equipment health status parameters etc. All these are parsed by the OBSW at boot time for proper configuration and are cyclically updated. This “Spacecraft Configuration Vector”, (SCV), its content and use are explained in the operations chapter 13.2.
On top of the OBSW-DP reside the applications which control the spacecraft's
● payload instruments,
● AOCS,
● power subsystem and
● thermal control.
The applications read input data from the OBSW-DP and place computed output back into other variables of the OBSW-DP. The applications include the according controller numerics for AOCS, power and thermal control. Each application in principle has access to any OBSW-DP variable. Some variables are shared by all applications, such as the system clock onboard time, (OBT).
(Figure: the application layer – AOCS application, payload control application, power control application and thermal control application – on top of the OBSW-DP, which in turn is served by the equipment handlers for payload, STR, RWL, PCDU, thermal equipment etc.)
Each application can have a different update cycle time, which will be discussed again later in chapter 9 on the OBSW dynamic architecture. Each application also internally encapsulates the handling of the according subsystem states. To provide an example, the AOCS shall be used again:
The mode transitions triggered inside a subsystem control are handled inside its
control application. If a reaction wheel fails, this may trigger an AOCS subsystem
mode transition to AOCS Safe Mode. In Safe Mode the AOCS for example then
performs S/C attitude control only via magnetotorquers and thrusters and it switches
off all reaction wheels. Thus in Safe Mode (versus normal mode) the function of
attitude control is still performed, but the control is based on a completely different set
of measurement and actuation parameters in the OBSW-DP.
To what extent such failure-induced subsystem mode transitions automatically induce top level S/C system mode transitions, or vice versa, is a topic to be revisited later.
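The wheel-failure example above can be sketched as follows. This is a toy model; the mode names, actuator names and the FDIR hook are illustrative:

```python
# Sketch of a failure-induced subsystem mode transition: a reaction wheel
# failure drives the AOCS application into Safe Mode, where attitude control
# continues on a different actuator set and all wheels are switched off.

class AocsApplication:
    def __init__(self):
        self.mode = "NORMAL"
        self.active_actuators = {"RWL", "MTQ"}

    def report_wheel_failure(self):
        """FDIR hook: a wheel failure triggers the AOCS Safe Mode transition."""
        self.mode = "SAFE"
        self.active_actuators = {"MTQ", "THRUSTER"}   # wheels switched off

aocs = AocsApplication()
aocs.report_wheel_failure()
print(aocs.mode, sorted(aocs.active_actuators))   # → SAFE ['MTQ', 'THRUSTER']
```

Note that the attitude control function itself survives the transition; only the set of measurement and actuation parameters taken from the OBSW-DP changes.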
The following fields of the data field header shall still be noted:
● The Acknowledge Flag – which indicates whether the ground requests a reception acknowledgement from the S/C – and
● the Service Type / Subtype Field:
◊ These fields define the function and the format of the TC – e.g. whether a
TC is a direct command to an onboard equipment, a S/C mode change
command or a time tagged command for payload operations.
◊ The various packet services will be treated in chapter 8.6.
(Figure: structure of a PUS telecommand packet. The 6-byte packet header comprises the Packet ID – Version Number (3 bits, = 0), Packet Type (1 bit, = 1), Data Field Header Flag (1 bit, = 1) and the Application Process ID (APID), consisting of the Process ID (7 bits) and the Packet Category (4 bits) – the Packet Sequence Control with the Sequence Flags (2 bits, = 11 bin) and the Sequence Count (14 bits), and the Packet Length. The packet data field comprises the Data Field Header – CCSDS Secondary Header Flag, Ack Flags, Service Type, Service Subtype and Source ID – the Application Data and the Packet Error Control (CRC).)
(Figure: structure of a PUS telemetry packet. The packet header is analogous – Version Number (3 bits, = 0), Packet Type (1 bit, = 0), Data Field Header Flag (1 bit, = 1), APID with Process ID (7 bits) and Packet Category (4 bits), Grouping Flags (2 bits) and Source Sequence Count (14 bits). The TM data field header comprises a Secondary Header Flag (1 bit), further flag / spare fields, the Service Type and Service Subtype (8 bits each), a further 8-bit field and a 48-bit absolute time field in CDS format. Remark: Idle, High Priority and Time TM(9,2) packets have no Data Field Header and no Packet Error Control field.)
Also here a 6-byte packet header can be identified, as well as the packet data field with a maximum length of 242 bytes. The fields of the packet header again comprise
● the Application Process Identifier, (APID), and
● the Packet Sequence Control,
as for a TC packet. The key fields of the packet data field again are
● the Data Field Header,
● the Source Data Field itself – containing the downlinked telemetry –
● and finally the Packet Error Control field – containing a checksum.
The following fields of the data field header still need to be noted:
● Also in the TM packets the Service Type / Subtype fields can be found:
◊ These fields define the function and the format of the TM – e.g. whether a TM is a housekeeping packet, an event induced telemetry, etc.
◊ The various packet services for telemetry will also be treated in chapter 8.6.
● Furthermore a TM packet data field header comprises
◊ TM time stamping information such as clock sync information and
◊ the TM generation onboard time.
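Packing the 6-byte packet primary header from the field widths described above can be sketched as follows. This is a simplified illustration of the bit layout, not flight code, and the example field values are invented:

```python
# Sketch of the 6-byte CCSDS packet primary header: version (3 bits),
# packet type (1 bit), data field header flag (1 bit), APID (11 bits =
# 7-bit Process ID + 4-bit Packet Category), sequence flags (2 bits),
# sequence count (14 bits) and the 16-bit packet length field.

import struct

def pack_primary_header(version, pkt_type, dfh_flag, apid,
                        seq_flags, seq_count, length):
    word1 = (version << 13) | (pkt_type << 12) | (dfh_flag << 11) | apid
    word2 = (seq_flags << 14) | seq_count
    return struct.pack(">HHH", word1, word2, length)   # big-endian, 6 bytes

# TC example: version 0, type 1 (TC), data field header present, APID 0x123,
# sequence flags 11 bin (unsegmented), count 42, packet length field 17.
header = pack_primary_header(0, 1, 1, 0x123, 0b11, 42, 17)
print(header.hex())   # → 1923c02a0011
```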
While figure 4.10 already showed the principle of multiple TC packets being packed into a TC segment, the segment being wrapped into a so-called “Transfer Frame” and the frame being encapsulated into a CLTU for transmission, the figure below also shows the corresponding sequence for telemetry downwards from space to ground.
(Figure: TC uplink and TM downlink protocol stacks. On the TC side – system management layer, packetization layer, segmentation layer – command directives are built into TC application data, packetized and segmented on ground. On the TM side, source packets are multiplexed on board into transfer frames of virtual channels (VC 0, VC 1, VC 2), the virtual channels are multiplexed into the master channel, and a synchronous stream of transfer frames is transmitted over the RF link; on ground the RF physical channel is demodulated and decoded back into a synchronous stream of transfer frames.)
(Figure: TC and TM transfer frame structures. A TC Transfer Frame – 256 bytes max. – consists of a 5-byte frame header, the segment data field (variable length, 249 bytes max., filled up by an integral number of packets) and a 2-byte Frame Error Control (CRC). With aggregation OFF a segment data field carries a single packet (n = 1), with aggregation ON multiple packets (n > 1). A TM Transfer Frame of 1115 bytes = 8920 bits consists of a 6-byte frame header, a 1105-byte frame data field and a 4-byte frame trailer.)
All this onboard TC unpacking from CLTUs via frames and segments until the packets are reconstructed – and vice versa the TM encapsulation from VC multiplexing down to CADUs by means of CCSDS processors – has already been looked at in chapter 4.4.
The figure below shows the TC / TM handlers inside the OBSW static architecture
which interface to the CCSDS processor via dedicated interface drivers.
(Figure: the TM encoder and TC decoder inside the OBSW static architecture, alongside the equipment handlers below the OBSW-DP.)
In the previous chapter the format of TC and TM packets has been presented, which however only influences architectural details of the TC and TM encoder / decoder buffers between the CCSDS processor board and the OBSW itself. What has much more influence on the OBSW architecture are the differences between the diverse packet types w.r.t. content.
There may for example be normal housekeeping telemetry packets from the diverse cited applications or from the equipment handlers, but there must also be the availability of event / error telemetry. On the TC side there must be the possibility to command OBSW controller applications like the AOCS, but also to directly command OBC connected equipment via the according equipment handler, and there must be the possibility to patch the onboard software in flight.
These – non-exhaustive – examples already indicate the diversity of TC / TM to be handled and the need for appropriate mechanisms on board. Furthermore this shows that the TC identifiers and TM generators on board must be a mirror of the TC / TM generation / evaluation on ground.
To avoid inventing new solutions in this area again for each spacecraft mission, the European Space Agency has developed a standard on these topics, the so-called “Packet Utilization Standard”, (PUS). The PUS is defined in the ECSS standard ECSS-E-70-41A and defines a number of onboard services in the OBSW and the according CCSDS packets for command / control. It is also used for the German DLR missions and for the latest French CNES missions.
The term “Packet Utilization Standard” is somewhat misleading for newcomers to the topic, because the PUS primarily does not define different types of TC and TM packets; rather, it defines software services which have to be provided by the OBSW. As an example PUS Service Type 1 – the “Telecommand verification service” – can be taken: it requires that an onboard TC verification service be available which reports successful / failed TC reception and execution to ground.
As a sideline the PUS defines how the CCSDS packets have to be built and which variables are mandatory for TC packets of a certain service, respectively for TM packets from a service.
For the different subtasks of a service there exist so-called “Subservice Types”, (ST). For Service 1 these are
● a Subservice for TC acknowledgement
● and a Subservice for TC execution reporting.
The PUS standard reserves the service numbers 0 to 127 although currently only 16
numbers are used, which are listed in the table below:
The S/C developer is free to define further own services in the range of 128 to 255 according to the mission needs. The same applies to Subservices: numbers 0 to 127 are reserved, 128 to 255 are free for mission specific use. For each of the above listed services the OBSW needs a dedicated handler which processes the service TCs and which generates the service TM (see footnote 5). Furthermore the overall OBSW kernel must provide a mechanism to route TC / TM to / from the according service handler.
Below the service definition tables for all the predefined services shall now be treated in brief, citing their most important features and Subservices. The column “Service requests” cites the service subtype TCs which must be processed by the service handler. The column “Service reports” cites the TM and service subtypes which are to be provided by the according service handler. As an intuitive example Service 3, “Housekeeping and Diagnostics Data Handling”, shall be used – see table 8.4:
When the ground operator wants to define a new payload housekeeping TM packet with a number of parameters from the OBSW-DP, he can submit an according Service 3:1 command to the spacecraft. The Service 3 handler thus is informed about such a newly requested housekeeping packet type. Then the ground can submit a Service 3:5 command which enables the TM packet generation and which defines the desired TM packet generation cycle time. From that moment on the Service 3 handler will cyclically generate the according Service 3:25 housekeeping (HK) telemetry and send it to the onboard TM storage (and in case of ground contact it is transmitted down to Earth).
So in this example both the commandable side of a handler and the TM generation side are treated, as well as how Subservices are to be understood and how packet types including subtype correspond 1:1 to the service features. All the services can now be treated in analogy.
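The Service 3 interaction just described – definition via 3:1, enabling via 3:5, cyclic 3:25 generation – can be sketched as follows. The data structures, SID and parameter names are illustrative, not the ECSS-defined layouts:

```python
# Sketch of a Service 3 handler: TC(3,1) defines a new HK packet from
# OBSW-DP parameters, TC(3,5) enables its cyclic generation, and the
# handler then emits TM(3,25) reports every N cycles.

class HousekeepingServiceHandler:
    def __init__(self, data_pool):
        self.data_pool = data_pool
        self.definitions = {}     # SID -> list of OBSW-DP parameter names
        self.enabled = {}         # SID -> generation interval (in cycles)

    def tc_3_1_define(self, sid, parameter_names):
        self.definitions[sid] = parameter_names

    def tc_3_5_enable(self, sid, interval):
        self.enabled[sid] = interval

    def cyclic_step(self, cycle):
        """Generate TM(3,25) for every enabled packet definition that is due."""
        tm = []
        for sid, interval in self.enabled.items():
            if cycle % interval == 0:
                values = [self.data_pool[p] for p in self.definitions[sid]]
                tm.append(("TM(3,25)", sid, values))
        return tm

dp = {"battery_voltage": 28.1, "rwl1_speed": 1500.0}
hk = HousekeepingServiceHandler(dp)
hk.tc_3_1_define(sid=100, parameter_names=["battery_voltage", "rwl1_speed"])
hk.tc_3_5_enable(sid=100, interval=4)
print(hk.cyclic_step(cycle=8))   # → [('TM(3,25)', 100, [28.1, 1500.0])]
```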
Service 1 provides no dedicated service request features. For all uplinked TCs which are equipped with an acknowledge flag – see the turquoise field in the TC data field header in figure 8.10 – the service handler provides the according TM acknowledge packets. Please note that TC “acknowledge” does not mean “confirmation of receipt”, but (depending on Subservice) confirmation of successful acceptance (which will fail e.g. when the S/C or the concerned equipment / subsystem is in a wrong or failure mode), respectively of TC execution start, progress and completion.
5) Not all services cover both TC and TM.
Service 3 for housekeeping and diagnostics data handling allows for in-flight activation / deactivation of any defined diagnostic TM or HK TM, as well as for in-flight definition of new packets and for definition / change of packet generation cycle rates.
Table 8.5: Statistics for min / max values etc. Source ECSS-E-70-41A
Service 5 is again a very important one, since via this service all report TM packets are generated for events which happened on board – i.e. for any parameter anomalies and out-of-bounds statuses.
This service controls the OBSW time packet generation rate and often is enhanced by private Subservices for OBSW time management.
Table 8.11: Onboard parameter monitoring and limit sensing. Source ECSS-E-70-41A
With the onboard monitoring Service 12 the S/C operator can define the monitoring of selected parameters in the OBSW-DP, including limits and Service 5 events to be triggered in case of single, sporadic or permanent limit-exceeding situations.
In flight, monitoring limits can be changed dynamically, new parameters can be selected for monitoring and obviously monitors can also be disabled again.
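A sketch of such a Service 12 style monitor follows: limit definitions, consecutive-violation counting and event generation. All names and structures are illustrative, not the ECSS data layouts:

```python
# Sketch of onboard parameter monitoring: the operator registers a
# parameter with limits; the monitor checks it cyclically and raises a
# Service 5 style event after N consecutive limit violations.

class ParameterMonitor:
    def __init__(self, data_pool):
        self.data_pool = data_pool
        self.monitors = {}   # name -> [low, high, repeat_threshold, count]

    def define_monitor(self, name, low, high, repeat_threshold=1):
        self.monitors[name] = [low, high, repeat_threshold, 0]

    def cyclic_check(self):
        """Return Service 5 style events for parameters out of limits."""
        events = []
        for name, mon in self.monitors.items():
            low, high, threshold = mon[0], mon[1], mon[2]
            if not (low <= self.data_pool[name] <= high):
                mon[3] += 1
                if mon[3] >= threshold:
                    events.append(("EVENT", name, self.data_pool[name]))
            else:
                mon[3] = 0   # violations must be consecutive
        return events

dp = {"battery_voltage": 28.0}
pm = ParameterMonitor(dp)
pm.define_monitor("battery_voltage", low=24.0, high=33.0, repeat_threshold=2)
dp["battery_voltage"] = 22.0
print(pm.cyclic_check())   # first violation, below threshold → []
print(pm.cyclic_check())   # second consecutive violation → event raised
```

The repeat threshold corresponds to the distinction between sporadic and permanent limit violations made above.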
This service provides dedicated features for the up- or downlink of large data sets, such as for the upload of a complete new OBSW image to RAM before it is copied to EEPROM, or for star tracker star map patches.
Already in chapter 7 intelligent S/C onboard equipment like GPS receivers or star trackers was cited which in reality includes its own computer, with processors often comparable to the S/C's main OBC. For such equipment it is meanwhile quite common that the units themselves are commandable via the PUS standard. In such a case telemetry packets generated by such units – housekeeping, event or diagnostic packets – are to be routed into the TM data pool by the main S/C OBC and are to be downlinked during ground contact. Service 14 allows activation and control of such packet forwarding by the main S/C OBC.
Already in chapter 4.4 it was indicated that there exist multiple telemetry streams which are to be downlinked: the online telemetry and telemetry coming from multiple onboard buffers – so-called “packet stores”. Several “Virtual Channels” with different TM have to be multiplexed according to TM priority by the CCSDS processing during downlink. The topic was revisited in chapter 8.5, figure 8.9. The
● control of such packet stores onboard,
● their activation / deactivation,
● the allocation of TM packets to the individual packet stores and
● the downlink from packet stores as well as the deletion control of downlinked packets
are controlled via Service 15.
Service 17 is a fairly simple service for connection tests which provides the capability to activate test functions implemented onboard and to report the results of such tests. This service is mostly used during the early S/C AIT phase to verify whether the OBSW can communicate with a connected S/C onboard equipment via the according equipment handler. It is also used during quick health tests after environmental campaigns to re-verify the S/C health status, the so-called “Abbreviated Function Tests”, (AFT). Service 17 is not used during normal S/C operations in orbit.
While onboard parameter monitoring can be controlled via Service 12 and, in case of limit violation, the reporting of generated events via Service 5, Service 19 is the one which links onboard events to onboard actions and which activates / deactivates the action triggering respectively.
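The event-to-action linkage of Service 19 can be sketched as follows. This is a toy model with invented event IDs and action TCs:

```python
# Sketch of Service 19 style event-action linking: an onboard event ID is
# mapped to a TC to be released autonomously when the event occurs, and
# the linkage can be enabled / disabled from ground.

class EventActionManager:
    def __init__(self):
        self.actions = {}    # event_id -> action TC to release
        self.enabled = set()

    def add_action(self, event_id, action_tc):
        self.actions[event_id] = action_tc
        self.enabled.add(event_id)

    def disable(self, event_id):
        self.enabled.discard(event_id)

    def on_event(self, event_id):
        """Return the TC to execute for this event, if defined and enabled."""
        if event_id in self.enabled:
            return self.actions.get(event_id)
        return None

mgr = EventActionManager()
mgr.add_action(event_id=0x42, action_tc="TC: switch to redundant STR")
print(mgr.on_event(0x42))   # → TC: switch to redundant STR
mgr.disable(0x42)
print(mgr.on_event(0x42))   # → None (linkage deactivated)
```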
With this list of onboard services to be implemented, the OBSW static architecture diagram can be enhanced with the according service handlers, as depicted in figure 8.13. Besides the service handlers themselves,
● a central configurable parameter monitor,
● a central event manager,
● an OBCP manager interpreting the sequences of such procedures,
● an onboard memory manager handling the packet stores
● and a central scheduler for the execution of time-tagged commands
must be implemented. This makes it evident that a considerable part of the OBSW is implemented for the provision of onboard status visibility to ground and for the control of all details on board. This part of the OBSW is usually called the “onboard data handling”, (OBDH), software.
(Figure 8.13: OBSW static architecture with service handlers. Around the kernel root reside the event manager with event reporting and event action service handlers, the OBCP manager with OBCP service handler, the onboard memory manager with onboard storage, large data and memory management service handlers, the parameter monitor with onboard monitoring service handler, and the onboard scheduler with time management and onboard scheduling service handlers. Further handlers cover packet forwarding / retrieval, function management, test, statistics, housekeeping, device commanding and TC verification. Below reside the applications – AOCS, payload control, power control, thermal control – the OBSW-DP, the equipment handlers and the TM encoder / TC decoder.)
While telemetry forwarding / routing and control is performed via the Packet Forwarding Control Service 14, the routing of PUS telecommands to PUS compatible onboard equipment is managed by means of the “Application Process Identifier”, (APID). The APID was already mentioned in chapter 8.5. The APID in the TC packet defines the routing / destination of the packet on board (see also figure 8.7). Each computer or packet terminal has its own APID, which allows routing of packets by the main OBSW. One computer or PUS equipment can even own multiple APIDs; in such a case individual SW processes in the equipment are addressable individually. This TC routing however is done by the OBSW, which receives a packet from the transponder, checks for the APID and
● either identifies that the packet is directed to itself, or
● in the not unusual case that the main applications in the OBSW have different APIDs, it identifies the targeted application process in the OBSW, or
● it identifies the targeted equipment occurrence (e.g. star tracker 2) and forwards the PUS packet to the equipment over the connecting data bus.
To treat the topic more precisely: as can be identified in figures 8.7 and 8.8, the APID consists of two parts, namely the “Process ID” and the “Packet Category”. As indicated above, the Process ID is used to route the packet inside the OBSW or to external units. The Packet Category is especially of relevance for telemetry, since the spacecraft designer can define different Packet Categories and for each category the OBSW has to provide a dedicated packet store. Examples can be
● standard housekeeping TM,
● event TM and
● High Priority Telemetry.
At ground station contact the different packets are downlinked according to the allocated TM priority.
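The TC routing decision described above can be sketched as follows. The APID split into a 7-bit Process ID and a 4-bit Packet Category follows the packet description in chapter 8.5; the APID assignments and handler names themselves are invented:

```python
# Sketch of APID based TC routing: the OBSW inspects the Process ID part
# of the APID and either consumes the packet itself, dispatches it to an
# internal application process, or forwards it over the data bus.

OWN_PID = 0x10
INTERNAL_APPS = {0x11: "AOCS application", 0x12: "payload control application"}
BUS_EQUIPMENT = {0x20: "star tracker 1", 0x21: "star tracker 2"}

def route_tc(apid):
    pid = (apid >> 4) & 0x7F   # upper 7 bits: Process ID (lower 4: Packet Category)
    if pid == OWN_PID:
        return "handled by core OBSW"
    if pid in INTERNAL_APPS:
        return f"dispatched to {INTERNAL_APPS[pid]}"
    if pid in BUS_EQUIPMENT:
        return f"forwarded on data bus to {BUS_EQUIPMENT[pid]}"
    return "rejected: unknown APID"

print(route_tc(0x210))   # → forwarded on data bus to star tracker 2
```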
In case a satellite is not using the PUS standard but only the underlying CCSDS – like many telecommunication satellites and NASA S/C – a similar routing can be achieved by a CCSDS level “addressing”. For purely CCSDS commanded S/C the TC frame is unpacked and the first routing is performed on TC segment level. The segment header (see also figure 8.10) contains a “Multiplexer Access Point Identifier”, (MAP-ID). The MAP-ID covers 6 bits and thus allows 64 addresses for TC Virtual Channel selection on board. The MAP based routing is on CCSDS level.
When the OBSW of the S/C core OBC itself is down due to a failure, the command routing via SW – e.g. for an emergency load switch-off by the PCDU from ground – does not work anymore. In such cases the “High Priority Commands”, (HPC), Level 1 can still be used, which were already cited in brief in chapter 4.5 and for which the flow is depicted in figure 4.12. These HPC 1 commands are routed directly in hardware to the Command Pulse Decoding Unit, which triggers equipment emergency switching via individual bi-level pulse command lines.
In European missions where PUS is used as command standard, MAP-ID based Virtual Channel TC routing is performed for the HPC 1 commands on segment level:
● Command segments with MAP-ID = 0 in the segment header contain HPC1 packets and are routed from the CCSDS processing directly to the CPDU, and thus are processed entirely in hardware.
● Command segments with MAP-ID > 0 are handed over by the CCSDS processor to the core OBC's onboard software.
In the PUS standard any TCs with MAP-ID > 0 are treated equally and different MAP-ID values are not used for further equipment identification; instead the APID on packet level is used for TC routing.
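The MAP-ID based split described above can be sketched in a few lines, assuming a segment header byte with 2 bits of sequence flags followed by the 6-bit MAP-ID:

```python
# Sketch of the HPC1 / OBSW split on TC segment level: MAP-ID 0 goes
# straight to the CPDU in hardware, all other MAP-IDs are handed to the
# core OBC's onboard software, which then routes on packet level via APID.

def route_segment(segment_header_byte):
    map_id = segment_header_byte & 0x3F    # lower 6 bits: MAP-ID
    if map_id == 0:
        return "HPC1: routed in hardware to CPDU"
    return "handed to core OBSW for PUS/APID routing"

print(route_segment(0b01000000))   # sequence flags set, MAP-ID 0 → CPDU
print(route_segment(0b01000001))   # MAP-ID 1 → OBSW
```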
For purely CCSDS commanded S/C the identification of a hardware HPC 1 command is already performed on telecommand frame level by a dedicated frame header, and in this case the HPC itself is directly included in the command frame and is not encapsulated in a segment. For these details however the reader is directly pointed to CCSDS 232.0-B-1 [80] versus ECSS-E-ST-50-04A [84].
Independent of the applied command standard, the OBSW itself can also trigger HPC
commands going to the CPDU. These are called level 2 high priority commands or
HPC 2 commands.
For the OBSW static architecture no dedicated modules are to be foreseen here.
Routing functions are performed either via the PUS packet APIDs or by the
CCSDS processor based on MAP-ID. Figure 8.14 depicts the full scope of TC routing
for both standard TCs and HPC 1 TCs using a fictional satellite example.
Figure 8.14: TC routing from receiver to equipment via MAP-IDs, APIDs, CPDU.
● Realtime Telemetry:
This is telemetry generated on board while a ground link is established and
which is transmitted directly to ground.
● Playback Telemetry:
This is housekeeping TM which was generated on board during operations of
the S/C out of sight of the ground station and which was stored intermediately
for downlink at the next ground contact.
● High Priority Telemetry:
This includes all event / action service TM and related TC execution verification
TM which is stored on board up to the time of ground contact start and which has
to be downlinked to ground with enhanced priority – for visibility of events /
actions / recoveries which happened during the last flight period and for possible
manual failure identification and recovery by ground operators.
These three types of TM are transmitted to ground in parallel via one RF link
through so-called Virtual Channels (VC). VC multiplexing from the different TM
buffers into the single RF link input is performed by the already cited CCSDS
processor (concerning the OBC HW see also chapter 4.4 and figure 4.11). For the
OBSW static architecture, therefore, no dedicated modules are to be foreseen here.
The TM packets just have to be marked with the according VC identifier when being
stored in the OBSW's housekeeping data memory packet stores, (HK-memory). All
routing functions are either covered by the PUS services 14 and 15 or are handled
by the CCSDS processor.
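The VC marking of TM packets before storage can be sketched like this; the VC numbering and store layout are hypothetical, actual assignments are mission-specific:

```python
# Hypothetical VC assignment - actual VC numbers are mission-specific.
VC_IDS = {"realtime": 0, "playback": 1, "high_priority": 2}

def store_tm_packet(hk_store: list, packet: bytes, tm_type: str) -> None:
    """Tag a TM packet with its Virtual Channel ID and place it in the
    housekeeping memory packet store; the CCSDS processor later
    multiplexes the VCs into the single RF downlink."""
    hk_store.append((VC_IDS[tm_type], packet))
```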
For science data telemetry downlink usually a link in X-band or Ka-band is used to
allow for correspondingly high transmission performance. The figure below shows an
example of S/C platform HK TM and science TM being downlinked via different
Virtual Channels.
The service interface of modern OBCs has already been mentioned. Via a “Service
Interface” the OBSW cyclically reports the following information as pairs of symbol
name and value:
● A preselected set of OBSW variables from each domain in the OBSW-DP,
● OBSW internal variables such as important memory pointers, register entries,
timing parameters, RTOS flags etc.
● Task scheduling parameters for selected applications, handlers etc.
● Data bus access flags etc.
By this means it is possible, via the SIF and by decoding the binary output
stream, to check
● whether all threads in the SW are running properly,
● whether all SW parameters are within limits,
● and furthermore to log S/C control parameters directly from the OBSW.
The focus during work with the SIF is the check of OBSW health, not S/C monitoring
or control. For the latter purpose the SIF is rather unhandy, since all OBSW-DP
variables accessed through it are in raw data format and are not calibrated.
Unlike debug code instrumentation, the SIF stub module inside the OBSW is kept
included in flight – as was stated before – and therefore it can serve for health
checks via the S/C umbilical connector until shortly before launch at the launch
site. It is a fixed building block of the OBSW static architecture.
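A decoder for such a SIF output stream might look as follows; the record layout (8-byte ASCII symbol name plus 4-byte big-endian raw value) is purely an assumption for illustration:

```python
import struct

def decode_sif_stream(stream: bytes) -> dict:
    """Decode a hypothetical SIF binary stream into symbol/value pairs.

    Each record: 8 bytes ASCII symbol name (space padded) followed by a
    4-byte big-endian raw value. Values remain raw, uncalibrated counts;
    calibration to engineering units is up to the checkout equipment.
    """
    pairs = {}
    for off in range(0, len(stream), 12):
        name = stream[off:off + 8].decode("ascii").rstrip()
        raw, = struct.unpack(">I", stream[off + 8:off + 12])
        pairs[name] = raw
    return pairs
```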
[Figure: OBSW static architecture building blocks – kernel root with Event
Manager, OBCP Manager, OB Scheduler, OB Memory Manager, Parameter Monitor and
Service IF Handler; the control applications (Payload Ctrl., AOCS, Power Ctrl.,
Thermal); the equipment handlers with TM encoding / TC decoding; and the PUS
service handlers (event reporting, event/action, OBCP, onboard storage, large
data, memory management, time management, onboard scheduling, onboard monitoring,
packet forwarding / retrieval, function management, test, statistics, HK, device
commanding, TC verification). A second diagram variant additionally shows the
System FDIR module incl. reconfiguration and the per-domain failure detection
blocks (PL, AOCS, Power, Thermal FD).]
The underlying basic concept for FDIR is always to handle failures on the lowest
possible level. E.g. in case of a bus transmission error of an AOCS equipment,
the bus controller in the RTOS first performs a bus command retry. If it was
successful, the retry may be logged in telemetry for ground information, but
system operations can proceed normally. In case the retry also fails, the
equipment handler is informed. The equipment handler may for example recheck the
equipment mode and reinitialize equipment commanding – possibly via the redundant
data bus side. If this also fails, control is passed to the next higher level
instance, which might be e.g. the AOCS control application, which tries to
activate the equipment redundancy if available. If this also fails, the S/C FDIR
main module performs a S/C mode transition to Safe Mode and leaves the rest to
ground intervention during ground contact.
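The escalation chain – bus retry, equipment handler, control application, system FDIR – can be condensed into a sketch like the following (handler names and return convention are invented for illustration):

```python
def fdir_escalate(recovery_chain):
    """Walk the FDIR chain from the lowest level upwards.

    recovery_chain: list of (level_name, recovery_fn) ordered bottom-up;
    each recovery_fn returns True if it resolved the failure. If no
    level succeeds, the fallback is a transition to Safe Mode, leaving
    the rest to ground intervention.
    """
    for level, recover in recovery_chain:
        if recover():
            return level          # failure handled at this level
    return "SAFE_MODE"
```

For a bus error that only the AOCS application can resolve (e.g. by activating the redundant unit), such a chain would report recovery at the AOCS application level.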
For FDIR implementation two basic concepts have to be considered – depending on
the flexibility requested during operations of the S/C:
● One concept is somewhat “hard wired”. In this implementation concept each
lower level module tries its recovery function and in case of failure directly
triggers a fixed function of the next higher FDIR level – in the extreme case
escalation goes fully up the above described chain to S/C Safe Mode. However,
fixed functions have to be implemented for this, and in case of any changes the
OBSW or its OBCPs have to be patched. Moreover it is up to each FDIR level's
OBSW code to generate appropriate status telemetry for the ground to be able
to follow what happened.
● A more flexible approach is provided by the service concept of the PUS. In this
case an anomaly detected by a PUS monitor triggers an associated PUS
event (which as a side effect provides an event TM packet giving visibility
to the ground). To the event an action is bound, which either may induce a
recovery function itself (unit reconfiguration) or may induce an event on
the next higher level. These monitor ⇒ event ⇒ action chains are more difficult
to implement and require thorough testing, but they are reconfigurable during
flight purely via the means of PUS TCs.
Therefore this concept is followed wherever greater flexibility is required in
the FDIR dependency and reaction chains during flight.
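The reconfigurable monitor ⇒ event ⇒ action mechanism can be sketched as follows; the table structure and identifiers are illustrative, not a real PUS implementation:

```python
class EventActionTable:
    """In-flight reconfigurable event/action bindings (PUS-style sketch)."""

    def __init__(self):
        self.actions = {}     # event id -> bound action callable
        self.event_log = []   # stands in for event TM packets to ground

    def bind(self, event_id, action):
        self.actions[event_id] = action       # (re)bindable via PUS TC

    def raise_event(self, event_id):
        self.event_log.append(event_id)       # side effect: event TM packet
        if event_id in self.actions:
            self.actions[event_id]()          # e.g. unit reconfiguration

def check_monitor(value, low, high, table, event_id):
    """A parameter monitor raising its bound event on a limit violation."""
    if not (low <= value <= high):
        table.raise_event(event_id)
```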
Concerning the FDIR concept and spacecraft operations implications please also
refer to chapter 13.16.
Finally figure 8.17 depicts as top layer the OBSW “kernel”, where this term is
not strictly defined. It shall represent all the OBSW glue logic which is
necessary for
● control of the OBSW startup process sequence after boot loading, with
◊ initialization of all components,
◊ interlinking of all OBSW modules and
◊ startup of task operations,
● holding the OBSW internal HK data “file system” managed by the OB memory
manager,
● and the control of SW tasks scheduling which will be described in detail in the
following chapter 9.
The kernel also initializes all the different threads of the OBSW, for the equipment
handlers, the applications, the TC and TM handling etc. at SW startup before the
threads themselves are released for running cyclically synchronized according to the
defined scheduling.
During system initialization the OBSW kernel parses the entries in the already
mentioned “Spacecraft Configuration Vector”, (SCV), to properly activate equipment
nominal or redundant sides and to consider equipment which was marked as “non
healthy”. More details on the use of the SCV and its full scope of content will be
provided later in the operations chapter 13.2.
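The kernel's evaluation of the SCV at startup can be sketched as below; the SCV layout assumed here (per unit a preferred side plus the set of healthy sides) is a hypothetical simplification of the real structure described in chapter 13.2:

```python
def select_equipment_sides(scv: dict) -> dict:
    """Decide per unit which side to activate from the Spacecraft
    Configuration Vector: prefer the configured side if healthy,
    otherwise fall back to any healthy side, otherwise mark the unit
    unavailable."""
    active = {}
    for unit, (preferred, healthy_sides) in scv.items():
        if preferred in healthy_sides:
            active[unit] = preferred
        elif healthy_sides:
            active[unit] = sorted(healthy_sides)[0]
        else:
            active[unit] = None      # unit marked "non healthy" entirely
    return active
```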
The kernel is also the entity generating all the relevant log entries for later
tracking and evaluation from ground. These are:
● boot reports for each OBSW boot,
● the system log tracking status and activated / deactivated equipment,
● the High Priority Telemetry log and
● the reconfiguration log for any reconfigurations performed on board.
These log files usually reside in non-volatile safeguard memory areas. Therefore
they are not included in OBSW schematic figures like 8.17.
The kernel is also the ultimate instance to handle severe HW traps – as far as
still possible in such a case – which require reconfiguration of OBC components
identified by the OBSW. For such cases of a “dying” OBSW image the kernel also
writes the so-called “death report” into the system log.
The implementation details of the individual OBSW kernel – sometimes also called
“OBSW Core Data Handling System”, (Core DHS) – are to a large extent driven by
the developer's design methods, even by features of the implementation language
and also by design guidelines and company and agency policies.
Onboard Software Dynamic Architecture
J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012
For all those OBSW building blocks which were presented in the previous chapter the
dynamic architecture has to be developed in a further design step. This comprises
the detailed elaboration and design of
● the internal scheduling of all RTOS threads which encapsulate the presented
building blocks,
● the channel acquisition scheduling,
● FDIR handling,
● processing of Onboard Control Procedures and the
● Service Interface data supply.
These topics shall be addressed in the following sections.
The basic design paradigm for the dynamic architecture is that all building blocks
of the OBSW as presented in figure 8.17 are executed cyclically –
independent of the S/C mode, the applications' submodes (e.g. AOCS submode) etc.
Most of these building blocks will be implemented as individual tasks / threads on
the RTOS. Task control is the responsibility of the OBSW kernel and is to be
designed to be configurable through a tasking table, applied such that changes
take effect after a simple reboot.
Not all tasks need to be executed with the same cycle frequency: e.g. a Thermal
Control Application can well be called 10-50 times less often than the AOCS
Control Application. Certain tasks may have to run at the same frequency, but
temporally staggered against each other: e.g. AOCS sensor data acquisition by
equipment handlers, AOCS control algorithm computation and AOCS actuator control
via equipment handlers.
In former OBC generations a perfectly optimized OBSW tasking with respect to CPU
load management was in addition absolutely essential due to the CPU performance
limits. Therefore, up to the MIL-STD-1750 and 31750 CPU chip generation of the
1990s, Ada was used without an underlying operating system.
With the PowerPC and SPARC chips (ERC32 and LEON) the restrictions on CPU load
became more relaxed, making an RTOS feasible as OBSW baseline, but an efficient
tuning of OBSW task scheduling is still necessary since this system comfort
comes at the expense of part of the gained CPU performance.
Still the requirement remains that task interaction and data exchange between
building blocks must not lead to conflicts or operational blocking – independent of
● the S/C mode or submode,
● potentially parallel running payload instrument operations,
● potentially parallel ground contact and data handling.
During OBSW development corresponding scheduling tables for the OBSW kernel are
worked out for the tasking design. An example taken from the Earth observation
satellite CryoSat (ERC32 CPU) is depicted in figure 9.1:
[Figure 9.1: CryoSat OBSW task scheduling table – the cycle is split into 100 ms
slots; each slot stacks (bottom-up) SW Watchdog, EEPROM_Manager, slot-specific
monitors (TM_FIFO_Monitor, Mil_Bus_Usage Monitoring, MTL_Update, RM_Monitor),
Device_Commanding, Housekeeping, TC_Manager, Event_Action_Manager,
OBCP_Interpreter and, in selected slots, MTL_Manager, Mil_Bus_Manager,
TM_Pkt_Interface and Statistics; Memory Scrub runs as background task.]
The overall scheduling cycle of the OBSW in this example covers 1 second
(1000 ms). This cycle is split into 10 subcycles of 100 ms duration each, which
partly include the same execution steps and partly differing ones. E.g. a SW
watchdog refresh is performed in each OBSW subcycle; the same applies to the call
of the TC manager and several others. The Master Timeline Manager, (MTL), which
is called “Onboard Scheduler” in figure 8.17, is called only every second
interval. The handler for the PUS statistics service is even called only in
subcycle 5, which means only once per second. The same applies to the thermal
control application in subcycle 9.
It also has to be pointed out here that although a task manager is identified in
multiple subcycles, this does not imply that the work performed by the manager per
subcycle is the same. E.g. the data bus manager (MIL-1553 bus in this case) is
called in each subcycle. However, it does not communicate with the same onboard
equipment in each of them. This subject of channel acquisition scheduling is a
separate topic treated in the next chapter.
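A tasking table of this kind can be modeled as below – a toy sketch with a (divisor, phase) entry per task; the task selection and rates merely mimic the figure 9.1 example and are not taken from the real CryoSat OBSW:

```python
# (task name, divisor, phase): the task runs in subcycle n (1..10) when
# n % divisor == phase % divisor. Entries are illustrative only.
TASKING_TABLE = [
    ("SW_Watchdog",  1, 0),   # refreshed in every 100 ms subcycle
    ("TC_Manager",   1, 0),
    ("MTL_Manager",  2, 0),   # every second subcycle
    ("Statistics",  10, 5),   # only in subcycle 5, i.e. once per second
]

def tasks_for_subcycle(n: int):
    """Tasks released in subcycle n of the 1 s / 10-subcycle schedule."""
    return [name for name, div, phase in TASKING_TABLE
            if n % div == phase % div]
```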
What can finally be identified, reading the columns of figure 9.1 from bottom to
top, is that each subcycle foresees free spare CPU capacity of 25-30% as
contingency for potential FDIR activities.
BUS TYPE SUBTYPE CMD_ACQ TRANS_ALIAS AUTO_RETRY RT_NAME DATA_LENGTH DURATION START END
External Mgmt Y all 1 100 50 150
External Cyclic ACQ_PCDU_Nom PCDU_STATUS_ACQUISITION_REQUEST Y PCDU_N 32 150 150 300
External Cyclic ACQ_PCDU_Red PCDU_STATUS_ACQUISITION_REQUEST Y PCDU_R 32 150 300 450
External Cyclic ACQ_GPS_Nom GPS_STATUS_ACQUISITION_REQUEST Y GPS_N 32 100 450 550
External Cyclic ACQ_GPS_Red GPS_STATUS_ACQUISITION_REQUEST Y GPS_R 32 100 550 650
External Cyclic ACQ_GPS_Nom GPS_POSITION_ACQUISITION_REQUEST Y GPS_N 32 100 650 750
External Cyclic ACQ_GPS_Red GPS_POSITION_ACQUISITION_REQUEST Y GPS_R 32 100 750 850
External Cyclic READ_PCDU_Nom PCDU_STATUS_DATA_READ Y PCDU_N 32 200 850 1050
External Cyclic READ_PCDU_Red PCDU_STATUS_DATA_READ Y PCDU_R 32 200 1050 1250
External Cyclic ACQ_FOGA_STATUS FOG_STATUS_DATA_ACQUISITION_REQUEST Y FOGA 32 110 1250 1360
External Cyclic ACQ_FOGB_STATUS FOG_STATUS_DATA_ACQUISITION_REQUEST Y FOGB 32 110 1360 1470
External Cyclic ACQ_FOGC_STATUS FOG_STATUS_DATA_ACQUISITION_REQUEST Y FOGC 32 110 1470 1580
External Cyclic READ_GPS_Nom GPS_STATUS_DATA_READ Y GPS_N 64 170 1580 1750
External Cyclic READ_GPS_Red GPS_STATUS_DATA_READ Y GPS_R 64 170 1750 1920
External Cyclic READ_FOGA FOG_STATUS_DATA_READ Y FOGA 64 150 1920 2070
External Cyclic READ_FOGB FOG_STATUS_DATA_READ Y FOGB 64 150 2070 2220
External Cyclic READ_FOGC FOG_STATUS_DATA_READ Y FOGC 64 150 2220 2370
External Cyclic ACQ_FOGA_INERTIAL_DATA FOG_INERTIAL_DATA_ACQUISITION_REQUEST Y FOGA 32 100 2370 2470
External Cyclic ACQ_FOGB_INERTIAL_DATA FOG_INERTIAL_DATA_ACQUISITION_REQUEST Y FOGB 32 100 2470 2570
External Cyclic ACQ_FOGC_INERTIAL_DATA FOG_INERTIAL_DATA_ACQUISITION_REQUEST Y FOGC 32 100 2570 2670
External Cyclic READ_GPS_Nom GPS_POSITION_DATA_READ Y GPS_N 128 250 2670 2920
External Cyclic READ_GPS_Red GPS_POSITION_DATA_READ Y GPS_R 128 250 2920 3170
External Cyclic READ_FOGA FOG_INERTIAL_DATA_READ Y FOGA 128 210 3170 3380
External Cyclic READ_FOGB FOG_INERTIAL_DATA_READ Y FOGB 128 210 3380 3590
External Cyclic READ_FOGC FOG_INERTIAL_DATA_READ Y FOGC 128 210 3590 3800
External Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
External Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC PCDU_CMD CMD_PWL_Lines_PCDU_Nom N PCDU_N 256 650 510000 510650
External ASYNC PCDU_CMD CMD_PWL_Lines_PCDU_Red N PCDU_R 256 650 510650 511300
External ASYNC FOGA_CMD N FOGA 128 450 511300 511750
External ASYNC FOGB_CMD N FOGB 128 450 511750 512200
External ASYNC FOGC_CMD N FOGC 128 450 512200 512650
External ASYNC GPS_CMD CMD_MODE_GPS_Nom N GPS_N 256 800 512650 513450
External ASYNC GPS_CMD CMD_MODE_GPS_Red N GPS_R 256 800 513450 514250
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX XXX
External ASYNC XXX XXX XXX XXX XXX XXX XXX XXX 950000
OBC Int. Cyclic ACQ_RIU_Nom_HK RIU_Nom_HOUSEKEEPING_ACQ_REQUEST Y RIU_N 32 100 50 150
OBC Int. Cyclic ACQ_RIU_Red_HK RIU_Red_HOUSEKEEPING_ACQ_REQUEST Y RIU_R 32 100 150 250
OBC Int. Cyclic READ_RIU_Nom_HK RIU_Nom_HOUSEKEEPING_DATA_READ Y RIU_N 512 500 250 750
OBC Int. Cyclic READ_RIU_Red_HK RIU_Red_HOUSEKEEPING_DATA_READ Y RIU_R 512 500 750 1250
OBC Int. Cyclic ACQ_RIU_Nom_SADM_Pos. RIU_Nom_SADM_POSITION_ACQ_REQUEST Y RIU_N 32 100 1250 1350
OBC Int. Cyclic ACQ_RIU_Red_SADM_Pos. RIU_Red_SADM_POSITION_ACQ_REQUEST Y RIU_R 32 100 1350 1450
OBC Int. Cyclic READ_RIU_Nom_SADM_Pos. RIU_Nom_SADM_POSITION_DATA_READ Y RIU_N 256 300 1450 1750
OBC Int. Cyclic READ_RIU_Red_SADM_Pos. RIU_Red_SADM_POSITION_DATA_READ Y RIU_R 256 300 1750 2050
OBC Int. Cyclic ACQ_RIU_Nom_FSS_1 RIU_Nom_FSS_1_ACQ_REQUEST Y RIU_N 32 100 2050 2150
OBC Int. Cyclic ACQ_RIU_Nom_FSS_2 RIU_Nom_FSS_2_ACQ_REQUEST Y RIU_R 32 100 2150 2250
OBC Int. Cyclic ACQ_RIU_Red_FSS_1 RIU_Red_FSS_1_ACQ_REQUEST Y RIU_N 32 100 2250 2350
OBC Int. Cyclic ACQ_RIU_Red_FSS_2 RIU_Red_FSS_2_ACQ_REQUEST Y RIU_R 32 100 2350 2450
OBC Int. Cyclic READ_RIU_Nom_FSS_1 RIU_Nom_FSS_1_DATA_READ Y RIU_N 128 250 2450 2700
OBC Int. Cyclic READ_RIU_Nom_FSS_2 RIU_Nom_FSS_2_DATA_READ Y RIU_R 128 250 2700 2950
OBC Int. Cyclic ACQ_RIU_Nom_ES_1 RIU_Nom_ES_1_ACQ_REQUEST Y RIU_N 32 100 2950 3050
OBC Int. Cyclic ACQ_RIU_Nom_ES_2 RIU_Nom_ES_2_ACQ_REQUEST Y RIU_R 32 100 3050 3150
OBC Int. Cyclic READ_RIU_Red_FSS_1 RIU_Red_FSS_1_DATA_READ Y RIU_N 128 250 3150 3400
OBC Int. Cyclic READ_RIU_Red_FSS_2 RIU_Red_FSS_2_DATA_READ Y RIU_R 128 250 3400 3650
OBC Int. Cyclic ACQ_RIU_Red_ES_1 RIU_Red_ES_1_ACQ_REQUEST Y RIU_N 32 100 3650 3750
OBC Int. Cyclic ACQ_RIU_Red_ES_2 RIU_Red_ES_2_ACQ_REQUEST Y RIU_R 32 100 3750 3850
OBC Int. Cyclic READ_RIU_Nom_ES_1 RIU_Nom_ES_1_DATA_READ Y RIU_N 64 180 3850 4030
OBC Int. Cyclic READ_RIU_Nom_ES_2 RIU_Nom_ES_2_DATA_READ Y RIU_R 64 180 4030 4210
OBC Int. Cyclic READ_RIU_Red_ES_1 RIU_Red_ES_1_DATA_READ Y RIU_N 64 180 4210 4390
OBC Int. Cyclic READ_RIU_Red_ES_2 RIU_Red_ES_2_DATA_READ Y RIU_R 64 180 4390 4570
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. Cyclic XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. ASYNC RIU_CMD CMD_MODE_RIU_Nom N RIU_N 128 450 500000 500450
OBC Int. ASYNC RIU_CMD CMD_SADM_Pos._RIU_Nom N RIU_N 256 450 500450 500900
OBC Int. ASYNC RIU_CMD CMD_MODE_RIU_Red N RIU_R 128 450 500900 501350
OBC Int. ASYNC RIU_CMD CMD_SADM_Pos._RIU_Red N RIU_R 256 450 501350 501800
OBC Int. ASYNC RIU_CMD XXX XXX XXX XXX XXX XXX XXX XXX
OBC Int. ASYNC RIU_CMD XXX XXX XXX XXX XXX XXX XXX 980000
Figure 9.3: Channel acquisition table; color coding in the original figure
distinguishes nominal and redundant units.
Figure 9.3 depicts a channel acquisition table for such onboard TCs from the OBSW
to equipment and for onboard TM from equipment back to the OBSW. The example
depicts a fictional satellite for which the OBSW controls some onboard equipment
via an external platform data bus and some equipment indirectly via an internal
bus-coupled I/O unit (see the OBC presented in figures 4.2 and 4.5). A number of
basic concepts shall be explained with the aid of this table.
First of all it shall be noted that all the bus access calls listed in the table
are performed exclusively by the OBSW's equipment handlers – please also refer to
figure 8.17. In an ideal OBSW design neither a Control Application nor any other
component is granted direct bus access.
Furthermore it can be seen that the internal bus to the I/O unit and the external
bus can be accessed in parallel (please compare the start / end times for
equipment access calls in the green parts of the table versus the blue ones). So
these two buses really can be operated fully in parallel.
The next concept is the distinction between cyclic and asynchronous bus accesses.
Cyclic accesses typically are permanently recurring TM acquisitions from the
equipment by the OBSW. Such data (like the PCDU status data) is polled
permanently, except in modes where the equipment (like a payload) is entirely
switched off. Asynchronous accesses to the bus occur only when the according
equipment is commanded. If no commands are in the queue, the bus access interval
is not used. Consider the PCDU command slot as an example: if some power line is
to be switched onboard the S/C (e.g. for power-up of a payload), a PCDU command
has to be executed, but this can occur only in the slot reserved for control of
the nominal PCDU – between microsecond 510000 and 510650 of the overall 1 second
OBSW cycle – or in the time slot 510650-511300 for the redundant PCDU.
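The asynchronous slot mechanism can be illustrated as follows, using the nominal PCDU slot boundaries from the figure 9.3 example (the function and queue handling are invented for illustration):

```python
PCDU_NOM_SLOT = (510000, 510650)   # microseconds within the 1 s cycle

def dispatch_pcdu_command(queue: list, now_us: int):
    """Release a queued PCDU command only inside its reserved slot;
    with an empty queue the bus access interval simply stays unused."""
    start, end = PCDU_NOM_SLOT
    if queue and start <= now_us < end:
        return queue.pop(0)
    return None
```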
The last concept to be understood is that of double access calls to the bus for
equipment control. E.g. the Fiber-Optic Gyro, (FOG), inertial data acquisition is
a cyclic data acquisition. It is not time efficient to “call” an equipment item,
wait for it to compute the response, and block the bus until the requested result
data are returned to the OBSW. Instead the OBSW submits an initial call to the
targeted equipment containing the TM acquisition command – and it just gets a
command receipt confirmation from the equipment's remote terminal on the bus 6.
The OBSW then performs interactions with other equipment in between, while the
previously commanded equipment prepares the TM data in parallel. After a certain
fixed time interval from the initial TM acquisition request command, the OBSW can
be sure that the initially interfaced equipment has meanwhile computed all
relevant TM data and has stored it in the according remote terminal registers.
Thus at this point in time the OBSW polls the TM from the equipment's RT.
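The resulting two-phase timing can be computed with a small helper; the numbers in the test below correspond to the FOGA inertial data entries of figure 9.3 (request at 2370 µs for 100 µs, read 800 µs later for 210 µs):

```python
def two_phase_access(req_start_us, req_dur_us, read_delay_us, read_dur_us):
    """Return the (start, end) windows of the acquisition request and of
    the later data read; the bus stays free for other equipment in the
    gap while the addressed unit prepares its TM in parallel."""
    request = (req_start_us, req_start_us + req_dur_us)
    read_start = req_start_us + read_delay_us
    read = (read_start, read_start + read_dur_us)
    return request, read
```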
The key concept behind such an acquisition table is that it stays exactly the
same, with all the command / acquisition timing values unchanged, independent of
● which mode the S/C is in,
● which submode any OBSW Control Application is in (AOCS, thermal, power,
payload control),
● or even whether the S/C is in normal operation or in severe FDIR conditions.
The bus access timing table is independent of the above mentioned conditions and
thus has to be properly engineered to suit the needs of all S/C operational modes
/ cases and to consider all the equipment's timing constraints between data
acquisition requests and telemetry data availability in the RT.
6 The time slots usually are defined wide enough to allow for one bus acquisition
retry still within the slot in case the first bus access by the OBC failed.
Concerning FDIR handling two basic cases have to be distinguished, namely failures
detected by software and failures detected directly by hardware.
The handling of SW detected failures in the dynamic OBSW architecture is rather
intuitive and shall be explained with the aid of an example:
● An Earth observation satellite according to a loaded timeline has to perform an
image acquisition and beforehand has to switch from AOCS coarse pointing
mode to fine pointing mode which implies activation of the star trackers.
● In such cases the OBSW Scheduler (cf. figure 8.17) will trigger the AOCS
application to activate fine pointing mode.
● The AOCS application will trigger the Power Application to power the star
trackers, (STR), and the STR equipment handler to activate the STRs – which
includes informing the STR handler about expected telemetry after STR
boot-up.
● It shall be assumed that one STR fails completely (STR1).
● In such case the STR handler will detect missing TM from STR1.
● The STR handler will inform the AOCS application via OBSW-DP flags and
STR TM entries about the failure, and the AOCS application will react initially
by canceling fine pointing mode and by informing the System FDIR and
reconfiguration handler.
● It is up to the OBSW specific implementation which level (AOCS FDIR or
System FDIR) will give the STR equipment handler the clearance for further
recovery actions like switch to the alternative bus side, activate a spare STR
or other activity.
What can be identified here is that the entire FDIR processing is performed well
within the frame of the normal bottom-to-top information path and the normal
scheduling of the static architecture's building blocks. No special interrupts
are raised, no dedicated recovery threads are started, nor any other mechanisms
which would jeopardize the overall OBSW tasking stability.
handler which triggered the access. Such retry flags may be used for
monitoring via the PUS statistics service 4.
● In case the retry failed, the equipment handler is informed about the failure
and according failure flags and RT number are recorded in OBSW-DP entries.
In such cases the situation becomes exactly the purely SW managed FDIR chain
from bottom via AOCS to System FDIR as described above for the STR.
More complicated are hardware detected errors which are induced by OBC hardware
components themselves. In fact only a very limited number of them can be recovered
or partly handled by the OBSW at all:
● A typical group of problems detected by hardware mechanisms are memory failures
in OBC PROM or RAM due to “Single Event Upsets”, (SEU), or due to damage by
high-energy particles. As already mentioned in chapter 4.2, modern memory chips
provide hardware based “Error Detection and Correction”, (EDAC). The memory EDAC
checksum electronics include corresponding signal lines to the OBC processor's
“Line Control Block” bus, (LCB bus). Modern processors like the LEON include
on-chip handling of such EDAC BCH checksums. They provide autocorrection of
single bit failures fully transparent to the running OBSW (except for the loss
of a single CPU clock cycle) and they can detect double failures. In such double
failure cases the according address information is placed in special CPU
registers which are cyclically monitored by the OBSW watchdog functions and which
are thus accessible to the OBSW. From this point onwards the problem has to be
handled in software. However, since a memory chip has failed, the OBSW itself
firstly is prone to crash when accessing this memory address, and secondly the
problem can only be handled by OBC HW reconfiguration. Current onboard realtime
operating systems do not provide memory virtualization, so the RTOS cannot be
advised to blank out the bad blocks.
● A further group of hardware detected failures are HW traps raised by the OBC
processor, like e.g. the “Uncorrectable register file SEU error” of the LEON.
This type of failure is induced by errors yet another “step closer” to the OBSW,
since they appear directly inside the processor – like the register SEU example
here. When such an error is detected by the processor hardware, the OBSW in most
cases has already computed some wrong data and the probability of a coordinated
error recovery is low. Therefore the maximum achievable in such situations is to
store the alert in a non-volatile safeguard memory for later ground diagnosis
and to trigger OBC reconfiguration – independent of the S/C operational mode.
Such functions will directly be part of the OBSW System FDIR module.
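The single-error-correction / double-error-detection behavior described for the memory EDAC can be demonstrated with a minimal Hamming SECDED code. Real memory EDAC uses BCH codes over full data words; the 4-bit toy example below only illustrates the principle:

```python
def secded_encode(nibble: int) -> list:
    """Encode 4 data bits into an 8-bit Hamming SECDED codeword
    (three Hamming parity bits plus one overall parity bit)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # Hamming positions 1..7
    overall = 0
    for bit in bits:
        overall ^= bit
    return bits + [overall]                        # position 8: overall parity

def secded_check(code):
    """Check an 8-bit SECDED codeword.

    Returns ("ok", data), ("corrected", data) after fixing a single bit
    error, or ("double", None) for an uncorrectable double error that
    has to be escalated to software / OBC reconfiguration.
    """
    code = list(code)
    b = code[:7]
    syndrome = ((b[0] ^ b[2] ^ b[4] ^ b[6])
                + 2 * (b[1] ^ b[2] ^ b[5] ^ b[6])
                + 4 * (b[3] ^ b[4] ^ b[5] ^ b[6]))
    overall = 0
    for bit in code:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:
        status = "corrected"
        if syndrome:
            code[syndrome - 1] ^= 1      # flip the single bad bit back
        else:
            code[7] ^= 1                 # error was in the parity bit itself
    else:
        return "double", None
    data = code[2] | (code[4] << 1) | (code[5] << 2) | (code[6] << 3)
    return status, data
```

A single flipped bit is corrected transparently; two flipped bits yield a “double” verdict, which corresponds to what gets reported to the OBSW via the CPU error registers.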
programmed code sequences inside the OBSW. Typical applications are “scripts” for
equipment reconfiguration including verification – e.g. switch over from nominal
sensor to redundant.
● OBCPs can be uploaded to the S/C from ground,
● and thus can be changed during flight,
● without changing the OBSW's compiled binary code – i.e. without patching the
OBSW.
For OBCP processing an execution engine inside the OBSW is required. In older
S/C missions such OBCP implementations were based on high level language
interpreters, which however are slow, and the OBCP code is not optimized with
respect to compactness.
Newer implementations (e.g. for the ESA missions GAIA and BepiColombo) prefer
bytecode interpreters, which require OBCPs to be precompiled similar to Java code.
Latest research approaches even investigate the application of real Java for OBCP
interpreters in onboard software – see [122]. The benefit of this implementation
technique is improved execution performance – or vice versa less CPU load – and
in addition more compact OBCP code on board (reduced memory requirements).
The following basic characteristics apply for OBCPs and their execution:
● An OBCP processing engine in most cases allows multiple OBCPs to be
executed in parallel.
● Since OBCPs are pretested on ground and since each command is checked
for validity before execution on board, they are rather safe w.r.t. OBCP
software bugs.
● But OBCPs definitely are not tested with the same quality and according to the
full scope of the development standards as the rest of the OBSW (cf.
chapter 11). For example they usually do not undergo an independent SW
verification.
OBCP implementation languages typically provide the following functions and
structures:
● Onboard TC submission to connected equipment.
● Simple types:
boolean, signed / unsigned integers, floating point, double precision.
● Arrays / vectors of above types.
● Arithmetic and logic operators:
+, -, *, /, %, &, |, etc.
● Execution control statements and loop constructs:
if … then … else, for … , do ..., while …
● Procedures and functions for structuring code into “subroutines” or similar.
● PUS parameter monitor control.
● Tracking functions for event occurrence.
● Event trigger functionality.
● Dedicated OBCP onboard data pool parameters which can be modified by S/C
TC, and which can be observed via S/C TM.
● OBCP management functions in the OBCP handler (cf. figure 8.17) such as:
load, start, stop, suspend etc.
These functions can be applied to running OBCPs via PUS or can be used for
management of one OBCP by another.
● Onboard TC submission to OBC connected equipment.
● Onboard equipment TM packet reception.
● TM packet variable evaluation.
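A bytecode engine of the kind used in newer missions can be sketched as a tiny stack machine; the opcode set below is invented and far simpler than real OBCP languages:

```python
def run_obcp(bytecode, datapool):
    """Execute precompiled OBCP bytecode against an onboard data pool.

    Toy opcodes: ("PUSH", v), ("LOAD", name), ("STORE", name), ("ADD",)
    and ("CMD", tc) for onboard TC submission to equipment. Returns the
    list of issued onboard TCs.
    """
    stack, issued_tcs = [], []
    for instr in bytecode:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])
        elif op == "LOAD":
            stack.append(datapool[instr[1]])
        elif op == "STORE":
            datapool[instr[1]] = stack.pop()
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "CMD":
            issued_tcs.append(instr[1])
    return issued_tcs
```

Such an OBCP can read and write data pool parameters and issue equipment TCs without any change to the compiled OBSW binary.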
It was already explained that via the Service Interface a set of variables of the
following scope is supplied to an external IF connector of the satellite:
● a set of OBSW variables,
● OBSW internal variables like timing parameters, RTOS flags etc.
● and task / thread scheduling parameters.
All these parameters are available in the OBSW-DP. The only question is to what
extent the parameters can be selected online for export to the SIF, or to what
extent the parameter preselection is precompiled into the OBSW.
Usually an OBSW has a basic scheduling frequency to which the main elements
(Applications, Handlers) run synchronously, or they run at frequencies which are
integral multiples or divisors thereof. E.g. the base frequency can be 10 Hz; an
AOCS application then runs at 10 Hz while the Thermal application runs at 1/10 Hz,
i.e. 100 times slower. The SIF handler typically is scheduled at the main cycle
frequency of the OBSW, or at a slightly lower integral divisor, to limit the induced
CPU load. The data rate to the SIF is relatively high. This allows each entire SIF
output data set to represent a consistent snapshot of the OBSW status at a point
in time.
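This harmonic scheduling scheme can be sketched as follows. The task names and divisor values are assumptions based on the example above (AOCS at the 10 Hz base rate, Thermal 100 times slower, SIF slightly below the base rate).

```python
# Minimal rate-group scheduling sketch: each task runs every n-th cycle of a
# 10 Hz base cycle. Task names and divisors are illustrative only.
BASE_HZ = 10

tasks = {           # task name -> run every n-th base cycle
    "AOCS":    1,   # 10 Hz, every base cycle
    "SIF":     2,   # 5 Hz, slightly below the base rate to limit CPU load
    "THERMAL": 100, # 0.1 Hz, i.e. 100 times slower than the base rate
}

def cycles_to_run(cycle):
    """Return the tasks scheduled in a given base cycle (0-indexed)."""
    return [name for name, div in tasks.items() if cycle % div == 0]

# In one second (10 base cycles) the AOCS app runs 10 times:
aocs_runs = sum("AOCS" in cycles_to_run(c) for c in range(10))
print(aocs_runs)  # -> 10
```

Because all rates are integral divisors of the base frequency, every task boundary coincides with a base cycle boundary, which is what makes a consistent per-cycle snapshot for the SIF possible.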
Concerning the OBSW dynamic architecture, the SIF handler is only relevant with
respect to the induced CPU load. Otherwise it is a simple, straightforward process
not interacting with other OBSW blocks.
J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012
Onboard Software Development
Onboard software development is a very complex task, which is difficult not only due
to code implementation challenges but also because it implies a lot of spacecraft
systems engineering effort beforehand. The entire OBSW development comprises
the steps:
● Software functional analysis
● Software requirements definition
● Software design
● Software implementation and coding
● Software verification and testing
Each of these topics is worth being addressed separately and is treated in an
individual chapter below.
Chapter 2 already sketched out how the S/C design and the corresponding OBC and
OBSW IF design evolve together (cf. tables 2.2 and 2.3). During S/C development in
Phase B a detailed definition of all onboard functions has to be worked out, and it
must be elaborated which functions will be implemented in SW and which ones in
HW. A so-called "Function Tree" for the OBSW has to be established.
During OBSW functional analysis it thus has to be considered
● which functions are required within the S/C OBSW and
● how these SW functions are allocated to the diverse S/C operational modes.
In the first part of Phase C, when the detailed onboard equipment type and supplier
selection is made and when the functional and interface documentation of all this
equipment is provided by the suppliers, the scope of this OBSW Function Tree has to
be refined with all the details on equipment control protocols, necessary equipment
modes switching and equipment FDIR functions to be implemented into the core
OBC's onboard software. At the beginning of the OBSW design activities the
Function Tree then finally comprises all
● commandable functions of the OBSW kernel,
● functions for processing TCs,
● functions for generating TM,
● controller application functions (AOCS, power, thermal, payload),
● equipment IF handler functions,
● surveillance / control relevant functions,
● error / failure diagnostic / failure handling / recovery functions (FDIR).
Furthermore, at the S/C subsystem level it has to be considered that within each S/C
operational mode the subsystems (e.g. payload) themselves can be operated in
different subsystem modes and may require different active functions for their control.
Figure 10.1 below depicts an excerpt from such a Function Tree as an example.
In addition, the Function Tree has to reflect all control and data handling interaction
functionality between subsystem functions and the S/C command and control – such
as management of boot sequences for intelligent payloads, time / position / velocity
synchronization functions of AOCS algorithms with GPS / Galileo / GLONASS
receivers on board, etc.
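The hierarchical character of a Function Tree can be sketched as a simple nested data structure. The entries below are hypothetical examples, not the actual tree from figure 10.1:

```python
# Illustrative Function Tree excerpt as a nested dictionary.
# All function names are invented for illustration.
function_tree = {
    "OBSW kernel": ["TC processing", "TM generation"],
    "Controller applications": {
        "AOCS": ["attitude determination", "attitude control"],
        "Thermal": ["heater control"],
    },
    "FDIR": ["failure diagnostics", "failure recovery"],
}

def leaf_functions(node):
    """Flatten the tree into the list of leaf functions."""
    if isinstance(node, dict):
        return [f for child in node.values() for f in leaf_functions(child)]
    return list(node)

print(len(leaf_functions(function_tree)))  # -> 7
```

During Phase C refinement, equipment control protocols and FDIR functions would be added as further branches under the respective subsystem nodes.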
OBSW Development Process and Standards
The goal of software development processes and software coding and development
standards is to achieve a sophisticated software design quality with respect to
maintainability, and a high SW operational reliability under all nominal and system
failure conditions. For satellite OBSW it additionally has to be considered that the
satellite design typically is only single-failure tolerant and that the satellite cannot be
contacted again without running OBSW – except for the limited scope controllable via
High Priority Commands.
Such high software quality can be achieved by
● guidelines for good design and coding practice,
● requirements towards a thorough test concept comprising unit, integration and
system level tests,
● extensive testing enforcing e.g. full branch coverage and node coverage,
● and in addition an independent software verification and validation (ISVV)
by an external partner who has not been involved in the SW implementation.
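The branch-coverage requirement means that every outcome of every decision in the code must be exercised by at least one test. A minimal generic illustration (not OBSW code):

```python
# Full branch coverage requires tests for BOTH outcomes of each decision.
def clamp(value, lo, hi):
    if value < lo:     # branch 1: taken / not taken
        return lo
    if value > hi:     # branch 2: taken / not taken
        return hi
    return value

# Unit tests together covering every branch outcome:
assert clamp(-1, 0, 10) == 0    # branch 1 taken
assert clamp(15, 0, 10) == 10   # branch 1 not taken, branch 2 taken
assert clamp(5, 0, 10) == 5     # both branches not taken
```

Note that statement (node) coverage alone would be satisfied by the first two tests; the third test is needed to cover the "not taken" outcomes of both branches.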
To guarantee consistent and high-quality SW engineering, SW technical design, SW
implementation, SW test and verification, SW documentation and maintenance,
dedicated software development standards apply for S/C OBSW development.
Such software standards prescribe diverse development guidelines for software in a
space project – here to be interpreted for the OBSW, respectively for all software
elements in the OBC or in other S/C equipment. Prescribed in such standards are
usually the development approach, the development phases, the review milestones
and the documentation to be delivered for each of the milestones. Several software
standard families exist:

Figure 11.1: Family of ECSS standards. © ECSS

ECSS Standards:
For European space projects there exists an entire suite of standards for spacecraft
development – not only dedicated to OBSW development – the so-called ECSS
standards. These standards are elaborated and published by the European
Cooperation for Space Standardization (ECSS). This commission includes
members from the European Space Agency, diverse national agencies and industrial
partners. Relevant for software development and thus for OBSW are especially the
standards:
● ECSS-E-ST-40 Software engineering, and
● ECSS-Q-ST-80 Software product assurance.
Please also refer to figure 11.1. The ECSS standards are a family of cross-
referencing documents which is very exhaustive but also sometimes unhandy to
read. The completely revised ECSS standard set, available since the end of 2010,
has updated all major parts and has considerably improved the precision of the
standards. More details on the ECSS standards are included in chapter 11.3 below,
which explains in more detail the content and the intention of such a SW engineering
standard using the example of the ECSS-E-ST-40C.
Figure 11.2: Galileo Software Standard as a closed single book standard. © ESNIS
The Galileo Software Standard comprises a common requirements set for all
software development, integration and test phases in the frame of the Galileo
navigation system program. Furthermore operations and maintenance topics are
treated as well as the full scope of software product assurance topics. GSWS is a
common standard for the:
● Space Segment, (SS), which encompasses all elements on board the Galileo
navigation satellites.
● Ground Control Segment, (GCS), comprising all components inside the
ground stations for control and housekeeping of the 30 satellites.
● Ground Mission Segment, (GMS), comprising all components inside the
operator stations by which the Galileo payloads of the satellites are operated.
This includes signal generation, security codes handling, cyclic code updates,
leap time corrections of the atomic clocks aboard etc.
● Test User Segment, (TUS), comprising all elements for test of Galileo
receivers and car navigation systems under realistic conditions before full in-
orbit availability of the spacecraft.
Further reading and Internet pages concerning software development standards are
provided in the corresponding subsection of this book's reference annex.
For the stepwise approach, the required review milestones, the required
documentation, the document structures and content, and the product assurance,
each software standard has its own "Engineering Requirements". Some software
standards replace the IRR by a "Test Readiness Review" (TRR).
For the S/C system engineer the problem always exists that such software standards
are written by authors having in mind only the pure OBSW and the accordingly
relevant topics. General hardware / software integration problems, electronics and
electrics topics and S/C design problems also affect the OBSW; they are mostly not
in the focus of these standards and have to be managed by an OBSW system
engineer "translating" consequences from system design and equipment unit design
into OBSW functional requirements. The same applies for any design changes that
arise throughout the entire S/C development.
As already indicated the SW standards also focus on the SW development process
with its milestones, documentation and the like. The ECSS comprises the following
main sections – please also refer to figure 11.4:
● 5.2.3 Software related system verification
● 5.2.5 System requirement review
● 5.3.2 Software life cycle management
● 5.3.3 Joint review process
● 5.3.4 Software project review
● 5.3.5 Software technical reviews
● 5.3.6 Review phasing description
● 5.3.7 Interface management description
● 5.3.8 Technical budget and margin management
● 5.9.3 Operational testing
As can already be identified from the topics treated in the ECSS-E-ST-40 above,
the requirements in such SW standards are largely focused on the production and
This example depicts the requirement number, title, text and expected output
documents at dedicated review milestones. At the start of a project the developer
must provide a compliance matrix stating to what extent one intends to be fully,
partially or non-compliant with all of these engineering requirements. All deviations
must be justified.
At project end one must provide a compliance matrix stating the achieved compliance
and which documents, review minutes, product assurance reports etc. prove the
compliance. An example of such an engineering requirement is given below.
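Such a compliance matrix is essentially a table keyed by requirement ID. The sketch below illustrates the structure; requirement IDs, statuses and justifications are invented for illustration:

```python
# Sketch of a compliance matrix as a data structure.
# Requirement IDs, statuses and justifications are invented examples.
compliance_matrix = {
    "5.3.2-a": {"status": "compliant",
                "justification": None},
    "5.3.8-b": {"status": "partially compliant",
                "justification": "margins tracked at system level only"},
    "5.9.3-c": {"status": "non-compliant",
                "justification": "operational testing delegated to operator"},
}

def deviations(matrix):
    """Return all entries that deviate from full compliance."""
    return {rid: e for rid, e in matrix.items() if e["status"] != "compliant"}

# Every deviation must carry a justification:
assert all(e["justification"] for e in deviations(compliance_matrix).values())
```

At project end the same structure would additionally reference the documents, review minutes and product assurance reports proving each claimed compliance.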
Part IV
Satellite Operations
Mission Types and Operations Goals
S/C operations is the domain of controlling a S/C from ground "to perform its work" –
in orbit, or for deep space probes during target approaches – under nominal and
failure recovery conditions respectively. To enable this, besides a suitable ground
infrastructure, a suitable operations concept has to be engineered and the according
functionality has to be designed and implemented on board.
The ECSS-E-ST-70 and its subparts detail – concerning the above topics – which
tasks of operations engineering are to be performed in which of the S/C development
phases.
● Operations engineering to provide a S/C control policy and its implementation
(via TC/TM definitions etc.), associated to engineering requirements and
constraints.
● Spacecraft manufacturer support to the "Launch and Early Orbit Phase" (LEOP).
These tasks of operations engineering are closely integrated into the spacecraft and
mission analysis and design process as sketched out in chapter 2. They will be
treated in more detail in chapter 13. The explanations however will not be allocated
closely to the S/C development phases, but will instead be structured into
engineering tasks under the responsibility of the S/C manufacturer, tasks later under
the responsibility of the operations center, and finally the launch and LEOP activities
of both partners together.
Depending on the basic type of S/C mission the operations goals differ slightly and so
do the applied operations infrastructures. A short classification is given below.
Figure 12.3: Mars Express and its planet approach trajectory. © ESA

The Spacecraft Operability Concept
The spacecraft operability concept, which will be explained in this chapter, covers
diverse engineering topics such as the S/C modes, system autonomy and the like.
They already have to be considered during S/C system conceptualization and have
to be refined subsequently over the development phases – as was already
expressed. And obviously these topics have to be treated during OBSW
requirements definition, OBSW design and testing. However, since these topics do
not become fully visible before the start of S/C operations tests, and since they are
only partly covered by OBC hardware or OBSW, they are treated in this part IV of
the book in a consolidated manner. The main operability concepts to be worked out
and defined for S/C operations are:
● The spacecraft commandability
● The spacecraft configuration handling
● The system operability concept
● The PUS tailoring
● The onboard process IDs definition
● The spacecraft mode concept
● The downlink concept
● The mission timelines
● Spacecraft operational sequences
● Spacecraft authentication
● The spacecraft observability from ground
● The spacecraft science data management
● The onboard synchronization functions and science and housekeeping data
timestamping – called “datation”
● The data downlink
● The redundancy concept
● The satellite onboard autonomy
● The spacecraft FDIR concept
● The satellite's operational constraints
● Flight Procedures and their testing
Two important documents related to these topics are generated during the S/C
engineering from phase A to D. One is the "Spacecraft Operations Concept
Document" (SOCD), developed during phase B, and the other is the "Space
Segment User Manual" (SSUM), which is also called the "Flight Operations Manual"
(FOM). This is a multi-volume document which, in its final issue, is provided by the
S/C manufacturer to the S/C operations center crew. It comprises sections on:
● Mission phases and purposes
● System design summary
● System-level autonomy
● System-level configurations
● System-level budgets
● Satellite or ground station interface specifications
● System level operations
● System-level modes
● Mission timelines
● System-level failure analysis
● Platform subsystem descriptions
● Payload definitions
The key thing to understand is that the user manual is not produced as a final
sum-up at the end of the S/C AIT phase, but is already prepared in a first issue
during the engineering phase C. Already at CDR a first issue (some volumes may
still be drafts) is and needs to be available. The AIT team – when starting system
AIT activities after CDR – depends heavily on similar information as the ground
control team later, e.g. concerning S/C and subsystem operation. Therefore the AIT
team already uses this SSUM extensively during AIT – and contributes to SSUM
improvement.
The TC routing from receiver to equipment via MAP-IDs, APIDs and the CPDU has
already been treated in chapter 8 – particularly in 8.7 – and according visualizations
are covered by figures 8.10 and 8.14; it shall therefore not be repeated here.
The scope of commandability covers both the setting of parameters on board as well
as the startup or shutdown of onboard functions as they are defined during OBSW
design in the function tree – please refer back to figure 10.3. The engineering task in
this field is to design the S/C commanding in such a way that, whatever single failure
arises in whatever unit, the system can still be recovered.
The commandability concept is completed by definition of commands to the satellite
Authentication Unit. The authentication topic is treated in chapter 13.14.
The first part specifies the nominal equipment to be used on board. If not advised
otherwise (e.g. via contradicting health info or via a ground command), the PCDU
and OBSW will always power and boot / initialize the onboard equipment listed as
"Nominal" – in most cases all A sides of redundant units.
The Safe Mode vector part specifies which units are to be used in case of S/C fall
back to Safe Mode – in most cases all B sides of redundant units.
The third part contains the vector of units identified as healthy. In case a unit is
flagged as non-healthy, any attempt to activate it (e.g. due to a mode change) would
trigger a refusal and an FDIR case. This is to prevent the S/C being commanded to
modes (e.g. by a loaded timeline) while a required sensor or payload was ruled out
as non-healthy before. The health vector content can only be reset / changed from
ground.
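The three vector parts described above can be sketched as a simple data structure. Unit names, sides and health states are invented for illustration:

```python
# Sketch of the configuration part of a spacecraft configuration vector.
# Unit names, sides and health flags are illustrative only.
scv = {
    "nominal":   {"STR": "A", "GPS": "A", "OBC": "A"},   # units to boot by default
    "safe_mode": {"STR": "B", "GPS": "B", "OBC": "B"},   # Safe Mode fall-back
    "health":    {"STR-A": True, "STR-B": True,
                  "GPS-A": False, "GPS-B": True},        # changeable from ground
}

def activate(unit, side):
    """Refuse activation of a unit flagged non-healthy (would raise an FDIR case)."""
    if not scv["health"].get(f"{unit}-{side}", False):
        return "REFUSED"   # triggers FDIR handling instead of activation
    return "ACTIVATED"

assert activate("GPS", "A") == "REFUSED"    # GPS side A was flagged non-healthy
assert activate("GPS", "B") == "ACTIVATED"
```

A mode change that requires the non-healthy GPS-A would thus be blocked until ground resets the health flag.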
Here the first part lists which equipment is powered, which is important status
information for TM. Furthermore, when the OBSW intends to take an equipment (e.g.
an instrument) into operation, it first must advise the PCDU to supply power to the
unit, then it must check power availability, and then it can start TM acquisition from
the unit and start controlling the equipment.
This leads over to the second part of the vector. It lists from which units cyclic
housekeeping TM acquisition is running – which does not yet mean they all are
operational. E.g. a star tracker may be powered and booted, TM is cyclically
acquired, but it is not yet operationally used by the AOCS control algorithm.
And consequently the third part of the S/C status in the SCV lists the equipment
which currently is in use operationally.
The SCV – particularly the status part – is continuously updated during operations
and the configuration part is directly affected in case an equipment was identified to
have failed. An “equipment” in the context of the SCV can also be a data bus.
An important aspect still has to be mentioned concerning the SCV with respect to its
updates. The updating of entries always has to be performed by a safeguarded SW
algorithm, to avoid that corrupted or contradictory entries are included in the SCV in
case the OBSW fails just at the time of an SCV update. E.g. for each entry a write
flag is set before writing the update and is deleted again after a successful write. If
the OBSW fails during the write, the write flag remains set; after reboot the write
flag for the SCV entry is still visible, indicating to the rebooted SW that this entry is
presumably invalid and has to be reverified by additional measures (e.g. querying
the power status via TM from the PCDU).
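The write-flag protocol can be illustrated as follows, using a simplified model of a safeguard memory entry (the entry value is an invented example):

```python
# Simplified model of safeguarded SCV entry updates: a write flag is set
# before the update and cleared after a successful write. After a reboot,
# a still-set flag marks the entry as presumably invalid.
scv_entry = {"value": "STR-A", "write_flag": False}

def safeguarded_write(entry, new_value, fail_during_write=False):
    entry["write_flag"] = True          # 1. mark update in progress
    if fail_during_write:
        return                          # OBSW crash: flag stays set
    entry["value"] = new_value          # 2. perform the update
    entry["write_flag"] = False         # 3. clear flag after successful write

def entry_valid_after_reboot(entry):
    """A set write flag means the entry must be reverified (e.g. via PCDU TM)."""
    return not entry["write_flag"]

safeguarded_write(scv_entry, "STR-B")
assert entry_valid_after_reboot(scv_entry) and scv_entry["value"] == "STR-B"

safeguarded_write(scv_entry, "STR-A", fail_during_write=True)
assert not entry_valid_after_reboot(scv_entry)   # flag still set -> reverify
```

The design choice is the same as in journaling file systems: never let a crash leave an update in an undetectably half-written state.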
Since the management of such safeguard memory parameters and of the SCV is
not part of the standard PUS services, dedicated private services have to be defined
for these tasks, in most cases during the S/C engineering phase. This leads directly
over to the next topic in the context of operability – PUS tailoring.
The Packet Utilization Standard, its services, the services defined by the standard
and its openness to mission-dependent tailoring have already been mentioned in
chapter 8.6. However, in practically each space mission subservices of the standard
set can be identified which are not needed, and other services and subservices can
be identified which aren't covered in the standard repertoire. Such additionally
requested services can be introduced by the spacecraft supplier as mission-specific
ones in the numbering scheme between 128 and 255.
PUS tailoring is an essential task throughout the entire spacecraft development and it
covers multiple aspects:
● On the one hand the services from the standard set have to be selected and
tailored for the spacecraft platform control.
● In case where additional services are identified for platform control, they have
to be defined on top. Examples are services for the SCV vector management
as explained in the previous chapter, or a function monitoring service. While
the standard PUS includes service 8 for function commanding, it does not by
default include one for monitoring since function implementations on board
can be too different between missions.
● A further aspect of PUS tailoring comes into play during S/C development
phase C when the onboard equipment is selected. It was already mentioned
that modern high-end platform equipment such as GPS / Galileo receivers,
star trackers and the like are themselves commandable via PUS TC and TM.
When selecting such equipment from a supplier, the S/C platform overall PUS
must comprise at least all those services from the equipment which are
intended to be used. By this effect the overall PUS service set for the
spacecraft – which has to be reflected in the ground segment's satellite
TC / TM database – becomes a superset of the original platform services plus
the deltas induced by the selected equipment.
● A further driver for services can become the combined handling of payload
science data plus platform geolocation and / or attitude data via the science
data downlink (X-band or the like). Such additional platform information – often
called ancillary or complementary data – is downlinked together with the
science data via dedicated service packets to later ease mission product
generation on ground.
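The numbering convention can be sketched as a small service registry. Service 8 (function management) is from the standard set as mentioned above; the mission-specific entries below are invented examples:

```python
# Sketch of a PUS service registry: service types below 128 come from the
# standard set, 128-255 are open for mission-specific services.
# The mission-specific service names here are invented examples.
MISSION_SPECIFIC_RANGE = range(128, 256)

services = {
    8:   "function management (standard PUS)",
    128: "SCV vector management (mission specific)",
    129: "function monitoring (mission specific)",
}

def is_mission_specific(service_type):
    return service_type in MISSION_SPECIFIC_RANGE

assert not is_mission_specific(8)
assert is_mission_specific(128) and is_mission_specific(255)
```

In practice such a registry is the seed of the ground segment's satellite TC / TM database, which must also absorb the service deltas induced by PUS-capable equipment.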
From this it becomes obvious that the PUS command set which the flight operator
later finds in the FOM is only the final outcome of a detailed engineering process
covering the services, the TC and TM packets and the variables and parameters
included in all the service and subservice packets.
For addressing the diverse onboard software processes – either inside the OBSW or
in other intelligent units – unique process IDs have to be allocated for the entire
spacecraft to allow a proper routing of uplinked TCs and to allow for each TM packet
the identification of the submitting unit and software process. For placement of the
Process IDs in the TC and TM packet headers the reader is referred back to figures
8.10 and 8.11. A Process ID allocation table for a fictional spacecraft is depicted in
the table below.
● Ground
● Others
● 7F Idle Packets
The OBSW internal task scheduling with allocation of dedicated time slices for the
diverse application processes and the data bus channel acquisition scheduling
already have been treated in the chapters 9.1 and 9.2. The technical implementation
background can be found there.
The primary driver for onboard task scheduling is usually to achieve the appropriate
controller performance. E.g. the frequency for AOCS application process scheduling
and accordingly the frequency for AOCS sensor data acquisitions and AOCS actuator
control is primarily driven by the required AOCS control precision.
Indirectly these settings however also have implications on the operations. For
example they automatically prescribe the maximum application TM packet generation
rate and thus the resolution of TM that is available for FDIR debugging cases.
Therefore the task scheduling and channel acquisition are usually designed during
early phase C of spacecraft development and are later revisited during verification of
the FDIR operability concept – which is treated in detail later in chapter 13.16.
The first topic for engineering of the spacecraft mode concept is the freezing of the
spacecraft's operational phases to which later the spacecraft modes and subsystem
operational modes will be allocated. The operational phases for an Earth observation
satellite e.g. are the:
Pre-launch Phase:
This phase covers the final launch preparation activities. It still belongs to the
satellite test program. The principal activities comprise:
● The activation of the S/C with external power supply
● The final check-out for flight
● Switching the power subsystem to satellite internal power supply via battery
● Configuration of the S/C and the instruments into launch configuration
Commissioning Phase:
This phase – with a duration of approximately 10-15 days for the platform and
several months for the payloads – is targeted
● towards verification of the proper platform performance (e.g. pointing and
geolocation accuracies),
● to testing of all platform modes – particularly of AOCS and
● towards taking all payloads into operation,
◊ as well as performing their in-flight calibration
◊ and performance verification
◊ together with the payload ground segment.
During the diverse operational phases the S/C can be in different system operational
modes. Obviously not all system modes are relevant for all operational phases. In
addition during one single satellite system mode the S/C subsystems may be
switched to diverse subsystem modes. One of the basic decisions during the
engineering phase concerning the operations concept is the selection between a
closed or open S/C mode concept.
The mode concept starts with defined modes of subsystems. AOCS modes for
example can include
● an AOCS Safe Mode (only ES, SS and MTQs active) and
● one or more nominal modes, such as a fine pointing mode with the above-
mentioned components active plus GPS, STR, RWLs etc.
Please also refer to figure 13.1 depicting a S/C and an AOCS mode diagram. Further
submodes can be defined depending on the use of the nominal or redundant
equipment chain. Similar modes are definable for each subsystem.
In this example (simplified from CryoSat 1) the following dependencies between S/C
and AOCS modes exist:
● In S/C Off & pre-launch mode, AOCS mode is OFF.
● In S/C launch mode AOCS is in standby mode.
● In S/C separation mode AOCS is in rate damping mode.
● In S/C nominal mode AOCS can be in coarse or fine pointing mode.
● In S/C orbit maneuver AOCS is in orbit control mode.
● For S/C Safe Mode the redundancy setting of key equipment are listed in the
Safe Mode box.
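The S/C-to-AOCS mode dependencies listed above can be expressed as a simple mapping, here following the simplified CryoSat 1 example (mode identifiers are paraphrased from the list above):

```python
# S/C system mode -> permitted AOCS mode(s), following the (simplified
# CryoSat 1) example in the text.
mode_map = {
    "OFF/PRE-LAUNCH": {"OFF"},
    "LAUNCH":         {"STANDBY"},
    "SEPARATION":     {"RATE_DAMPING"},
    "NOMINAL":        {"COARSE_POINTING", "FINE_POINTING"},
    "ORBIT_MANEUVER": {"ORBIT_CONTROL"},
}

def aocs_mode_allowed(sc_mode, aocs_mode):
    """In a closed mode concept, only the mapped AOCS modes are reachable."""
    return aocs_mode in mode_map.get(sc_mode, set())

assert aocs_mode_allowed("NOMINAL", "FINE_POINTING")
assert not aocs_mode_allowed("LAUNCH", "ORBIT_CONTROL")
```

Whether such a table is strictly enforced on board (closed mode concept) or merely serves as an operational guideline (open mode concept) is exactly the design decision discussed next.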
In a S/C featuring a so-called "closed mode concept", subsystem modes are
constrained to the main S/C operational modes. E.g. when commanding the S/C to
Safe Mode, AOCS and all other subsystems will automatically also transit to their
subsystem Safe Mode, and e.g. for AOCS equipment it would not be possible to
manually activate and command GPS or STR from ground. S/C with a "closed
mode concept" need a dedicated FDIR / Safe Mode for servicing, where "everything"
is commandable, but which requires authenticated access.
By contrast, in a S/C implementing a so-called "open mode concept" the entire S/C
is also commandable to an overall target mode via a single TC – e.g. the S/C
including all subsystems to Safe Mode – however in each mode ground retains full
controllability of all S/C equipment without bypassing main system control. In
addition, redundant versus nominal equipment activation is still freely selectable.
Figure 13.1: S/C operational modes vs. subsystem modes. © Astrium GmbH
The selection of a mode concept in most cases depends on the mission type and the
the decisions of platform operators in the ground control center:
● While the open mode concept gives more flexibility, it requires more caution,
since there are fewer mechanisms in the system control which block switching
to non-optimal configurations or which block reconfiguration of essential
equipment during payload operations.
● Closed mode concepts are better suited for missions with a higher level of
onboard autonomy, since the autonomy master control typically is
implemented as some type of state machine or rule chainer and it will not be
able to handle the high number of free switching permutations which an open
mode concept allows.
◊ As an example, for Galileo the navigation payloads are operated via a payload
OPS center and the platform by experts in a dedicated platform FOC.
Therefore an open mode concept was chosen.
◊ Classic Earth observation satellites like ESA Sentinel 2 (cf. figure 12.1)
also rely on an open mode concept. Sentinel 2 in addition offers a full
command / control symmetry – i.e. the S/C could also be booted on the
launch pad in a configuration applying all equipment and buses on the
redundant side or in any nominal / redundant mix. Systems like this are
very flexible during FDIR operations but require a significant effort during
ground testing.
◊ Closed mode concepts are often chosen for military satellites due to their
higher level of onboard autonomy, which partly processes mission product
requests fully automatically.
If the S/C is launched with a booted OBC – which is the case for most commercial and
agency missions – the initial transfer of the satellite from OFF to pre-launch mode
and then to launch mode is performed via AIT control procedures. During launch the
S/C in launch mode is passive, which means:
● the OBC tracks / records S/C power and thermal conditions
● the OBC tracks / records position and attitude as far as possible
● but obviously until separation from launcher all actuation activities of the
AOCS are disabled.
This state sometimes is also called “standby mode” instead of “launch mode”.
After release from the launcher the satellite has to deploy antennas and – if not
body-mounted – the solar panels; it has to stabilize its attitude, reduce rotational
rates and acquire the initial attitude with the solar array pointing to the Sun,
antennas pointing to Earth etc. During this “initial acquisition mode” or “rate damping
mode” further orbit correction maneuvers may be required. After successful
finalization of the initial acquisition mode the S/C is ready to be made operational,
i.e. for its operational modes, which highly depend on the mission. In figure 13.1
above, “coarse pointing” and “fine pointing mode” are cited as nominal operational
modes. In addition a nominal mode for orbit correction maneuvers is to be foreseen.
For all modes after launcher separation the FDIR functions described in the OBSW
section will be used for failure handling and, depending on the problem, can either
keep the S/C in operational condition or will trigger a dedicated “Safe Mode” or other
failure modes.
Already in the SOCD a first description of each subsystem mode is elaborated, which
is then finalized step by step up to the satellite SSUM. Such subsystem mode descriptions comprise:
● Which subsystem control loops are executed
● Which parameters and states are initialized
● Which measurements can be processed – for AOCS e.g. attitude and position
sensors
● Which actuators can be commanded
● Which propagation algorithms are running – such as AOCS position and
attitude forward propagation algorithms
● And all these including subsystem operational constraints – such as for AOCS
the maximum duration permitted for a rate damping.
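Such a subsystem mode description could be represented during operations engineering as a structured record. The field names below are hypothetical and merely mirror the checklist above; the rate damping values are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SubsystemModeDescription:
    """What a subsystem mode comprises, per the SOCD / SSUM checklist above."""
    name: str
    control_loops: list[str] = field(default_factory=list)       # executed loops
    initialized_params: list[str] = field(default_factory=list)  # params / states set
    usable_sensors: list[str] = field(default_factory=list)      # processable inputs
    commandable_actuators: list[str] = field(default_factory=list)
    propagation_algorithms: list[str] = field(default_factory=list)
    constraints: dict[str, float] = field(default_factory=dict)  # e.g. max durations

rate_damping = SubsystemModeDescription(
    name="RATE_DAMPING",
    control_loops=["rate_damping_controller"],
    usable_sensors=["ES", "SS", "gyro"],
    commandable_actuators=["MTQ1", "MTQ2", "MTQ3"],
    constraints={"max_duration_s": 5400.0},  # hypothetical duration limit
)
print(rate_damping.constraints["max_duration_s"])  # 5400.0
```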
Finally during operations engineering the allocation of equipment states versus the
diverse satellite modes shall be worked out. Here the reader is referred back to
figure 2.7. As chapter 2.3 explained, such tables are already worked out during S/C
engineering phase B and they go down to equipment realization level (e.g. MTQ1,
MTQ2, MTQ3), not only to type level (MTQ).
The mission timelines which will be treated subsequently here are the
● LEOP timeline,
● Commissioning timeline and
● Nominal operations timelines.
The definition of these timelines – particularly of the LEOP timeline already in S/C
development phase B – requires the identification of the envisaged launch vehicle,
since especially the LEOP timeline is driven by the launcher characteristics to a very
large extent.
Timelines for deep space probe flight phases are highly dependent on the mission
characteristics, on the celestial body constellations, resulting launch windows,
swing-by maneuver constraints and the like, and therefore exceed the scope of this
introductory book.
The next steps for S/C operational concept definition are the selection of a launch
vehicle, the launch site and the launch setup (single launch, double launch, piggy-back
launch, trajectory injection with S/C coupled kick-stage) and, resulting from these, the
launch sequence with separation as well as the launch timeline with ground station
visibilities and contact times.
The satellite operations concept has to be closely aligned with this Launch and
Early Orbit Phase, (LEOP), of which the first parts largely are driven by the launch
vehicle itself. This comprises the exact definition of the time sequence, orbit
position and flight vector definitions for the
● launch / lift-off itself,
● ascent phase,
● separation of S/C from the Launch Vehicle,
● potential OBSW auto boot and S-band receiver auto-activation in case of
cold start conditions (in many cases required for piggy-back launched S/C),
● OBC / OBSW properly taking over S/C control,
● auto activation of the S-band transmitter,
● start of AOCS control for rate damping and attitude acquisition respectively,
● auto start of deployments (antennas, solar array),
● execution of potential automated attitude correction maneuvers,
● ground contact acquisition at first ground station visibility,
● and the verification of orbit correctness and command of additional orbit
correction maneuvers from ground respectively.
During the LEOP flight phase these phase goals are then monitored by the FOC,
based on the accordingly defined telemetry packets. For a qualitative
representation of a launch sequence please refer back to figure 2.10. The figure
below depicts an example with timings, altitude specifications etc.
(Figure 13.2 shows an example launch sequence from lift-off (t = 0) through maximum
velocity head, separation of stages I and II, HF door jettisoning and upper stage /
cargo element operations up to S/C separation at 500 km altitude. Each event is
annotated with the time t from the moment of launch, the altitude h above the common
Earth ellipsoid, the relative velocity V, the velocity head q, the trajectory
inclination angle and the perigee / apogee altitudes Нп / На.)
This figure 13.2 depicts the time sequence of the ascent up to shortly after S/C
separation from the launcher. It is furthermore essential to plan the S/C ground
station visibilities for the nominal LEOP ground stations and for additional stations
which could reach the S/C in emergency cases up to finalization of S/C attitude and
orbit acquisition. Such ground station visibility plans are usually depicted in strip
chart form as shown below and are worked out already during the S/C engineering
phase. They cover the time window from launch up to the end of the LEOP phase,
i.e. several days.
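A strip chart such as figure 13.3 is generated by reducing the predicted elevation profile per station to contact intervals above a minimum elevation mask. The following toy reduction assumes elevation samples are already available from orbit propagation; all values are invented:

```python
def visibility_windows(times, elevations_deg, min_elev_deg=5.0):
    """Extract (start, end) contact windows where elevation exceeds the mask."""
    windows, start = [], None
    for t, el in zip(times, elevations_deg):
        if el >= min_elev_deg and start is None:
            start = t                      # station comes into view
        elif el < min_elev_deg and start is not None:
            windows.append((start, t))     # station drops below the mask
            start = None
    if start is not None:                  # still visible at end of data
        windows.append((start, times[-1]))
    return windows

# toy data: hours from launch vs. elevation over one station
t = [0, 1, 2, 3, 4, 5, 6]
el = [-10, 2, 12, 30, 8, 1, -5]
print(visibility_windows(t, el))  # [(2, 5)]
```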
(Strip chart showing the visibility windows over the first 24 hours from launch for
the stations Carnicobar, Cuiaba, Goldstone, Kiruna, Kourou, Mauritius, Okinawa,
Perth, Poker Flats, Redu, Santiago, Svalbard, Villafranca, Wallops and Troll.)
Figure 13.3: Ground station visibility events during the first orbits. © ESA/ESOC
Preferably the S/C separation from launcher is performed within the visibility range of
a ground station. In this case the OBC / OBSW can activate the S-band transmitter
directly after launcher separation and ground can start acquiring telemetry and can
observe the S/C during processing of its initial procedures.
The commissioning phase requires the entire ground segment to be involved in this
characterization process, since on the one hand the FOC has to participate in S/C
position measurements and on the other hand the PGS has to provide the capability
to deliver user data products from payload instrument science data downlinked via
X-band or other links. The main tasks to be performed during this phase are:
● To take all S/C platform equipment into operation – e.g. to activate AOCS
sensors and actuators which have not yet been used by the end of LEOP or
during coarse pointing mode, such as GPS, star trackers etc.
● To verify that the AOCS performance suits the needs of mission product
generation.
● To measure orbit positioning accuracies from ground via lasers and onboard
retroreflectors and to cross verify against AOCS reported position data.
● To take payloads into operations and to acquire first raw images for getting
operational profiles such as onboard thermal data sets to be used for
quantitative calibration.
● To calibrate payloads by flyover of fully characterized targets to quantitatively
calibrate onboard sensors.
● To support geophysical verification of science data products.
As already mentioned above, this phase is quite extensive and entire payload
characterization usually takes several months.
During the normal operations phase timelines are uplinked cyclically to the S/C for
execution by the onboard scheduler inside the OBSW – cf. figure 8.17. These
sequences of time tagged commands cover the following scope of activities:
● Payload commanding to observe the desired ground targets with the
requested payload instrument settings. This can imply both time tagged
payload operation as well as position tagged payload operation.
● If required corresponding AOCS maneuvers for the observation are included in
the timeline such as slew maneuvers, roll-over maneuvers for bidirectional
reflectance measurements of a target etc.
● X-band downlinks will also be scheduled according to onboard science
data storage resources and the downlink bandwidths of the available stations.
● The S-band transmitter is usually commanded off while out of reach of FOC
ground stations.
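The onboard scheduler executing such an uplinked timeline essentially releases time-tagged commands once their release time is reached. The sketch below illustrates the principle with hypothetical command names; it is not the PUS time-based scheduling service itself:

```python
import heapq

class TimeTagScheduler:
    """Minimal time-tagged command queue in the spirit of an onboard scheduler."""

    def __init__(self):
        self._queue = []  # heap of (release_time, seq, command)
        self._seq = 0     # tie-breaker preserving uplink order at equal times

    def uplink(self, release_time: float, command: str) -> None:
        heapq.heappush(self._queue, (release_time, self._seq, command))
        self._seq += 1

    def due(self, obt: float) -> list[str]:
        """Release all commands whose time tag has been reached at onboard time."""
        released = []
        while self._queue and self._queue[0][0] <= obt:
            released.append(heapq.heappop(self._queue)[2])
        return released

sched = TimeTagScheduler()
sched.uplink(100.0, "PAYLOAD_CONFIG")
sched.uplink(120.0, "START_OBSERVATION")
sched.uplink(300.0, "XBAND_DOWNLINK_ON")
print(sched.due(150.0))  # ['PAYLOAD_CONFIG', 'START_OBSERVATION']
```

Position-tagged commanding would follow the same pattern with orbit position instead of onboard time as the release criterion.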
The payload operations timelines vary depending on whether the measurement
execution has to be performed automatically without X-band ground station visibility,
with mission product data stored on board, or whether science data can be
downlinked directly during payload operations due to an X-band station visibility or
the visibility of a relay satellite – presuming the S/C is equipped with relay
communication equipment (typically an onboard laser communication terminal).
Orbit correction maneuvers in principle can also be commanded via uplinked timeline
and can be executed automatically, but in most cases it is preferred to perform these
under direct ground station visibility conditions. The same applies for RWL unloading
operations etc.
Generation of these operational timelines is the daily work of the Flight Operations
Center once the satellite is in operation. They have to reflect
● the user requests for mission products (potentially competing requests),
● the available onboard resources (particularly mass memory and power),
● the orbit conditions which drive Sun and eclipse phase constraints,
● cyclically required orbit correction maneuvers and
● cyclically necessary system re-calibrations.
During S/C engineering and particularly during OBSW design and the tailoring of all
the TC and TM sets and the PUS services the entire scope of functions and features
has to be considered for performing these tasks.
For reasons of simplification an important operational concept has so far been left
aside deliberately – both during the OBSW discussion and during the discussion of
operational phases. It concerns the operational sequences, i.e. the sequential
functions performed automatically by the spacecraft in the relevant flight phases. The
most common autosequences, which can be found in all Earth observation satellites,
are the:
● System initialization sequence
● LEOP Autosequence
● System reconfiguration sequence (OBC reboot)
● Unit switch-on sequences
● Unit reconfiguration sequences
● Nominal instrument operations sequence
● Transition to Safe Mode sequence
● Recovery sequence from Safe Mode to a nominal operations mode
● Orbit control maneuver, (OCM), sequence.
It depends on the S/C OBSW design whether orbit control maneuvers are
implemented as timelines of time tagged commands, as OBCPs or as parametrized
functions – i.e. as OCM-sequence.
● The system initialization sequence serves for initial boot of the power unit and
OBC and for read-out of the "Spacecraft Configuration Vector", (SCV), by the
OBSW.
● The LEOP Autosequence serves for autocontrol of the S/C during launch,
separation and all initial flight steps until ground can take over at first contact –
or even beyond, with ground just monitoring onboard activities. In most cases
this sequence is implemented as an OBCP, which allows updating it until shortly
before launch in case of launch window or ground station changes etc.
● The system reconfiguration sequence is applied when the OBC or some of its
subcomponents have failed. The OBC or the corresponding subunits have to be
switched over to the redundant side – which, except for the hot-redundant
CCSDS receiver side, requires an OBC reboot. The sequence sets the
corresponding SCV entries and triggers the PCDU to shut down and re-power
the OBC components accordingly. The alarm pattern which caused the failure
leading to the reconfiguration must be traceable in the system log by ground.
● Unit switch-on sequences are applied for taking equipment into operation
which was not used in the previous S/C mode – e.g. for activating a GPS
receiver and star trackers before transiting from a low level AOCS coarse
pointing mode to a fine pointing mode.
● Unit reconfiguration sequences serve to switch over onboard equipment to
redundant units, such as switching an internally redundant GPS from the
nominal to the redundant side. Such unit reconfiguration sequences also have
to be available for switching tetrahedron assemblies of RWLs from 4-unit to
3-unit mode. The same applies for gyroscope tetrahedron assemblies.
● The Nominal Instrument Operations sequence is used by the time-tagged
commands uplinked for payload instrument control and the generation of
mission product data. The instrument control is achieved by uplink of a
command sequence setting the desired payload operational parameters and
by uplink of a time-tagged or position-tagged function command for triggering
the desired instrument mode transitions sequence to perform the observation.
● The transition sequence to Safe Mode serves for controlled transition of the
S/C to the Safe Mode which in most cases also applies switchover of all
equipment to the – presumably healthy – redundant side.
● The recovery sequence from Safe Mode to a Nominal Operations Mode
obviously serves for the inverse, the switch-back of the S/C to a healthy
operational mode (which may then leave the originally troublesome unit that
caused the Safe Mode switched over to its redundant path).
The operational sequences can either be implemented as “Onboard Control
Procedures”, (OBCP), as described in chapter 8.6, or as fixed functions implemented
in the OBSW kernel. Both designs have their pros and cons. It is important that in
both cases these operational sequences have to be designed and implemented
during the S/C engineering phase – and obviously have to be documented for the
operators, first in the “Spacecraft Operations Concept Document”, (SOCD), and later
in detail in the “Space Segment User Manual”, (SSUM), also called “Flight Operations
Manual”, (FOM). The explanation in the SOCD and FOM is usually based on classic
program flow charts which may include If / Then forks, Do / While loops etc.
The nominal path – without forks and loops – of the operational sequences can also
be depicted in a more simplified graphical visualization with time as the x-axis and
the S/C configuration vector elements as the y-axis. The switch-down or switch-up
flow can then be visualized as a bar chart, as shown in figure 13.4 for a LEOP
Autosequence. This visualization shows better what is involved in the sequence and
how it is controlled in the flow.
An important aspect is that these operational sequences must fit seamlessly together
and must allow varying start conditions. For example:
● The System Initialization sequence is applied on ground when booting the S/C
for launch.
● Thereafter the LEOP Autosequence takes over until stabilized coarse pointing
mode is reached.
The end state of the initialization sequence thus must fit seamlessly to the start of the
LEOP Autosequence – including all equipment states, parameter limits, the data bus
acquisition sequence etc.
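Whether two sequences “fit seamlessly” can be phrased as a simple check that the end state of one sequence covers the assumed start conditions of the next. The sketch below uses hypothetical state keys purely for illustration:

```python
def sequences_fit(end_state: dict, required_start: dict) -> list[str]:
    """Return the mismatches between one sequence's end state and the next
    sequence's assumed start conditions (empty list = seamless fit)."""
    return [
        f"{key}: have {end_state.get(key)!r}, need {wanted!r}"
        for key, wanted in required_start.items()
        if end_state.get(key) != wanted
    ]

# hypothetical end state of the system initialization sequence ...
init_end = {"OBC": "ON", "SBAND_RX": "ON", "BUS_1553": "ACTIVE", "MTQ": "OFF"}
# ... versus the start conditions the LEOP Autosequence assumes
leop_start = {"OBC": "ON", "SBAND_RX": "ON", "BUS_1553": "ACTIVE"}
print(sequences_fit(init_end, leop_start))  # []
```

In a real project such a check would be run over the full equipment state vector, parameter limits and data bus acquisition settings during the design phase.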
On the other hand, in case the S/C had to be entirely rebooted in orbit and has to be
recovered by High Priority Commands,
● the system initialization sequence is also the initial one, switching on the core
power supply and the OBC,
● but the detailed nominal or redundant equipment side selections and the boot-up
of the data buses, elementary AOCS components and others according to the
Spacecraft Configuration Vector settings are taken over by the recovery sequence.
So in this second case the initialization sequence and the recovery sequence must fit
together as well.
Covering all sequence interfaces during the design phase and considering all
equipment and their states is a non-trivial piece of engineering work which serves as
input both for the OBSW on the one side and for OBCP design or ground command
sequence design on the other. During S/C operation in orbit the ground controllers
must have a good understanding of the detailed effects triggered by these command
sequences.
The relevant TM packets, logs and the like may not all be used during each S/C
operational phase or may not all be available during all phases (e.g. some AOCS
equipment may be off during LEOP phase or Safe-Mode). Therefore the overall
concept must be engineered to provide sufficient observability during all phases and
all operational modes.
A complex additional perspective is the monitoring and ground observability of
commanded or automated state transitions on board. TM packets are not only to be
defined for flight variable value tracking – like the AOCS position vector – but
dedicated switch packets are also to be engineered to track the successful status or
mode switching of diverse equipment.
Even more complex is the topic of tracking the status of running OBCPs or OBSW
functions. Such functions can be fixed preprogrammed sequences of the OBSW or
can be Onboard Control Procedures triggered by TC, a timeline, an event or similar.
This topic is called “functional sequence monitoring”.
● By this means also standard monitors and events can be bound to the
execution progress of a functional sequence – for example to step counters.
● Other functional sequence or system parameters may be linked to such
intermediate status data, such as the AOCS controller during LEOP
autosequence waiting for the solar array deployment signal and then for the
solar array locked state signal before starting rate damping control.
● Since the execution engines for OBCPs or the binary code of fixed coded
sequences are spacecraft specific, no standard PUS service is available for this
purpose. Therefore, in addition to the execution engine for the functional
sequence, the OBSW has to provide a dedicated PUS service handler which
can collect the intermediate transitions of the function’s state, evaluate them
and report them in packet format.
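Such functional sequence monitoring boils down to the sequence engine publishing its step counter and status, and a handler turning these transitions into reportable packets. The following is only a schematic sketch – the class, sequence names and report tuples are stand-ins, not a standardized PUS service:

```python
class SequenceMonitor:
    """Collects step transitions of a running functional sequence and emits
    them as report tuples (stand-ins for dedicated TM packets)."""

    def __init__(self, sequence_id: str):
        self.sequence_id = sequence_id
        self.reports = []

    def on_step(self, step_counter: int, status: str) -> None:
        """Called by the sequence engine on every step transition."""
        self.reports.append((self.sequence_id, step_counter, status))

mon = SequenceMonitor("LEOP_AUTOSEQ")
for step, status in [(1, "OK"), (2, "OK"), (3, "WAITING_SA_DEPLOY_SIGNAL")]:
    mon.on_step(step, status)
print(mon.reports[-1])  # ('LEOP_AUTOSEQ', 3, 'WAITING_SA_DEPLOY_SIGNAL')
```

Standard onboard monitors could then be bound to the published step counter, e.g. to raise an event if a step does not complete within its expected duration.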
The proper execution of all defined onboard functions – please again refer back to
figure 10.3 – must be observable after finalization of the observability concept
engineering task.
The synchronization and datation concept concerns the onboard time generation,
distribution and the coherent synchronization of all OBSW processes and the
onboard equipment which requires timing information or time progress information.
The key elements for this concept already have been mentioned in previous chapters
such as
● the physical quartz based oscillator clock module(s) on the processor board(s)
(see chapter 4.1),
● the onboard system clock time, (OBT), in the OBSW (see chapter 8.4) and
● a GPS or Galileo atomic clock time reference from a corresponding receiver,
which provides more exact time and better stability concerning clock drifts
compared to the quartz based clock modules.
Availability of time information on board is an essential function for OBSW task
control and for time stamping of S/C TM packets – housekeeping TM, event TM and
HPTM and science data packets.
For some operational modes such as Safe Mode the standard oscillator based OBC
clock precision may be sufficient while for certain operational modes with payload
instrument data generation, a very high timing precision for proper geolocation is
necessary and requires a GPS or Galileo atomic clock reference in most cases.
● At OBC boot-up – be it on the launchpad for a hot operations launch, during
LEOP for a cold start launch, or in orbit after any reboot of the OBC – at
first only the internal oscillator clock is available.
● It starts as a counter running up and the absolute time must be set by TC.
● The clock module of an OBC in most cases implements at least 2 hot
redundant oscillators – in most cases even 3 of them with a synchronization to
each other and, in case of 3 oscillators, a 3:2 voting mechanism to detect and
rule out a damaged oscillator.
● The clock module then distributes the time information to the OBC's operating
system and thus it becomes accessible to all OBSW threads.
● If there are further equipment units on board which require absolute time
information (such as payloads or star trackers), such timing information must
be made available to these units via onboard TCs over data bus.
● As soon as GPS / Galileo precision time becomes available on board via a
booted receiver, it must first be verified that the time information provides a
certain quality before the OBSW is switched over to the GPS or Galileo signal
source by default.
● When the GPS / Galileo time signal is accepted, the OBSW is clocked in sync
to this signal by the following mechanism:
◊ The GPS / Galileo time signal is a signal arriving at certain time events –
e.g. as a “pulse per second”, (PPS), strobe – once per second.
◊ The “Numerically Controllable Oscillators”, (NCO), on the CPU boards
themselves freely perform a large number of oscillations between two such
GPS / Galileo time events.
◊ Then the electronics controlling these oscillators compute the deviation
between the GPS / Galileo Δt and the one theoretically resulting from the
number of NCO oscillations, and re-adjust the oscillator division factors to
align with the progression of time in the GPS / Galileo signal.
◊ The mechanism is comparable to adjusting the internal clock of a Linux
operating system – based on the PC motherboard quartz – via NTP time
signal.
◊ It also must be assured that as soon as the GPS / Galileo timing precision
again becomes weak – for whatever reason – the system automatically
switches back again to standard oscillator controlled mode – which might
imply falling back from higher measurement modes to idle or Safe Mode.
◊ During operations phase ground must always be informed about the time
source currently being applied via according TM packets.
● The datation concept further implies the time stamping of science data
packets with distributed OBT. Usually this time stamping is performed either
directly in the payload instrument or in the MMFU when storing the data.
● A further topic is the system synchronization on board:
◊ Synchronization of processes inside the OBSW is already provided by OBT
availability – from whatever source – to the operating system.
◊ Synchronization of external equipment is performed by distribution of a
time clock strobe – usually in the form of a “Pulse Per Second”, (PPS),
signal. The PPS signal is generated by the same source as the OBT
itself – i.e. from the processor board quartz oscillators – independent of
whether they are clock master or GPS / Galileo slave at that time.
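The NCO disciplining step described above can be sketched numerically: count the oscillator ticks between two PPS events, compare against the nominal count, and scale the division factor accordingly. All numbers below are hypothetical illustration values:

```python
def adjust_division_factor(div_factor: float, ticks_counted: int,
                           ticks_expected: int) -> float:
    """Re-adjust the NCO division factor after one PPS interval.

    If more ticks than expected were counted between two PPS events, the
    oscillator runs fast and the division factor is scaled up proportionally
    (and vice versa), aligning the derived time to the GNSS time progression.
    """
    return div_factor * (ticks_counted / ticks_expected)

# base oscillator nominally yields 10_000_000 ticks per PPS second;
# 10_000_050 were counted, so the divider is increased by 5 ppm
new_div = adjust_division_factor(1000.0, 10_000_050, 10_000_000)
print(round(new_div, 3))  # 1000.005
```

Real clock modules apply such corrections in small filtered steps rather than in one jump, to avoid time discontinuities in the distributed OBT.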
The science data management concept can only be treated on a somewhat generic
level here, since space missions differ too much with respect to payload instruments,
data stores on board, applied ground link types etc. In the overall operational
concept the following aspects must be elaborated during the engineering phase for
both space and ground segment together:
● Which payload instrument data are generated on board,
◊ from how many payloads,
◊ in separate measurement modes or in combined observation modes?
And obviously the topic of datation as it was treated in the previous chapter is also a
key issue for the science mission product:
● Concerning time stamping at time of generation for storage on board.
● And concerning TM packet stamping for downlink data stream generation on
board and package sequence re-assembly on ground in case where multiple
links are used in parallel or changing links are used.
The payload mission product data links between the satellite and ground are the:
● X-band Mission Data Downlink for the transmission of
◊ payload mission product data
◊ system complementary data
Some missions furthermore also offer the feature of platform housekeeping data
being transmitted via:
● X-band Satellite Housekeeping Data Downlink
In the frame of the engineering task for these links the corresponding Virtual Channels
must be allocated. The configuration of the data processing in the FOC and PGS
must then be designed in a complementary fashion.
An example for the allocation of Virtual Channels to S-band and X-band data for a
fictional satellite is provided already in figure 8.15.
This is usually achieved today by use of lock-in amplifiers which detect the phase
and then keep the phase synced to the detected carrier frequency by a phase locked
loop oscillator.
The Doppler frequency shift, however, can also be used for spacecraft position
detection in case the position is not yet or no longer precisely known. For example,
for small satellites launched as piggy-back payloads, the achieved orbit altitude and
position are often known only with limited precision before the first ground contact.
Furthermore, after a cold start boot in orbit no precise position information is yet
available from GPS / Galileo receivers. Similar situations exist for professional
satellites after severe failures and later recovery of the system in orbit.
To detect the position via the measured Doppler effect, a method called “ranging” can
be applied:
● When a S/C is expected to come into sight, the G/S transmits a carrier wave
towards its assumed position, and this carrier is slowly swept up and down
around the ideal frequency – varying in a sawtooth profile.
● When the S/C transponder identifies this signal, it transmits back a carrier on
the S/C to G/S frequency with the same Doppler shift it has identified.
● The signal arriving back on ground thus shows the
◊ sweep,
◊ plus the uplink Doppler effect,
◊ plus the downlink Doppler effect.
● By tracking this double Doppler effect the G/S infrastructure can compute the
S/C velocity component towards G/S and can ”measure“ distance (ranging).
● As soon as this up / down loop is closed, the G/S mediates the link signal
carrier to the target frequency (the sawtooth variation stops) and so does the
S/C transmitter to ground.
● As soon as the S/C receiver then has a stable carrier without losses and sweep,
it ”locks“ the receiver to be 'in service' and sends a carrier lock signal via
dedicated lines to the OBC CCSDS processor.
● The latter then automatically transmits a carrier lock TM packet down to G/S
which thus identifies the S/C being ready to accept telecommands.
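The range-rate information contained in this “double Doppler effect” follows from the fact that both the uplink and the downlink carry approximately the same shift. A first-order, non-relativistic sketch of the evaluation (carrier frequency and shift values invented for illustration):

```python
C = 299_792_458.0  # speed of light, m/s

def range_rate_from_two_way_doppler(f_carrier_hz: float,
                                    two_way_shift_hz: float) -> float:
    """Radial velocity of the S/C towards the ground station derived from
    the two-way (uplink + downlink) Doppler shift, first-order approximation.

    Each leg contributes roughly f * v / c, so the two-way shift is 2 f v / c.
    A positive shift means the spacecraft is approaching.
    """
    return two_way_shift_hz * C / (2.0 * f_carrier_hz)

# S-band carrier at ~2.2 GHz, observed two-way shift of +100 kHz
v = range_rate_from_two_way_doppler(2.2e9, 100e3)
print(round(v, 1))  # 6813.5 m/s towards the station
```

Actual ranging systems additionally measure the signal round-trip time against the sweep pattern to obtain the distance itself, not only the range rate.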
For satellite spacecraft it can be stated that practically every one of today's
implementations already features some onboard autonomy functionality. The same
applies for transfer vehicles like the ATV with their autonomous docking functions etc.
For satellites the autonomy is largely focused on performing operations during orbit
periods without ground contact. The S/C must be able to manage the already
mentioned failure cases without ground station contact, even if this implies payload
instruments being switched off. At least a transition into a stable Safe Mode has to be
guaranteed. But there exist even higher levels of autonomy which can be achieved –
and which have to be tested in case they are part of the design. However, before
discussing these, some terminology definitions shall be introduced to define
functionality and features – also because the term “autonomy” sometimes is used
very imprecisely in diverse contexts.
Automatic functions: They run directly and straightforwardly and are initiated
according to a master schedule or by a control program, (OBCP executor),
and they are checked against a schedule (operations timeline).
Autonomy Event: Events are triggered either through anomalies (violation of limits,
error flag activations) or through the occurrence of a predefined
status modification (e.g. position reached, attitude reached).
Autonomous spacecraft system: Such systems involve both the spacecraft (e.g.
satellite) plus the ground segment and are distinguished by a wide independence
from permanent human intervention. The distribution of intelligent functions for
autonomy between space segment and ground segment is not prescribed.
Autonomy is a key system level technology for spacecraft and it can have two
basically different characteristics:
Basic characteristics of this type of spacecraft systems – like space probes, landers,
rovers, transfer vehicles – are:
● The level of autonomy typically being able to be designed between
◊ simple executions of macro command sequences, – “sophisticated
OBCPs” and
◊ an intelligent onboard available mission planner.
◊ but instead they specify the geometrical observation target, the spectral or
other mission product characteristics they desire and they specify the
delivery date of the mission product.
Only precise testing of the overall scenario can prove the “process improvement”.
Required here are again system environments to simulate detailed scenarios.
However, besides pure spacecraft simulation, in this case the functionality of the
ground segment elements – including the user request based mission planning – is
to be included in such verification scenarios.
(Block diagram: OBSW kernel root with event manager, autonomous system control,
OBCP manager, onboard memory manager, onboard scheduler, parameter monitors
and subsystem autonomy; the application layer with payload control, AOCS, power
control and thermal control applications and their equipment handlers; and the
associated PUS service handlers – event action / reporting, OBCP, onboard storage,
large data, memory management, time management, scheduling, monitoring, packet
forward / retrieval, function management, test, statistics, housekeeping, device
command and TC verification – down to TM encoding / TC decoding.)
Figure 13.6: Autonomous system and subsystem control integrated with monitoring,
event handling and scheduling.
The redundancy concept, also elaborated during the S/C engineering phase, has an
essential influence on the operability of the spacecraft in nominal and failure cases.
During development phases B and C the redundancy type for each equipment and
major subunit must be frozen. It must be defined – in line with the features of the
equipment to be procured – which units
● are internally redundant
● or externally redundant,
and furthermore it must be distinguished between
● hot redundant equipment and
● cold redundant units.
Redundancy Concept
[Table: redundancy overview per unit class – OBC, data bus, S-band transmitter, power (PCDU), FOG, reaction wheel, thermal equipment, payload instruments.]
Depending on the redundancy design, the corresponding TM and TC for each redundant unit must be made available for operations on ground. Units separate from each other – even when operated in cold redundancy – must be commandable individually, and telemetry must be uniquely identifiable as coming from the nominal or the redundant source.
In the case of PUS-commanded intelligent units, internally redundant (cold redundant) units may be addressable by the same APID from ground. In contrast, re-using the example above where 2 star trackers out of 3 are used at a time: these 3 star trackers are individual units, and each of them requires a separate TM and TC set with a separate APID.
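The APID allocation described above can be sketched as a small lookup table, here in Python; the APID values and unit names are purely hypothetical:

```python
# Illustrative sketch (hypothetical names and APIDs): mapping PUS APIDs to
# physical units in a ground TC/TM database. An internally redundant unit
# shares one APID, while individually procured units such as the three star
# trackers each get their own APID so TM can be uniquely attributed.

APID_MAP = {
    0x064: "OBC (internally cold-redundant, same APID for side A/B)",
    0x120: "STR-1",   # star tracker 1, own TC/TM set
    0x121: "STR-2",   # star tracker 2, own TC/TM set
    0x122: "STR-3",   # star tracker 3, own TC/TM set
}

def tm_source(apid: int) -> str:
    """Resolve the unit a TM packet originated from via its APID."""
    return APID_MAP.get(apid, "unknown APID")

print(tm_source(0x121))  # -> STR-2
```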
A further topic is the coupling between units or subunits respectively. This concerns
whether the OBC processor module A is only coupled to safeguard memory A or
whether both A and B units are cross coupled. Such design decisions later essentially
drive system commandability from ground. While in the above example of the OBC
processor module and the safeguard memory the choice for full cross coupling will be
obvious, such decisions are less trivial e.g. for payload sensor coupling to payload
data handling chain equipment, MMFU and the like.
The design of the system redundancies and cross couplings has a direct, strong influence on the spacecraft commandability concept as presented in chapter 13.1, and even more on the spacecraft observability concept, see chapter 13.10.
Concerning the redundancies available on board and the operational preselection, the basic principle of “Health overrules redundancy preselection” is illustrated by an example:
● In the above table, 2 operational STR occurrences are needed during operations – to be selected out of 3 available ones.
● Assume operation is performed with STRs 1 and 2.
● In case of a necessary reconfiguration – e.g. STR1 to be deactivated and STR3 to be taken into operation – the SCV health entry information overrides the redundancy reconfiguration.
● If STR3 were marked “non-healthy” in the SCV, this reconfiguration approach would be rejected. See also chapter 13.2.
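This principle can be sketched in a few lines of Python; the SCV representation and the function names are invented for illustration:

```python
# Hedged sketch of "health overrules redundancy preselection". The SCV
# (spacecraft configuration vector) is reduced to a plain health dictionary;
# a real OBSW keeps this in safeguard memory.

scv_health = {"STR1": "healthy", "STR2": "healthy", "STR3": "non-healthy"}
active = {"STR1", "STR2"}          # STRs currently in operation

def reconfigure(deactivate: str, activate: str) -> bool:
    """Swap one operational STR for another; the SCV health entry
    overrides the redundancy reconfiguration request."""
    if scv_health.get(activate) != "healthy":
        return False               # rejected: target unit marked non-healthy
    active.discard(deactivate)
    active.add(activate)
    return True

print(reconfigure("STR1", "STR3"))  # -> False (STR3 is marked non-healthy)
```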
“Failure Detection, Isolation and Recovery” (FDIR) was already explained as a key functionality of the OBSW. Obviously, not all failures are subject to onboard identification, and not all failures are subject to onboard recovery. The FDIR concept
to be worked out for the spacecraft during the engineering phase follows some basic
requirements and principles, implements a certain failure hierarchy – specifying
furthermore on which level the failure is to be fixed – and finally it implements a
consistent approach for the functionality transferring the spacecraft to Safe Mode and
how to recover from there. A properly defined Safe Mode with full S/C observability is
essential for FDIR operations. The Safe Mode must also assure a proper balance of
the S/C produced and consumed resources (mainly power) since the diagnosis of
failures plus recovery in most cases will not be possible within one ground contact (in
particular not for polar orbiting Earth observation satellites).
Typical requirements for FDIR design at the beginning of the S/C system engineering
phase request that:
● A clear hierarchy is to be defined, specifying which type of failure is to be identified and managed on which FDIR level.
● The S/C must be able to reach its Safe Mode autonomously.
● The Safe Mode, if triggered, shall not limit ground in any way w.r.t. spacecraft
observability and commandability.
● Ground may also be allowed to submit commands which are blocked for the
OBSW or are not allowed in that sequence for the OBSW.
● Ground must be able to perform a detailed status analysis and failure event
history analysis for unique failure identification.
● Ground may alter operational limits to avoid future Safe Modes – e.g. in cases
of failures triggered by equipment degradation.
● Obviously – but not trivial to realize – the transition to Safe Mode itself shall
not endanger the S/C, i.e. for example shall not require potentially hazardous
commands or command sequences.
● Also in Safe Mode the OBC shall be running and shall allow for OBSW patch and dump as well as memory patch and dump functions.
● For all failures imagined during S/C engineering it must be assured that they can be clearly distinguished by their symptom sets.
For each potential failure, these chains of failure detection and resulting failure handling – at least failure isolation, preferably also including recovery – must be elaborated. Such a design is typically achieved by following the design guidelines cited below:
● Failure detection must be based both on parameter monitoring on unit and system level and, as a complement, on functional monitoring level. This implies that onboard monitoring must permanently check whether parameters are within appropriate ranges, whether all relevant processes are running, whether mode transitions are properly performed etc.
● Usually the FDIR concept provides two basic approaches:
◊ Fail Operational – where a redundant equipment can directly be called into operation without risk of failure escalation (e.g. in case of a heater failure, a thermistor failure, an X-band modulator or amplifier failure).
◊ Fail to Safe Mode – which transfers the S/C to Safe Mode.
● In the Fail Operational case, failure isolation is performed by removing the failed equipment from the operational functional chain through reconfiguration to the redundant one. The failed unit is then listed in the SCV as non-healthy until reset by ground intervention.
● Onboard reconfigurations are based on OBSW functions or dedicated OBCPs: the settings in the SCV are changed and the according recovery function / OBCP is triggered.
● The Safe Mode must be properly defined. Safe Mode is usually the mode
operating the S/C with equipment that has the maximum redundancy and
consumes the minimum amount of resources. Besides the Safe Mode there
may exist other safeguarding S/C configurations which are subject to the
individual S/C design.
● By which means Safe Mode can be triggered – OBSW functions, limit violations, HW alarms etc. – has to be carefully engineered. OBSW-triggered Safe Mode must be guarded against accidental function triggering (arm and fire principle).
● The transition to Safe Mode usually clears all HW interfaces and SW functions. In most cases this is achieved by switching over the entire S/C HW to its redundant side – which then automatically makes use of the redundant set of physical units, interconnections and cabling. In addition, through the OBC reconfiguration and the resulting reboot, loaded timelines, running OBCPs, functions etc. are all cleared. This prevents the OBSW from resuming interrupted functions or timelines during or after the FDIR process.
● Each OBC processor board keeps its own OBSW image in NV RAM. One
OBC processor running one image keeps the S/C stable in Safe Mode. PUS
Service 6 is applied for OBSW patching and Function Service 8 is used for
triggering reconfiguration functions or OBCPs respectively which reconfigure
to the other processor with the patched OBSW image or which reboot the
same OBC processor with the patched image.
● Obviously there are some additional constraints such as for example Safe
Mode triggering during LEOP phase may not trigger deployments nor AOCS
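The arm-and-fire principle named in the guidelines above can be sketched as follows; this is a minimal illustration, not a real OBSW interface:

```python
# Minimal sketch of the "arm and fire" principle guarding OBSW-triggered
# Safe Mode: the fire command only takes effect if an arm command was
# received first. All names are illustrative.

class SafeModeTrigger:
    def __init__(self):
        self.armed = False
        self.safe_mode = False

    def arm(self):
        self.armed = True

    def fire(self) -> bool:
        if not self.armed:       # an accidental single trigger is ignored
            return False
        self.safe_mode = True
        self.armed = False       # disarm again after use
        return True

t = SafeModeTrigger()
assert not t.fire()   # fire without arm: rejected
t.arm()
assert t.fire()       # armed -> Safe Mode entered
```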
The FDIR failure hierarchy typically comprises the following levels:
● The lowest level comprises the handling of failures entirely on unit level, either
because it is feasible – such as EDAC error handling – or because the
equipment by default provides this feature, or because a certain FDIR function
on lowest level is extremely time critical – such as reaction to short currents or
overvoltage. This level also comprises data bus failures invoked by
electromagnetic effects and the like.
● The next higher levels 1 and 2 cover failures handled on OBSW level, either on subsystem control level or on the upper system level. Examples are also indicated in the figure. On these levels, above the equipment, monitors are available for limit checks of unit parameters, but also for subsystem level plausibility checks – such as a check of the GPS-provided position against the internal solution from the orbit propagator functions.
● Level 3 then comprises failures which need hardware reconfigurations via the
OBC's reconfiguration unit. These include the monitoring and reaction to HW
alarms and the like.
● Finally, level 4 comprises the failures that cannot be handled on board the S/C at all and require ground intervention.
Each level of FDIR handling function can escalate the failure to the next higher layer
in case the problem cannot be isolated or recovered on its level. E.g. many system
level failures may lead to hardware alarms triggering reconfigurations on Level 3 –
such as power failures or OBC watchdog failures.
Vice versa, failure recovery is always performed from the higher to the next lower level. E.g. in case of a 2-out-of-3 redundancy for star trackers, as in the example of table 13.8: if star tracker 3 so far is off and star tracker 2 reports failures or shows failure symptoms, the AOCS subsystem FDIR level can reconfigure the S/C to use STRs 1 and 3 for further operation.
Again it must be remembered that a simple equipment reconfiguration to its redundant occurrence – triggered on whatever FDIR level – while keeping the rest of the S/C on the nominal side can only be applied with restrictions. Depending on the root cause, this approach might kill the redundant unit too. Therefore this method is avoided in all severe FDIR cases, and the entire S/C is reconfigured to Safe Mode, which – as was cited – usually reconfigures the entire S/C including buses and power lines to the redundant side.
Having explained the FDIR hierarchy, the Safe Mode shall be described in a bit more detail. Since the transition of the S/C to Safe Mode by means of the above cited hierarchical FDIR approach breaks all onboard functions and thus all mission product generation, the cases for Safe Mode triggering shall be limited as far as possible. The need for automated Safe Mode triggering is also driven by how fast ground is able to identify failure symptoms and to trigger isolation and recovery activities. The possibilities in this area for a permanently visible geostationary satellite differ significantly from those for a polar orbiting LEO spacecraft.
The guidelines for a Safe Mode configuration are as follows:
● OBC will preferably operate on the redundant side – including OBC HK mass
memory unit and safeguard memory for SCV and including CCSDS
processing unit.
● OBSW is operational in Safe Mode controlling S/C in a way to assure attitude
stability and sufficient power generation by solar array pointing.
OBSW in particular will also perform S/C limit monitoring with dedicated Safe
Mode settings.
● The main data bus on board will be operating on the redundant side.
● The OBC I/O unit, (RIU), will be operating on the redundant side.
● The Power Control and Distribution Unit, (PCDU), will at least be operated on
its redundant controller side. PCDU LCL bank redundancy switching is usually
only applied in case of failures in the PCDU itself.
Recovery from Safe Mode back into nominal operations then comprises steps such as:
● Performing all S/C system mode transitions to a nominal mode, including the AOCS subsystem to a nominal AOCS mode.
● Preparing nominal S/C operations by resource reconditioning, loading of a new mission timeline etc.
While the previous chapters treated the satellite operations functional design and the functional behavior, here the topic of operational constraints shall be tackled in short. In general, all operational constraints are highly S/C design specific. They can be broken down into
● S/C platform operational constraints and
● payload instrument operations constraints.
For both classes operational constraints arising from
● resource limits or
● functional dependencies
can be identified.
The resource limit constraints are intuitive to understand. Optical payloads, for example, may only be operated in sunlight conditions. In eclipse phase their operation – except for dark image calibrations – makes no sense. On the other hand, the overall
payload operational time between two ground station passes may be limited due to
the limited amount of science data storage resources on board. Multiple payloads
here also might compete for the memory resource. As another example Synthetic
Aperture Radar instruments or radar scatterometers – especially when operated in
eclipse phase – are typical payloads with operational constraints due to their high
power consumption.
Constraints due to functional unit interdependencies might be, for example, that two payloads may not be operated in parallel due to the common use of the A/D input converters of the MMFU. Or – even if this is not a desirable case – there might be constraints preventing the MMFU from performing science data recording and playback data streaming via X-band to the PGS in parallel. Another common type of operational constraint is that during certain S/C AOCS modes – like target rollover bidirectional measurements, or for some S/C even the spin stabilized Safe Mode – no X-band downlink is possible due to antenna pointing angle limits or even interference of the rotating solar array with the necessary antenna pointing direction.
Operational constraints increase as soon as a data routing “equipment” like a data bus or the OBC I/O unit (RIU) has a failure. The details of the remaining operational flexibility are then highly dependent on the engineered redundancy concept.
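A planning-side check of such constraints can be sketched as follows; all unit names, limits and the constraint set are invented for the example:

```python
# Illustrative constraint checker for payload operation requests, covering
# both constraint classes from the text: resource limits (sunlight-only
# optical payload, onboard science data storage budget) and functional
# dependencies (two payloads sharing the MMFU A/D converters must not run
# in parallel). All numbers and names are hypothetical.

def violations(request: dict, state: dict) -> list:
    v = []
    # Resource limit: optical payloads only operable in sunlight
    if request["payload"] == "optical" and state["eclipse"]:
        v.append("optical payload only operable in sunlight")
    # Resource limit: onboard science data storage budget
    if state["memory_used_gb"] + request["data_gb"] > state["memory_cap_gb"]:
        v.append("onboard science data storage exceeded")
    # Functional dependency: payloads sharing the MMFU A/D converters
    others = state["mmfu_shared"] - {request["payload"]}
    if request["payload"] in state["mmfu_shared"] and state["active"] & others:
        v.append("MMFU A/D converters shared: no parallel operation")
    return v

state = {"eclipse": True, "memory_used_gb": 110, "memory_cap_gb": 128,
         "mmfu_shared": {"optical", "spectrometer"}, "active": {"spectrometer"}}
print(violations({"payload": "optical", "data_gb": 30}, state))
```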
A spacecraft usually has different data links for platform control and for science data downlink8. The flight control systems and the data processing systems for platform and payload differ to a certain extent. Common to both – platform and payload control – is that for a standard satellite all commanding is performed via one TC link. Also, all S/C housekeeping telemetry – for both platform and payload – is downlinked via a common S-band TM link to the platform control station, the “Flight Operations Center” (FOC). This allows full operational observability of the system's status, health and resources.
Payload science data are downlinked to the “Payload Ground Segment”, (PGS). In
some cases also a copy of the platform HK data is downlinked to the PGS. In such
case however the platform data usually serve to cross verify payload timestamping
and geolocation parameters as well as to cross verify proper platform health during
the entire mission product generation to avoid science measurement
misinterpretations. Such complementary or ancillary data have already been
mentioned.
The CCSDS standard protocol for telecommand and telemetry transmission was already treated. In ESA ECSS-compliant missions the transmitted information is encoded in PUS-conformal TC and TM packets.
● A TM packet contains a set of onboard variable values in its packet body; the packet header includes the submitting unit / process as well as the packet generation time. This already requires
◊ the definition of which packets exist (e.g. for one equipment, a S/C
subsystem and finally on system level),
◊ the definition of which packet comprises which variables in which data
format and which calibration characteristics
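As a sketch, a simplified PUS-style TM packet with the APID and generation time in the header could be assembled like this; the primary header layout follows CCSDS, while the secondary header here is a simplification, not a full PUS time code:

```python
# Hedged sketch of building a CCSDS space packet carrying PUS-style
# housekeeping TM: a 6-byte primary header (version, type=TM, APID,
# sequence count, data field length) followed by a simplified secondary
# header with service type/subtype and a coarse generation time, then the
# raw parameter values.
import struct

def tm_packet(apid: int, seq: int, service: int, subservice: int,
              gen_time: int, params: bytes) -> bytes:
    sec_hdr = struct.pack(">BBI", service, subservice, gen_time)
    data = sec_hdr + params
    word1 = (0 << 13) | (0 << 12) | (1 << 11) | (apid & 0x7FF)  # TM, sec.hdr flag
    word2 = (3 << 14) | (seq & 0x3FFF)        # standalone packet, seq count
    length = len(data) - 1                    # CCSDS: data field length - 1
    return struct.pack(">HHH", word1, word2, length) + data

# Example: HK packet (service 3) from a hypothetical star tracker APID
pkt = tm_packet(apid=0x121, seq=42, service=3, subservice=25,
                gen_time=123456, params=struct.pack(">hH", -12, 2810))
assert len(pkt) == 6 + 6 + 4   # primary header + secondary header + params
```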
8 In some missions they can even be served by different ground segments. An example for such a configuration is the European satellite navigation system Galileo.
Flight Procedures and Testing
[Figure: ground antenna stations connected to the TC/TM database (SRDB).]
However, it is very cumbersome to command via low level commands transitions like the switch of the satellite from LEOP mode after launcher separation to a nominal mode, with lots of onboard units to be activated and their telemetry to be checked. To ease the command and control of the S/C for the ground staff, two layers of abstraction are introduced.
● On board OBSW functions are introduced which can be triggered / activated
from ground via the already cited PUS Service 8 (Function management
service).
An example could be a function for activation of a payload from ground where
the OBSW executes the detailed steps from power supply switch via payload
controller boot control, initial PL onboard data bus TM verification, power
consumption control etc.
● Flight Procedures are another means for raising the commanding level. Flight Procedures are somewhat the complement to OBCPs: while an OBCP is a sort of “command script” executed on board, a Flight Procedure is a “command script” implemented in the ground control system.
An example could be a flight procedure which submits the function commands
for the AOCS to switch from idle mode to fine pointing, for the data handling
subsystem (MMFU etc.) to prepare for science data recording and for the
payload to switch on – all for preparation of a payload instrument
measurement on board.
Flight Procedures can comprise both low level commands to S/C units, higher level
commands to S/C subsystems and system level commands and they can trigger
onboard functions and OBCPs. Any command defined in the SRDB (and thus
implemented in the OBSW) can be included in a Flight Procedure. Both individual
commands and entire flight procedures can be commanded from a S/C ground
control console.
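A Flight Procedure as a ground-side “command script” can be sketched as below; the command mnemonics and the send / verify functions are hypothetical stand-ins for a real control system such as SCOS-2000:

```python
# Sketch of a Flight Procedure mixing subsystem-level function commands
# (PUS Service 8) and low-level unit TCs, as described in the text.
# send() and verify_tm() are invented placeholders, not a real API.

def send(tc: str) -> None:
    print("TC sent:", tc)

def verify_tm(check: str) -> bool:
    print("TM check:", check)
    return True   # placeholder: a real system evaluates live telemetry

def prepare_measurement():
    """Flight procedure: prepare a payload instrument measurement."""
    send("AOCS_MODE_FINE_POINTING")    # PUS Service 8 function command
    assert verify_tm("AOCS mode == FINE_POINTING")
    send("MMFU_PREPARE_RECORDING")     # data handling subsystem command
    send("PL1_POWER_ON")               # low-level unit command
    assert verify_tm("PL1 current within limits")

prepare_measurement()
```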
An example for a ground control system – here a SCOS 2000 from ESA / ESOC – is
shown in figures 13.10 and 13.11. They show both TC/TM log windows as well as
graphic parameter displays, so-called synoptic displays.
Figure 13.10: Command log of S/C (here during OBSW test on SVF).
© IRS, Universität Stuttgart
Figure 13.11: Command log of S/C (here during OBSW test on SVF).
© IRS, Universität Stuttgart
Flight Procedures are defined and tested with dedicated editor tools, for example the one depicted in figure 13.12. They provide different views on the task flow and offer the operator the selection of commands and according parameters as defined in the SRDB.
Figure 13.12: Definition and test of flight procedures via the MOIS flowchart editor.
© IRS, Universität Stuttgart
As already indicated, there are Flight Procedures defined for S/C system level control, for subsystem control and for equipment control. An example structure could be as follows:
● System Level procedures
● Subsystem level procedures:
◊ Data Handling Subsystem procedures
◊ Electrical Power Subsystem procedures
◊ Attitude and Orbit Control Subsystem procedures
◊ Reaction Control Subsystem procedures
◊ S-band Subsystem procedures
◊ Thermal Control Subsystem procedures
◊ Payload Data Handling and Transmission Subsystem procedures
To assure full compatibility with both the “Flight Operations Center” (FOC) and the “Payload Ground Segment” (PGS), multiple so-called “System Validation Tests” (SVT) are carried out during the subsequent integration of the spacecraft. In this context “system” refers to the entire assembly, space plus ground segment. SVTs are tests conducted by the agency, which is connected via a high performance data link (DSL or similar) and via S-band and X-band SCOE to the S/C physically located in the manufacturer's integration hall. During the SVTs the spacecraft commanding is performed via the same Flight Procedures and low level TCs as later used for the S/C in orbit. TM is acquired by both FOC and PGS and is evaluated by the mission ground segment accordingly.
Mission Operations Infrastructure
J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology,
© Springer-Verlag Berlin Heidelberg 2012
As input to mission operations the PGS usually collects the user requests from the S/C users – in the Earth observation and science domain called “principal investigators” – and prepares the initial mission planning, which it hands over to the FOC for integration into the overall satellite mission timeline. The PGS usually has no command uplink to the spacecraft.
The Flight Operations Infrastructure
Via the ground communications system the FOC is connected to the S-band antenna
ground stations and the PGS is connected to the X-band science data link antenna
ground stations. The antenna ground stations are positioned at “strategically”
important points all over the Earth to achieve optimum S/C visibility. No space agency owns antenna stations at all important positions on the globe. Therefore – to support especially the LEOP phase of a new S/C, but occasionally also during commissioning, normal or FDIR phases – the space agency may procure the use of other agencies' or commercially operated stations for a limited period.
The ground station visibility ranges as function of S/C altitude and link budget are
known already at S/C design phase. With the orbit analysis performed during S/C
engineering phase the station visibilities for a full orbit repeat cycle are computed –
see figure 2.3 and figure 14.3.
Based on this information the FOC can activate the antenna ground stations
accordingly for each S/C contact which is especially important during the LEOP
phase to be able to properly track all S/C activities such as deployments, equipment
activations, mode transitions and the like.
Figure 14.4: Flight Operations Center control room example. © ESA / ESOC
The more challenging the mission, the more sophisticated the FOC infrastructure. For a standard Earth observation scientific satellite there will typically be one to two user workplaces, with 2-3 screens each, per main functional domain, i.e. for:
● Overall system control
● Data handling control
● AOCS control
● Power control
● Thermal control
● One for the entire payload data handling chain (PLs, MMFU, X-band)
● One per payload instrument9
The following figure depicts the functional domain driven workplace allocation in the
main control room, (MCR), by example of an ESA / ESOC mission – CryoSat-2.
There are workplaces which provide overview and key parameter visibility to the
spacecraft operations manager and the flight operations director and furthermore the
workplaces monitoring detailed information for the individual subsystem controllers.
Figure 14.5: Mission control room and workplaces – schematic. © ESA / ESOC
9 Payload operations are normally not yet part of the LEOP phase, but the according operations workplaces are mentioned here already.
The agency's Project Representative, who is normally located in the Main Control Room next to the Flight Operations Director, provides the authority from the agency project management.
The industry team and the specialists from the agency project team are usually
located in a so-called “Project Support Room”, (PSR), and have visibility and
parameter read access to the operations being performed by the flight control team in
the mission control room.
Figure 14.6: Project Support Room with S/C Supplier Workstations. © ESA / ESOC
Figure 14.6 shows the workstations placed in the Project Support Room, of which the following shall be cited – again as an example from the CryoSat-2 mission:
● A dedicated workstation for the star tracker supplier since for this mission it
was a new and mission critical element.
● An AOCS analysis workstation for e.g. computation of data for orbit correction
maneuvers.
● A workstation for the satellite geodesy system “Doppler Orbitography and
Radiopositioning Integrated by Satellite”, (DORIS) – cf. [123].
● A dedicated workstation for OBSW runs, testing etc.
This assisting expert team monitors S/C health in parallel to the Subsystem Operations Engineers and provides expertise in case of any unforeseen deviation from expected behavior of the S/C. Anomaly situation treatment is performed under
the management guidance of the Flight Operations Director. Any failure detection or
isolation and recovery activities are to be signed off by quality assurance before
command submission.
During the platform commissioning phase, and even more so later during the payload commissioning phase, the level of support is gradually reduced, but key members of the agency project team and the industry team remain on-site at the MCC until the S/C is declared ready for nominal operations.
Besides the FOC / PGS control and monitoring infrastructure, the ground communi-
cations system and the antenna stations, the ground infrastructure comprises a
significant number of additional tools which are not directly involved in daily S/C
command and control. Out of these the three most important shall be cited:
Spacecraft Simulator:
Figure 14.7: The system simulation environment SIMSAT by ESOC. © ESA / ESOC
The simulator serves, among other things, for reproducing failure conditions of the satellite, for symptom analysis and for the pretest of recovery activities. Furthermore it serves for verification of OBSW patches before uplink.
Bringing a Satellite into Operation
For the mission operations team it is an essential task to familiarize itself with the ground segment infrastructure, the Mission Control System, its control consoles, databases etc. The ground operations team must be in the position to exercise all nominal and contingency operations for the LEOP phase, the commissioning phase and the routine operations phase. Satellites are usually operated in two shifts per day.
The A or prime team is the one which has already participated in the System
Validation Tests and in verification of the S/C Flight Procedures. This team handles
the critical operations sequences. The B or secondary team is subsequently trained
up. This can comprise less experienced operations engineers or operations experts
from other missions.
Training for a two shift team plus backup personnel may consist of:
● Classroom training and facility familiarization.
● Training and simulation sessions performed before launch:
◊ S/C Operations controlling the real spacecraft (e.g. in SVT) or the S/C
simulator.
◊ The first simulations are “nominal” to allow all team members to become
familiar with the sequence of operations to be performed.
◊ A series of simulations of the critical phases with an increasing level of
complexity for all teams follow.
◊ Anomalies on the simulated satellite, ground segment facilities, launcher
and ground stations are injected in increasing numbers and levels of
difficulty, culminating in parallel failures of different systems.
◊ Shift handover, both in nominal situations and also in the case where
anomalies have prevented one team from completing all of the planned
operations.
◊ Routine operations over several days are trained – with simulated S/C – to
allow the spacecraft controllers and subsystem operations engineers to
validate the systems and procedures to be used after the LEOP phase.
The ground segment infrastructure and the antenna station network are
included – partly as simulations – via so-called Mission Readiness Tests, to
validate the ground stations using an already flying satellite as the target.
● Participation and training of all external partners.
● Verification of event sequences (uninterrupted).
● Usually two launch rehearsals, one or two of them performed with:
◊ The fully included FOC
◊ Potential antenna stations
◊ A simulated S/C to exercise the first acquisition operations to be performed following the countdown activities
◊ And the launch site interface – personnel, data lines from launch site to FOC, Go / No-Go flags transmission, launcher and AIT.
[Figure: S/C simulator architecture – a simulator kernel with system models (environment, dynamics etc.), equipment models and an OBC model; TM/TC is routed via a simulator I/O frontend to the GCS LAN.]
In addition to this team familiarization, the so-called “System Validation Tests” (SVT) are performed during S/C integration phase D. “System” here refers to the overall mission system, i.e. including both the ground segment as well as the space segment.
● As cited in chapter 13.18, during the SVTs the S/C – positioned in the clean room at the manufacturer's premises – is commanded remotely from the FOC.
● Multiple such tests are performed with increasing functional test scope – the ECSS standards require four of them, SVT 0 to SVT 3.
● In the higher ones also the payloads are operated and payload science data are recorded – as far as possible under clean room conditions; significant limitations will exist, for example, for radar instruments.
● Payload science data playback from MMFU via X-band link (excluding RF
part) is then streamed to PGS for verification of compatibility of PGS tools with
X-band data stream and formats.
A number of activities are to be carried out during the so-called “Launch and Early Orbit Phase” (LEOP). For these activities the S/C prime contractor supports the operations team as defined in the catalog of their phase E tasks. Days before launch all control stations in the FOC are rechecked and the flight operators prepare for the launch date. Shift plans are frozen and last organizational topics are clarified. Although the details of the LEOP phase activities differ highly from mission to mission – especially in the Earth observation and science domain – the general activities include the following:
● Final pre-launch check of the ground systems including FOC, antenna ground
stations and communication links.
● The launcher fairing being closed and the S/C being connected to ground via
the umbilical connector.
● Final pre-flight check of S/C at the launch site.
● Final pre-flight check of the launcher Go / No-Go signals.
● In case of a launch with running S/C OBC, continuous monitoring of the proper auto-boot of the S/C OBC and OBSW into launch mode is performed during the early phase of the countdown.
● During the ascent phase of the launch no signals are available.
● In the case launcher separation happens during ground station visibility, LEOP
tasks comprise monitoring of the execution of the post-separation
configuration operations, performed autonomously by the satellite in the frame
of the LEOP Autosequence which are:
◊ In case of cold launch (OBC off) first of all a proper boot of OBC / OBSW.
◊ Ground connection establishment with the OBSW – automatically
transmitted TM from S/C.
◊ Establishing command link with the satellite and starting the orbit
determination via radiometric data (ranging).
◊ Performing auto-deployments and initiation of ground controlled
deployments for antennas, solar arrays respectively.
◊ Control of attitude stabilization, or its monitoring in case of autosequence-based attitude acquisition.
◊ Via the last two steps, verifying that the satellite configuration after launcher separation is as expected w.r.t. approximate orbit, attitude and correct deployments.
● In case of launcher separation out of ground station visibility, the first step in the vicinity of a station is a TC-based ground contact establishment and commanding the downlink of the LEOP autosequence TM packet history for verification of proper S/C status.
● Further following steps – which for some missions are already part of the
commissioning – comprise the commanding required for transition of the
satellite into the higher operational modes needed for payload activation and
for commissioning operations.
For example switch on of AOCS units, power subsystem equipment and of
thermal subsystems needed for payload operations.
● Furthermore, the detailed verification of the correct orbit and the preparation
of potentially necessary orbit acquisition maneuvers are in most cases still
counted as part of the LEOP phase.
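The autosequence steps listed above can be pictured as a strictly ordered sequence in which each step builds on the state established by the previous ones. The following sketch illustrates this; the step names, the state dictionary and the `run_autosequence` helper are purely illustrative assumptions, not flight code.

```python
# Illustrative sketch of a LEOP autosequence as an ordered list of steps.
# Step names and state structure are assumptions for explanation only.

def boot_obsw(state):
    state["obsw_booted"] = True

def start_tm_downlink(state):
    # TM transmission only makes sense after a successful OBSW boot.
    if not state.get("obsw_booted"):
        raise RuntimeError("OBSW not booted")
    state["tm_active"] = True

def deploy_appendages(state):
    state["deployed"] = ["antennas", "solar_arrays"]

def acquire_attitude(state):
    state["attitude_stable"] = True

LEOP_AUTOSEQUENCE = [boot_obsw, start_tm_downlink,
                     deploy_appendages, acquire_attitude]

def run_autosequence(steps):
    """Execute the steps in order and return the resulting S/C state."""
    state = {}
    for step in steps:
        step(state)
    return state
```

After `run_autosequence(LEOP_AUTOSEQUENCE)` the state reflects a booted OBSW, active TM, completed deployments and a stable attitude – mirroring the checks the ground performs during the first contacts.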
For a simpler Earth observation satellite these LEOP activities sum up to two to
three days altogether. For more complex missions or satellites with specific orbits –
like e.g. the Hubble Space Telescope – these tasks can consume a few weeks.
The same applies for constellations like TerraSAR-X / TanDEM-X or navigation
constellations like GPS / Galileo / GLONASS.
All these activities – from the countdown tasks, over the activities performed only
seconds after launcher separation, to those performed days after launch – are
precisely planned beforehand. Each shift performs its scheduled operations.
The plans take into account the availability of ground station visibilities, plus any
constraints coming from the different support facilities.
Then on day 0, during countdown, the launch resource criteria – also called
Go / No-Go criteria – are checked step by step, namely the data links to antenna
stations and to the launch site, the telemetry channel from the launcher to the FOC
etc. Please also refer to figure 15.4.
Finally the S/C is launched, and after upper stage separation it starts executing the
key parts of its LEOP autosequence. At the first successful ground contact, essential
telemetry is downlinked and the operators get first visibility of the status. An example
of a S/C telemetry monitoring desktop is provided in figure 15.5.
If all goals of the LEOP phase plan have been successfully achieved, the Flight
Operations Director will declare the LEOP completed and the Commissioning Phase
can start.
The key tasks of the commissioning phase are the step-by-step taking into operation
of the so-far unused platform equipment and of all payload instruments, the
verification of all operational modes, and the performance of all calibration and
performance characterization tasks for both the platform and the payload.
The distribution of S/C platform and payload commissioning tasks between the
LEOP phase and a dedicated S/C commissioning phase is highly mission specific.
On the one hand, the LEOP phase might not even cover all AOCS modes and use all
AOCS equipment. An example is TerraSAR-X, where the reaction wheels were first
activated during platform commissioning – not yet during the LEOP phase. On the
other hand, the LEOP might already include initial payload switch-on and checkout
and X-band data downlink.
For payload instrument commissioning the detailed tasks are again highly dependent
on the instrument characteristics and the mission type and have to be analyzed
individually per mission. Typical calibration and characterization methods are:
● Calibration via flyover of reference targets and comparison of received to
expected results. This is a typical method for Earth observation satellites.
● Radio signal quality measurements. This method is essential for telecom
satellites.
● Pointing to reference targets and calibrating the sensor against target
characteristics acquired during previous missions etc. This method is typically
applied for space telescopes and the like.
● Platform characterization may imply previously performed specific platform
equipment operations, such as STR characterizations or GPS geolocation
characterization.
● The commissioning phase in many cases includes the calibration /
characterization of ground processing facilities in the PGS for higher level
mission product data from the raw measurements.
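The reference-target flyover method can be illustrated by deriving a gain correction from measured versus expected target values. The single-gain instrument model, the least-squares fit through the origin and the numbers below are simplifying assumptions; real radiometric calibrations are far more elaborate.

```python
# Sketch of calibration via reference-target flyovers: estimate one gain
# factor mapping raw instrument readings to known target values, using a
# least-squares fit through the origin. Illustrative assumption only.

def estimate_gain(measured, expected):
    """Least-squares gain g minimizing sum((g*m - e)^2)."""
    num = sum(m * e for m, e in zip(measured, expected))
    den = sum(m * m for m in measured)
    return num / den

measured = [50.0, 100.0, 150.0]   # raw readings over reference targets
expected = [55.0, 110.0, 165.0]   # known target values
gain = estimate_gain(measured, expected)   # -> 1.1
```

The resulting gain (here 1.1) would then be applied to subsequent raw measurements in the ground processing chain.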
Similarly to the LEOP, the S/C commissioning phase is planned in detail before
launch, but the planning is generally at a higher level, and the activities are not
usually time critical and are subject to change depending on the satellite performance
and operations during the LEOP phase. The commissioning phase may last from
several weeks to a number of months, depending on the S/C type, orbit, number and
type of payloads etc.
An example of such a commissioning phase planning is given in figure 15.6
below.
Platform and Payload Commissioning Activities 251
After platform and payload commissioning the S/C supplier's tasks are done and the
normal operations phase with continuous mission product generation starts under
sole responsibility of the operations team.
This example (cf. [109]) depicts a combined ground / space architecture of the ESA
study “Autonomy Testing” where the design of a potential onboard mission planning
function for payload operation was analyzed.
The idea behind this is that users “only” need to transmit their observation requests
(“user requests”) to the combined system consisting of space segment (simulated
satellite) and ground segment (simplified ground station). The customer requesting a
mission product defines
● by which payload,
● in which operating mode,
● with which settings,
● they want to have which target area observed
● in which time window.
It was analyzed to what extent it would make sense to implement parts of the mission
planning and overall system timeline generation (ground + space) on board the
spacecraft to shorten mission prediction response times. In such cases the satellite
constantly has to collect customer requests from the various sequentially visible
ground stations and is equipped with an intelligent mission planning system. This
system generates a detailed timeline comprising all commands for all involved
platform subsystems – mainly AOCS – and the involved payload(s).
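The user-request fields listed above and the onboard timeline generation can be sketched as follows. The `UserRequest` structure and the greedy first-fit scheduler are illustrative assumptions for explanation; the study's actual planning algorithm is not reproduced here.

```python
# Sketch of onboard mission planning: turn user requests (payload, mode,
# settings, target area, time window) into a conflict-free timeline.
# The greedy first-fit scheduler is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class UserRequest:
    payload: str
    mode: str
    settings: dict
    target_area: str
    window: tuple          # (start, end) in seconds of mission time
    duration: int          # required observation time in seconds

def plan_timeline(requests):
    """Greedily place each request in its window; skip it on conflict."""
    timeline, busy_until = [], 0
    for req in sorted(requests, key=lambda r: r.window[0]):
        start = max(req.window[0], busy_until)
        if start + req.duration <= req.window[1]:
            timeline.append((start, req))
            busy_until = start + req.duration
    return timeline
```

With two requests whose windows overlap, the scheduler keeps the first feasible one and drops the conflicting one – exactly the kind of planning conflict the study's scenarios exercised.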
[Figure A2 sketches the test setup: the EMCS and simulation test infrastructure on
the ground side, and the simulated satellite and space environment – an SSVF
simulator with spacecraft and environment models derived from SSVF – covering
stimuli / environment, TT&C, subsystems, payloads and observations.]
Figure A2: Onboard autonomy test infrastructure: "Autonomy Testbed". © Astrium GmbH
256 Annex: Autonomy Implementation Examples
The prototype from the ESA “Autonomy Testing” study consisted of:
● A Core EGSE acting as a simplified ground station
● A satellite simulator
● An onboard computer board as a simplified single-board computer
● An onboard software with a macrocommand interface (somewhat like OBCPs)
running on this board
● A mission planning algorithm which created an activity timeline from the cited
user requests, including all macrocommands to the onboard software.
The onboard software executed the spacecraft macrocommands in the generated
mission timeline and thus controlled the simulated satellite. In this autonomy testbed
complex scenarios were tested which comprised:
● Nominal operational cases in which user requests were uplinked, processed
and the results were downlinked at the next ground station contact.
● Furthermore, scenarios which led to planning conflicts on board and where
the user requests could only be partially satisfied within the operating period.
● And finally, scenarios during which manually injected equipment failures
occurred and where initially a suitable error recovery needed to be identified
and performed – followed by a replanning of the activities, since after
error recovery the satellite had already missed some of the observation
targets. See also figure A4.
[Figure A4 sketches the replanning scenario over four successive orbit sets:
timelines TL1...TL4, each comprising subtimelines STL1...N, are uplinked one per
orbit set and executed on board. After a failure during timeline execution the
sequence is: diagnose the failure, recover the failed queue, generate a recovery
timeline and continuation timelines, and execute the rest. The status downlinks then
report the missed user requests, e.g. those from STL2.]
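The failure-and-replanning scenario of figure A4 can be pictured as a minimal execution loop: on an injected failure, recovery takes time, and observations whose targets pass by during recovery are missed and reported in the status downlink. All names, the time model and the fixed recovery duration below are illustrative assumptions.

```python
# Sketch of timeline execution with failure recovery, in the spirit of
# the "Autonomy Testing" scenarios: a failure blocks the timeline for a
# recovery period, and observations falling into that period are missed.
# Time model, recovery duration and names are illustrative assumptions.

def execute_timeline(timeline, failure_at=None):
    """timeline: list of (time, observation id). Returns (done, missed)."""
    done, missed, clock = [], [], 0
    recovery_time = 30                       # assumed recovery duration
    for t, obs in timeline:
        if failure_at is not None and t >= failure_at:
            clock = failure_at + recovery_time   # recover from failure
            failure_at = None                    # failure handled once
        if t >= clock:
            done.append(obs)
        else:
            missed.append(obs)               # target passed during recovery
    return done, missed
```

For a timeline with observations at t = 0, 40 and 100 and a failure injected at t = 35, the middle observation falls into the recovery period and is reported as missed, while execution resumes for the later one.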
In January 2006 NASA launched the deep space probe “New Horizons” to explore
the trans-Neptunian objects Pluto and Charon. It represents probably the highest
level of onboard autonomy ever flown to date.
The onboard software of New Horizons is based on a case-based decision algorithm
and a rule-chaining algorithm. In place of onboard control procedures as used in
conventional satellites, structures applying Artificial Intelligence techniques are
implemented here to control the nominal approach maneuvers as well as the error
recovery. Cases are implemented on the lower processing level to identify abstract
symptoms from parameter measurements, and above these cases a rule network is
implemented for situation analysis and system control.
The following figure provides a sketch of a small extract from the overall rule network
– here for the handling of an error during the Pluto approach. Depending on the
detailed conditions, the failure is either handled or results in the space probe going
to Safe Mode. The rule network implements a forward-chaining method for
processing.
For an explanation of the figure below, please also refer to [110] and [111]:
● The Rxxx-identifiers represent rules.
● The Myyy-identifiers represent macros which are executed by the activated
rules.
● All spacecraft commands initiated by rules are encapsulated in such macros.
● The transition times for the rules / macro execution are depicted as well (some
cover several days due to spacecraft coast or approach phases).
● For the rules / macros the onboard processor executing them is shown (in this
extract from the rule network, P3 and P5 are cited),
● and the rule identification contains further information (for details see [110]):
◊ The rule priority
◊ The rule persistence
◊ The methodology for how an obviously outdated rule result is to be
handled by the inference system
◊ The state during the loading of the rule into memory (active / inactive).
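The described architecture – symptoms derived from measurements, then forward-chained rules triggering macros carrying the actual spacecraft commands – can be sketched generically. The `Rule` attributes mirror the list above, but the rule names, priorities, symptoms and macros are illustrative assumptions, not the actual New Horizons flight rules.

```python
# Generic sketch of a forward-chaining rule engine in the spirit of the
# New Horizons description: rules fire on symptom sets and execute macros
# encapsulating spacecraft commands. All rule content is illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int          # evaluation precedence (cf. "rule priority")
    condition: frozenset   # symptoms that must all be present to fire
    adds: frozenset        # symptoms asserted when the rule fires
    macro: str             # macro executed by the activated rule
    active: bool = True    # state at load time (active / inactive)

def forward_chain(rules, symptoms):
    """Fire rules until no new symptoms appear; return executed macros."""
    symptoms, macros = set(symptoms), []
    changed = True
    while changed:
        changed = False
        for rule in sorted(rules, key=lambda r: -r.priority):
            if rule.active and rule.condition <= symptoms \
                    and not rule.adds <= symptoms:
                symptoms |= rule.adds
                macros.append(rule.macro)
                changed = True
    return macros

RULES = [
    Rule("R100", 10, frozenset({"thruster_fault"}),
         frozenset({"recovery_started"}), "M200_switch_to_backup"),
    Rule("R101", 5, frozenset({"recovery_started"}),
         frozenset({"safe_state"}), "M201_resume_approach"),
]
```

Starting from the single symptom `thruster_fault`, the engine chains forward through both rules, executing the recovery macro and then the resume macro – a toy analogue of the failure-handling path in the rule network extract.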
References
General:
[1] Tomayko, James:
Computers in Spaceflight: The NASA Experience
http://www.hq.nasa.gov/office/pao/history/computers/Part1-intro.html
[3] http://www-pao.ksc.nasa.gov/kscpao/history/mercury/mr-3/mr-3.htm
[5] http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19780012208_1978012208.pdf
[6] N.N.:
Project Gemini – A Chronology (NASA report SP-4002)
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19690027123_1969027123.pdf
CDP1802:
[22] N.N.:
CDP1802 datasheet,
http://homepage.mac.com/ruske/cosmacelf/cdp1802.pdf
[23] N.N.:
RCA 1800 Microprocessor
User Manual for the CDP1802 COSMAC Microprocessor
Am2900:
[24] N.N.:
The Am2900 Family Data Book
http://www.bitsavers.org/pdf/amd/_dataBooks/1979_AMD_2900family.pdf
MIL-STD-1750 compatibles:
[25] N.N.:
MIL-STD-1750 A
http://www.xgc.com/manuals/m1750-ada/m1750/book1.html
[26] N.N.:
Dynex Semiconductor MA31750 Processor (Datasheet)
http://www.dynexsemi.com/assets/SOS/Datasheets/DNX_MA31750M_N_Feb06_2.pdf
[27] N.N.:
UT1750AR RadHard RISC Microprocessor Data Sheet
http://aeroflex.com/ams/pagesproduct/datasheets/ut1750micro.pdf
RS/6000 – RAD6000:
[28] http://en.wikipedia.org/wiki/IBM_POWER
[33] N.N.:
SPARC Series Processors ERC32 Documentation,
http://klabs.org/DEI/Processor/sparc/ERC32/ERC32_docs.htm
[34] N.N.:
LEON2 and 3 VHDL Code (under LGPL),
http://www.gaisler.com/
[35] N.N.:
LEON Processors,
http://www.gaisler.com/cms/index.php?option=com_content&task=section&id=4&Itemid=33
Diverse:
[39] Weigand, Roland:
ESA Microprocessor Development
Status and Roadmap
Data Systems in Aerospace,
DASIA 2011 Conference,
17 - 20 May, 2011, San Anton, Malta
HAL/S:
[40] Highlevel Assembler Language / Shuttle – HAL/S:
http://en.wikipedia.org/wiki/HAL/S
JOVIAL:
[42] JOVIAL (Jules Own Version of the International Algorithmic Language):
http://en.wikipedia.org/wiki/JOVIAL
[43] N.N.:
MIL-STD-1589C, MILITARY STANDARD: JOVIAL (J73)
United States Department of Defense. 6 JUL 1984
http://www.everyspec.com/MIL-STD/MIL-STD+(1500+-+1599)/MIL-STD-1589C_14577/
Ada:
[44] Ada:
http://en.wikipedia.org/wiki/Ada_(programming_language)
C:
[46] Kernighan, Brian W.; Ritchie, Dennis M.:
The C Programming Language,
Prentice Hall,
2nd edition, 1988,
ISBN: 978-0131103627
C++:
[47] Stroustrup, Bjarne:
The C++ Programming Language,
Addison Wesley, Reading, Massachusetts,
2nd edition, 1993,
ISBN: 0-201-53992-6
Assembler to C:
[50] Patt, Yale; Patel, Sanjay:
Introduction to Computing Systems: From bits & gates to C & beyond,
McGraw-Hill,
2nd edition, 2003
ISBN: 978-0072467505
VxWorks:
[51] http://www.windriver.com/products/vxworks
RTEMS:
[52] OAR Corporation:
http://www.rtems.com
MIL-STD-1553B:
[53] MIL-STD-1553B:
Digital Time Division Command/Response Multiplex Data Bus.
United States Department of Defense, September 1987.
http://www.sae.org/technical/standards/AS15531
[54] N.N.:
MIL-STD-1553 Tutorial and Reference from Alta Data Technologies
http://www.altadt.com/support/tutorials/mil-std-1553-tutorial/
SpaceWire:
[55] ECSS SpaceWire Standard Homepage:
ECSS-E-ST-50-12C – SpaceWire – Links, nodes, routers and networks
http://www.ecss.nl/forums/ecss/_templates/default.htm?target=http://www.ecss.nl/forums/ecss/dispatch.cgi/standards/docProfile/100654/d20080802144344/No/t100654.htm
Subpages:
ECSS-E-ST-50-53C SpaceWire – CCSDS packet transfer protocol
ECSS-E-ST-50-52C SpaceWire – Remote memory access protocol
ECSS-E-ST-50-51C SpaceWire protocol identification
[57] http://en.wikipedia.org/wiki/SpaceWire
[59] Davis, Robert I.; Burns, Alan; Bril, Reinder J.; Lukkien, Johan J.:
Controller Area Network (CAN) schedulability analysis: Refuted, revisited
and revised,
Real-Time Systems
Volume 35, Number 3, 239-272, DOI: 10.1007/s11241-007-9012-7,
http://www.springerlink.com/content/8n32720737877071/
[60] http://en.wikipedia.org/wiki/Controller_area_network
JTAG / ICD:
[62] http://en.wikipedia.org/wiki/JTAG
[63] http://en.wikipedia.org/wiki/In-circuit_debugger
Service Interface:
[64] Wiegand, M.; Schmidt, G.; Hahn, M.:
Next Generation Avionics System for Satellite Application,
Proceedings of DASIA 2003 (ESA-SP-532) pp. 38 ff, 2-6 June, 2003,
Prague, Czech Republic
http://articles.adsabs.harvard.edu//full/2003ESASP.532E..38W/0000038.001.html
[66] http://en.wikipedia.org/wiki/Technology_readiness_level
[68] http://en.wikipedia.org/wiki/Solid-state_drive
[70] http://www.everspin.com/products.html
[88] N.N.:
Overview of IDEF0:
http://www.idef.com/idef0.htm
[89] http://www.esa.int/TEC/Software_engineering_and_standardisation/TECKLAUXBQE_0.html
[96] N.N.:
OpenAmeos – The OpenSource UML Tool,
http://www.openameos.org/
ECSS Standards:
[100] ECSS-E-ST-40C Space Engineering – Software
[101] ECSS-Q-ST-80C Space product assurance – Software product
assurance
DO-178B:
[102] RTCA/EUROCAE:
Software Considerations in Airborne Systems and Equipment Certification
[103] http://www.rtca.org/downloads/ListofAvailable_Docs_WEB_NOV_2005.htm
MIL Standards:
[106] MIL-STD-2167A,
Military Standard, Defense System Software Development,
Department of Defense, Washington, D.C., February 29, 1988.
[108] http://www.esa.int/SPECIALS/Proba_web_site/SEMHHH77ESD_0.html
Maryland, USA,
57th International Astronautical Congress, Valencia, Spain,
October 2-6, 2006
[111] http://www.nasa.gov/mission_pages/newhorizons/main/index.html
[115] http://en.wikipedia.org/wiki/Tcl
[116] http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
References Diverse
[117] N.N.:
Radiation Resistant Computers:
http://science.nasa.gov/science-news/science-at-nasa/2005/18nov_eaftc/
[118] N.N.:
AMBA on chip bus architecture:
http://www.arm.com/products/system-ip/amba/amba-open-specifications.php
[120] Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sandi A.; Witt,
Rouven; Röser, Hans-Peter:
Common board design for the OBC I/O unit and the OBC CCSDS unit of
the Stuttgart University Satellite "Flying Laptop"
Data Systems in Aerospace,
DASIA 2011 Conference,
17 - 20 May, 2011, San Anton, Malta
Index
A

Actel ProASIC 47
Actel RT-AX 47
Ada 33, 42, 46, 120, 135, 136, 254
Aeolus 45
ALGOL 135
Algorithm in the Loop 149, 151
AMBA bus 47
AMD 2900 40
Analog sensor equipment 56
Analog spacecraft control 23
Antenna effects 73
Antenna ground station 235
AOCS mode 193
AP-101 32
Apollo program 24, 29
Application Process Identifier 95, 111, 187
Application Specific Integrated Circuit 44
ARINC 825 62
ARM 46, 47, 54
ASIC 78, 156
Assembler 26, 30, 33, 37, 135
Assembly, Integration and Testing 160
ATLAS 41
Attitude acquisition 197
Attitude and Articulation Control Subsystem 36, 38
Attitude and Orbit Control System 56
ATV 211
Authentication 203
Autocode 148
Autonomy 163
Autonomy testbed 256

B

Ball Grid Array 72
Bepi Colombo 61, 127
Bit failure 126
Bitslice arithmetic logical unit 40
Boot 246
Boot loader 91
Boot memory 54, 56
Boot report 118
Breadboard Model 77
Built-in self test 37
Bus controller 54, 59

C

C 42, 136, 254
C++ 136
Calibration 250
CAN bus 47
CANaerospace 62
Cassini 41
CCSDS 62, 154
CCSDS packet 102
CCSDS processor 63, 101, 114
CCSDS standard 62, 95
Channel Access Data Unit 63, 95
Channel acquisition table 123, 124
CISC 43
Classroom training 244
Clock module 207
Clock strobe 207
Closed-loop 160
CMOS memory 36
Code inspection 135
Code instrumentation 67
Columbus Software Development Standard 168
Command and Data Subsystem 38
Command Link Transfer Unit 62, 95
Command Pulse Decoding Unit 64, 112, 187
Commissioning phase 250
Commissioning Phase 192
Compact PCI 47
Consultative Committee for Space Data Systems 62, 95
Control and Data Management Unit 6
Control console 152, 153
Controller Area Network 61
Controller in the Loop 149, 156, 157, 160
Controller network 58
Core Data Handling System 118
Critical Design Review 8
CryoSat 45, 52, 120, 158, 193, 237
Current free encoding 59

D

Data bus 54, 58
Data downlink 209
Data management autonomy 213