A Combined Data and Power Management Infrastructure for Small Satellites

Springer Aerospace Technology
Editor
Jens Eickhoff
Institut für Raumfahrtsysteme
Universität Stuttgart
Stuttgart
Germany
Innovation is the key to success in technical fields, and its importance cannot be
underestimated in space engineering domains working at the cutting edge of feasibility.
An agile industry therefore continuously supports innovative developments that
extend technical limits or abandon classical design paths. It is, however, particularly
difficult to implement such innovative approaches directly in commercial
satellite projects, in agency-funded productive systems like operational Earth
observation missions, or in navigation constellation programs.
The ideal platforms for "innovation verification" missions are governmental or
academic technology demonstrator programs. These demonstrator programs allow
for rapid prototyping of new capabilities and satellite architectures while keeping
capital expenditures as low as possible.
Technology development partnerships between industry and academic institutions
are frequently stressed and often cited. But for partnering in leading-edge
areas like the Combined Data and Power Management Infrastructure (CDPI)
described in this book, such academic/industry partnerships are extremely rare,
since they require substantial know-how on both the industry and the university side.
Once established, however, such cooperations provide a competitive edge for
both the university and the industry partners. For industry the advantage is
twofold: it allows the qualification of new space technology, and in parallel
students and Ph.D. candidates are educated to a knowledge level far
above the normal standard for graduates.
For Astrium, the University of Stuttgart has become a strategic technology
partner over the past decade due to the extensive small satellite programs that have
been established at its premises. These programs, together with the state-of-the-art
facilities at the Institute of Space Systems, hosted at the "Raumfahrtzentrum
Baden-Württemberg", go far beyond typical university CubeSat programs. The
FLP satellite, for example, will qualify industry-relevant flight hardware on board a
university satellite for the first time in the German context.
For this reason Astrium has invested significantly in the development of this
SmallSat program and in particular in this CDPI, both through direct sponsoring and
through the provision of manpower. The other consortium partners, Aeroflex,
Acknowledgments
In 2011, during the CDPI development, the 10-year-old daughter of one of the
authors was diagnosed with a breakdown of blood cell production
in the bone marrow, a variant of blood cancer. She fortunately could be saved and
recovered by means of a bone marrow transplant. Due to this experience the
authors decided to donate the royalties of this book to the German and
international bone marrow donors' database:
Contents
3 The I/O-Boards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.1 Common Design for I/O and CCSDS-Boards . . . . . . . . . . . . 43
3.2 The I/O-Board as Remote Interface Unit . . . . . . . . . . . . . . . 44
3.3 The I/O-Board as OBC Mass Memory Unit . . . . . . . . . . . . . 46
3.4 I/O-Board Hot Redundant Operation Mode . . . . . . . . . . . . . 46
3.5 I/O-Board RMAP Interface . . . . . . . . . . . . . . . . . . . . . . . . 47
3.5.1 Board Identification for I/O-Boards . . . . . . . . . . . . 47
3.5.2 I/O Board Interface RMAP Addresses . . . . . . . . . . 50
3.5.3 Returned RMAP Status Values . . . . . . . . . . . . . . . 50
3.6 I/O Circuits, Grounding and Terminations . . . . . . . . . . . . . . 51
3.7 I/O-Board Interface Access Protocols . . . . . . . . . . . . . . . . . 54
3.8 I/O-Board Connectors and Pin Assignments . . . . . . . . . . . . . 56
3.8.1 Connectors-A and C (OBC internal) . . . . . . . . . . . 56
3.8.2 Connector-B (OBC internal) . . . . . . . . . . . . . . . . . 56
3.8.3 Connectors-D and E (OBC external) . . . . . . . . . . . 57
3.9 I/O and CCSDS-Board Radiation Characteristic . . . . . . . . . . 58
3.10 I/O and CCSDS-Board Temperature Limits . . . . . . . . . . . . . 58
3.11 4Links Development Partner . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.5 Telemetry Encoder . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.5.1 Telemetry Encoder Specification . . . . . . . 66
4.2.5.2 Virtual Channels 0, 1, 2 and 3 . . . . . . . . . 67
4.2.5.3 Virtual Channel 7. . . . . . . . . . . . . . . . . . 68
4.2.6 Telecommand Decoder . . . . . . . . . . . . . . . . . . . . 68
4.2.6.1 Telecommand Decoder Specification . . . . 68
4.2.6.2 Software Virtual Channel . . . . . . . . . . . . 69
4.2.6.3 Hardware Virtual Channel . . . . . . . . . . . . 69
4.2.7 SpaceWire Link Interfaces . . . . . . . . . . . . . . . . . . 70
4.2.8 On-Chip Memory . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2.9 Signal Overview . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Telemetry Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3.2 Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.3.2.1 Data Link Protocol Sub-layer. . . . . . . . . . 73
4.3.2.2 Synchronization and Channel Coding
Sub-Layer . . . . . . . . . . . . . . . . . . . . . . . 73
4.3.2.3 Physical Layer . . . . . . . . . . . . . . . . . . . . 73
4.3.3 Data Link Protocol Sub-Layer . . . . . . . . . . . . . . . 73
4.3.3.1 Physical Channel . . . . . . . . . . . . . . . . . . 73
4.3.3.2 Virtual Channel Frame Service . . . . . . . . 74
4.3.3.3 Virtual Channel Generation:
Virtual Channels 0, 1, 2 and 3 . . . . . . . . . 74
4.3.3.4 Virtual Channel Generation:
Idle Frames—Virtual Channel 7. . . . . . . . 74
4.3.3.5 Virtual Channel Multiplexing . . . . . . . . . 74
4.3.3.6 Master Channel Generation . . . . . . . . . . . 75
4.3.3.7 All Frame Generation . . . . . . . . . . . . . . . 76
4.3.4 Synchronization and Channel Coding Sub-Layer. . . 76
4.3.4.1 Attached Synchronization Marker. . . . . . . 76
4.3.4.2 Reed-Solomon Encoder. . . . . . . . . . . . . . 76
4.3.4.3 Pseudo-Randomizer . . . . . . . . . . . . . . . . 76
4.3.4.4 Convolutional Encoder . . . . . . . . . . . . . . 76
4.3.5 Physical Layer . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.3.5.1 Non-Return-to-Zero Level Encoder . . . . . 77
4.3.5.2 Clock Divider . . . . . . . . . . . . . . . . . . . . 77
4.3.6 Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.3.7 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3.7.1 Descriptor Setup. . . . . . . . . . . . . . . . . . . 78
4.3.7.2 Starting Transmissions . . . . . . . . . . . . . . 79
4.3.7.3 Descriptor Handling After Transmission . . 80
4.3.8 Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.3.9 Signal Definitions and Reset Values . . . . . . . . . . . 82
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Abbreviations
General Abbreviations
a.m. Above mentioned
cf. Confer
i.e. Id est (that is)
w.r.t. With respect to
Technical Abbreviations
ADC Analog-to-Digital Converter
AES Advanced Encryption Standard
AIT Assembly, Integration, and Test
ASIC Application-Specific Integrated Circuit
ASM Attached Synchronization Marker
AWG American Wire Gauge
BBM Breadboard Model
BCH Bose–Chaudhuri–Hocquenghem
BCR Battery Charge Regulator
BoL Begin of Life
CAD Computer-Aided Design
CADU Channel Access Data Unit
CC Combined-Controller
CCSDS Consultative Committee for Space Data Systems
CD Clock Divider
CDPI Combined Data and Power Management Infrastructure
CE Convolutional Encoder
CE Circuit Enable/Chip Enable
CL Coding Layer
CLCW Command Link Control Word
CLTU Command Link Transfer Unit
CMOS Complementary Metal Oxide Semiconductor
CPDU Command Pulse Decoding Unit
Chapter 1
The System Design Concept

Jens Eickhoff

J. Eickhoff (&)
Astrium GmbH—Satellites, Friedrichshafen, Germany
e-mail: jens.eickhoff@astrium.eads.net

1.1 Introduction
A number of design "paradigms" have formed the baseline of the overall architecture
development. Starting with the OBC, these are the following (Fig. 1.1):
• The overall OBC should be implemented as a sort of "stack" of individual
Printed Circuit Boards (PCBs), each of them mounted in a coated aluminum
frame. All PCBs should be of single Eurocard size.
• All I/O into and out of the OBC to/from other spacecraft units should be routed
via connectors on the individual PCBs' top sides. All interfaces routing signals
between the individual PCBs should be routed via front connectors.
These two concepts later led to the overall box design depicted in
Fig. 1.2. The inter-board cabling is encapsulated in the OBC's front compartment,
which is closed by a front cover.
Fig. 1.2 Final OBC frame stack—CAD model. IRS, University of Stuttgart
Since this approach requires some buffers and memory chips on the board
anyway, these boards are designed, via additional memory and correspondingly
enhanced IP Core functions, to handle the storage of OBSW housekeeping data,
the S/C state vector and the S/C configuration vector.
• The fourth type of board comprises the CCSDS protocol decoder/encoder boards,
which are also coupled to the Processor-Boards via SpaceWire interfaces and on
the other side interface with the spacecraft's transceiver units. These boards
perform the low-level decoding of the uplinked telecommands and the low-level
encoding of the telemetry to be downlinked during ground station visibility.
The approach for these boards was to use the identical PCB and FPGA as for the
I/O-Boards, and to equip the CCSDS-Boards with only a limited number of
external interfaces, namely those to the transceivers, those to the PCDU for High
Priority Commands (HPC), and those for cross coupling of the Command Link
Control Word (CLCW) interfaces (see Sect. 4.2.2). Obviously these
CCSDS-Boards carry a different IP Core load since they perform a
completely different task.
The processing IP Cores in the FPGA on the CCSDS-Board and the software
libraries which are available in the RTOS tailoring and which run on the
Processor-Board were designed in a common architecture.
Looking at the above design paradigms, a reader familiar
with classic OBC design will immediately identify some elementary OBC
functions missing in the board types described so far. These functions have been
implemented through a sort of functional merging of OBC and PCDU, and they
will be explained in the following sections.
Fig. 1.3 Conventional infrastructure for data management (OBC) and power management
(PCDU) (block diagram: redundant decoder/encoder boards, CPUs, memories, reconfiguration
units and data bus controllers in the OBC; redundant PCDU controllers, battery and solar array
in the PCDU). Astrium—see [48]
Examples of this type of hardware failure are a bus controller which
sporadically does not respond and requires retries by the OBSW, or the failure case
in which the OBSW receives mass memory EDAC bit error messages. In such a case
the OBSW can send so-called High Priority Commands (HPC) to the active
Reconfiguration Unit for DH/Ops and Power FDIR, which then triggers the
reconfiguration to the redundant OBC bus controller, or in this case the entire
I/O-Board. In other error cases the Reconfiguration Unit may be forced to switch
over to the redundant Processor-Board. In the latter case the OBSW has to be
rebooted on the target Processor-Board.
To sum up, the triggering of the reconfiguration process is initiated in these
cases by the OBSW, the diverse reconfiguration steps with redundant subunit
Failure type 2 covers all those errors which coincide with a crash of the OBSW. In
such a case a Reconfiguration Unit outside the Processor-Board first has to detect the
OBSW failure and then has to perform the reconfiguration.
In the simple case of an OBSW crash due to a bug or an IC Single-Event Upset
(SEU), the OBSW has to be restarted by the Reconfiguration Unit. The failure
detection usually is performed by means of the processor cyclically sending
watchdog signals to the Reconfiguration Unit, with the latter starting FDIR actions
in case the watchdog signal exceeds a timeout.
The reconfiguration activities range from a simple processor soft reset to a
complete Processor-Board and/or I/O-Board redundancy switchover, depending on the
symptoms detected by the Reconfiguration Unit. The number and type of watchdogs
and timers obviously have to be designed appropriately for the Reconfiguration
Unit to be able to discriminate between different error cases.
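The escalation logic described above, watchdog timeout detection followed by increasingly drastic reconfiguration steps, can be sketched as follows. Class, action names and the timeout value are purely illustrative assumptions, not the actual FLP Reconfiguration Unit firmware.

```python
import time

# Hypothetical escalation ladder: first a soft reset, then board switchovers.
ESCALATION = ["soft_reset", "processor_board_switchover", "io_board_switchover"]

class ReconfigurationUnit:
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s          # watchdog timeout window
        self.last_kick = time.monotonic()   # time of last watchdog signal
        self.failures = 0                   # consecutive timeouts observed

    def kick(self):
        """Called cyclically by a healthy OBSW."""
        self.last_kick = time.monotonic()
        self.failures = 0                   # a live OBSW resets the ladder

    def check(self, now=None):
        """Return the FDIR action to trigger, or None if the OBSW is alive."""
        now = time.monotonic() if now is None else now
        if now - self.last_kick <= self.timeout_s:
            return None
        # Watchdog expired: escalate one step per detection cycle.
        action = ESCALATION[min(self.failures, len(ESCALATION) - 1)]
        self.failures += 1
        self.last_kick = now                # restart the window for next step
        return action
```

A first timeout thus yields only a soft reset; persistent timeouts escalate to a redundancy switchover, mirroring the symptom-dependent behavior described in the text.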
Such an automated reconfiguration with a broken OBSW is only possible for a
limited set of failure cases. The determination of the complex ones, and moreover
the root cause analysis, requires intervention from ground, which leads to the
next failure class described below.
In summary, the Processor-Board shall feature watchdog
lines to the Reconfiguration Unit for detection of OBSW crashes or I/O-Board
signal routing failures, which then can be used for OBSW restart or hardware
reconfiguration, the latter only in very limited cases.
to trigger LCL relay on/off switching, according to what the targeted unit's status
shall be.
This allows ground to activate or deactivate onboard equipment completely
independently of any OBSW in order to overcome severe failure situations.
The final main type of failure comprises those leading to a power shortage on board. In
such a case, independent of whatever the root cause was, the S/C equipment is
shut off in several steps. The first units to be disabled are payloads and payload
data mass memory units. In further sequence, if the OBSW apparently is
unable to manage the problem, the platform units including the OBC are shut down.
And in case even these measures do not overcome the problem, the PCDU finally
deactivates itself. For these cases a PCDU is equipped with an auto-activation
function for its controller as soon as the S/C power bus again supplies sufficient
voltage, e.g. due to the satellite returning from orbit eclipse phase to sun phase.
With increasing power availability the PCDU subsequently activates further S/C
platform units, the first unit being the OBC, respectively in Safe Mode the redundant
OBC (depending on the latest settings in the Reconfiguration Unit). The latter then
activates platform AOCS units to achieve a stable S/C safe-mode attitude acquisition
and potential ground contact through activation of the transceivers.
In summary, in these cases reboot and configuration are initiated by the PCDU
controller, and again PCDU controller and Reconfiguration Unit together manage
the recovery.
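The staged load shedding and the reverse-order reactivation described above can be modeled in a few lines. The unit names, the number of steps and the voltage thresholds are hypothetical assumptions for illustration; the actual FLP values are defined in the PCDU firmware (Chap. 8).

```python
# Staged shedding on bus undervoltage: lower-priority loads drop out first.
# Thresholds (volts) and unit names are hypothetical, not FLP PCDU values.
SHED_STEPS = [
    (22.0, ["payload", "payload_mass_memory"]),  # step 1: payloads first
    (21.0, ["platform_units"]),                  # step 2: platform equipment
    (20.0, ["obc"]),                             # step 3: the OBC itself
    (19.0, ["pcdu_controller"]),                 # final: PCDU deactivates itself
]

def units_to_disable(bus_voltage):
    """Return the loads shed at a given bus voltage, in shedding order."""
    off = []
    for threshold, units in SHED_STEPS:
        if bus_voltage < threshold:
            off.extend(units)
    return off

def reactivation_order(disabled):
    """Auto-activation with returning power: controller and OBC first."""
    priority = ["pcdu_controller", "obc", "platform_units",
                "payload_mass_memory", "payload"]
    return [u for u in priority if u in disabled]
```

With a fully depleted bus, everything is off; as power returns, the PCDU controller self-activates first and brings up the OBC next, matching the recovery sequence in the text.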
Fig. 1.4 Combined Data and Power Management Infrastructure (block diagram: redundant
decoder/encoder boards, CPUs and memories in the OBC; redundant Combined-Controllers,
battery and solar array in the PCDU). Astrium—see [48]
The FLP target satellite from the University of Stuttgart is the first one to fly
such a CDPI architecture with a Combined-Controller. In this first implementation
the Combined-Controller physically resides in the PCDU housing and is an
implementation of the standard Vectronic Aerospace PCDU controller with
enhanced firmware functions defined by the Stuttgart FLP team and implemented
by Vectronic Aerospace. More details on the PCDU and its controller functions
can be found in Chap. 8.
In normal operational cases the OBSW still can command the "PCDU" controller,
i.e. the Combined-Controller, to switch spacecraft equipment power lines
on and off. To avoid overloading the figure above, the routing of these links has
been cut out (in comparison to Fig. 1.3). The same applies to the omitted data bus
lines to the diverse spacecraft equipment which were still depicted in Fig. 1.3.
The reconfiguration is now triggered by the Combined-Controller either via
soft-reset command lines to the individual OBC subunits (e.g. the CPU) or via power
resets, respectively via power-down of the defective unit and power-up of the
component's redundancy.
As an example, the Combined-Controller (in the PCDU in the current implementation
example) can separately power each OBC decoder/encoder board, each
Processor-Board with CPU, NVRAM, RAM and clock module, and finally also each
OBC I/O-Board. The apparently "additional" cabling needed between CC and these
elements, compared to a classic architecture, was hidden inside the reconfiguration
unit of the OBC in the design of Fig. 1.3 and thus represents no real additional effort.
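Conceptually, a reconfiguration by the Combined-Controller is a power-down of the failed board side plus a power-up of its redundancy, one switch pair per OBC subunit. The following toy model illustrates only this switching pattern; board names and the interface are invented for the sketch and do not reflect the Vectronic Aerospace firmware.

```python
# Toy model of per-board power switching by the Combined-Controller:
# every OBC subunit has a nominal ("N") and a redundant ("R") power line.
class CombinedController:
    BOARDS = ["ccsds", "processor", "io"]   # hypothetical subunit names

    def __init__(self):
        # one switch state per board and side; nominal side on at start
        self.power = {(b, side): side == "N"
                      for b in self.BOARDS for side in ("N", "R")}

    def reconfigure(self, board):
        """Switch a board from its active side to its redundancy."""
        active = "N" if self.power[(board, "N")] else "R"
        spare = "R" if active == "N" else "N"
        self.power[(board, active)] = False   # power down the defective unit
        self.power[(board, spare)] = True     # power up the redundancy
        return spare
```

A Processor-Board switchover is then simply `reconfigure("processor")`, after which the OBSW reboot on the newly powered board follows as described earlier.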
The advantages of the CDPI design with one Combined-Controller instead of the
triple made up of the OBC's Reconfiguration Unit, the Command Pulse Decoding
Unit and the PCDU's internal controller are as follows:
1. Only one single critical controller IC (FPGA or ASIC) has to be designed,
respectively only a single chip firmware has to be implemented.
2. Intensive tests have to be performed for only one single controller chip.
3. Complex tests for proper interaction of a classic OBC Reconfiguration Unit
with the PCDU controllers for handling of the diverse failure cases are
significantly simplified. Some specific cases become completely obsolete.
4. For the OBSW design, compared to the classical architecture, only minimal
adaptations are necessary for triggering system reconfigurations, so no additional
effort is implied in this field either.
5. The OBC CCSDS decoder/encoder board architecture is also untouched by the
new concept implementation, except for the I/O cabling.
6. The analog CPDU, which as a subunit of the OBC's Reconfiguration Unit in the
conventional architecture serves to submit the analog pulse commands to the
PCDU controller, is completely obsolete in the new concept. The corresponding
class 1 High Priority Commands (HPC) can be submitted directly to the
Combined-Controller from the OBC CCSDS decoder via a normal digital link.
A corresponding analog LCL control electronic inside the OBC is also obsolete.
With this presented overall system design approach for the Combined Data and Power
Management Infrastructure (CDPI), all necessary flight-relevant functions (see also
[10]) are covered, although not all are allocated to the classic boards or components:
• OBC power supply → on OBC Power-Boards
• OBC processor and internal data bus → on OBC Processor-Boards
• OBC memory, non-volatile for boot loader, operating system and OBSW,
and volatile as work memory → on OBC Processor-Boards
• OBC packet stores for Housekeeping Telemetry (HK TM), spacecraft state
vector and spacecraft configuration vector → on I/O-Boards
• OBC digital RIU function, coupling of all non-SpaceWire digital equipment
I/O to the OBC → on I/O-Boards
• Interface to the spacecraft transceivers, decoding TCs and encoding TM → on
CCSDS-Boards
• OBC analog RIU functions, coupling all analog interface control and analog
parameter measurements to the OBC → implemented in the PCDU and
commanded/controlled from the OBC via OBC-PCDU UART interfaces
• OBC reconfiguration functions → implemented in the Combined-Controller, in
this implementation realized by the PCDU controller with enhanced firmware
• The OBC HPC interface functionality, implicitly implementing the functions of
an OBC's CPDU → implemented in the Combined-Controller firmware and
accessible through additional UART interfaces via the OBC CCSDS-Boards
• Bus power management functions → implemented in the PCDU Combined-
Controller
• Equipment power supply switching and overvoltage/overcurrent protection →
implemented in the PCDU Combined-Controller
• Power bus undervoltage FDIR functions (DNEL functions) → implemented in
the PCDU Combined-Controller.
An overview figure of the CDPI with all external I/Os and the interlinks
between the OBC and PCDU parts is included in Fig. 1.5. Box-internal cross connects,
such as between Processor-Boards and I/O-Boards, are not included in order not to
overcomplicate the figure. The same applies to cross couplings of redundant
elements within the OBC, respectively the PCDU. All of these are treated in more
detail in Chap. 6.
Fig. 1.5 CDPI overview with all external I/Os and the interlinks between OBC and PCDU
(block diagram; labels include UART service interfaces, JTAG debug/OBSW load, PPS in/out,
IIC, RS422 and logic I/O lines, TC/TM via NRZ-L over RS422, HPC UARTs, solar array and
battery power interfaces, relays and LCLs for the nominal and redundant OBC cores,
I/O-Boards and CCSDS-Boards)
1.6.1 Processor-Boards
At the time of concrete development of the FLP OBC and PCDU, the initial problem
was to find a supplier providing a suitable CPU board for the OBC. The initial idea
was to base the development on one of the diverse available CPU test boards from
Aeroflex Gaisler AB and to implement the necessary modifications, since these test
boards were not designed to be flight hardware (Fig. 1.6).
By coincidence the author met Jiri Gaisler, the founder of the former Gaisler
Research, today Aeroflex Gaisler AB, at the Data Systems in Aerospace
conference in May 2009 in Istanbul, and Jiri Gaisler was aware that Aeroflex
Colorado Springs had just started the development of a Single Board Computer
(SBC) based on their LEON3FT processor UT699.
The LEON3FT (cf. [49] and [50]) architecture includes the following peripheral
blocks (please also refer to Fig. 1.7):
Fig. 1.7 UT699 block diagram: LEON3FT SPARC V8 integer unit with IEEE-754 FPU,
caches and debug support unit (serial/JTAG debug links), connected via the AMBA AHB/APB
buses to the memory controller, SpaceWire links with RMAP, CAN 2.0, PCI initiator/target,
Ethernet MAC 10/100, UART, timers, interrupt controller and I/O port
• LEON3FT SPARC V8 integer unit with 8 kByte instruction and 8 kByte data
cache
• IEEE-754 floating point unit
• 8/16/32-bit memory controller with EDAC for external PROM and SRAM
• 32-bit SDRAM controller with EDAC for external SDRAM
• 16-bit general purpose I/O port (GPIO)
• Timer/watchdog unit
• Interrupt controller
• Debug support unit with UART and JTAG debug interface
• 4 SpaceWire links with RMAP
• Up to two CAN controllers
• Ethernet
• cPCI interface.
What stands out here is that the system has the processor itself and the interface
controllers for CAN bus, Ethernet, cPCI and especially SpaceWire all implemented
on the same chip. Furthermore, the chip provides a debug interface. All these
additional on-chip elements are connected to the CPU core via an internal AMBA
bus, which was originally developed by ARM Ltd. for the ARM processor family.
A block diagram of the overall Processor-Board design intended at that time is
depicted in Fig. 1.8. This system was designed to include:
• the processor,
• SRAM memory for OBSW computation,
• non-volatile RAM (NVRAM), which could be used as EEPROM for storing the
initial boot OBSW image,
• SpaceWire interfaces to couple the other foreseen OBC board types to it,
• RS422 interfaces and
• a clock module.
Processor-Board generated PPS output. All details on the final design "as
implemented" can be taken from the Processor-Board description in Chap. 2 of
this book and from Fig. 2.5.
Since Aeroflex Colorado Springs was not permitted to deliver any bootloader
or software/RTOS for the Processor-Board due to ITAR restrictions, the solution was
the partnership with Aeroflex Gaisler AB, Sweden, applying their RTEMS SPARC
tailoring running on the LEON3FT.
On the satellite ground command/control side the university uses a SCOS system
licensed from ESOC and thus applies the CCSDS spacecraft TC/TM control
standards [23]. Therefore a TC/TM decoder/encoder functionality according to
these standards was necessary. Being already in contact with Aeroflex Gaisler, it
was the obvious choice to utilize Aeroflex Gaisler's CCSDS TC/TM processing
infrastructure for the decoder/encoder boards (cf. [62]). For the students developing
the satellite this avoided the huge effort of programming corresponding functionalities
for these complex frame management, compression and bit-stream error correction
functions (see [10]), which would definitely have exceeded the team's resources.
• The CCSDS Telecommand Decoder implements in hardware the synchronization
and channel coding sub-layer and part of the physical layer. The higher layers are
implemented in software as libraries, integrated into the Aeroflex Gaisler
SPARC V8 tailoring of the RTEMS real-time operating system. This software
implementation of the higher layers allows for implementation flexibility and
accommodation of future standard enhancements. The hardware-decoded
command outputs and pulses do not require software and can therefore be used
for critical operations. The CCSDS telecommand decoder provides the entire
functionality of reassembling TCs from the NRZ-L CLTU level coming from
Fig. 1.11 Telecommand packet definition. IRS, University of Stuttgart, template Astrium
Fig. 1.12 Telemetry packet definition: a TM transfer frame of 1115 byte = 8920 bits,
consisting of a 6-byte frame header, a 1105-byte frame data field and a 4-byte frame trailer.
IRS, University of Stuttgart, template Astrium
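The frame dimensions quoted for Fig. 1.12 can be checked with a line of arithmetic; this sketch only verifies the byte budget of the TM transfer frame layout given there and adds nothing beyond it.

```python
# Byte budget of the TM transfer frame per Fig. 1.12.
FRAME_HEADER = 6        # frame header, bytes
FRAME_DATA_FIELD = 1105 # frame data field, bytes
FRAME_TRAILER = 4       # frame trailer, bytes

frame_bytes = FRAME_HEADER + FRAME_DATA_FIELD + FRAME_TRAILER
assert frame_bytes == 1115          # 1115-byte transfer frame
assert frame_bytes * 8 == 8920      # i.e. 8920 bits, as quoted
```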
The bridge between the Processor-Boards and the spacecraft platform and payload
equipment is realized via intermediate I/O-Boards (Fig. 1.13). These I/O-Boards
mimic the digital interface function of a Remote Interface Unit (RIU) in a
commercial S/C. The I/O-Boards are connected to the Processor-Boards via a
SpaceWire connection running the RMAP protocol. Two redundant I/O-Boards are
available and are cross coupled to the two redundant Processor-Boards. For the
development of the I/O-Boards the university selected 4Links Ltd., UK, as partner
due to their extensive experience with SpaceWire equipment and software.
Another reason for selecting 4Links was the author's excellent experience
with 4Links SpaceWire test equipment on ground.
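The RMAP coupling can be pictured with a toy model: the OBSW reads and writes memory-mapped registers on the remote I/O-Board by address. The register addresses and values below are hypothetical; the real address map is given in Sect. 3.5.2, and a real link uses the full ECSS RMAP packet format with header and data CRCs, which this sketch deliberately omits.

```python
# Toy stand-in for one I/O-Board seen as an RMAP target: remote,
# memory-mapped registers accessed by address. No packet framing or
# CRCs are modeled; addresses and contents are invented for illustration.
class RmapTarget:
    def __init__(self, registers):
        self.registers = dict(registers)

    def read(self, address):
        """Model of an RMAP read command/reply pair."""
        return self.registers[address]

    def write(self, address, value):
        """Model of an RMAP write command."""
        self.registers[address] = value

io_board = RmapTarget({
    0x0000: 0xCAFE,   # hypothetical board-identification word
    0x0100: 0x0001,   # hypothetical status register
})
```

The point of the pattern is that the Processor-Board needs no knowledge of the I/O-Board internals beyond this address map, which is what makes the common PCB/FPGA design for I/O- and CCSDS-Boards practical.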
For the power supply boards, which are described in more detail in Chap. 5,
it was decided right from the start to have them manufactured at the IRS by the
electrical engineering team. The main task of these Power-Boards is to convert
the satellite power bus voltage of 24 V down to the voltages required by the
different OBC data handling boards, which is 3.3 V nominal (Fig. 1.14).
During the design evolution the Power-Boards were additionally adapted to route
some of the OBC-internal signals, which are available e.g. on the Processor-Board's
front connectors, to the top of the overall housing assembly; please refer
to Figs. 1.1 and 1.2.
Furthermore, a logic circuitry was mounted to route the active pulse signal from
the multiple redundant GPS receivers as a single output line towards the Processor-Boards.
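Functionally, combining the redundant GPS pulse outputs into one line is an OR over the active signals. The one-line sketch below models only this behavior, not the actual circuit on the Power-Boards.

```python
# Behavioral model of the pulse-combining logic on the Power-Boards:
# whichever redundant GPS receiver is active drives the single PPS line.
def combined_pps(gps_pulses):
    """gps_pulses: iterable of booleans, one per redundant GPS receiver."""
    return any(gps_pulses)
```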
As already cited in Sect. 1.3, the PCDU supplier for the satellite mission was
already selected at the start of the overall CDPI architecture design. However, the
cited functions for
• overall analog equipment command and control,
• the OBC reconfiguration and
• the HPC interface functionality implicitly implementing the functions of an
OBC's CPDU
were specified to the supplier during the engineering process in accordance with
the CDPI concept as explained in Sect. 1.4. All the PCDU functional upgrades
from a standard PCDU controller to the Combined-Controller were implemented
by Vectronic Aerospace into the PCDU firmware.
Details on these functions and features, on redundancy handling, cross couplings
etc. can all be found in Chap. 8. These functions were already available in the
overall EM Satellite Test Bed (STB), and the corresponding PCDU EM is
shown in Fig. 1.15.
Fig. 1.15 Power Control and Distribution Unit—EM. IRS, University of Stuttgart/Vectronic
Aerospace
The entire CDPI electronics subsequently was tested on an EM Satellite Test Bed
(STB) and later on a satellite FlatSat setup. Since the units became
available step by step for the EM, partly as EM and partly as breadboard models,
and the overall STB setup was non-redundant, these units were mounted into a
19'' rack together with power supply equipment, debugging equipment and a
spacecraft simulator. Together
Fig. 1.16 Satellite test bed for OBC EM/EBB integration tests. IRS, University of Stuttgart
All Flight Model components of the CDPI, namely the OBC boards and the PCDU,
have been manufactured under cleanroom conditions. The same applies to the
assembly of the OBC from the individual frames and to the integration of OBC and
PCDU into the overall CDPI. These steps have been performed by the IRS in the
cleanroom facilities of the "Raumfahrtzentrum Baden-Württemberg" in Stuttgart.
Figure 1.17 depicts the CDPI assembly Flight Model of the FLP satellite in the IRS
cleanroom. The setup here consists of the two separate housings of OBC and PCDU,
ready for connection of the cabling for:
• OBC telemetry output and telecommand input lines to/from the satellite
transceivers, respectively the bypass
• control lines from OBC to PCDU for normal control of power functions and for
control of the PCDU's analog RIU functions
• OBC watchdog lines to the PCDU Combined-Controller
• HPC lines for PCDU commanding from ground via the CCSDS decoders.
The input power to the PCDU is provided here by a laboratory supply, which
for flight is replaced by the solar array and the battery. The onboard battery is
not yet mounted in this setup. For the tests of this assembly on unit and
system level please refer to Chap. 9.
Fig. 1.17 CDPI FM units forming the start of the satellite FlatSat testbench. IRS, University
of Stuttgart
1 "Low-cost" in the sense of a low-cost platform versus industry or agency project price ranges.
• The OBC design can be extended with external SpaceWire interfaces by adding
SpaceWire router boards, one for the nominal and one for the redundant OBC
side.
• The OBC design with its inter-board cabling and the external interfaces can be
adapted to a classical PCB/backplane design if required. However, looking back
at the individual, parallel board engineering processes and the decoupling of
design and production cycles during the prototype development, the frame-based
approach with an inter-board harness showed significant benefits concerning
a late design freeze for the boards.
• The overall OBC architecture is also flexible enough to exchange the CPU board
for future multi-core LEON chip implementations such as the GR712 LEON3
dual-core SBC, which is under development at Aeroflex Gaisler AB.
• Optionally the CDPI infrastructure can also be enhanced with an integrated
payload data management unit for future missions.
The engineering team is highly interested in applying the CDPI system concept
in further missions as well.
Chapter 2
The OBC Processor-Boards
Though not typically found in home PCs, single-board computers (SBCs) are very
common in satellite applications as well as in many industrial and aerospace
computer systems. They are a much more specialized type of computer and, in the
case of satellite and military applications, are designed for more extreme
environments.
In general, most SBCs used for satellite or automation purposes implement
specialized operating systems called Real-Time Operating Systems (RTOS). One of
the main concepts behind an RTOS is the idea commonly referred to as ‘‘deter-
minacy’’. Determinacy is the ability of a computer system to calculate with great
accuracy the response time of any given process. Most satellite implementations of
processors require the tasks performed by the processor to be highly deterministic.
Fig. 2.2 Processor-Board engineering model and flight model. Aeroflex Inc.
The processor on an SBC is the heart of the board and is chosen based on the
performance requirements of the program. In this case the term ‘performance’ is
used in a much broader context than one would usually think. For a satellite,
30 S. Stratton and D. Stevenson
All SBCs require memory in order to process data and perform tasks. Two dif-
ferent types of memory fulfill different functions inside an embedded computer:
Non-volatile Memory:
The first type of memory is known as non-volatile and is named as such due to the
fact that when the power is removed from the board the device retains the data
stored inside its cells. Non-volatile memory is essential because it typically
contains the operating system the processor will use when it boots on power-up.
In a home PC the non-volatile memory is fulfilled by the boot EEPROM and the
2 The OBC Processor-Boards 31
hard disk drive though in recent years we are seeing more and more solid state
devices take the place of a hard disk. Since there are no hard disk drives that are
qualified to fly in space, the non-volatile memory in a satellite needs to be of the
solid state variety. Some types of non-volatile memory include Flash-Memory,
EEPROM, FRAM, and MRAM. These devices will retain data when not powered.
Volatile Memory:
The second type of memory on an SBC is referred to as volatile memory. Volatile
memory will not retain data when the power is removed and therefore can only be
used by the processor when it is powered up and performing its intended function.
Common varieties of volatile memory include SRAM, Synchronous SRAM,
Dynamic RAM (DRAM), and Synchronous Dynamic RAM (SDRAM). For
satellite applications, the most common form of volatile memory is Static
Random Access Memory (SRAM).
During the course of designing a processor system, requirements that are not easily
fulfilled using a microprocessor will often require implementation in discrete
logic or, more likely, a Field Programmable Gate Array (FPGA). The FLP program
has some OBC Processor-Board requirements that are not easily implemented in
software on the LEON3FT, so the design team decided to implement these
functions on a radiation-tolerant FPGA (Fig. 2.4).
An FPGA is typically a better choice than discrete logic because it will
usually take up less board space and will also most likely use less power.
The process of choosing an FPGA for a satellite system is similar to the process of
choosing a microprocessor: electrical, temperature as well as mechanical and
radiation performance need to be considered prior to making a choice.
For the FLP satellite program the Aeroflex UT6325 in the CGA484 package was
chosen for its radiation performance as well as for its relatively small footprint and
ease of implementation. The device is well suited for housekeeping functions and
other tasks that would be very difficult if not impossible to implement using
discrete logic devices. The device is readily available and has a good flight history.
Fig. 2.5 OBC Processor-Board block diagram (flight model). Aeroflex Inc.
One important aspect, explained in more detail in the following sections, is that
the FPGA handles the CE signals to both the volatile and the non-volatile
memories.
All microprocessors require memory to perform their desired function. The OBC
Processor-Board memory interface was implemented based on requirements
passed down from the University of Stuttgart to the designers at Aeroflex. Any
issues or changes to the board during the design process were handled in
consultation with the University.
The amount of on-board memory contained on any Processor-Board is a very
important element in the performance of the processor. This is true of home
PCs as well as SBCs and follows the common understanding that more is better.
As mentioned previously, the processor requires two types of memory—volatile
and non-volatile.
• Non-volatile memory: This type of memory will retain data even when the
power to the board has been turned off. It is suitable for storing the processor
operating system. This type of memory is similar in function to the boot
EEPROM and hard disk used in home computers. It typically is slower than most
types of volatile memory which however is acceptable since it is accessed only
during boot-up of the computer.
• Volatile memory: This type of memory does not retain data when the power to
the device is turned off. It is suitable for use by the processor when running its
intended tasks (Fig. 2.6).
The amount of each type of memory on the board is dependent on the functional
requirements of the board being designed. The non-volatile memory devices need
to be dense enough to hold the entire RTOS image as well as any boot code that
the user desires.
The non-volatile memory interface was designed to use both of the ROM Select
(ROMS) signals on the LEON3FT. ROMS[0] begins at address 0x00000000 and
ROMS[1] begins at 0x10000000. Each bank of non-volatile memory provides
2 MB of FRAM to the LEON3FT.
Figure 2.7 shows the top level interface from the LEON3FT to the FRAM
devices. The ROMS signal from the LEON3FT to the FPGA is essentially the
Chip Enable signal from the LEON3FT. The FPGA manipulates the timing of the
ROMS signal to create the CE signals to the FRAM devices. The manipulation is
such that the CE signals meet the timing of the FRAMs; the system impact is that
three wait states are required when the LEON3FT accesses the non-volatile
memory space. Since these memories are not read from frequently, the impact on
processor performance is almost negligible.
• LEON3FT Access to NVMEM:
The LEON3FT processor has two Chip Enables for the ROM memory area on
the device. The non-volatile memory devices are mapped into the ROMS[0] and
the ROMS[1] space of the LEON3FT. The ROMS signals are Chip Enables for
the non-volatile memories. The result is that there are two banks of 2 MB each of
non-volatile memory on the FM SBC.
• NVMEM Wait States:
A minimum of three wait states needs to be set in the LEON3FT Memory
Configuration 1 (mcfg1) register when NVMEM accesses are performed. This
ensures correct timing of the LEON3FT interface, since the FPGA controls the
CE signals to the FRAM devices. The three wait states should be set for both
read and write. Refer to the LEON3FT Functional Manual [50] for a detailed
description of the mcfg registers in the LEON3FT. Note that on power up the
default wait states for the PROM memory area are set to the maximum of 30.
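The resulting bank layout amounts to a simple address decode. As a sketch (the bank base addresses and sizes are taken from the text above; the helper name is invented):

```python
# ROMS bank map of the OBC Processor-Board NVMEM, per the text:
# ROMS[0] at 0x00000000, ROMS[1] at 0x10000000, 2 MB of FRAM each.
NVMEM_BANKS = {
    0: (0x00000000, 2 * 1024 * 1024),  # ROMS[0]: base address, size
    1: (0x10000000, 2 * 1024 * 1024),  # ROMS[1]: base address, size
}

def nvmem_bank(addr):
    """Return (bank, offset) for an address in NVMEM space, else None."""
    for bank, (base, size) in NVMEM_BANKS.items():
        if base <= addr < base + size:
            return bank, addr - base
    return None

print(nvmem_bank(0x00000100))  # -> (0, 256)
print(nvmem_bank(0x10000000))  # -> (1, 0)
print(nvmem_bank(0x08000000))  # -> None (falls between the two banks)
```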
Fig. 2.8 Aeroflex 8 MB stacked SRAM with on-chip EDAC bits. Aeroflex Inc.
SpaceWire is a point-to-point serial bus that supports full duplex communication at
a data rate of up to 200 Mb/s. The protocol uses a simple ‘‘token’’ based system to
manage data flow at each end point. Each token character tells its receiver that
the node which sent the token has 8 bytes of data space available in its receive
buffer. Therefore, if a SpaceWire node has data to send, it may send 8 bytes of data
for each token it receives. The resulting behavior is simple: as long as
each side has data to send, and the received data is taken out of the
receive buffer, the system keeps running. For the FLP satellite program, the SBC
implements all four of the dedicated SpaceWire ports on the LEON3FT micro-
processor, operating at 10 Mb/s (Fig. 2.10).
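The token mechanism can be modelled in a few lines. This is an illustrative credit model only, not the SpaceWire implementation; the class name is invented:

```python
class SpWSender:
    """Minimal model of SpaceWire credit-based flow control."""

    def __init__(self):
        self.credit = 0  # data characters the far end can currently accept

    def receive_fct(self):
        """Each received flow-control token grants room for 8 more characters."""
        self.credit += 8

    def send(self, data):
        """Transmit as many characters as the current credit allows."""
        n = min(len(data), self.credit)
        self.credit -= n
        return data[:n]  # characters actually put on the wire

tx = SpWSender()
tx.receive_fct()                 # far end advertises 8 free buffer spaces
sent = tx.send(b"0123456789AB")  # 12 bytes queued, only 8 may go out
print(len(sent), tx.credit)      # -> 8 0
tx.receive_fct()                 # another token arrives: 8 more credits
print(len(tx.send(b"89AB")), tx.credit)  # -> 4 4
```

As long as tokens keep arriving (i.e. the receiver keeps draining its buffer), the sender keeps transmitting, which is the behavior described above.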
The OBC Processor-Board has a number of miscellaneous functions that are not
suitable for the LEON3FT microprocessor. These functions have been designed
into the UT6325 FPGA and are discussed in the following sections. FPGAs are
uniquely suited for such Processor-Board utility functions, which is why the OBC
Processor-Board designers chose the UT6325 (Fig. 2.11).
As stated previously, the FM22L16 FRAM devices chosen for the OBC Processor-
Board have timing that is incompatible with the timing of the LEON3FT. There-
fore, certain signals required to control the memory need to be managed by the on-
board FPGA. The signals in this instance are the FRAM Chip Enables. The internal
logic of the FPGA ensures the proper timing over worst case flight conditions.
The LEON3FT is not designed to interface directly with SRAM devices that
implement a stacked die configuration. The UT8ER2M39 SRAM devices used
here have four die layers inside one package. The on-board FPGA uses one of the
Chip Enables from the LEON3FT along with upper address bits to generate the
four Chip Enables to the SRAM devices.
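That die-select scheme can be pictured as a small decoder. In the sketch below the exact address bits used by the FPGA are an assumption, inferred only from the four-die, 2 MB-per-die organization; the function name is invented:

```python
def sram_chip_enables(cpu_ce_n, addr):
    """Derive four active-low die chip enables from one CPU chip enable.

    cpu_ce_n: active-low chip enable from the LEON3FT (0 = asserted)
    addr:     byte address within the 8 MB stacked SRAM region
    Assumed organization: four 2 MB dies, die selected by two upper
    address bits (2 MB = 2**21 bytes per die).
    Returns a list of four active-low CE signals, one per die.
    """
    die = (addr >> 21) & 0x3
    return [0 if (cpu_ce_n == 0 and d == die) else 1 for d in range(4)]

print(sram_chip_enables(0, 0x000000))  # -> [0, 1, 1, 1] (die 0 selected)
print(sram_chip_enables(0, 0x600000))  # -> [1, 1, 1, 0] (die 3 selected)
print(sram_chip_enables(1, 0x000000))  # -> [1, 1, 1, 1] (CPU CE not asserted)
```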
A very good example of a utility function suited for an FPGA is the Pulse Per
Second (PPS) requirement. The signal is used to synchronize the star tracker
interface of the FLP satellite to the LEON3FT. The signal is generated by the
on-board FPGA and provided on the 44-pin D-sub connector. Generating this type
of signal using MSI devices would require four or five separate chips and would
also most likely take up more board space than the FPGA.
The one-second pulse is shown in Fig. 2.12; this oscilloscope plot was taken from
the OBC FM unit. The timing is measured from one rising edge to the next rising
edge. The timing parameters for the signal are shown in Table 2.2. The signal is
also routed to one of the GPIOs on the LEON3FT, so it can be monitored by the
LEON3FT if the user desires. The GPIO used for this input is GPIO 10.
Fig. 2.12 Processor-Board oscilloscope plot of the PPS signal. Aeroflex Inc.
The LEON3FT has an on-chip watchdog trip signal, and the OBC processor routes
that signal to the FPGA. When enabled, the logic inside the FPGA will use the
watchdog trip signal to reset the LEON3FT.
2.7.6 Resets
All digital circuit boards need to be reset at the very least on power up. The OBC
processor is no exception. There is a Power On Reset circuit that will hold the
LEON3FT in reset until the input power is stable (Fig. 2.13).
POR Reset:
The Power On Reset (POR) on the OBC Processor-Board is approximately 250 ms
long. The LEON3FT and the FPGA will be reset at power-up. The FPGA is only
reset at power-up and is designed to control the LEON3FT reset.
SpaceWire Clock:
The SpaceWire clock, as already cited, is 10 MHz.
The SBC has a 44-pin D-sub connector which is used for the 3.3 V ± 5 % input
power. In addition there are five MDM 9-pin connectors, four of which are for
SpaceWire interfaces and one for RS422. The SBC consumes no more than
5 W at full throughput.
The SBC is based on a 3U cPCI Printed Circuit Board (PCB). The dimensions are
100 mm by 160 mm. Refer to Fig. 2.14 for a conceptual drawing of the board and
the connector placement.
The approach for these two OBC components is a common design for the OBC
I/O-Boards and the CCSDS decoder/encoder boards (or CCSDS-Boards for short).
Both of these board types are available with single redundancy inside the OBC and
are connected to the two OBC Processor-Boards via cross coupled SpaceWire
links running the RMAP protocol. A key characteristic is that, although they
implement completely different functionalities, both boards are based on the same
3U printed circuit board, FPGA chip and I/O-driver IC design.
The I/O-Boards are designed by 4Links Limited, including both printed circuit
board and functionality implemented in the FPGA IP Core. All the I/O-Board IP,
for SpaceWire, RMAP, and the I/O interfaces is from 4Links. The SpaceWire core
is derived from the design used in 4Links test equipment for SpaceWire, which
uses synchronous clock and data recovery—greatly simplifying the design and
eliminating problems of asynchronous design. The simplified IP Core's ease of use
is attested by a recent customer who praised 4Links' support when, in practice,
negligible support was required (Fig. 3.1).
routing the onboard TM and TC signals directly to spacecraft’s GPS receivers, star
trackers, fiberoptic gyros, reaction wheels, magnetotorquer electronics, magnetometer
electronics, telemetry tracking and control equipment, payload controller, payload
data downlink system and the power control and distribution unit. The reader inter-
ested in the details of the FLP target satellite’s architecture is referred to Chap. 10 and
specifically to Fig. 10.5. In total, an I/O-Board provides 62 external interfaces:
• 2 SpaceWire links at 10 Mb/s.
• 22 UART interfaces, mostly full duplex, at rates from 9600 to 115200 baud
• 1 bus at 2 MHz for the fiberoptic gyro
• 2 IIC buses at 100 kHz and
• 35 digital I/O lines.
Data transfer between the I/O-Board and Onboard Computer occurs within a
schedule of 1.25 ms time slots. The most demanding transfer, 1200 bytes in a
single slot, amounts to not quite 1 MB/s. This is achieved with a SpaceWire raw
line rate of 10 Mb/s. The FPGA can thus be run with a 10 MHz clock, reducing
dynamic power consumption and also reducing the probability that an upset in the
combinatorial logic will be retained.
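The quoted figures are easy to verify: 1200 bytes every 1.25 ms is 960 kB/s, and since each SpaceWire data character occupies 10 bits on the wire (8 data bits plus parity and data-control flag), a 10 Mb/s link carries at most 1 MB/s of payload:

```python
# Worst-case transfer: 1200 bytes in one 1.25 ms (1250 us) slot
slot_us = 1250
payload_rate = 1200 * 1_000_000 // slot_us   # bytes per second
print(payload_rate)                          # -> 960000, "not quite 1 MB/s"

# Each 8-bit SpaceWire data character takes 10 bits on the wire
line_rate_bps = 10_000_000
max_char_rate = line_rate_bps // 10          # data characters (bytes) per second
print(max_char_rate)                         # -> 1000000
print(payload_rate < max_char_rate)          # -> True: the 10 Mb/s link suffices
```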
Transfers between the I/O-Board and attached devices are slower and several
transactions can occur simultaneously—this is far simpler and more easily verified
as concurrent hardware in FPGA than as pseudo-concurrent software on a pro-
cessor. The connection between the OBC and I/O-Board will be kept reasonably
busy with transfers at 10 Mb/s, making SpaceWire with its LVDS interface a good
choice. Other interfaces run at a much lower data rate, and source-terminated
RS422/485 lines are used for most connections so that power is consumed only
during edge transitions, giving a very low average power consumption whilst the
wide voltage swing also provides good noise immunity.
The SpaceWire CODECs conform to the current ECSS standard [11], and the
associated RMAP protocol [13, 14] was chosen to structure transfers. Although a
Remote Memory Access Protocol (RMAP) is an obvious choice for accesses to
memory—MRAM that can be used as both normal SRAM and NVRAM—it is less
obviously suited to streaming data such as that found in UARTs.
External (non-memory) transfers are made to look like memory transfers with a
simple additional protocol between RMAP and the device. Details differ, depending
on the exact interface requirement (UART, IIC, etc.) but all follow the same overall
structure. Data to be sent from the I/O-Board (to an instrument or actuator) is first
written to a memory-mapped buffer where it is held until confirmed to be error-free.
It is transferred from the buffer to the external interface, taking account of those
interfaces that require a more complex action than a simple data-copy—for
example an IIC read which uses a write/read sequence of sending an address
followed by inputting data. Received data is stored in a buffer until read, as a
standard memory read, by a later transaction initiated by the OBC Processor-Board.
Each external interface is given a unique RMAP memory address.
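The buffer-then-commit pattern for outgoing data can be sketched as follows; this is an illustrative model, and the class and method names are invented rather than taken from the 4Links IP:

```python
class MappedUartTx:
    """Model of an external interface exposed as RMAP memory.

    Outgoing data is first written to a memory-mapped buffer and only
    transferred to the external interface once confirmed error-free.
    """

    def __init__(self):
        self.buffer = bytearray()   # memory-mapped staging buffer
        self.wire = bytearray()     # bytes actually sent out of the UART

    def rmap_write(self, data, crc_ok):
        """Stage data from an RMAP write; discard it on a CRC error."""
        if crc_ok:
            self.buffer.extend(data)

    def commit(self):
        """Move verified data from the buffer to the external interface."""
        self.wire.extend(self.buffer)
        self.buffer.clear()

uart = MappedUartTx()
uart.rmap_write(b"CMD1", crc_ok=True)
uart.rmap_write(b"BAD!", crc_ok=False)  # corrupted data never reaches the wire
uart.commit()
print(bytes(uart.wire))                 # -> b'CMD1'
```

Received data would follow the mirror pattern: the interface fills a buffer that the OBC Processor-Board later drains with a standard RMAP memory read.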
46 B. M. Cook et al.
In normal operational cases one OBC I/O-Board is active, controlled by the active
Processor-Board. As described, the OBC controls all the satellite’s onboard
equipment through the driver interfaces of the I/O-Board and it stores all the
relevant status and housekeeping data and timeline processing status in the
I/O-Board memory.
However, if an interface driver to an onboard equipment—e.g. a UART—becomes
defective, whether in its hardware, its cabling or perhaps even on the controlled
equipment's side, it may be necessary to switch over to the redundant I/O-Board.
In such a case the newly active board should also contain the current house-
keeping and status data as well as the recorded history. Therefore the functionality
exists to reconfigure to the healthy I/O-Board and still keep the defective one
3 The I/O-Boards 47
switched on with deactivated interface buffer chips. This permits the history of the
recorded telemetry on the defective I/O-Board to be copied over to the healthy one.
The buffer chips are deactivated by removing their power, in which case their
outputs become high-impedance.
The I/O-Board’s interface buffers are not powered until instructed by writing
values at address 098100 (the value 255 powers the buffers, 0 removes power, any
other value is invalid). The first logic value controls connector-D buffers and the
second value controls connector-E buffers.
1 If the JTAG function is required (for example, to load new firmware) the ‘‘grounded’’ pin may
be connected via a 1 kΩ resistor to ground and the programmer can then be wired directly to the
connector pin. The JTAG programmer should be disconnected when it is not being used to ensure
correct board operation.
The board will then respond to requests sent to logical address 0921.
Overall, the board presents itself as a large, sparse, memory map accessed through
the RMAP protocol. Table 3.1 lists the mapping from RMAP memory space to
real memory and interfaces, showing how the ‘extended address’ field is used to
select major blocks and ‘address’ for the detailed selection.
Writes to the state vector memory and telemetry memory are unbuffered:
verification before writing is not possible. At least some of the data will be written
to memory, even if an error occurs—the RMAP Verify Bit will be ignored. If a
write acknowledge is requested and an error occurs, a specific error code will be
returned (unless the header is corrupted, in which case there will be no reply—and
no data will be written to memory.)
Writes to the UART, IIC, FOG and Logic-out are fully buffered and data may
be verified before it is written. Data will not be transferred if the RMAP Verify Bit
is set and an error occurs. Some data may be transferred if the RMAP Verify Bit is
not set, even if an error occurs.
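These rules can be restated as a small decision function (an illustrative summary, not the FPGA logic; the region names are shorthand):

```python
def write_outcome(region, verify_bit, error):
    """Outcome of an RMAP write per the rules described in the text.

    region: "state_vector"/"telemetry" (unbuffered writes) or
            "uart"/"iic"/"fog"/"logic_out" (fully buffered writes)
    Returns "all", "partial" (some data may land) or "none".
    """
    buffered = region in ("uart", "iic", "fog", "logic_out")
    if not error:
        return "all"
    if buffered and verify_bit:
        return "none"      # verified buffered write: nothing is transferred
    # unbuffered writes ignore the Verify Bit; unverified buffered
    # writes may transfer some data even when an error occurs
    return "partial"

print(write_outcome("telemetry", verify_bit=True, error=True))  # -> partial
print(write_outcome("uart", verify_bit=True, error=True))       # -> none
print(write_outcome("uart", verify_bit=False, error=False))     # -> all
```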
This section depicts the I/O circuitry diagrams for the various interface types,
covering both the nominally grounded interfaces and those specific interfaces
which, due to the target satellite's connected equipment, require I/O groups with
isolated ground on the board.
This interface type is applied for all RS485 and RS422 connections in the
spacecraft. For the target platform this interface type applies to the connections of
the OBC with the PCDU, the magnetometer control electronics, the transceiver
housekeeping interfaces, the reaction wheels and the fiberoptic gyros.
A low-pass filter is applied at the output terminals to reduce possible Electro-
magnetic Interference (EMI) to other signals/systems in the spacecraft. At a
maximum data rate of 2 Mb/s for any signal (and most at 100 kb/s or less) the
buffer output rise and fall times are far faster than required. Fast edges do not aid
signal integrity in this application (inputs all have Schmitt triggers) but merely
enhance spurious emissions—hence the filtering (Fig. 3.2).
Fig. 3.2 Example for a standard serial differential interface. 4Links Ltd.
Another example, for an isolated group interface, is given in Fig. 3.4. It serves
for units requiring onboard SW patch functions; the example depicted here is
from the GPS receiver.
Fig. 3.4 Isolated group for equipment with SW patch function—here GPS. 4Links Ltd.
Finally, there may be connected S/C units with a number of different
command/control interfaces, all of which must be electrically isolated because
they are grounded physically on the equipment unit side. For such units entire
isolated I/F groups are provided on the I/O-Board. The example depicted in
Fig. 3.5 is used for a complex payload controller.
Fig. 3.5 Isolated I/F group for a complex payload controller. 4Links Ltd.
The I/O-Board debug/JTAG Interface finally completes the set of external OBC
interfaces. The electrical grounding and termination diagrams are provided in
Fig. 3.6.
The board’s interface types were already mentioned in the previous section. The
protocols for interface access from the OBC Processor-Board are listed here.
Memory:
Beyond the RMAP protocol, there is no additional protocol for data written to or
read from the State Vector and Telemetry memories. All data bytes in a command
or reply are copied exactly to absolute memory addresses or read from absolute
addresses.
UART:
There is likewise no additional protocol for data written to or read from a UART.
All data bytes in a command packet written to the UART are forwarded through
the UART without addition, deletion or change of any byte. Similarly, all data
bytes received by the UART are read from the UART's buffer without addition,
deletion or change of any byte.
Logic-out:
The logic-out and logic-in interfaces use a simple protocol within the RMAP
packet.
Each logic-out address allows one or more signals to be accessed. Each signal may
be set low, set high, or left unchanged. The data is a sequence of bytes, one for
each signal, in the order shown in the above interface table. A data value of 0 will
set the output signal low, a value of 255 will set the output signal high, and any
other value will leave the signal unchanged.
If fewer data bytes than signals are sent, Logic-out signals that are not sent data
will not change.
If more data bytes than signals are sent, excess bytes will be ignored and no
error will be reported.
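These byte semantics translate directly into code. A sketch (the function name is invented):

```python
def apply_logic_out(signals, data):
    """Apply a logic-out data field to the current signal states.

    signals: list of current levels (0 = low, 1 = high)
    data:    bytes from the RMAP packet, one per signal in interface order
    Byte value 0 drives a signal low, 255 drives it high; any other
    value, and any signal for which no byte was sent, is left unchanged.
    Excess bytes are ignored without error.
    """
    out = list(signals)
    for i, value in enumerate(data[:len(signals)]):
        if value == 0:
            out[i] = 0
        elif value == 255:
            out[i] = 1
        # any other value: leave the signal unchanged
    return out

state = [0, 1, 0, 1]
print(apply_logic_out(state, bytes([255, 0, 7])))  # -> [1, 0, 0, 1]
print(apply_logic_out(state, bytes([255] * 10)))   # -> [1, 1, 1, 1]
```

The same 0/255 convention reappears for logic-in values and for the interface buffer power control described earlier.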
Logic-in:
Values are returned as 0 (signal wire low) or 255 (signal wire high).
IIC:
The IIC interfaces use a simple protocol within the RMAP packet.
Command: Commands consist of an address byte (with the least significant bit set
to 0), one byte indicating how many bytes are to be written (may be zero), one byte
indicating how many bytes are to be read (may be zero) and the bytes (if any) to be
written.
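A command following this layout can be assembled as shown below; the helper is hypothetical, and shifting a 7-bit address left by one is an assumption consistent with the requirement that the address byte's least significant bit be 0:

```python
def build_iic_command(address7, write_bytes=b"", read_len=0):
    """Assemble an IIC command as carried inside the RMAP packet.

    Layout per the protocol description: address byte (LSB = 0),
    one byte with the write count (may be zero), one byte with the
    read count (may be zero), then the bytes to be written, if any.
    """
    if not 0 <= address7 < 128:
        raise ValueError("7-bit IIC address expected")
    header = bytes([(address7 << 1) & 0xFE, len(write_bytes), read_len])
    return header + bytes(write_bytes)

# Write two bytes to a hypothetical device at address 0x50, then read 4 back
cmd = build_iic_command(0x50, write_bytes=b"\x00\x10", read_len=4)
print(cmd.hex())  # -> a002040010
```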
The connector placement on the I/O-Boards is depicted in Fig. 3.7. The CCSDS-
Board layout is similar, comprising just connectors A, B, C and E.
The connectors A and C for the SpaceWire links to the OBC Processor-Boards
follow the SpaceWire standard and are implemented as Micro-D High Density
connectors.
Connector-A: SpaceWire—Nominal
This SpaceWire link will be used by default if it succeeds in establishing a con-
nection with the OBC Processor Board.
Connector-C: SpaceWire—Redundant
This redundant SpaceWire link will be used if the nominal link fails to establish a
connection with the OBC Processor-Board. If the nominal SpaceWire link suc-
ceeds in establishing a connection with the OBC Processor-Board, the redundant
link is disabled.
The power supply of the I/O-Board is provided by the connected OBC Power-
Board via connector B, a 15-way Sub-D High Density connector. The pin
assignment is shown in Table 3.3.
Connectors D and E (or J5/6 and J11/12 on OBC unit level—see Fig. 11.4) pro-
vide the signal I/O between OBC and connected spacecraft equipment, connector
E in addition carries the I/O-Board JTAG Interface. To avoid connection errors
during harness mounting
• connector D is a Micro-miniature D-socket (female), 100-way and
• connector E is a Micro-miniature D-plug (male), 100-way with pins protected
by the connector body.
The signal I/O connections comprise both the standard grounded groups and the
isolated I/F groups. The standard ground pins are all equivalent. Within each
isolated group the ground pins are equivalent per connector.
• Logic inputs can accept 3.3 V CMOS, 3.3 V TTL and 5 V TTL signals. Logic
outputs meet the requirements of 3.3 V CMOS, 3.3 V TTL and 5 V TTL
signals.
• The SW_NVRET (connector E) logic output has a series diode limiting the
output to being a voltage source at 3 V.
• The connector data pins behave as high-impedance wires when the buffers
(or the whole board) are not powered—these buffers must be powered before
normal I/O operation is possible.
• JTAG pins on connector E connect to/from the FPGA via permanently powered
buffers to provide ESD protection.
• JTAG TRST has a pull-down resistor to hold JTAG reset unless it is actively
taken high—the programmer may need to be configured to drive this pin.
The connector pin assignments for connectors D and E are depicted in the
annex Sects. 11.7 (I/O-Board) and 11.8 (CCSDS-Board).
The operating temperature range of both the I/O- and CCSDS-Boards is from
-40 to +85 °C.
Note that these limits apply to the temperature of the components/silicon.
Allowance must be made for the thermal conductivities from those components to
the chassis.
The storage temperature range is from -55 to +105 °C.
Chapter 4
The CCSDS Decoder/Encoder Boards
Sandi Habinc
S. Habinc, Aeroflex Gaisler AB, Göteborg, Sweden
e-mail: sandi@gaisler.com
4.1 Introduction
The approach followed in this CDPI architecture is that part of the CCSDS
decoding/encoding is performed in FPGA hardware on the CCSDS decoder/
encoder board, and part of the task is done in software, using libraries provided by
Aeroflex Gaisler AB together with the RTEMS real-time operating system for the
Aeroflex Processor-Boards. The processing IP Cores in the FPGA on the CCSDS-
Board and the software libraries, which are available in the RTEMS SPARC
tailoring and run on the LEON3FT Processor-Board, are designed in a common
architecture by Aeroflex Gaisler AB.
The CCSDS Decoder/Encoder boards are based on the same Microsemi/Actel
RT ProASIC3 FPGA as the I/O-Boards and are also manufactured by 4Links Ltd.
Since the CCSDS-Board only uses the SpaceWire interfaces to the Processor-
Boards, the CLTU and CADU NRZ-L lines and the HPC UART interface to the
PCDU, it carries a reduced set of interface driver ICs on the PCB compared to the
I/O-Board. The same applies to the memory devices on the PCB. The board
hardware even shares the PCB layout with the I/O-Boards, as well as the SpaceWire
LVDS transceiver design for the Processor-Board interfaces. With respect to
electronic circuitry it is a ‘‘not fully populated I/O-Board’’. The connector pinout
of the CCSDS-Boards can be taken from the annex Sect. 11.8.
The IP Cores loaded in the FPGA are provided by Aeroflex Gaisler AB. The
product name for this specific implementation is GR-TMTC-0004. The overall
architecture is based on IP Cores from the GRLIB VHDL IP Core library. All
functions implemented in the FPGA are based on the Triple Module Redundancy
(TMR) technique to assure sufficiently high robustness under space environmental
radiation conditions. The GR-TMTC-0004 is also available for other devices of the
Microsemi/Actel ProAsic3 series in other package types and for diverse speed
grades.
This main chapter consists of extracts from the Aeroflex Gaisler CCSDS
TM/TC and SpaceWire FPGA Data Sheet and User’s Manual [62], which is the
detailed documentation of the GR-TMTC-0004 product. The tailoring is done
according to what the overall OBC user needs to know for
• installation of the GR-TMTC-0004 FPGA programming file on the CCSDS
boards and for
• applying the proper TC/TM link settings for ground equipment to connect to the
OBC.
Since this chapter provides all the necessary details on the TM encoding, the TC
decoding and the High Priority TC handling functionality, it exceeds the size of
the other, purely hardware-oriented main chapters.
Figure 4.1 shows a simple block diagram of the device. Although this block
diagram shows the functional structure, its external interfaces shown here directly
correspond to the hardware interfaces of the CCSDS-Board:
• On the left side the two SpaceWire interfaces to the processor boards can be
seen which physically are implemented on the front side of the CCSDS Board,
as for the I/O Board.
• The same applies to the power supply connector. Power supply is not depicted in
this logical diagram.
• Furthermore, on the left side of this figure the High Priority Command (HPC) of
the telecommand decoder is shown. It is cited with ‘‘TC UART’’ here since it is
implemented as an RS422 type interface.
62 S. Habinc
• On the right side of the figure the CLTU Telecommand input interface and the
CADU Telemetry interface are shown (both also of type RS422 in hardware).
Some of the interfaces (e.g. the TC input) are physically split into multiple signals.
For the explanations and the signal overview please refer to the later Sect. 4.2.9
and Fig. 4.2. Note that all IP Cores with AMBA AHB master interfaces also have
APB slave interfaces for configuration and status monitoring, although not shown
in the block diagram. The Telemetry and Telecommand specification comprises
the following elements:
• CCSDS compliant Telemetry encoder:
– Input:
4 Virtual Channels + 1 Virtual Channel for Idle Frames
Input access via SpaceWire link
CCSDS Space Packet data (or any custom data block)
4 The CCSDS Decoder/Encoder Boards 63
CLCW
Input via SpaceWire link
CLCW internally from hardware commands
CLCW externally from two dedicated asynchronous bit serial inputs
– Output:
CADU/encoded CADU
NRZ-L encoding
Pseudo-Randomization
Reed-Solomon and/or Convolutional encoding
Bit synchronous output: clock and data
• CCSDS compliant Telecommand decoder (software commands):
– Layers in hardware:
Coding layer
– Input:
Auto adaptable bit rate
Bit synchronous input: clock, qualifier and data
– Output:
Output access via SpaceWire link
CLTU (Telecommand Transfer Frame and Filler Data)
CLCW internally connected to Telemetry encoder
CLCW on dedicated asynchronous bit serial output
• CCSDS compliant Telecommand decoder (hardware commands):
– Layers in hardware:
Coding layer
Transfer layer (BD frames only)
CLCW internally connected to Telemetry encoder
– Input:
Auto adaptable bit rate
Bit synchronous input: clock, qualifier and data
Telecommand Frame with Segment
– Output:
Redundant UART
CLCW on dedicated asynchronous bit serial output.
4.2.1 Interfaces
• System level
– System clock and reset
– SpaceWire link with RMAP support for telemetry and telecommand.
The system clock is taken directly from a separate external input. The telemetry
transmitter clock and the SpaceWire transmitter clock are derived from the
system clock. The device is reset with a single external reset input that need
not be synchronous with the system clock input.
4.2.4 Performance
The CCSDS Telemetry Encoder implements the Data Link Layer, covering the
Protocol Sub-layer and the Synchronization and Coding Sub-layer and part of the
Physical Layer of the packet telemetry encoder protocol.
The Telemetry Encoder comprises several encoders and modulators imple-
menting the Consultative Committee for Space Data Systems (CCSDS) recom-
mendations, European Cooperation on Space Standardization (ECSS) and the
European Space Agency (ESA) Procedures, Standards and Specifications (PSS) for
telemetry and channel coding.
The Telemetry Encoder implements four Virtual Channels accessible via
SpaceWire links. The Virtual Channels accept CCSDS Space Packet data [27] as
input via the SpaceWire RMAP protocol. An additional Virtual Channel is
implemented for Idle Frames only.
In the target satellite project the ‘‘Virtual Channel Generation Function Input
Interface'' of the encoder was used, as described in depth in Sect. 4.3.12. This
simplifies VC handling for the software designers and only requires the proper
use of the registers for submitting CCSDS Space Packets to the encoder and of
the corresponding activation and control registers. These are described further
in Sect. 4.3.
Idle Frames are generated on a separate Virtual Channel, using identifier 7. See
Sect. 4.3.3.4.
The CCSDS Telecommand Decoder implements part of the Data Link Layer,
covering the Protocol Sub-layer and the Synchronization and Coding Sub-layer
and part of the Physical Layer of the packet telecommand decoder protocol.
The Telecommand Decoder supports decoding of higher protocol layers in
software, being accessible via a SpaceWire link. It also supports decoding in
hardware for hardware commands (see Sect. 4.5), for which a CLCW is provided
to the on-chip Telemetry Encoder.
The Telecommand Decoder has multiple separate serial input streams from
transponders etc., comprising serial data, clock, channel active indicator (bit lock)
and RF carrier available. The input stream is auto-adaptable.
The SpaceWire links provide an interface between the on-chip bus and a Space-
Wire network. They implement the SpaceWire standard [12] with the protocol
identification extension [13]. The Memory Access Protocol (RMAP) command
handler implements the ECSS standard [14].
Two times 16 kB of on-chip volatile memory is provided in the FPGA for tem-
porary storage of 7 Telemetry Transfer Frames for each of the Telemetry Virtual
Channels 0 through 3, together with a dedicated hard-coded descriptor memory
containing 7 descriptors for each channel. An additional 8 kB of on-chip volatile
memory is provided for telecommand data. All memory is protected by
EDAC. Neither automatic scrubbing nor an error counter are implemented.
The signal overview of the telemetry encoder and telecommand decoder is shown
in Fig. 4.2.
The functional signals are shown in Table 4.1. Note that index 0 is MSB for
TM/TC signals.
Further details on the applied IP Cores, interrupts, the memory map and signals
can be taken from the reference document [62].
4.3.1 Overview
Note that the SpaceWire input interface is described separately. The SpaceWire
interfaces and corresponding Virtual Channel Generation function and buffer
memories are not shown in the block diagram below, as is the case for the CLCW
multiplexing function (Fig. 4.3).
4.3.2 Layers
The relationship between the Packet Telemetry standard and the Open Systems
Interconnection (OSI) reference model is such that the OSI Data Link Layer
corresponds to two separate sub-layers, namely the Data Link Protocol Sub-layer
and the Synchronization and Channel Coding Sub-layer.
The Virtual Channel Frame Service is not implemented, except as a support for
Virtual Channels 0, 1, 2, and 3.
The function keeps track of the number of octets received and the packet
boundaries in order to calculate the First Header Pointer (FHP). The data are
stored in pre-allocated slots in the buffer memory comprising complete Transfer
Frames. The module fully supports the FHP generation and does not require any
alignment of the packets with the Transfer Frame Data Field boundary. The buffer
memory space allocated to each Virtual Channel is treated as a circular buffer. The
function communicates with the Virtual Channel Frame Service by means of the
on-chip buffer memory.
The data input format can be CCSDS Space Packet [27] or any user-defined
data-block (see Sect. 4.3.12).
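The FHP bookkeeping described above can be sketched in software. The function below is only an illustration of the rule, not the FPGA implementation; the function name and parameters are assumptions, while the 11-bit field and its all-ones ‘‘no packet header'' value follow the CCSDS conventions:

```c
#include <stdint.h>

#define FHP_NO_PACKET 0x7FFu /* all-ones: no packet header starts in this frame */

/* Sketch of the First Header Pointer computation: given the number of
 * octets still belonging to a packet that spilled over from the previous
 * Transfer Frame, and the length of the Transfer Frame Data Field, return
 * the octet offset of the first packet header, or the all-ones pattern
 * if the carried-over packet fills the whole data field. */
uint16_t first_header_pointer(uint32_t carry_over_octets, uint32_t data_field_len)
{
    if (carry_over_octets >= data_field_len)
        return FHP_NO_PACKET;           /* no new packet header in this frame */
    return (uint16_t)carry_over_octets; /* first header right after the spill-over */
}
```

Because the function only needs the running octet count, no alignment of packets with the Transfer Frame Data Field boundary is required, which is exactly the property stated above.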
The Virtual Channel Generation function for Virtual Channels 0, 1, 2 and 3 is
enabled through the GRTM DMA External VC Control register. The transfer is
done automatically via the Virtual Channel Frame Service (i.e. DMA function).
The Virtual Channel Generation function is used to generate the Virtual Channel
Counter for Idle Frames as described below.
Virtual Channel multiplexing in the core is performed between two sources: the
Virtual Channel Generation function (Virtual Channels 0, 1, 2 and 3) and Idle
Frames (Virtual Channel 7). Note
that multiplexing between different Virtual Channels is assumed to be done as part
of the Virtual Channel Frame Service outside the core, i.e. in hardware for Virtual
Channels 0, 1, 2 and 3. The Idle Frame generation is described hereafter.
Bandwidth allocation between Virtual Channels 0, 1, 2 and 3 is done in
hardware and is equal between these channels, see Sect. 4.3.11 and [62].
Bandwidth is allocated to VC7 only when no other VC has anything to send. If
one VC has no data to send, then the next one can send.
Idle Frame generation can be enabled and disabled by means of a register. The
Spacecraft ID to be used for Idle Frames is programmable by means of a register.
The Virtual Channel ID to be used for Idle Frames is programmable by means of a
register, e.g. Virtual Channel 7.
Master Channel Counter generation for Idle Frames can be enabled and disabled
by means of a register. Note that it is also possible to generate the Master Channel
Counter field as part of the Master Channel Generation function described in the
next section. When Master Channel Counter generation is enabled for Idle Frames,
then the generation in the Master Channel Generation function is bypassed.
The Master Channel Counter is generated for all frames on the master channel.
The Operational Control Field (OCF) is generated from a 32-bit input, via the
Command Link Control Word (CLCW) input of the Telecommand Decoder—
Software Commands (see Sect. 4.4.9) or the internal Telecommand Decoder—
Hardware Commands. This is done for all frames on the master channel (MC OCF).
The transmit order repeats every four Transfer Frames and is as follows:
• CLCW from the internal software commands register (Telecommand Decoder
CLCW Register 1 (CLCWR1), see Sect. 4.4.9 for details) is transmitted in
Transfer Frames with Transfer Frame Master Channel Counter value ending
with bits 0b00.
• CLCW from the internal hardware commands is transmitted in Transfer Frames
with Transfer Frame Master Channel Counter value ending with bits 0b01.
• CLCW from the external asynchronous bit serial interface input clcwin[0] is
transmitted in Transfer Frames with Transfer Frame Master Channel Counter
value ending with bits 0b10.
• CLCW from the external asynchronous bit serial interface input clcwin[1] is
transmitted in Transfer Frames with Transfer Frame Master Channel Counter
value ending with bits 0b11.
Note that the above order depends on the state of the static input pin id. If id is
logical zero, then the above scheme is applied; otherwise the first two entries
are swapped with the last two.
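The rotation can be expressed compactly. The enumeration and function names below are illustrative; the slot assignment itself follows the list above:

```c
#include <stdint.h>

typedef enum {
    CLCW_SW_REGISTER,  /* internal software commands register (CLCWR1) */
    CLCW_HW_COMMANDS,  /* internal hardware command decoder            */
    CLCW_EXT_INPUT_0,  /* external asynchronous input clcwin[0]        */
    CLCW_EXT_INPUT_1   /* external asynchronous input clcwin[1]        */
} clcw_source_t;

/* Select the CLCW source for a Transfer Frame from the two least
 * significant bits of the Master Channel Counter.  When the static id
 * pin is high, the first two entries swap with the last two. */
clcw_source_t clcw_source(uint8_t master_channel_counter, int id_pin)
{
    uint8_t slot = master_channel_counter & 0x3u;
    if (id_pin)
        slot ^= 0x2u; /* swap {00, 01} with {10, 11} */
    return (clcw_source_t)slot;
}
```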
Note that bits 16 (No RF Available) and 17 (No Bit Lock) of the CLCW and the
project specific OCF are taken from information carried on the discrete inputs
tcrfa[] and tcactive[].
• The Master Channel Frame Service is not implemented.
• The Master Channel Multiplexing Function is not implemented.
The CCSDS recommendation [25] and ECSS standard [19] specify Reed-Solomon
codes, among them the (255, 223) code. The ESA PSS standard [40] specifies
only the (255, 223) code. Although the definition style differs between the
documents, the (255, 223) code is the same in all three documents. The
definition used in this document is based on the PSS standard [40]. Details
shall be taken from [62].
4.3.4.3 Pseudo-Randomizer
The Pseudo-Randomizer (PSR) generates a bit sequence according to [25] and [19]
which is xor-ed with the data output of preceding encoders. This function allows
the required bit transition density to be obtained on a channel in order to permit the
receiver on ground to maintain bit synchronization. The implementation details are
described in [62].
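As an illustration, the randomizer sequence can be modeled with a linear feedback shift register over the CCSDS polynomial h(x) = x^8 + x^7 + x^5 + x^3 + 1, seeded with all-ones at the start of each frame. The function below is a software sketch, not the FPGA implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal model of the CCSDS/ECSS telemetry pseudo-randomizer: the
 * sequence generated by h(x) = x^8 + x^7 + x^5 + x^3 + 1 (first octets
 * 0xFF 0x48 0x0E ...) is XOR-ed onto the data stream.  Applying the
 * function twice restores the original data. */
void tm_randomize(uint8_t *data, size_t len)
{
    uint8_t s = 0xFF; /* shift register, all-ones seed; bit 7 = oldest bit */
    for (size_t i = 0; i < len; i++) {
        uint8_t seq = 0;
        for (int b = 0; b < 8; b++) {
            uint8_t out = (s >> 7) & 1u; /* sequence bit = oldest bit */
            /* feedback per h(x): taps at delays 0, 3, 5 and 7 */
            uint8_t fb = out ^ ((s >> 4) & 1u) ^ ((s >> 2) & 1u) ^ (s & 1u);
            s = (uint8_t)((s << 1) | fb);
            seq = (uint8_t)((seq << 1) | out);
        }
        data[i] ^= seq;
    }
}
```

Since the operation is a plain XOR with a fixed sequence, the same routine serves as randomizer on board and as de-randomizer on ground.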
The Convolutional Encoder (CE) implements the convolutional code specified in
the ESA PSS standard [40], which does not include puncturing. This basic
convolutional code is also specified in the CCSDS recommendation [25] and
ECSS standard [19], which in addition specify a punctured convolutional code.
For details of the implementation please again refer to [62].
The clock divider (CD) provides clock enable signals for the telemetry and
channel encoding chain. The clock enable signals are used for controlling the bit
rates of the different encoder and modulators. The source for the bit rate frequency
is the system clock input. The system clock input can be divided by up to 2^15.
The divider can be configured during operation to divide the system clock
frequency from 1/1 to 1/2^15. The bit rate frequency is based on the output frequency
of the last encoder in a coding chain, except for the sub-carrier modulator. No
actual clock division is performed, since clock enable signals are used. No clock
multiplexing is performed in the core. Details for the clock divider settings are
contained in [62].
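The resulting bit rate relation can be sketched as follows. The register encoding of the division factor is not reproduced here (see [62]); the sketch only captures the f_sys/div relation with its 2^15 limit, and the function name is an assumption:

```c
#include <stdint.h>

#define TM_CD_MAX_DIV 32768u /* 2^15, the largest supported division factor */

/* Sketch of the telemetry clock divider: the encoder chain is paced by
 * clock-enable pulses at f_sys / div (no actual clock division takes
 * place), with the division factor clamped to the range 1 .. 2^15. */
uint32_t tm_bit_rate(uint32_t f_sys_hz, uint32_t div)
{
    if (div < 1u)
        div = 1u;
    if (div > TM_CD_MAX_DIV)
        div = TM_CD_MAX_DIV;
    return f_sys_hz / div;
}
```

For example, a 32 MHz system clock divided by the maximum factor yields a bit rate of roughly 976 bit/s.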
4.3.6 Connectivity
The output from the Packet Telemetry encoder can be connected to the following
post-processing stages:
• Reed-Solomon encoder
• Pseudo-Randomizer
• Non-Return-to-Zero encoder
• Convolutional encoder.
4.3.7 Operation
The Telemetry Encoder DMA interface provides a means for the user to insert
Transfer Frames in the Packet Telemetry and AOS Encoder. Depending on which
functions are enabled in the encoder, the various fields of the Transfer Frame are
overwritten by the encoder. It is also possible to bypass some of these functions for
each Transfer Frame by means of the control bits in the descriptor associated to
each Transfer Frame. The DMA interface allows the implementation of Virtual
Channel Frame Service and Master Channel Frame Service, or a mixture of both,
depending on what functions are enabled or bypassed.
The transmitter DMA interface is used for transmitting Transfer Frames on the
downlink. The transmission is done using descriptors located in memory.
A single descriptor is shown in Tables 4.2 and 4.3. The number of bytes to be
sent is set globally for all Transfer Frames in the length field of the DMA
length register. The address field of the descriptor should point to the start of
the Transfer Frame. The address must be word-aligned. If the Interrupt Enable (IE)
bit is set, an interrupt will be generated when the Transfer Frame has been sent
(this requires that the transmitter interrupt enable bit in the control register is also
set). The interrupt will be generated regardless of whether the Transfer Frame was
transmitted successfully or not. The wrap (WR) bit is also a control bit that should
be set before transmission and it will be explained later in this section.
To enable a descriptor the enable (EN) bit should be set and after this is done,
the descriptor should not be touched until the enable bit has been cleared by the
core.
The descriptor pointer automatically wraps back to zero at the 1 kB boundary of
the descriptor table (i.e. after the descriptor at address offset 0x3F8 has been
used). The WR bit in the descriptors can be set to make the pointer wrap back to
zero before the 1 kB boundary.
The pointer field has also been made writable for maximum flexibility but care
should be taken when writing to the descriptor pointer register. It should never be
touched when a transmission is active.
The final step to activate the transmission is to set the transmit enable bit in the
DMA control register. This tells the core that there are more active descriptors in
the descriptor table. This bit should always be set when new descriptors are
enabled, even if transmissions are already active. The descriptors must always be
enabled before the transmit enable bit is set.
When a transmission of a frame has finished, status is written to the first word in
the corresponding descriptor. The Underrun Error bit is set if the FIFO became
empty before the frame was completely transmitted. The other bits in the first
descriptor word are set to zero after transmission while the second word is left
untouched. The enable bit should be used as the indicator when a descriptor can be
used again, which is when it has been cleared by the core.
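The descriptor handshake described above can be modeled as follows. This is an illustrative model only: the bit positions and names are placeholders, not the real register layout of Tables 4.2 and 4.3 (see [62]):

```c
#include <stdint.h>

/* Toy model of the transmit descriptor handshake: the driver fills in
 * the address, sets control bits and finally EN; the core clears EN
 * when the Transfer Frame has been sent and writes status into the
 * first word.  Bit positions below are assumptions for illustration. */
#define TM_DESC_EN (1u << 0) /* enable: set by driver, cleared by core */
#define TM_DESC_WR (1u << 1) /* wrap descriptor pointer after this one */
#define TM_DESC_IE (1u << 2) /* interrupt when the frame has been sent */

typedef struct {
    volatile uint32_t ctrl; /* control bits on submit, status after send */
    volatile uint32_t addr; /* word-aligned start address of the frame  */
} tm_descriptor_t;

int tm_submit_frame(tm_descriptor_t *d, uint32_t frame_addr, int irq)
{
    if (d->ctrl & TM_DESC_EN)
        return -1; /* still owned by the core: must not be touched */
    if (frame_addr & 0x3u)
        return -1; /* the frame address must be word-aligned       */
    d->addr = frame_addr;
    d->ctrl = TM_DESC_EN | (irq ? TM_DESC_IE : 0u);
    /* the caller must then set the transmit-enable bit in the DMA
     * control register, even if transmissions are already active */
    return 0;
}
```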
There are multiple bits in the DMA status register that hold transmission status.
The Transmitter Interrupt (TI) bit is set each time a DMA transmission of a
Transfer Frame ended successfully. The Transmitter Error (TE) bit is set each time
a DMA transmission of a Transfer Frame ended with an Underrun Error. For either
event, an interrupt is generated for Transfer Frames for which the Interrupt Enable
(IE) was set in the descriptor (Virtual Channels 0 through 2 only). The interrupt is
maskable with the Interrupt Enable (IE) bit in the control register.
The Transmitter AMBA error (TA) bit is set when an AMBA AHB error was
encountered either when reading a descriptor or when reading Transfer Frame
data. Any active transmissions were aborted and the DMA channel was disabled.
This can be a result of a DMA access caused by any Virtual Channel. It is
recommended that the Telemetry Encoder is reset after an AMBA AHB error. The
interrupt is maskable with the Interrupt Enable (IE) bit in the control register.
The Transfer Frame Sent (TFS) bit is set whenever a Transfer Frame has been
sent, independently of whether it was sent via the DMA interface or generated by the core.
The interrupt is maskable with the Transfer Frame Interrupt Enable (TFIE) bit in
the control register. Any Virtual Channel causes this interrupt.
The Transfer Frame Failure (TFF) bit is set whenever a Transfer Frame has
failed for other reasons, such as when Idle Frame generation is not enabled and no
user Transfer Frame is ready for transmission, independently of whether it was
to be sent via the DMA interface or generated by the core. The interrupt is maskable with the
Transfer Frame Interrupt Enable (TFIE) bit in the control register.
The Transfer Frame Ongoing (TFO) bit is set when DMA transfers are enabled,
and is not cleared until all DMA induced Transfer Frames have been transmitted
after DMA transfers are disabled.
The External Transmitter Interrupt (XTI) bit is set each time a DMA trans-
mission of a Transfer Frame ended successfully (unused here). The External
Transmitter Error (XTE) bit is set each time a DMA transmission of a Transfer
Frame ended with an underrun error (for Virtual Channels 0 through 3 only).
4.3.8 Registers
The core is programmed through registers mapped into APB address space
(Table 4.4).
The signals and their reset values are described in Table 4.5. The key ones are
‘‘RF Available'' and ‘‘Bit Lock'', which ensure that transmission does not start
before the RF ground link is actually established.
The function keeps track of the number of octets received and the packet
boundaries in order to calculate the First Header Pointer (FHP). The data are
stored in pre-allocated slots in the buffer memory comprising complete Transfer
Frames. The module fully supports the FHP generation and does not require any
alignment of the packets with the Transfer Frame Data Field boundary.
The data input format can be CCSDS Space Packet [27] or any user-defined
data-block. Data is input via a separate Virtual Channel Generation function input
interface.
The function communicates with the Telemetry Encoder Virtual Channel
Frame Service by means of a buffer memory space. The buffer memory space
allocated to the Virtual Channel is treated as a circular buffer. The buffer memory
space is accessed by means of an AMBA AHB master interface.
The control registers for this function can be found in annex Sect. 11.4.
4.4.1 Overview
The Command Link Control Word (CLCW) and the Frame Analysis Report
(FAR) can be read and written as registers via the AMBA AHB bus. Parts of the
two registers are generated by the Coding Layer (CL). The CLCW is automatically
transmitted to the Telemetry Encoder (TM) for transmission to the ground.
Note that most parts of the CLCW and FAR are not produced by the Tele-
command Decoder (GRTC) hardware part. This is instead done by the software
part of the decoder.
4.4.1.1 Concept
The GRTC has been split into several clock domains to facilitate higher bit rates and
partitioning. The two resulting sub-cores have been named Telecommand Channel
Layer (TCC) and the Telecommand Interface (TCI). Note that the TCI is also
called AHB2TCI. A complete CCSDS packet telecommand decoder can be realized
at software level according to the latest available standards, starting from the
Transfer Layer.
The Telecommand Decoder (GRTC) only implements the Coding Layer of the
Packet Telecommand Decoder standard [43]. All other layers are to be implemented
in software, e.g. the Authentication Unit (AU). The Command Pulse Distribution
Unit (CPDU) is not implemented. As explained in Chap. 1, a CPDU is not needed
for the CDPI architecture.
The following functions of the GRTC are programmable by means of registers:
• Pseudo De-Randomisation
• Non-Return-to-Zero-Mark decoding.
The pin configurable settings have been applied accordingly by 4Links Ltd. as
the CCSDS-Board hardware designer.
The Coding Layer (CL) synchronizes the incoming bit stream and provides an
error correction capability for the Command Link Transmission Unit (CLTU). The
Coding Layer receives a dirty bit stream together with control information on
whether the physical channel is active or inactive for the multiple input channels.
The bit stream is assumed to be NRZ-L encoded, as the standards specify for
the Physical Layer. As an option, it can also be NRZ-M encoded. There are no
assumptions made regarding the periodicity or continuity of the input clock signal
while an input channel is inactive. The most significant bit (Bit 0 according to
[43]) is received first.
Searching for the Start Sequence, the Coding Layer finds the beginning of a
CLTU and decodes the subsequent codeblocks. As long as no errors are detected, or
errors are detected and corrected, the Coding Layer passes clean blocks of data to
the Transfer Layer which is implemented in software. When a codeblock with an
uncorrectable error is encountered, it is considered as the Tail Sequence, its con-
tents are discarded and the Coding Layer returns to the Start Sequence search mode.
The Coding Layer also provides status information for the FAR, and it is
possible to enable an optional de-randomizer according to [30].
The received codeblocks are decoded using the standard (63, 56) modified BCH code.
Any single bit error in a received codeblock is corrected. A codeblock is rejected as a
Tail Sequence if more than one bit error is detected. Information regarding Count of
Single Error Corrections and Count of Accepted Codeblocks is provided to the FAR.
Information regarding Selected Channel Input is provided via a register.
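The codeblock behavior can be illustrated with a small software model. The generator polynomial g(x) = x^7 + x^6 + x^2 + 1 factors as (x + 1)(x^6 + x + 1), giving minimum distance 4, which is why single errors are correctable and double errors force rejection. Bit ordering, the parity complement convention and the brute-force trial correction below are assumptions for illustration; the FPGA uses a dedicated hardware decoder:

```c
#include <string.h>

/* Illustrative model of the (63, 56) modified BCH codeblock: 56
 * information bits, 7 error control bits and one filler bit, stored
 * one bit per int for clarity. */

static void bch_parity(const int info[56], int parity[7])
{
    int r[7] = {0}; /* remainder of division by g(x), r[6] = x^6 term */
    for (int i = 0; i < 56; i++) {
        int fb = info[i] ^ r[6];
        r[6] = r[5] ^ fb; /* x^6 tap of g(x) */
        r[5] = r[4];
        r[4] = r[3];
        r[3] = r[2];
        r[2] = r[1] ^ fb; /* x^2 tap of g(x) */
        r[1] = r[0];
        r[0] = fb;        /* x^0 tap of g(x) */
    }
    for (int i = 0; i < 7; i++)
        parity[i] = 1 - r[6 - i]; /* complemented remainder, x^6 first */
}

void bch_encode(const int info[56], int cb[64])
{
    memcpy(cb, info, 56 * sizeof(int));
    bch_parity(info, cb + 56);
    cb[63] = 0; /* filler bit, not covered by the code */
}

/* Returns 0 for a clean codeblock, 1 if a single error was corrected in
 * place, -1 if the codeblock must be rejected (treated as Tail Sequence). */
int bch_decode(int cb[64])
{
    for (int flip = -1; flip < 63; flip++) {
        int p[7], ok = 1;
        if (flip >= 0)
            cb[flip] ^= 1; /* trial single-error correction */
        bch_parity(cb, p);
        for (int i = 0; i < 7; i++)
            if (p[i] != cb[56 + i])
                ok = 0;
        if (ok)
            return (flip < 0) ? 0 : 1;
        if (flip >= 0)
            cb[flip] ^= 1; /* undo the trial flip */
    }
    return -1;
}
```

Because the minimum distance is 4, a double error can never be turned into a valid codeblock by a single trial flip, so the model rejects it deterministically.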
4.4.3.3 De-Randomizer
In order to maintain bit synchronization with the received telecommand signal, the
incoming signal must have a minimum bit transition density. If a sufficient bit
transition density is not ensured for the channel by other methods, the randomizer
is required. Its use is optional otherwise. The presence or absence of randomization
is fixed for a physical channel and is managed (i.e. its presence or absence is not
signaled but must be known a priori by the spacecraft and ground system). A
random sequence is exclusively OR-ed with the input data to increase the
frequency of bit transitions. At the receiving end, the same random sequence is
exclusively OR-ed with the successfully decoded data, restoring the original
data form.
The de-randomizer remains in the ‘‘all-ones’’ state until the Start Sequence has
been detected. The pattern is exclusively OR-ed, bit by bit, to the successfully
decoded data (after the Error Control Bits have been removed). The de-randomizer
is reset to the ‘‘all-ones’’ state following a failure of the decoder to successfully
decode a codeblock or other loss of input channel.
The coding layer supports 1 to 8 channel inputs, although the PSS standard requires at least 4.
A codeblock is fixed to 56 information bits (as per CCSDS/ECSS).
The CCSDS/ECSS (1024 octets) or PSS (256 octets) standard maximum frame
lengths are supported, being programmable via bit PSS in the GCR register. The
former allows more than 37 codeblocks to be received.
The Frame Analysis Report (FAR) interface supports an 8-bit CAC field, as well
as the 6-bit CAC field specified in ESA PSS-04-151. When the PSS bit is cleared
to ‘0', the two most significant bits of the CAC will spill over into the
‘‘LEGAL/ILLEGAL FRAME QUALIFIER'' field in the FAR. These bits will however
be all-zero when PSS compatible frame lengths are received or the PSS bit is set
to ‘1'. Saturation is done at 6 bits when the PSS bit is set to ‘1' and at 8 bits
when it is cleared to ‘0'.
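The saturation rule can be sketched as follows (the function name is illustrative):

```c
#include <stdint.h>

/* Sketch of the Count of Accepted Codeblocks (CAC) reporting: the field
 * saturates at 6 bits in PSS mode and at 8 bits otherwise.  With the PSS
 * bit cleared, values above 63 occupy the two bits that otherwise belong
 * to the LEGAL/ILLEGAL FRAME QUALIFIER field of the FAR. */
uint8_t far_cac(uint32_t accepted, int pss_mode)
{
    uint32_t max = pss_mode ? 0x3Fu : 0xFFu; /* 6-bit vs 8-bit saturation */
    return (uint8_t)(accepted < max ? accepted : max);
}
```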
The Pseudo-Randomizer decoder is included (as per CCSDS/ECSS), its usage
being programmable via an input signal.
The Physical Layer input can be NRZ-L or NRZ-M modulated, allowing for
polarity ambiguity. NRZ-L/M selection is programmable. This is an extension to
ECSS: a Non-Return-to-Zero-Mark decoder has been added, with its internal
state reset to zero when the channel is deactivated.
Note: If the input clock disappears, it will also affect the codeblock acquired
immediately before the codeblock currently being decoded (accepted by ESA
PSS-04-151).
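NRZ-M decoding itself is a one-line operation: a ‘1' is transmitted as a change of line level and a ‘0' as no change, so the decoded bit is the XOR of two consecutive levels. An inverted channel flips every level but leaves all transitions intact, which is how NRZ-M tolerates the polarity ambiguity mentioned above. A minimal sketch (function name and initial-state assumption are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of NRZ-M (Mark) decoding: decoded bit = level[i] XOR level[i-1].
 * The internal state (previous level) is reset when the channel is
 * deactivated; an initial level of 0 is assumed here. */
void nrzm_decode(const uint8_t *levels, uint8_t *bits, size_t n)
{
    uint8_t prev = 0; /* reset state of the decoder */
    for (size_t i = 0; i < n; i++) {
        bits[i] = levels[i] ^ prev; /* 1 = transition, 0 = no transition */
        prev = levels[i];
    }
}
```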
In state S1, all active inputs are searched for the start sequence; there is no
priority search, only round robin search. The search for the start sequence is
sequential over all inputs: maximum input frequency = system frequency/(gIn + 2).
The ESA PSS-04-151 [44] specified CASE-1 and CASE-2 actions are imple-
mented according to aforementioned specification, not leading to aborted frames.
This interface provides Direct Memory Access (DMA) capability between the
AMBA bus and the Coding Layer. The DMA operation is programmed via an
AHB slave interface. This interface technique is used by the OBSW of the
Stuttgart University FLP satellite platform.
The DMA interface is an element in a communication concept that contains
several levels of buffering. The first level is performed in the Coding Layer where
a complete codeblock is received and kept until it can be corrected and sent to the
next level of the decoding chain. This is done by inserting each correct information
octet of the codeblock in an on-chip local First-In-First-Out (FIFO) memory which
is used for providing improved burst capabilities. The data is then transferred from
the FIFO to a system level ring buffer in the user memory (i.e. on-chip memory
located in the FPGA) which is accessed by means of DMA.
The following storage elements can thus be found in this design:
• The shift and hold registers in the Coding Layer
• The local FIFO (parallel; 32-bit; 4 words deep)
• The system ring buffer (on-chip FPGA memory; 32-bit; 1 to 256 kB deep).
4.4.4 Transmission
The serial data is received and shifted in a shift register in the Coding Layer when
the reception is enabled. After correction, the information content of the shift
register is put into a hold register.
When space is available in the peripheral FIFO, the content of the hold register
is transferred to the FIFO. The FIFO is of 32-bit width and the byte must thus be
placed on the next free byte location in the word.
When the FIFO is filled to 50 %, a request is issued to transfer the available
data towards the system level buffer.
If the system level ring buffer is not full, the data is transported from the FIFO
via the AHB master interface towards the main processor and stored in e.g. SRAM.
If no space is available in the system level ring buffer, the data is held in the FIFO.
When the GRTC keeps receiving data, the FIFO will fill up; when it is completely
full and the hold and shift registers are full as well, a receiver overrun
interrupt will be generated (IRQ RX OVERRUN). All new incoming data is rejected
until space is available in the peripheral FIFO.
When the receiving data stream is stopped (e.g. when a complete data block is
received), and some bytes are still in the peripheral FIFO, then these bytes will be
transferred to the system level ring buffer automatically. Received bytes in the
shift and hold register are always directly transferred to the peripheral FIFO.
The FIFO is automatically emptied when a CLTU is either ready or has been
abandoned. The reason for the latter can be a codeblock error, a time-out etc. as
described in the CLTU decoding state diagram.
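The buffering and overrun behavior can be captured in a toy model. The structure below is illustrative only (the real FIFO is a hardware block; names and depth follow the description above):

```c
#include <stdint.h>

#define FIFO_DEPTH 4 /* local FIFO: 32-bit wide, 4 words deep */

/* Toy model of the peripheral FIFO between the Coding Layer and the DMA
 * engine: new words are rejected once the FIFO is full, mirroring the
 * RX OVERRUN condition described above. */
typedef struct {
    uint32_t word[FIFO_DEPTH];
    unsigned w_ptr, r_ptr, count;
} fifo_t;

int fifo_push(fifo_t *f, uint32_t w)
{
    if (f->count == FIFO_DEPTH)
        return -1; /* overrun: data rejected until space is available */
    f->word[f->w_ptr] = w;
    f->w_ptr = (f->w_ptr + 1) % FIFO_DEPTH;
    f->count++;
    return 0;
}

int fifo_pop(fifo_t *f, uint32_t *w)
{
    if (f->count == 0)
        return -1; /* empty: nothing to transfer to the ring buffer */
    *w = f->word[f->r_ptr];
    f->r_ptr = (f->r_ptr + 1) % FIFO_DEPTH;
    f->count--;
    return 0;
}
```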
The operational state machine is shown in Fig. 4.8.
Legend:
rx_w_ptr Write pointer
rx_r_ptr Read pointer
When in the decode state, each candidate codeblock is decoded in single error
correction mode as described hereafter.
The CLTU decoding state diagram is shown in Fig. 4.9. Note that the diagram
has been improved with explicit handling of the different E2 possibilities listed
below.
State Definition:
S1 Inactive
S2 Search
S3 Decode
Event Definition:
E1 Channel Activation
E2a Channel Deactivation—all inputs are inactive
E2b Channel Deactivation—selected becomes inactive (CB = 0 ⇒ frame
abandoned)
E2c Channel Deactivation—too many codeblocks received (all ⇒ frames
abandoned)
E2d Channel Deactivation—selected is timed-out (all ⇒ frames abandoned)
E3 Start Sequence Found
E4 Codeblock Rejection (CB = 0 ⇒ frame abandoned)
4.4.4.3 Nominal
4.4.4.4 CASE 1
4.4.4.5 CASE 2
4.4.4.6 Abandoned
• B: When an Event 4 (E4), or an Event 2 (E2), occurs which affects the first
candidate codeblock 0, the CLTU shall be abandoned. No candidate frame
octets have been transferred.
• C: If and when more than 37 codeblocks have been accepted in one CLTU, the
decoder returns to the SEARCH state (S2). The CLTU is effectively aborted and
this will be reported to the software by writing the ‘‘Candidate Frame
Abandoned'' flag to bit 1 or 17, indicating to the software to erase the
‘‘Candidate frame''.
Details on the relationship between buffers and FIFOs, buffer-full condition
handling etc. can again be found in [55].
The Command Link Control Word (CLCW) is inserted in the Telemetry Transfer
Frame by the Telemetry Encoder (TM) when the Operational Control Field (OCF)
is present. The CLCW is created by the software part of the telecommand decoder.
The telecommand decoder hardware provides two registers for this purpose which
can be accessed via the AMBA AHB bus.
Note that bit 16 (No RF Available) and 17 (No Bit Lock) of the CLCW are not
possible to write by software. The information carried in these bits is based on
discrete inputs.
The CLCW Register 1 (CLCWR1) is internally connected to the Telemetry
Encoder.
The CLCW Register 2 (CLCWR2) is connected to the external clcwout[0]
signal. One Packet Asynchronous interface (PA) is used for the transmission of
the CLCW from the telecommand decoder. The protocol is fixed to 115200 baud,
1 start bit, 8 data bits and 1 stop bit, with a BREAK command for message
delimiting (sending 13 bits of logical zero). The CLCWs are automatically
transferred over the PA interface after reset, on each write access to the CLCW
register and on each change of bits 16 (No RF Available) and 17 (No Bit Lock)
(Table 4.7).
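The PA framing of one CLCW message can be sketched at bit level. The start/stop polarity and the 13-bit BREAK follow the description above; the LSB-first data order within an octet is an assumption here:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the PA interface framing for one CLCW message: each of the
 * four CLCW octets is sent as 1 start bit (0), 8 data bits and 1 stop
 * bit (1), followed by a BREAK of 13 logical-zero bits delimiting the
 * message.  Returns the number of bits written to out[]. */
size_t pa_frame_clcw(const uint8_t clcw[4], uint8_t *out)
{
    size_t n = 0;
    for (int byte = 0; byte < 4; byte++) {
        out[n++] = 0;                          /* start bit            */
        for (int b = 0; b < 8; b++)
            out[n++] = (clcw[byte] >> b) & 1u; /* data bits, LSB first */
        out[n++] = 1;                          /* stop bit             */
    }
    for (int i = 0; i < 13; i++)
        out[n++] = 0;                          /* BREAK delimiter      */
    return n; /* 4 * 10 + 13 = 53 bits per CLCW message */
}
```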
For the cross strapping of the CLCW routing between the redundant CCSDS-
Boards please refer to Sect. 4.2.2.
Details on the TC Decoder configuration interface are beyond the scope of this
book and shall be taken from [55].
4.4.8 Interrupts
4.4.9 Registers
The core is programmed through registers mapped into AHB I/O address space
(Table 4.9).
Also for the TC Decoder, only the most important registers are treated here,
namely those used by the OBSW of the FLP satellite target platform. The register
descriptions can be found in the annex Sect. 11.5.
The signals and their reset values are described in Table 4.10.
4.5.1 Overview
The Channel Coding Sub-Layer and the Physical Layer are shared with the
Telecommand Decoder—Software Commands, and are therefore not repeated here.
4.5.2 Operation
In the Application Layer and the Data Link—Protocol Sub-Layer, the information
octets from the Channel Coding Sub-Layer are decoded as described in the fol-
lowing subsections.
The Master Channel Demultiplexing is performed implicitly during the All Frames
Reception procedure described above.
The Virtual Channel Demultiplexing is performed implicitly during the All Frames
Reception procedure described above.
The Virtual Channel Reception supports Command Link Control Word (CLCW)
generation and transfer to the Telemetry Encoder, according to the following field
description.
• Control Word Type field is 0.
• CLCW Version Number field is 0.
• Status Field is 0.
• COP in Effect field is 1.
• Virtual Channel Identification is taken from pin configurable input value.
Note that the CLCW is not generated unless the Segment and Packet extraction
is also successful and the Space Packet has been sent out via the UART
interfaces.
The CLCW transmission protocol is fixed to 115200 baud, 1 start bit, 8 data
bits, 1 stop, with a BREAK command for message delimiting (sending 13 bits of
logical zero).
The decoder implements the Segmentation Sublayer and extracts the Segment
from the Frame Data Unit on the Virtual Channel, received from the Virtual
Channel Reception function. It supports blocking, but neither segmentation nor
packet assembly control. It only supports one Virtual Channel.
The Segment Header is checked to have the following fixed values:
• Sequence Flags are checked to be 11b
• MAP Identifier is compared with a fixed value, see Table 4.15.
The Virtual Channel Packet Extraction function extracts the Space Packet from the
Segment Data Field received from the Virtual Channel Segment Extraction
function. The aggregated length of all Space Packet(s) in one Segment Data Field may be at most 56 octets. The contents of the Space Packet(s) are not checked.
The Space Packet(s) are sent to the UART interface for output.
4 The CCSDS Decoder/Encoder Boards 99
The Space Packet(s) received from the Virtual Channel Packet Extraction function are sent out via the redundant UART outputs. For each correctly received Transfer Frame, a two-byte synchronization pattern, 0xFF followed by 0x55, is first sent out serially, followed by the Space Packet(s).
The CLCW transmission protocol is fixed to 115200 baud, 1 start bit, 8 data bits,
and 1 stop bit. After the Space Packet(s) have been sent, the CLCW is updated.
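A minimal sketch of this output framing (function name and types are illustrative, not from the book):

```python
# Two-byte synchronization pattern preceding each burst of Space Packets
SYNC = bytes([0xFF, 0x55])

def frame_uart_output(space_packets: list[bytes]) -> bytes:
    """Serialize the UART output for one correctly received Transfer
    Frame: the sync pattern first, then the Space Packet(s)."""
    return SYNC + b"".join(space_packets)
```

A receiver on the UART side can thus resynchronize on the 0xFF 0x55 prefix before parsing the Space Packet headers that follow.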
The Telecommand Transfer Frame for hardware commands has the following
structures (Tables 4.11, 4.12, 4.13).
The signals and their reset values are described in Table 4.14.
The JTAG Interface provides access to the on-chip AMBA AHB bus through JTAG. This interface is made available to the user by the 4Links CCSDS-Board, but it is no longer accessible from outside the closed OBC unit. For the standard user it is relevant only for loading the IP core onto the 4Links board hardware.
The JTAG debug interface implements a simple protocol which translates JTAG instructions into AMBA AHB transfers. Details on its operation can be found in [55] and Fig. 4.10.
For each individual spacecraft some fixed parameters of the design must be
configured as shown in Table 4.15. This task is performed by Aeroflex Gaisler
according to the customer’s specification.
5.1 Introduction
The FLP PCDU supplies an unregulated power bus with voltage levels between
19 V and 25 V as explained later in Chap. 8. The three data handling boards of the
OBC, namely the Processor-Board, I/O-Board and CCSDS-Board, require a steady
voltage of approx. 3.3 V. Thus, the main task for the power conversion of the OBC
Power Supply Board—or OBC Power-Board for short—is to establish a steady
conversion of the unregulated FLP main power bus to the required 3.3 V, and
within the range as specified for every board by their manufacturers.
Besides provision of regulated power to the OBC data handling boards, the
OBC Power-Boards fulfill a second task, which is the signal transmission and
conversion for the FLP pulse signals. These are available for clock synchronization of the OBC with the GPS and star tracker systems. The GPS, if powered, provides a Pulse Per Second (PPS) signal which is transmitted to the OBC accompanied by a time packet. The OBC can synchronize its own PPS signal to the one provided by the GPS. Furthermore, the OBC Processor-Board is able to submit a PPS and a time packet to the star tracker system. By combining both, the STR and GPS systems work on a clock strobe base that is as common as possible, which significantly improves packet communication stability between GPS and OBC and between STR and OBC, respectively.
Finally, there are interfaces that need to be routed out of or into the OBC
housing. These are led via the OBC Power-Boards as well. All interfaces along
with all other boards are depicted in Fig. 5.1. The power supply lines for the data handling boards, serving the main task, are displayed in red. The pulse signals are shown in purple. The two blue shades depict the power lines for the OBC
heaters, circuit 1 and circuit 2, or nominal (N) and redundant (R), respectively.
They are routed to a bi-metal thermostat switch and from there on to the corre-
sponding heaters placed on the backside of every second frame. More details about
heater placement can be taken from Figs. 5.11, 5.12 and 6.10. The green shades
show the Service Interface and the JTAG Interface lines to the OBC Processor-
Board. These are used to access the Processor-Boards after the assembly of the
complete OBC, when the connector on the Processor-Board can no longer be
directly accessed. The connectors in Fig. 5.1 are the same as in Fig. 1.2.
It can be seen here that the OBC Power-Boards don't provide cross-coupling with the data handling boards. The data handling boards and their parts are significantly more complex than those of the Power-Boards, which implies that the data handling boards are more prone to hardware failure. This justified omitting power line cross-coupling and simplified the electrical design. Through the Power-Board redundancy the overall OBC design is still single-failure tolerant, in accordance with the mission requirements.
5 The OBC Power-Boards 105
The power conversion on the OBC Power-Board has to provide a steady 3.3 V
voltage output for all three data handling boards. Since the main power bus is
unregulated, an exact steady supply voltage is not achievable. On the other hand,
the provision of a voltage exceeding the 3.3 V might cause damage to the sensitive
parts on the data handling boards. Therefore, the manufacturers give a range of
voltage which is allowed in order for their board to work properly. These ranges
are summarized in Table 5.1.
Table 5.1 Requirements to the OBC Power-Boards derived from input characteristics of the data handling boards

Device                    | Permitted input voltage range  | Maximum power consumption | Bleeder resistor (power consumption at 3.3 V)
Processor-Board (Chap. 2) | 3.3 V ± 5 % (3.135 … 3.465 V)  | 2.5 … 4.75 W              | 50 Ω (0.22 W)
For the power conversion, DC/DC converters from Gaia Converter (hereafter
simply referred to as Gaia) have been selected. Gaia offers a series of single output
106 R. Witt and M. Hartling
converters for a load of 4 W and 10 W which are specially designed for application in space, namely the MGDS04 and MGDS10 families. They can be ordered with varying input and output voltages and have an operating temperature range of −40 °C up to +85 °C, which is compliant with the operating temperature requirements of the OBC. According to Gaia, the converters are characterized by heavy-ion tolerance and a total ionizing dose capability of 20 krad, and they provide an undervoltage lockout as well as a permanent output current limitation. Furthermore, they comply with ESA standard PSS-01-301 [71].
By specifying the input voltage as 9–36 VDC (→ 'H') and the output voltage as 3.3 V (→ 'B'), the selected converters for the three OBC boards are:
• MGDS10-HB for the Processor-Board
• MGDS04-HB for the I/O-Board
• MGDS04-HB for the CCSDS-Board.
Please note that due to the estimated maximum load of the Processor-Board, the
MGDS10 converter with 10 W maximum nominal load was selected for the OBC.
One disadvantage of these power converters, however, is that they require a certain load to be drawn in order to provide the desired output voltage. To demonstrate the effect, the output characteristics of an exemplary Gaia converter, namely the MGDS04JC with an input voltage of 16–40 V and a 5 V output voltage, are shown in
Fig. 5.2.¹ It can be seen that for low output currents the output voltage significantly exceeds the nominal 5 V. Only at a consumed current of 600 mA or more does the output reach the dedicated 5 V in the example of this converter. This effect has to be considered for the complete MGDS converter series. As a certain input voltage range is prescribed for all OBC boards in Table 5.1, it had to be guaranteed that these requirements are met by the OBC Power-Board output.

¹ The 5 V converter example is depicted here since the supplier does not provide a corresponding diagram for the selected converter models of the MGDS series.
As a result, additional tests had to be performed to exactly characterize the power-up behavior of each OBC data handling board on the one hand, and the exact behavior of the converter output on the other hand. The results of a load test of the ordered converters are provided in Table 5.2. The relevant voltage limits are marked in bold.
The MGDS10-HB stays below the required 3.465 V at a drawn current of 0.59 A, which marks a power value of 3.465 V × 0.59 A = 2.04 W. Table 5.1 specifies that the Processor-Board always exceeds a power consumption of 2.5 W, so no further protective action had to be taken.
An analogous examination of the I/O and CCSDS-Board power supplies leads to the following results:
The MGDS04-HB stays below the required 3.6 V at a drawn current of 0.075 A due to its significantly lower specified power level. This marks a power level of 3.6 V × 0.075 A = 0.27 W, which is below the specified minimum power consumption of the I/O-Board, yet exceeds that of the CCSDS-Board by 0.105 W. Consequently, a bleeder resistor acting as a constant power consumer had to be implemented between the output of the converter and the CCSDS-Board. Its maximum value results from (3.6 V)² / 0.105 W = 123.43 Ω; a resistor with a value below this limit therefore had to be implemented.
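The sizing rule for the bleeder resistor can be written as a one-line helper (a sketch; the 3.6 V and 0.105 W figures are the ones quoted in the text):

```python
def max_bleeder_resistance(v_out: float, p_deficit: float) -> float:
    """Upper bound for a bleeder resistor that must sink at least
    p_deficit watts at the worst-case output voltage v_out: R = V^2 / P."""
    return v_out ** 2 / p_deficit

# CCSDS-Board line: the converter needs 0.105 W more minimum load
# than the board itself draws, at up to 3.6 V output
r_max = max_bleeder_resistance(3.6, 0.105)  # about 123.43 ohms
```

Any resistor value below this bound guarantees that the converter's minimum load is consumed even when the board draws its specified minimum.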
However, for safety reasons, a set of bleeder resistors is included in all three power lines of the Processor, I/O and CCSDS-Board power supply to guarantee consumption of the minimum load. The resulting values for the bleeder resistors are also provided in Table 5.1. As an example, the circuitry of the CCSDS power line is depicted in Fig. 5.3, including the output line leading to the CCSDS-Board, beginning at the MGDS04-HB DC/DC converter. For clarity, all parts upstream of the converter are hidden.
As the Power-Board in this design meets the steady state requirements given by
the manufacturers of all three OBC data handling board types, an Engineering
Model was built. Diverse tests combining this OBC Power-Board EM with the
other OBC board types have been run as explained in the subsequent sections.
The previous section covered the Power-Board design with respect to steady-state power consumption. Another important aspect is the behavior versus consumer inrush current characteristics. Therefore, tests were performed to characterize the supply voltage behavior of the Power-Board on the lines of the different types of OBC data handling boards within the first few milliseconds after power-up of the OBC Processor, I/O and CCSDS-Board.
Expected was a current peak when applying the power, due to the capacitances on the OBC data handling boards, and either
• after the peak, the steady state is reached directly, or
• after the peak, the current decreases for a short period below the steady state current and ramps up again.
The second case holds the danger that the converter delivers a higher output
voltage level during that current decrease period, which might be hazardous for the
connected OBC board.
The setup for this test is depicted in Fig. 5.4, as used to record the current drawn by the data handling boards at start-up. A TTI Power Supply Unit (PSU) was used as power source, configured for 3.3 V output. Between the PSU and the OBC board there is a low-resistance shunt where the current can be registered by means of an oscilloscope. A differential probe was used as input sensor for the
oscilloscope. The shunt resistors have been chosen with as low a resistance as possible so that not too much voltage is lost over the resistor, yet high enough that the probe can still detect the voltage properly. The resulting shunt resistor values are provided in
Table 5.3. Since the OBC I/O-Board behaves very similarly to the CCSDS-Board at power-up, due to the delayed activation of the transmission interfaces on both boards, the same shunt and estimated power consumption were used for the test of both boards. Due to the higher power consumption of the OBC Processor-Board, a smaller resistor was used to minimize the shunt voltage drop.
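The shunt trade-off can be made explicit: the measured current is recovered from the probe voltage as I = U/R, and the drop across the shunt subtracts from the 3.3 V seen by the board. A small sketch (the resistor and current values below are illustrative, not the ones from Table 5.3):

```python
def current_from_shunt(u_probe: float, r_shunt: float) -> float:
    """Current through the shunt recovered from the differential-probe
    voltage reading: I = U / R."""
    return u_probe / r_shunt

def supply_drop_fraction(i_load: float, r_shunt: float,
                         v_supply: float = 3.3) -> float:
    """Fraction of the supply voltage lost across the shunt; this must
    stay small so the board still sees a valid input voltage."""
    return i_load * r_shunt / v_supply
```

For example, a 0.1 Ω shunt carrying 0.5 A produces a comfortably measurable 50 mV probe reading while costing only about 1.5 % of the 3.3 V supply.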
Conducting the test resulted in the following diagrams. In Fig. 5.5 the start-up
of the CCSDS-Board is depicted. The appearing peaks are marked.
In Fig. 5.6 the power-up is shown for the I/O-Board without activating its
transmission interfaces. The diagrams don't differ significantly; both show several peaks and a slow re-drop of the inrush current down to the steady-state value, which implies that there is no danger of over-voltage at start-up.
In Fig. 5.7 the start-up behavior of the OBC Processor-Board is depicted. It can be seen that in this case there is a possibly hazardous phase after the main peak, when the inrush current drops significantly below the steady state current.
After the inrush behavior of the OBC Processor, I/O and CCSDS-Board had been
characterized in the previous tests and since the Processor-Board was identified to
be problematic due to its power-up current behavior, dedicated tests of the OBC
data handling boards in combination with the Power-Board became necessary to
assure overall design adequacy.
The following tests were conducted to verify the overall power supply characteristics of the Gaia converters on the Power-Boards with a connected load. The electrical set-up for this test is shown in Fig. 5.8. The significant difference from the previous test is the utilization of a Line Impedance Stabilization Network (LISN).
Fig. 5.8 Test setup for characterization of Power-Board start-up behavior. IRS, University of
Stuttgart
Due to the low 3.3 V voltage range applied, such a device was not necessary during the previous board inrush behavior tests. In the current test the LISN is used to provide a representative 22 V, as supplied by the satellite PCDU, instantaneously at power line activation. Using only a PSU might otherwise have caused an unrealistic, delayed build-up of the voltage. Basically, the LISN is a capacitor in parallel to the power line which provides the 22 V level at switch-on.
The resistors R1, R2 and R3 were selected according to the actual steady state currents taken from the previous test diagrams, marked in magenta. The values of the resistors are 22, 44.6 and 50 Ω for the Processor, I/O and CCSDS-Board, respectively. Since this test is only essential for the OBC Processor-Board line, only the result of that test is discussed in particular. In Fig. 5.9 the behavior of the MGDS10-HB converter is shown, with the mentioned 22 Ω load as consumer. It can be seen that despite the low power consumed temporarily at the beginning, the voltage continuously increases to the nominal 3.3 V (please refer to Fig. 5.7). Furthermore, despite the usual variations that appear within such processes, the voltage never exceeds the 3.465 V limit specified by the supplier in Table 5.2.
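The resistor choice follows Ohm's law applied to the steady-state currents read off the diagrams; a sketch (the currents below are back-computed from the quoted resistor values, they are not stated in the text):

```python
def load_resistor(v_nominal: float, i_steady: float) -> float:
    """Equivalent resistor emulating a board's steady-state current
    draw at the nominal 3.3 V supply: R = V / I."""
    return v_nominal / i_steady

# Processor-Board: 3.3 V / 0.15 A = 22 ohms; the 44.6 and 50 ohm
# values correspond to roughly 74 mA and 66 mA respectively
```

Using fixed resistors instead of the real boards allows the converter start-up to be characterized without risking the flight electronics.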
The test set-up for the connection of the Power-Board and the connected OBC data handling boards is shown in Fig. 5.10. The oscilloscope was kept in the loop during the test to observe the behavior of the DC/DC output lines and to allow fast cancellation of the test in case any unexpected over-voltage was observed.
The boards proved to work together perfectly, and the EMs are meanwhile operated permanently in the STB in the configuration depicted in Fig. 5.10. The FM boards are integrated into the OBC flight model.
Fig. 5.10 Setup for final board connection test. IRS, University of Stuttgart
Please note that packet communication between star tracker and OBC is han-
dled via I/O-Boards as well.
The power for the driver and gate chips is taken from the Processor-Board power line, since the conversion is only required as long as the respective Processor-Board is active. A dedicated converter provides the voltage for the corresponding chips.
Inside the OBC housing there are two heater circuits that can be activated if the
temperature drops below the minimum operational limit. Please also refer to the
figures and explanations in Sect. 7.3. The two circuits are redundant and each
comprises four heaters that are mounted on the backside of every second frame as
depicted in Figs. 5.11, 7.15 and 7.16. For the detection of the internal temperature
one bi-metal thermostat switch is included in each circuit. In case the temperature drops below the minimum operational limit of −40 °C, the switches will close and the heaters are activated, provided that the PCDU power lines are switched ON. Dedicated temperature sensors are glued onto the OBC housing and
connected to the PCDU to feed back actual housing temperature data to the
thermal control loop. No dedicated wiring is necessary for these inside the housing
on the OBC power board.
The schematic of the heater and temperature sensor wiring is provided in
Fig. 5.12. On the PCDU there are two fuses and switches which are used for both
the OBC and the TTC system. One switch will activate the nominal heaters, the
other activates the redundant side. From the heater power output of the PCDU the
wire is routed to the bi-metal thermostat. After that the wire is split and leads to the four heaters. The return lines are brought together at a star point on the power board, from which the rail leads back to the return pin of the PCDU. The same principle is applied for both the nominal and the redundant heater circuit.
As already mentioned and depicted in Fig. 5.1, the OBC Power-Boards also provide the routing of the OBC Processor-Boards' serial Service Interface (SIF) and the JTAG debug interface. The reason for routing these signals from the OBC internal connectors of the Processor-Boards to the OBC housing's external connectors is the same as for the PPS signals discussed earlier. After completion of the OBC unit assembly, with the housing closed, the Processor-Board interfaces are no longer directly accessible.
SIF and JTAG Interface are essential for upload of software versions after OBC
integration into the satellite and for OBSW debugging. For the FLP target satellite
both interfaces will be routed to the spacecraft’s external skin connector.
All connectors on the Power-Board are Sub-D High Density connectors and have either 15 or 26 pins. Power lines and data lines are routed over different connectors. Connectors of the same size on the same side of the board are of different gender. All connectors are attached with flying wires soldered onto the board, since fixed connectors might break at the soldering points under vibration loads.
Figure 5.13 provides a schematic of the Power-Board. The different connectors
can be identified. The naming of the connectors is board specific. This differs from
the naming convention of the overall OBC unit as depicted in annex Sect. 11.6. A short description of the connectors is provided below. The relevant pinouts of the OBC unit's external connectors on the power board, (J2, J3 and J4) here or (J1/J7, J2/J8, J3/J9) according to annex Sect. 11.6, are included in annex Sect. 11.9.
From connector J0 (Sub-D HD 26, male) the power supply lines lead to one OBC Processor-Board, one I/O-Board and one CCSDS-Board. The other Power-Board supplies the redundant set. The power supply lines for the heaters are also provided via this connector. Finally, the pulse signals are forwarded via this connector as LVTTL signals.
J1 is the data connector leading the data lines for JTAG and SIF into the OBC housing. There is no necessity to route these lines over the Power-Board's electronics, so they are wired directly from connector J1 via the PCB to connector J2 (see below). The connector is a female Sub-D HD 15 connector.
In Table 11.45 the pins for the data connector J2 on the long side of the board are provided. This connector is the counterpart to data connector J1 on the short board side and, thus, is also of type Sub-D HD 15 (female). The cable mounted to this connector will connect to the skin connector of the FLP target satellite, which will be mounted on the satellite's primary structure. Both data connectors J2 of the Power-Boards are routed to one connector on the spacecraft skin of the FLP target satellite.
In Table 11.46 the pins for the Power Connector J3 on the long side of the board
are listed. This connector is also of type Sub-D HD 15 (male). It receives the
power lines for the individual OBC boards connected internally and for the
powered heaters. All lines to this connector source from the PCDU.
In comparison to classical OBC designs, here the "OBC" receives a separate power line for each individual board, since the reconfiguration unit, the Common-Controller in the PCDU, is by this means able to power each board individually for nominal operation and to perform shutdowns, power-cycling and redundancy activations in FDIR cases. In a classical OBC this type of reconfiguration lines is implemented inside the OBC housing. Here they become visible due to the fact that this CDPI prototype is physically realized as two boxes.
In Table 11.47 the pins of the PPS connector J4 are listed. The OBC receives
PPS signal lines from the GPS receivers and it provides PPS line as output e.g. for
the star trackers in the FLP target satellite. The technical details of these PPS lines
were already covered in Sect. 5.3.
Chapter 6
The OBC Internal Harness
6.1 Introduction
As explained in Chap. 1, the OBC components have all been designed from scratch and were developed largely in parallel. A significant number of interface details was not available at project start, which led to the decision to build the OBC as a protoflight model based on an inter-board harness instead of a backplane.
Figure 6.1 provides an overview on the main OBC internal lines for the SpaceWire
board to board connections, the power supply lines and the heater supply lines.
Please also refer to Fig. 5.1, which only depicts the Power-Board relevant interfaces but in addition breaks down the "Data Connections" cited in Fig. 6.1 into pulse lines, Service Interface and JTAG Interface lines. Figure 6.2 in addition shows the individual power and data connection routing through the Power-Boards for one of the redundant branches from Fig. 5.1. Thus it can be seen that the OBC internal harness can be split into two elements,
• the SpaceWire subharness and
• the Power Board subharness including
– power lines,
– heater lines,
– pulse lines,
– Service Interface lines, and
– JTAG debug interface lines.
The overall OBC internal harness was implemented by HEMA Kabeltechnik
GmbH & Co. KG, a professional harness supplier for both space checkout
equipment on ground and for space harness applications. The harness was
assembled at HEMA under clean room conditions by applying a geometric connector mockup of the assembled OBC frame stack (Fig. 6.3).
6.1.1 Requirements
The electric cabling input for the harness was handed over to HEMA in electronic form by the IRS. Further requirements were the types of connectors to be used, cleanliness during production, and geometric constraints:
• Connectors: Sub-D connectors and Micro-D connectors
• Available Space for harness routing (Go/No-Go Areas)
• Cleanliness for Space application: clean room of class 100 000/ISO 8
• Final integration for customer at IRS.
6.1.2 Challenges
The following challenges had to be considered during harness design and
manufacturing:
• Cable routing was difficult in the small OBC front compartment area with a dimension of approximately 250 × 100 × 35 mm.
• Small harness bending radii resulted from the dimension limits.
• Insufficient space was available for cable splicing.
• Insufficient space was available for standard SpaceWire cabling: not enough room for the minimum bending radii of standard SpaceWire cables and no space for connector backshell mounting.
Fig. 6.2 Debug-, pulse signal and thermostat line routing. IRS, University of Stuttgart
• The stability of the harness routing had to be guaranteed also under vibration loads.
• Mounting areas had to be foreseen for fixing of the harness bundles to tie-bases
glued to the OBC cassette frames. The latter part of the integration into OBC
frame assembly was foreseen to be performed by IRS.
• The harness was required to be disconnectable, which means that access to all connector screws and mounting screws had to be guaranteed by design.
6.1.3 Realization
This section describes the engineering process for a space application harness.
Each harness line starts with a connector and is plugged into its corresponding target connector on the respective OBC board. Table 6.1 lists the OBC boards with their connectors.
124 A. Eberle et al.
The detailed pin allocation list is included in the product documentation from HEMA [72]. The harness bundle definitions were the next step after the freeze of the pin allocation. Usually the signal lines of one interface (UART, RS422, HPC, Status, etc.) are twisted into one cable. The cables are then combined into a bundle (Fig. 6.4).
The routing of the harness shall disturb neither the transmitted signal itself (reflections/damping) nor other signals (EMC). For this reason the power harness and the signal harness are usually routed separately. HEMA decided to separate the SpaceWire harness and the power/signal harness. As illustrated in Fig. 6.5, the orange lines represent the power bundle and the green ones the SpaceWire bundle.
SpaceWire is a high-speed field bus interface standard for the intercommunication of space equipment, standardized by a consortium of multiple space agencies. The specification can be found in the literature (see for example [11, 12]). The SpaceWire interfaces in the OBC accordingly consist of point-to-point, bidirectional data links.
Two differential signal pairs in each direction make a total of eight signal wires with
an additional screen wire (Fig. 6.6).
Fig. 6.7 SpaceWire Harness stepwise implementation. HEMA Kabeltechnik GmbH & Co. KG
The OBC internal power harness includes the line routing for power supply lines
from the OBC Power-Boards to the consumers, the routing of the OBC internal
heater lines from Power-Board via thermostats to the boards equipped with heaters
and finally the routing of pulse lines and debug lines. Figure 6.9 provides an
impression of the power harness.
As already explained, the OBC is equipped with internal heater lines controlled by thermostats. These are included in the power harness and are positioned on the corresponding OBC frames (Fig. 6.10). The thermostats were already pre-integrated into the power harness by HEMA; their allocation in the OBC is depicted in Fig. 6.10.
6.3 Verification
The OBC Harness was tested against potential failures and to verify proper
manufacturing quality and correct interconnections. The following tests were
performed:
OBC Harness
Test Harness
SUP
Test
Equipment
RTN
Fig. 6.11 Harness test setup. HEMA Kabeltechnik GmbH & Co. KG
Test Conditions
The tests were performed under the same clean room conditions and following the same handling procedures as during harness manufacturing. The tests had to avoid overstressing the harness. To limit the number of mate/de-mate cycles of the connectors, the use of test adapters was mandatory. The test conductor was not allowed to be the same person as the harness assembler.
Retention Test
The contacts of the connectors were tested with the contact retention tool to verify full contact insertion and correct retention forces. This test was only applied to contacts being crimped to the wire (Fig. 6.12).
(a) Contact-to-contact resistance test (continuity); (b) contact-to-shield resistance test (bonding)
The bonding value that was used reflects only the transitions which are electrically relevant to the system. An acceptable value stays far below 1 Ω. In most applications the value must be far below 20 mΩ, which was also the reference limit for this harness.
In the continuity test the harness was checked for the correct point to point pin
allocation and the resistance value for each line was recorded. This allowed for
comparison of values between multiple cables in the harness, and an analysis
based on length and diameter. Cables with the same length and diameter should
have also the same resistance value.
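The length/diameter analysis relies on the standard conductor resistance formula R = ρL/A; a sketch using the textbook copper resistivity (the actual harness wire gauges are not given here):

```python
import math

RHO_CU = 1.72e-8  # resistivity of copper at 20 degrees C, ohm-metres

def wire_resistance(length_m: float, diameter_m: float) -> float:
    """DC resistance of a round copper conductor: R = rho * L / A,
    with A the circular cross-section area."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return RHO_CU * length_m / area

# Cables of equal length and diameter should read the same value;
# e.g. 1 m of 1 mm copper wire is roughly 22 milliohms
```

Comparing each measured line against this expected value flags miscrimped contacts or wrong wire gauges, since resistance scales linearly with length and inversely with cross-section.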
Insulation Test
This test was performed to verify the proper electrical insulation between two parts, such as two contacts or a contact and the housing. The applied test conditions were 500 VDC with a test current of 1 A (Fig. 6.14).
(a) Contact-to-contact insulation test; (b) contact-to-shield insulation test (measured values > 500 MΩ)
The mechanical structure and thermal system of the OBC were designed by the IRS. With this approach it was possible to find a configuration of the OBC housing and electronics that is as compact as possible and well adapted to the FLP target spacecraft. The conceptual design of the mechanical and thermal architecture was conducted on the basis of the following requirements:
Mechanical:
M01: The mechanical structure shall cover Processor-Board, I/O-Board, CCSDS-
Board and the Power-Board as well as their redundancies.
M02: The maximum envelope of the OBC shall not exceed 220 × 300 × 140 mm³ (see Fig. 7.1) and the mass shall not exceed 6 kg.
Fig. 7.1 Envelope of OBC in the FLP satellite. IRS, University of Stuttgart
M03: The OBC shall be dimensioned to withstand a quasi static load of 100 g in
all axes. The first eigenfrequency of the OBC shall be higher than 130 Hz.
M04: The mechanical structure shall provide a rigid connection between the electronic boards as well as a firm attachment to the satellite.
M05: All circuit boards as well as the internal connection harness shall be sealed
from each other with regard to HF interferences.
M06: All circuit boards shall be separately testable and safely embedded for
handling.
M07: The mechanical structure shall be designed for possible removal of all
PCBs.
M08: The used components shall withstand the orbit environment with respect to
thermal, EMC and radiation conditions.
Thermal:
T01: The OBC shall feature an operating temperature range of −40 to +80 °C.
T02: The thermal connection of the circuit boards to the structure of the OBC
housing shall be designed to prevent high temperature spots.
T03: The temperature of the OBC shall be measured with two temperature sensors.
T04: Redundant survival heaters shall be installed to prevent switch-on of the OBC below its minimum operating temperature. Furthermore, they shall operate without telecommand control.
7 OBC Mechanical and Thermal Design 135
Fig. 7.2 Power Board frame as example for design principle. IRS, University of Stuttgart
136 M. Lengowski and F. Steinmetz
with four rods requires a large envelope volume. For very compact microsatellites the flight harness connection envelope does not allow this solution for the OBC. In the selected design every single cassette is mounted to the base plate of the satellite structure by means of three M5 screws to achieve the required stack stability. The base plate thereby takes over the function of the lower tie rods. To
permit mounting and multiple dis-/re-mountings of the entire OBC from the
satellite’s base plate, helicoils are used for all M5 screws. The upper tie rods of a
conventional cassette design are replaced by the form-locking cassette intercon-
nection to prevent movements of the cassettes relative to each other. The design is
strengthened by the numerous M2 screws. In order to achieve a plane mounting area
at the frame/baseplate contact surface small counter-sunk screws were selected.
The OBC boards connect to two different harnesses. The first one is the harness
to the satellite providing the power for OBC and the interfaces to the spacecraft
components. This harness directly starts at the CCSDS, the I/O and Power-Boards.
Due to the required pin count on these OBC-external connectors, the PCB long side is used for these connections, whereas the short side of each PCB provides the
internal connectors (see Figs. 1.2, 2.1, 3.7, 7.3 and 7.4).
Fig. 7.3 Cassette separation for I/O-Board frame. IRS, University of Stuttgart
The large number of external harness lines connecting to the I/O-Board requires two special connectors featuring a higher integration density than standard D-Sub connectors. Special 100 pin Micro-D Axon connectors are
used for these interfaces—see connectors D and E in Fig. 3.7.
The OBC internal harness (see Chap. 6) interlinks the OBC boards with each
other. It is required that the internal harness is shielded against HF influences from
the circuit boards, that it does not radiate any HF towards the circuit boards and that
it is shielded against HF influences from the OBC’s outer environment. Therefore
the frames are designed with upper and lower overlapping “noses” to create an additional front compartment in the OBC housing when fully assembled—see Figs. 1.2, 7.3 and 7.4. The edges of this compartment are manufactured in the same way as the circuit board compartments, with two rectangular contact surfaces.
In order to provide the possibility of replacing the circuit boards for maintenance
the frames are designed in two parts. In case of CCSDS, I/O and Power-Board the
cassette is separated into a frame part and a cover plate for the external connectors.
This design allows dismounting of the circuit board from the frame after dismounting
the frame cover plate. To assure high-frequency signal shielding, contacting overlapping edges are used here as well. For the CPU cassette, which has no external connectors, the cassette rear plane must be removed to access the internal connectors. The configuration of the cassette assemblies is depicted in Figs. 7.3 and 7.4.
All frames are supplied with cut-outs in the rear plane and the outer surfaces for
mass reduction. The top and side cut-outs are applied from the outside and can be
milled. The rear plane cut-outs had to be made from the inside of the frame in
order to produce an elevated contact surface for thermal coupling between PCB
and frame. These cut-outs are manufactured by eroding because of the undercut. In
order to increase the eigenfrequency of the circuit boards themselves in a frame, an
additional mounting point was foreseen in the center of each board.
Furthermore, two venting-holes are foreseen in each frame for faster evacuation
of the cassette. To prevent a potential HF leak these venting-holes are realized with a small diameter of 1.5 mm and routed around a corner: a hole drilled from the inside and a hole drilled from the outside meet at a right angle. All remaining open surfaces of the OBC housing are closed
by three integrally manufactured plates—see front cover (removed in Fig. 1.2),
cover of rightmost frame and small left front compartment cover in Fig. 1.2. The
general mechanical properties of the OBC are provided in Table 7.1. Figure 7.5
shows the closed assembly of the OBC housing.
Fig. 7.5 OBC assembly with closed front cover. IRS, University of Stuttgart
The design and dimensioning of the OBC housing is done using the CAD software CATIA V5 R20 from Dassault Systèmes and NX I-deas 6.1 from Siemens PLM
Software. CATIA is a multi-platform CAD/CAM/CAE program, and is used as the
principal mechanical design software for the Stuttgart Small Satellites Program.
The CAD/FEM software NX I-deas is used to assist in mechanical dimensioning.
Due to differences in the software tools and their implementation, two different
models were created:
• In CATIA a 3D-model was created to be used for fitting and collision analyses
as well as to detail the manufacturing process.
• The FEM model, in contrast, consists of 2D elements for shell meshing in the simulation.
This meshing type was selected in order to reduce computing time and to
increase the accuracy of the simulation. The use of 3D-elements would have
generated a higher number of elements than desirable for simulating a structure
with such small wall thicknesses.
The OBC frames and their circuit boards are both modeled in the FEM simu-
lation. All shell elements are defined with their corresponding shell thicknesses
taken from the CAD data. To simplify the modeling process the radii of the cut-
outs are not included in the simulation. The electrical components of the circuit
boards are modeled as non-structural masses on the boards. A quasi-homogeneous
distribution of the components over the PCBs is expected. The connection of the
boards to the frame is represented with one dimensional rigid elements in all seven
screw locations. Such a rigid element is a connection between two nodes that keeps the distance and angle between them constant (Fig. 7.6).
Fig. 7.6 Quasi-static simulation of OBC (data in N/mm2). IRS, University of Stuttgart
Table 7.2 Loads, deformations and first eigenfrequency from OBC FEM simulations
Simulation                                   Result                   Approval value
Quasi-static load of 100 g in x direction    39.0 N/mm²; 0.042 mm     135 N/mm²
Quasi-static load of 100 g in y direction    38.9 N/mm²; 0.311 mm     135 N/mm²
Quasi-static load of 100 g in z direction    47.4 N/mm²; 0.155 mm     135 N/mm²
First eigenfrequency of modal analysis       174.6 Hz                 130 Hz
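The structural margins follow directly from Table 7.2 as the ratio of approval value to simulated stress. A minimal sketch of this check, assuming the usual margin-of-safety definition MoS = allowable/applied − 1:

```python
# Sketch: margin-of-safety check against the Table 7.2 approval values.
def margin_of_safety(allowable, applied):
    """MoS = allowable / applied - 1; positive means the design passes."""
    return allowable / applied - 1.0

stresses = {"x": 39.0, "y": 38.9, "z": 47.4}  # N/mm^2, 100 g quasi-static
ALLOWABLE = 135.0                             # N/mm^2 approval value

margins = {axis: margin_of_safety(ALLOWABLE, s) for axis, s in stresses.items()}
assert all(m > 0 for m in margins.values())   # all axes pass with margin
```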
The OBC assembly is vibration tested when mounted into the target satellite
platform. The capability of the applied cassette frame concept was demonstrated
with other components of the FLP target satellite featuring PCBs of the same size
and thus the same frame sizes. The OBC cassettes also correspond with respect to wall thicknesses, screw mountings and cassette interconnections. The
loads applied to these units are included in Tables 7.3 and 7.4. The random and the
sine vibration tests were conducted in each axis.
Table 7.3 Random vibration test loads
Frequency (Hz)    Qualification level PSD (g²/Hz)
20                0.017
110               0.017
250               0.3
1,000             0.3
2,000             0.077
gRMS              19.86
Duration          3 min/axis
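The overall gRMS level of Table 7.3 can be recovered from the PSD breakpoints by integrating the profile, assuming (as is conventional for random vibration specifications) straight-line segments on a log-log plot:

```python
import math

# Breakpoints of the Table 7.3 random vibration profile: (Hz, g^2/Hz)
profile = [(20, 0.017), (110, 0.017), (250, 0.3), (1000, 0.3), (2000, 0.077)]

def grms(points):
    """Integrate a PSD profile with power-law (log-log straight) segments."""
    total = 0.0
    for (f1, p1), (f2, p2) in zip(points, points[1:]):
        n = math.log(p2 / p1) / math.log(f2 / f1)   # log-log slope
        if abs(n + 1.0) < 1e-12:                    # special case: slope -1
            total += p1 * f1 * math.log(f2 / f1)
        else:
            total += p1 * f1 / (n + 1.0) * ((f2 / f1) ** (n + 1.0) - 1.0)
    return math.sqrt(total)

print(round(grms(profile), 2))  # -> 19.86, matching the specified level
```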
Table 7.4 Sine vibration test loads
Axis                Frequency range (Hz)    Qualification level
Longitudinal axis   5–21                    12.5 mm (0 to peak)
                    21–100                  11 g
Lateral axis        5–16.7                  12.5 mm (0 to peak)
                    16.7–100                7 g
The design of an electronic box for a very compact satellite has to consider some particularities with respect to thermal balancing. In contrast to electronics in large satellites, such a SmallSat unit has no large compartment into which heat can be radiated and where waste heat can be absorbed and further conducted or radiated away.
Therefore the CDPI units OBC and PCDU are thermally designed to be mounted on a radiator plate, to which waste heat is conducted and from which it is radiated into space on the spacecraft's outer side. In case of the FLP target satellite this radiator at the same time forms the structural baseplate with the launch adapter ring—see also Sect. 10. For proper cooling the CDPI units are additionally coated on their outer surface with a thermal paint featuring high emissivity. The inside of the OBC unit frames is also painted black to prevent hot spots on the electronic boards and to allow highly emissive chips to radiate their waste heat to the board's frame.
As was explained in the previous sections on the OBC Power-Boards and the
internal OBC harness, the OBC is equipped with compartment heaters on each
second PCB frame for keeping the OBC boards above the minimum operating temperature.
To identify the thermal behavior of the OBC housing, a lumped parameter model
was established with the software ESATAN-TMS (see [73]). The model is shown
in Fig. 7.7 depicting each separate frame with a different color. For comparison the
CAD model of the OBC is presented in Fig. 7.8.
Each part of the OBC is modeled separately and is connected via user-defined
conductive couplings. Solid part elements were merged in order to correctly represent the heat conductivity within each integrally machined part.
Figure 7.9 shows a frame in CAD compared to the ESATAN model in
Fig. 7.10. The mesh of the PCB was designed to correctly represent the area of the contacting surfaces between PCB and frame. This contact substantially influences
the conducted heat flux and therefore the temperature of the PCB. For contact
conductivity between nodes a value of 300 W/(m2K) was assumed. This value
represents an average value for screw connections [74]. These values were con-
firmed by thermal-vacuum tests (Fig. 7.11).
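The user-defined couplings follow from the contact conductance and the contact area, G = h·A. A minimal sketch with the cited value of 300 W/(m²K); the contact area passed in below is illustrative, not an FLP design value:

```python
# Sketch: conductive coupling between PCB and frame nodes, assuming the
# cited average contact conductance for screw connections.
H_CONTACT = 300.0  # W/(m^2 K)

def coupling(area_m2, h=H_CONTACT):
    """Conductive coupling G in W/K for a given contact area."""
    return h * area_m2

def heat_flux(area_m2, delta_t):
    """Heat flow in W across the contact for a temperature difference."""
    return coupling(area_m2) * delta_t
```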
Fig. 7.10 Thermal network model of a frame (yellow) and a PCB (green). IRS, University of
Stuttgart
Fig. 7.11 Contact resistance of the OBC boards. IRS, University of Stuttgart
For analysis purposes the heat dissipation was assumed to be equally distributed over each board, independent of the board type. Figure 7.12
shows the heat dissipating parts of the mesh. The power dissipated in each PCB
can be taken from Table 7.5.
Every PCB is redundant in the OBC unit. Processor-Boards and I/O-Boards are
normally operated in cold redundancy except for certain FDIR and software patch
cases. The CCSDS-Boards are permanently operated in hot redundancy. The
material parameters applied for the OBC thermal model can be taken from
Tables 7.10 and 7.11.
The thermal model was also used to verify the heat conduction to the radiator plate
under different thermal environmental conditions. The applied conditions are
given in Table 7.6.
The results of the simulation runs are shown in Figs. 7.13 and 7.14. From these
the conclusion can be drawn that the heat transport within the OBC is sufficient to
conduct its own dissipated power and the dissipation from satellite interior onto the
OBC towards the radiator. The PCBs are sufficiently coupled to the mounting
frames.
It was already discussed that the OBC temperature can drop below the minimum operational temperature in case of a longer deactivation—e.g. after a satellite OBSW failure with resulting satellite tumbling and a longer power outage in eclipse. The heaters which warm up the OBC before activation of any board were also already mentioned. The following paragraphs provide more details on these heaters, their dimensioning and the selected parts. For the positioning of the heaters on each second OBC frame please refer to Figs. 7.15 and 7.16.
The heaters are realized as heater mats, glued onto the frame cassette floors.
They conduct heat through the frame bottom into the cassette onto which they are mounted, and they radiate heat into the neighboring frame, which is stacked with its open side oriented towards the heaters.
An analysis with the thermal model showed that an electrical power above
40 W becomes inefficient for PCB heat up as more and more of the generated heat
is directly conducted to the radiator of the satellite rather than reaching the PCB
(see Fig. 7.17). As a result, the resistance of the heaters was selected to yield a maximum dissipation of 40 W in total. Variations may result from the unregulated power bus voltage.
Inside the OBC a set of four nominal heaters is switched in parallel, and another chain of four heaters for the same compartments represents the redundant chain. So each second compartment where heaters are placed contains one heater from the nominal and one from the redundant chain. In a worst case scenario the heaters are activated in eclipse, where the solar panels do not supply any
power to the satellite bus. For that case the design voltage for the heaters is the
lowest possible battery voltage being 22 V. The resistance of the heaters thus was
chosen to provide approximately 5 W in this scenario (please refer to Fig. 7.18).
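The sizing logic above can be sketched with the simple resistive model P = V²/R and the 22 V / 5 W design point from the text. The 25.5 V upper value used in the example is an assumed end-of-charge bus voltage, not a figure from this chapter:

```python
# Sketch: heater resistance sizing from the worst-case design point
# (5 W at the 22 V minimum battery voltage), then dissipation vs. voltage.
def heater_resistance(p_design_w, v_min=22.0):
    """Resistance in ohm that yields p_design_w at the minimum voltage."""
    return v_min ** 2 / p_design_w

def dissipation(v_bus, r_ohm):
    """Dissipated power in W per heater at a given bus voltage."""
    return v_bus ** 2 / r_ohm

R = heater_resistance(5.0)     # 96.8 ohm
p_low = dissipation(22.0, R)   # 5.0 W at minimum battery voltage
p_high = dissipation(25.5, R)  # ~6.7 W at an assumed end-of-charge voltage
```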
Fig. 7.18 Heat dissipation per heater over battery voltage (blue), manufacturing tolerance
(black), minimum battery voltage (red). IRS, University of Stuttgart
The heaters are acquired from Minco Inc. as these models are suitable for vacuum environments according to NASA standard [57]. The heater type selected for the OBC is model HK5591 with aluminum backing and pressure sensitive adhesive, suitable for a temperature range from −73 to +150 °C.
Heaters are activated by bimetallic thermostats as long as power is supplied to
the heater line so that no micro-controller needs to be active for their control in an
FDIR state. The thermostats are manufactured and tested by the company COMEPA according to the ESA ESCC 3702 and 3702/001 standards for bimetallic switches [76]. These thermostats are listed in the European Preferred Parts List
(EPPL) for space components [77].
The definition of the optimum thermostat switch temperatures was again
achieved by using the thermal lumped parameter model. The temperatures at the
thermostat positions and the PCB temperatures were analyzed by means of a
simulated cool down and a warm up scenario. For the precise assessment of the
necessary lower heater activation limit Tr transient cool-down simulations have
been performed with the OBC thermal model. One scenario started from the OBC’s
upper operational temperature limit, one from a cold case with moderate temper-
atures. The model assumed all OBC boards themselves being in ‘‘off’’ state, i.e. not
dissipating any heat. The results show that when the minimum operational temperature of −40 °C is reached, there are almost no temperature gradients left in the network. Therefore the activation temperature of the thermostats can be set directly to the minimum operational temperature of the PCBs, which is −40 °C.
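The thermostat behavior described above amounts to a simple hysteresis switch. A sketch, with the lower switch temperature Tr = −40 °C from the text and an illustrative, assumed upper switch temperature Tf:

```python
# Sketch: bimetallic thermostat modeled as a hysteresis switch.
# t_on corresponds to Tr (heater on), t_off to Tf (heater off);
# the t_off default is an assumed illustrative value.
class Thermostat:
    def __init__(self, t_on=-40.0, t_off=-30.0):
        self.t_on, self.t_off = t_on, t_off
        self.closed = False  # heater circuit state

    def update(self, temp_c):
        if temp_c <= self.t_on:
            self.closed = True    # activate heater at or below Tr
        elif temp_c >= self.t_off:
            self.closed = False   # deactivate heater at or above Tf
        return self.closed        # between Tr and Tf: keep current state
```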
The switch temperatures of the procured OBC FM thermostats were charac-
terized in a thermal vacuum chamber at IRS premises. The test results are in
accordance with the supplier data and a measurement tolerance of ±1 K for the
upper switching temperature Tf. The measured values of the lower switch temperature Tr exceed this tolerance. The overall system performance is nevertheless still valid with these devices, since they activate heating even before the critical temperature of −40 °C is reached (Table 7.7).
By means of the thermal model it was analyzed after which time during heat-up the upper thermostat switch temperature is reached, considering the thermostat positions in the OBC housing front compartment, analyzing different heater
configurations and varying power supply voltages. The results of these simulations
as well as an exemplary temperature chart of a heat–up simulation are condensed
in Table 7.8 and Fig. 7.19.
Fig. 7.19 OBC warm up, all heaters active, battery at 25 V. IRS, University of Stuttgart
8.1 Introduction
A Power Control and Distribution Unit (PCDU) traditionally performs the power regulation, control and distribution tasks in a satellite system. Furthermore, the PCDU is responsible for monitoring and protecting the satellite power bus. Thus, the PCDU, together with the OBC, is one of the key components on board the satellite. Some specific functionalities were implemented into the PCDU design in order to facilitate the overall Combined Data and Power Management Infrastructure (CDPI). This chapter describes both the specific PCDU functionality which enables this CDPI concept and the standard PCDU functions.
The FLP satellite’s Power Control and Distribution Unit is developed in
cooperation with an experienced industrial partner. The expertise and compliance
N.N.
Vectronic Aerospace GmbH, Berlin, Germany
A. N. Uryu (&)
Institute of Space Systems, University of Stuttgart, Stuttgart, Germany
e-mail: uryu@irs.uni-stuttgart.de
Like most satellites orbiting Earth, the FLP follows the standard implementation
approach for the primary power source and energy storage device:
Primary power source: Photovoltaic solar cells
Secondary energy storage: Battery cells
The FLP features three solar panels in total, implemented in two different configurations: a side panel type which is deployed after separation from the upper stage of the launcher, and a center panel type which is body-mounted
(please also refer to Chap. 10). The GAGET1-ID/160-8040 [78] solar cells from
AZUR Space Solar Power are applied as primary energy source together with
lithium iron phosphate secondary battery cells for energy storage from A123
Systems [79]. The center solar panel includes a test string with experimental solar
cells with an efficiency of 27.8 % (BoL, 28 °C, AM0) from AZUR Space Solar
Power [80] which shall be qualified for space use during the FLP mission.
Table 8.1 gives a short overview of the FLP target satellite’s power sources. Please
consult the respective data sheet for detailed technical information.
8 The Power Control and Distribution Unit 153
Secondary battery
Type                                        Lithium iron phosphate
Identification                              ANR26650M1-B
Total capacity for the configuration, BOL   35 Ah
Steady state power consumption of the unit lies below 5 W. By design, heat
emitting parts like fuses, switches or the CPUs are placed by Vectronic on PCBs
near the baseplate, which is connected to the structure for thermal conductance
reasons. The remaining surface sections are anodized in a matt black color to
increase the thermal balancing by radiation. A PCB-internal heating for the CPU PCBs facilitates a fast warm-up to −20 °C in order to prevent damage to electronic parts from thermal stress caused by high temperature gradients. Moreover, the PCDU is qualified to a lower limit of −40 °C for operational use to increase the availability of the PCDU and thus the satellite system reliability. The thermal conditions are monitored by five internal temperature sensors of the PCDU.
According to FLP design regulations the PCDU is designed single failure tol-
erant. This means that a specific functionality is covered by a redundant unit or
functional path in case the nominal unit fails. The FM unit was also subjected to
environmental tests, such as vibration and thermal-vacuum tests, to facilitate a safe
launch and reliable operations in orbit. Additionally, the PCDU is designed to
fulfill its tasks reliably under the influence of the expected space radiation for the
projected mission lifetime of two years. According to [81] 1–10 krad are to be
expected per year.
The PCDU is equipped with a number of interfaces for connecting digital and
analog equipment plus the serial interconnection to the OBC. Furthermore, the
PCDU provides electrical interfaces for power generation, storage and distribution.
In addition interfaces are implemented for satellite operations, system monitoring
and for all tasks of OBC monitoring and reconfiguration in the frame of the overall
CDPI architecture. The listing provided below comprises all interfaces that are
implemented for FLP use; see also Table 8.12 in Sect. 8.9 and Tables 11.48–11.50
in the annex for the connector affiliation. Figure 11.10 depicts the PCDU
dimensions.
In general, Common Commands are available to control the fuses, switches and
PCDU modes and to request sensor data as well as the PCDU status. All Common Commands and the corresponding TM data can be transmitted at a baud rate of 115200
through a full-duplex RS422 interface. The transmission protocol for commanding
consists of a mandatory part of 8 bytes and an optional data part, see Table 8.3.
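The framing described above can be sketched as follows. Only the 8-byte mandatory part plus optional data field is taken from the text; the internal field layout (sync word, command ID, length, checksum) is an assumption for illustration and does not reproduce the real PCDU protocol:

```python
import struct

# Sketch: a command frame with an 8-byte mandatory part and optional data,
# as sent over the full-duplex RS422 link at 115200 baud. Field layout is
# an illustrative assumption, not the actual PCDU specification.
def build_command(cmd_id: int, data: bytes = b"") -> bytes:
    sync = 0xAA55                          # assumed sync marker
    length = len(data)
    checksum = (cmd_id + length) & 0xFFFF  # assumed simple checksum
    header = struct.pack(">HHHH", sync, cmd_id, length, checksum)
    assert len(header) == 8                # mandatory 8-byte part
    return header + data
```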
A safe and reliable step-by-step boot-up sequence of the PCDU and thus of the
entire satellite system is implemented to facilitate the completion of the first stable
satellite mode, the System Safe Mode. The boot-up procedure includes specific
prerequisites before the OBC boards are powered by the PCDU and assume
control of the satellite. Thereby, the following actions are performed to prevent
damaging of critical satellite units:
1. PCDU internal heaters warm the unit up to its operational temperature limit.
2. Check of the power level of the batteries to complete the entire boot-up pro-
cedure up to the System Safe Mode.
3. Check of the temperature level of the OBC unit and the TT&C transceivers. If the temperature is below the operational limit, the PCDU activates the power
switches for the redundant heater design of both units. These heaters include
thermistors to facilitate the heating up to the specified operating temperature.
Alternatively, a timer condition is implemented which is set according to the
results of thermal simulations. As soon as the timer condition is met, the PCDU
continues the boot-up process.
4. The last step concludes the boot-up procedure to the System Safe Mode.
From here, the OBC can command every other defined operations mode of the
satellite.
The main task of the PCDU is the distribution and regulation of the electric power
on board the satellite. The power handling design is specified in order to safeguard
the power supply of the satellite bus as far as possible. Furthermore, specific
protection features are implemented in order to prevent damaging of the on-board
components or the batteries which are essential for accomplishing the mission
objectives. Figure 8.2 shows the circuitry of the PCDU and its connections to the
satellite bus.
Each solar panel is connected to only one battery string by a Battery Charge
Regulator (BCR) in order to prevent a single point failure. If all power strings were interconnected, the complete power supply would be disabled if, for example, a non-insulated cable accidentally contacted the structure. The FLP target satellite configuration represents a non-DET system [82] with an unregulated power bus whose voltage lies between 18.5 V and 25.5 V. The BCR is located in the direct energy path to protect the satellite bus
from excessive voltage or current transients. Each BCR is adjusted to an upper
voltage limit of 25.5 V, which corresponds with the end charge voltage value of
each battery string.
The three independent power strings are combined before the Main Switch of
the PCDU, but secured with diodes to prevent the current flow of one string into
another. In case a battery string or solar panel is broken or short-circuited, the
energy of the other two strings can be used to operate the S/C. String 0 and 1
represent the energy paths of the side solar panels, whereas string 2 represents the
path of the middle solar panel and solar test string. The solar test string is used for
the generation of electrical energy by default.
The distribution of power to the consumer loads is controlled by the appli-
cation of a fuse and switch system. The PCDU deactivates the power supply by
the respective Latching Current Limiters (LCLs), as soon as an over-current is
measured. Due to volume and cost reasons some power outputs are combined at
one fuse. However, critical on-board components such as the OBC boards and the
TC receivers are implemented as single loads on a fuse. For reliability reasons
and due to the combined allocation of multiple loads to one fuse, additional
switches are used to regulate power supply of single loads. High-power con-
suming components are equipped with two switches in series in order to protect
the satellite bus. If a switch should break during the mission lifetime, the second
serially connected switch can be opened to deactivate the respective component,
if necessary. The LCL fuses can be reactivated after an over-current event, so the connected consumers are not lost for the mission. A complete list of the
component affiliations to fuses and switches can be found in Table 11.51 in the
annex.
In addition to the given fuse-switch control and protection system for the on-
board loads, there are two bi-stable relays. Each one of these bi-stable relays is
dedicated to a battery survival heater. The relays are implemented in order to safeguard the heating of the battery compartment, even if the satellite is deactivated due to the under-voltage protection feature. Since the batteries are very sensitive with regard to their storage temperature, this measure protects the energy storage devices from damage.
Fig. 8.2 Circuitry and power connections of the PCDU. IRS, University of Stuttgart
Figure 8.3 shows the connections between the PCDU and one battery string.
Charge and discharge of the battery is managed by the power interface (IF). By
default, the switch is closed in order to allow charging of the battery string. If the charge process is to be interrupted, the switch can be opened. The energy path with the diode still allows energy extraction from the battery.
Since the PCDU only monitors the voltage level of a complete battery string,
single cells are not protected from overcharging. In order to prevent overcharging
of a single cell, an electrical circuitry is applied at the battery side which monitors
the respective cell voltage. If the voltage of single, serially connected cells
diverges too much, single cells could be overcharged before the combined charge
limit of 25.5 V is reached. The PCDU features the reception interfaces for dedi-
cated signals sent by the monitoring circuitry. As soon as the PCDU receives the
interrupt signal, battery charging is stopped by opening the respective switch in the
energy path for a specified time. In case of a fault event, the PCDU can be commanded to ignore the interrupt signal.
Each battery string is equipped with two temperature sensors for thermal
monitoring. In case the temperature limits for a stable energy output are violated,
the charging is disabled to prevent long-term damaging of the cells.
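The protection logic described in this section can be summarized as a simple predicate deciding whether the charge switch may remain closed. The temperature thresholds below are illustrative assumptions, not the qualified battery limits:

```python
# Sketch: charge-inhibit decision for one battery string. The switch opens
# on a cell-overvoltage interrupt (unless commanded to ignore it) or when
# the thermal limits are violated; the diode path keeps discharge available.
def charge_switch_closed(interrupt, temp_c, ignore_interrupt=False,
                         t_min=-10.0, t_max=45.0):
    """True if the battery string may currently be charged."""
    if interrupt and not ignore_interrupt:
        return False                      # cell monitoring circuit tripped
    if not (t_min <= temp_c <= t_max):
        return False                      # thermal limits violated
    return True
```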
Fig. 8.3 Connection of the PCDU and a battery string via diode and switch (IF: Interface, TS: Temperature Sensor). IRS, University of Stuttgart
One of the special characteristics which exceeds the scope of duties of a common
PCDU is the different approach to on-board data reception. Usually, the collection of all data is conducted by a separate unit in an industrial satellite,
sometimes referred to as Remote Interface Unit [10].
For the FLP, digital and analog data interfaces are separated in the command
chain. Making use of synergies, the PCDU contains all analog on-board IFs. Since the PCDU contains Analog-to-Digital Converters (ADCs) for the measurement of voltages and currents anyway, this was a reasonable decision. Most of the digital IFs to the satellite units are handled by the I/O-Board. Dividing the two
interface types and assigning these to two distinct components reduces complexity
in each of the respective units. Each interface unit can thus be developed as fast as
possible, only dependent on the definition status of the respective IFs. Moreover,
the required qualification effort is split and qualification time can be minimized as
both units may be tested in parallel.
According to this interface control concept, the PCDU collects all analog sensor
data on board the satellite. Some digital sensor data which is required for the
PCDU tasks is collected as well. The sensor data as shown in Table 8.5 is collected
by the PCDU.
This sensor data is not processed inside the PCDU. The analog sensor data is merely converted to digital form by the ADCs, and all sensor data is forwarded to the OBC. The processing is conducted in the OBC, utilizing its processing power, and the relevant data is distributed there to the respective subsystem control modules.
In case the PCDU is not polled cyclically as specified, a failure of the OBC
system is assumed by its reconfiguration function. Figure 8.4 shows the four
OBC boards that are involved in the command chain of Common Commands from
OBC to the PCDU. These are the nominal and the redundant OBC Processor-
Board—or ‘‘OBC Core’’ for short—as well as the nominal and redundant I/O-
Board. Figure 8.4 also depicts the connections between the OBC’s CCSDS-Boards
and the PCDU for High Priority Commanding, which is explained in the following
section in detail.
The affiliations ‘0’ and ‘1’ indicate that the respective units are operated in a hot redundant mode. In contrast, the boards which are operated in a cold redundant mode are labeled Nominal (N) and Redundant (R).
Considering a single OBC failure leading to a watchdog timeout or TM request timeout on the PCDU side, the CDPI Combined-Controller initially cannot identify which of the OBC boards is defective—defective in this context meaning either electrically defective or simply non-operational due to a crashed OBSW. Therefore, a specific reconfiguration procedure for the OBC is performed to restore the command chain operability by switching through the available instances. After each switch step a delay time is foreseen—e.g. to allow the redundant Processor-Board to boot up completely—and the PCDU verifies whether telemetry polling has been resumed by the OBC. This default hold time can be adapted by command. If polling is resumed, the reconfiguration is considered successful and the remaining sequence is aborted. If no polling occurs yet, the next switchover step is performed:
Fig. 8.4 Interface communication between the OBC and the PCDU (SpaceWire and RS422 links; Common Commands and High Priority Commands). IRS, University of Stuttgart
• Turn the switches off for both I/O-Boards and on for the nominal I/O-Board
• Turn the switches off for both OBC Processor-Boards and on for the nominal
OBC Processor-Board
• Turn the switch off for the nominal I/O-Board and activate the switch for
the redundant I/O-Board
• Turn the switch off for the nominal OBC Processor-Board and activate the
switch for the redundant OBC Processor-Board
• Turn the switch off for the redundant I/O-Board and activate the switch for the
nominal I/O-Board.
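The switch-through sequence above can be sketched as a simple state machine. `apply()` and `poll_resumed()` stand in for the real PCDU switching and telemetry-polling mechanisms, and the hold time is an assumed placeholder:

```python
# Sketch of the OBC reconfiguration sequence: each step reassigns the
# power switches of the I/O-Boards ("io") and Processor-Boards ("cpu"),
# then the PCDU waits a hold time and checks whether telemetry polling
# has resumed. None means the boards of that type are left unchanged.
SEQUENCE = [
    {"io": "N", "cpu": None},   # cycle I/O-Boards, nominal I/O back on
    {"io": "N", "cpu": "N"},    # cycle Processor-Boards, nominal CPU on
    {"io": "R", "cpu": "N"},    # swap to the redundant I/O-Board
    {"io": "R", "cpu": "R"},    # swap to the redundant Processor-Board
    {"io": "N", "cpu": "R"},    # back to the nominal I/O-Board
]

def reconfigure(apply, poll_resumed, hold_s=30.0):
    """Step through board combinations until OBC polling resumes."""
    for step in SEQUENCE:
        apply(step, hold_s)          # switch boards, then wait hold time
        if poll_resumed():
            return step              # reconfiguration successful
    return None                      # command chain not recovered
```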
The above described concept for the OBC reconfiguration is only reasonable because the PCDU itself is equipped with an internal watchdog circuit which facilitates autonomous switching between the PCDU-internal redundant controllers. The monitoring and switching tasks can thus be performed without significant delays, safeguarding a minimum downtime of the satellite system.
Figure 8.5 shows the watchdog functionality for the autonomous switching of
the PCDU-internal controllers. Both the nominal and the redundant controller are
operated in a hot-redundant concept, with a master and a slave unit on separate
electric circuits. The master unit performs all actions, whereas the slave monitors
the master.
Fig. 8.5 Functional design of the switch logic for the PCDU internal CPUs (Controller N and
Controller R each drive a master switch signal into the switch logic). IRS, University of
Stuttgart
The master CPU sends a confirmation signal during each processing cycle in
order to permanently confirm its operability. If this condition is no longer met, the
switch logic transfers the master functionality to the slave unit.
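The master/slave handover above can be modeled in a few lines. This is an illustrative sketch only; the class name and interface are assumptions, not part of the PCDU design documentation.

```python
# Minimal model of the switch logic of Fig. 8.5: the master confirms its
# operability every processing cycle; a missed confirmation hands the
# master role to the other (slave) controller.
class SwitchLogic:
    def __init__(self):
        self.master = "N"              # Controller N starts as master

    def cycle(self, confirmation_received):
        """Evaluate one processing cycle; return the current master."""
        if not confirmation_received:
            # master failed to confirm -> switch master role to the slave
            self.master = "R" if self.master == "N" else "N"
        return self.master
```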
In the CDPI architecture, HPCs also bypass the OBSW command chain, but they do not need a
CPDU since they are transmitted directly from the CCSDS-Boards to the PCDU (see
Sect. 1.4 and [4]).
The following explanations of the CCSDS protocol are limited to the parts that
are necessary to understand the HPC transmission and forwarding. Please consult
the CCSDS standards [23] for further information. Figure 1.11 in Sect. 1.6.2
shows an example of the composition of an uplinked TC packet that is transmitted
from ground to the satellite. After the decoding of the so-called Command Link
Transmission Unit (CLTU), the TC Transfer Frame contains the commands from
ground. Only the Frame Header and the TC Segment must be considered to
understand the HPC forwarding. The Virtual Channel (VC), in the Frame Header
indicates which command chain unit receives the data. The following four VCs are
available for FLP target spacecraft:
• VC0: nominal command to OBC Processor-Board N
• VC1: HPC1 to CCSDS-Board N
• VC2: nominal command to OBC Processor-Board R
• VC3: HPC1 to CCSDS-Board R.
Whereas nominal commands are assigned to VCs ‘0’ and ‘2’, HPCs are
allocated to VCs ‘1’ and ‘3’. An HPC that is commanded from ground is referred
to as a High Priority Command Level 1 (HPC1); HPCs of Level 2 are processed by the
OBC S/W. The TC Segment contains the Multiplexer Access Point Identifier
(MAP-ID). A MAP-ID equal to ‘0’ states that the TC Segment contains HPC1s
and that the contained commands are forwarded directly from the CCSDS-Board to
the PCDU. All MAP-IDs other than ‘0’ imply PUS packets, which are transmitted to
the OBC Processor-Board for further processing by the OBC S/W.
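The routing decision described above can be sketched as a small lookup. The function and table names are illustrative assumptions; only the VC assignments and the MAP-ID rule come from the text.

```python
# Sketch of the uplink routing decision: the Virtual Channel selects the
# receiving command chain unit, and the MAP-ID decides whether the TC
# Segment carries HPC1s (forwarded straight to the PCDU) or PUS packets.
VC_ROUTING = {
    0: "OBC Processor-Board N",   # nominal command
    1: "CCSDS-Board N",           # HPC1
    2: "OBC Processor-Board R",   # nominal command
    3: "CCSDS-Board R",           # HPC1
}

def route_tc(virtual_channel, map_id):
    """Return (receiving unit, final destination) for one TC Segment."""
    receiver = VC_ROUTING[virtual_channel]      # selected by the VC
    if map_id == 0:
        # HPC1: the CCSDS-Board forwards the commands straight to the PCDU
        return (receiver, "PCDU")
    # any other MAP-ID: PUS packet for the OBSW on the Processor-Board
    return (receiver, "OBC Processor-Board")
```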
As described in Sect. 1.4.2, industrial satellites may feature a so-called Command
Pulse Decoding Unit (CPDU) on board to receive HPCs. This unit routes the
commands to the respective units. An HPC consists of 2 bytes: the first 8 bits
contain the channel selection, the second 8 bits the pulse length
definition. It is thus possible to command 256 units (channel selection) with 256
commands each (pulse length definition) by utilizing HPCs. For the FLP, the CPDU is
integrated in the PCDU as the only unit commanded by HPCs. Thus, 65,536
different commands may be implemented to reconfigure the satellite system by
switching LCLs and component switches. The PCDU features a nominal and a
redundant RS422 communication interface for the reception of HPCs from the
OBC CCSDS-Boards (see Fig. 8.4) with a baud rate of 115,200. All HPC packets
are implemented with a 2-byte header that serves as an identifier for the following
HPC frame. The composition of the HPC header is shown in Table 8.6.
An HPC frame can contain up to four high priority commands. Every command
starts with a TC Source Packet Header (TSPH), which is discarded entirely.
Each HPC consists of the 6-byte TSPH, a 2-byte command, and a 2-byte
checksum. HPCs can be used to activate or deactivate a single on-board component
or a specific set of components covering a specific safety aspect. By virtue of their
importance, HPCs are processed immediately after reception at the PCDU and
with priority over Common Commands. The most important HPCs are:
• Activate or deactivate the on-board heaters
• Deactivate all non-essential loads for the Safe Mode to save energy
• Reconfigure the command chain.
Table 8.8 shows an example of an HPC command sequence which may contain
up to four single HPCs. Table 8.9 gives an overview of all implemented HPCs for
the FLP.
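The frame layout above (2-byte header, then up to four entries of 6-byte TSPH, 2-byte command, and 2-byte checksum) can be sketched as a parser. This is an assumption-laden illustration: the function name is invented, and checksum verification is deliberately omitted since its algorithm is defined only in the PCDU ICD [85].

```python
# Sketch of parsing one HPC frame: skip the 2-byte header, then read up to
# four entries of TSPH (6 bytes, discarded) + command (2 bytes) + checksum
# (2 bytes, not verified here -- see the PCDU ICD for the real algorithm).
ENTRY_LEN = 6 + 2 + 2     # TSPH + command + checksum

def parse_hpc_frame(frame: bytes):
    """Return the list of (channel, pulse_length) commands in the frame."""
    body = frame[2:]                      # discard the 2-byte HPC header
    commands = []
    for off in range(0, len(body), ENTRY_LEN):
        entry = body[off:off + ENTRY_LEN]
        if len(entry) < ENTRY_LEN:
            break                         # incomplete trailing entry
        channel = entry[6]                # 1st command byte: channel selection
        pulse_length = entry[7]           # 2nd command byte: pulse length
        commands.append((channel, pulse_length))
    return commands
```

With an 8-bit channel selection and an 8-bit pulse length, this yields the 256 × 256 = 65,536 distinct commands mentioned above.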
Since the PCDU takes over some functions of a classical OBC Remote Interface
Unit (RIU) in the CDPI architecture, it features an arming switch for the
detection of spacecraft separation from the launcher through the opening of the
corresponding circuits. This prerequisite, together with a sufficient level of solar
array input power, is required to start up PCDU operations.
The control and monitoring of the deployment procedure of the solar panels by
the PCDU is based both on the two implemented deployment timers (timer 0 and timer 1)
and on the activation flag. The control is performed if the activation flag is enabled
(default setting).
After timer 0 becomes active (a set time after launcher separation), the PCDU
activates the fuses and the switches of the heaters for the deployment mechanism
and checks their status. As soon as the deployment mechanism signals a successful
deployment of the solar panels, the PCDU switches off the heaters and disables the
activation flag. If the deployment mechanism does not signal a successful
deployment of the solar panels and the timeout value is exceeded, the PCDU
will switch off the heaters without disabling the deployment process.
168 A. N. Uryu
A total of five attempts will be made (with a wait interval in between) to release the
solar panels by switching the deployment device heaters. After five unsuccessful
attempts the autosequence is finally deactivated in order to save power, and FDIR
from ground has to take over.
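The retry behavior above can be sketched compactly. All names are illustrative assumptions; the heater/timer handling is abstracted into a callable so only the five-attempt logic from the text is shown.

```python
# Sketch of the solar panel deployment autosequence: up to five heater
# activations, each bounded by the deployment timer, with a wait interval
# between attempts.
MAX_ATTEMPTS = 5

def deployment_autosequence(heat_until_timeout, wait_interval):
    """Try up to five heater activations; return True on panel release.

    heat_until_timeout() -- switches the deployment heaters on, waits for
                            either the release signal or the timer timeout,
                            switches the heaters off; returns True on release.
    wait_interval()      -- holds off before the next attempt.
    """
    for _attempt in range(MAX_ATTEMPTS):
        if heat_until_timeout():
            return True     # release signalled: disable the activation flag
        wait_interval()
    return False            # deactivate autosequence; FDIR from ground takes over
```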
The power switches for the Data Downlink System for payload data transmission
are deactivated after a certain duration. This feature is implemented to restrict
the data downlink to the specified access times. Thus, the transmission of
data over specific regions of the Earth is avoided, in accordance with International
Telecommunication Union (ITU) regulations.
The PCDU software includes a history log functionality for commands, events, and
the configuration of operating components. The history log functionality is introduced
in order to establish a means to check on actions inside the unit in case of
operational issues. Each of the recorded values is identified by a
dedicated ID and a time stamp.
In addition to the undervoltage protection feature for the batteries, the PCDU
features an overvoltage protection for itself. The PCDU is switched off
automatically via its main switch as soon as a bus voltage greater than 28.5 V is detected.
This case may apply during tests on ground, when the PCDU is powered through
the auxiliary power input.
The PCDU features a measurement circuitry based on a DAC for recording the
characteristic line of the test string of the satellite’s middle solar array. The
measurement is initiated by command. The PCDU sets the current flow through a
shunt resistor and records the values of the current and of the associated voltage.
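The commanded measurement above amounts to stepping a current setpoint and sampling current/voltage pairs. The following is a minimal sketch under stated assumptions: all names are invented, and the DAC/telemetry access is abstracted into callables.

```python
# Sketch of recording the characteristic line of the solar array test
# string: for each commanded DAC setpoint, the current through the shunt
# resistor is set and the resulting (current, voltage) pair is recorded.
def record_iv_curve(set_string_current, read_current, read_voltage, setpoints):
    """Record (current, voltage) pairs for the given current setpoints."""
    curve = []
    for setpoint in setpoints:
        set_string_current(setpoint)    # DAC sets current through the shunt
        curve.append((read_current(), read_voltage()))
    return curve
```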
The PCDU is tested under varying thermal and vacuum conditions. The
temperature profile of the thermal testing of the unit is shown in Fig. 8.6. Usually, the
lower non-operational temperature limit is specified below the lower operational
temperature limit. Here, the lower operational temperature was adapted to −40 °C
in order to increase the operational reliability of the unit and thus of the overall
satellite system.
Operating temperature range of the PCDU: −40 °C to +70 °C
Non-operational temperature range: −40 °C to +80 °C
Number of thermal cycles: 5
The vacuum tests were conducted at a maximum pressure level lower than
1 × 10⁻⁵, during which the operability of the unit and of the PCB-internal heaters
was confirmed.
The PCDU withstands a total radiation dose of at least 20 krad without significant
degradation.
Fig. 8.6 Test profile for thermal testing of the PCDU: thermal cycling between the
non-operational high temperature Tnh, the operational high temperature Toh, the ambient
temperature Ta, and the non-operational/operational low temperature Tnl,ol, with 4 h dwell
times and electrical tests in the operational and non-operational phases. Vectronic
Aerospace, IRS
The PCDU survives and performs without any degradation after being exposed to
the vibration loads as shown in Tables 8.10 and 8.11.
The table below provides an overview of all connectors of the PCDU unit,
including a keyword description of the connector use. Detailed pin allocations
are obviously driven by the onboard equipment of the individual mission—in this
case the FLP satellite—and are therefore not provided here.
Connector pin allocations for the CDPI inter-unit cabling between OBC and
PCDU for standard commanding and HPCs are included in the annex in
Tables 11.49 and 11.50 (Table 8.12).
The PCDU provides a large number of commands for controlling all the described
functions and for requesting the corresponding telemetry for processing
by the OBSW. For details on the commands and telemetry messages, and for the full
lists with their detailed syntax and arguments, the reader is kindly referred to the
PCDU ICD from Vectronic Aerospace [85]. Below, a brief overview is given to
depict the sophisticated features of the PCDU:
• Power control commands
– Status request commands (e.g. currents and voltages of components, solar
panels, batteries)
– Control commands for all component LCLs, switches and relays
– Adjustment commands for the over current monitoring by PCDU software
– Adjustment commands for the charge regulation of the batteries.
• Commands for Satellite Operations
– Adjustment and status request commands of solar panel deployment
– Status request commands of thermistor temperatures
– Status request commands for sun sensors
– Control and status request commands for solar panel test string measurement
– Adjustment commands for the boot-up procedure and its prerequisites.
• Reconfiguration Activities and FDIR commands
– Adjustment and status request commands for the reconfiguration process of
OBC units
– Control and status request commands for internal PCDU controllers
– Request commands for History Log.
• Diverse Commands
– PCDU Reset
– Status Request commands for software version info.
Chapter 9
CDPI System Testing
9.1 Introduction
Both the modularity and the novelty of the presented CDPI infrastructure require
extensive system testing. These tests cover the fields of
• hardware/software compatibility,
• internal communication between Onboard Computer elements and
• OBC communication with external equipment and infrastructure.
The complexity of the tests required a consistent concept. This concept had to
cover the following tasks:
• Configuration tests of all OBC modules. This comprises read and write operations
from/to memory, IP Core updates as well as successful OBSW uploads and
runs.
• Communication tests between the OBC modules. The elements (boards and
functional subgroups such as the I/O-Board memory) must interact as specified.
EMs were to be replaced step by step with FMs. After each replacement, the
tests needed to be re-run. Modules to be tested in interaction were:
– CCSDS-Board and OBC Processor-Board
– OBC Processor-Board and I/O-Board covering:
equipment interface read/write access by OBC Processor Board via I/O-Board
drivers and
housekeeping data management on I/O-Board
– OBC Processor-Board, I/O-Board and PCDU covering:
PCDU cmd./ctrl. by OBC and
OBC reconfiguration by PCDU
– CCSDS-Board and PCDU
• System chain tests were necessary for controlling the CDPI from the Mission
Control System ESA SCOS-2000 [88] via the CCSDS protocol (see [23–28]), as
foreseen for the target satellite, through the frontend equipment to the CCSDS-
Board (Fig. 9.1).
• System tests for the onboard chains between OBC and S/C equipment were
necessary, e.g. for PCDU, star tracker and reaction wheels. Such tests served to
prove that specification requirements were implemented correctly both on OBC
and on equipment side and in the OBSW.
• Communication tests with simulated S/C equipment for Onboard Software
verification. Thus, operational scenarios could be tested applying closed-loop
scenarios without sensor stimulation.
Fig. 9.1 ESA mission control system SCOS-2000. IRS, University of Stuttgart
For EMs the tests were performed under ambient conditions in an air-conditioned
laboratory. For FMs, all tests concerning boards or integrated units were
performed exclusively in a temperature and humidity controlled
class 100,000/ISO 8 cleanroom at IRS premises.
176 M. Fritz et al.
This aspect is of key relevance for this particular university project since it had to
cope with significant personnel fluctuation over the development period—particularly
in the domain of Assembly, Integration and Tests (AIT) of the CDPI. For all
projects it is necessary to distinguish between:
• Engineering staff—designing the foreseen tests up to a specification level and
• AIT staff—debugging the test flows, executing them and documenting the results.
The areas of responsibility have to be clearly defined. This also applies to a
university project. However, some simplifications are necessary at university level
due to limited manpower and the requirement for lean system engineering. At
university, almost the entire team consists of students and PhD candidates, which
explains the high personnel fluctuation in the project. Students support the project
for less than 1 year, PhD candidates for 3–4 years. To be as efficient as possible, it
is favorable to perform a limited, clearly defined subset of the test campaign with
the same set of people and the next functional subset with the successor team.
Team fluctuation in such a program implies the necessity to organize proper know-
how transfer and system test status handover from one to the next sub-team with
sufficient handover time. Furthermore, a proper tracking of personnel to tests and
of lessons learned is required. Such tasks are preferably assigned to PhD candi-
dates due to their longer availability in the team.
Since the OBC Processor-Board is an item subject to ITAR, there were test
tasks which could only be performed by authorized personnel.
All mentioned considerations directly lead to a test plan which is presented
in condensed form in the following section.
This section provides a condensed overview of the most important elements of the
test plan which served as baseline for the qualification test campaign of the CDPI
System. In order to keep an overview of the tests to be performed, to track test
execution status during the campaign, and to track test success status, so-called
functional verification matrices were used. These FV matrices address the
following questions:
• Which tests have to be performed?—Test scope
• Which testbench is to be used?—Test setup and environment
• Which HW model is to be used in the test?—BBM, EM or FM—Item under test
Exact documentation of the software version used as well as of the TM/TC database
was essential in order to reproduce the tests—especially if they had to be run on
multiple testbenches or with multiple hardware models (EM and FM). Furthermore,
the test plan had to consider the test concept. It organized the order of tests,
described simplifications and served as documentation both for internal and
external purposes. Furthermore, lessons learned were part of this document in
order to accumulate experience and knowledge.
The next step in the test program was to develop test procedures for each test
identified in the test plan. Then the tests had to be executed and test reports were to
be filled out for documentation. To simplify the overall procedure in the frame of
the university program, combined test procedure/report documents were
established:
• First the test procedure part was implemented, containing sections on test setup,
required equipment/instrumentation, test conditions, personnel, etc., and finally
at the end a step-by-step procedure to be executed, with entry fields for expected
and as-received results. The latter column initially remains empty.
• Then the procedure was reviewed and formally signed off by the senior engineer
of the program.
• Thereafter the test was executed—potential procedure variations were added
directly to the document and the as-is result was entered into the according column
together with OK/NOK information to track success/failure. Each test with
FM HW was executed by at least two persons to guarantee the four-eyes principle.
The general component functional tests—except for OBC reconfiguration—unit
performance tests and all EM unit tests were executed on a dedicated Satellite Test
Bed which included a ground station frontend for the TC/TM traffic to/from the
OBC on one side and a simulation of the target satellite on the other side. This
testbench (see Fig. 1.16) includes BBM/EM units of the OBC and PCDU. It does
not provide redundancy for the OBC units. The development of this testbed is
described in more detail in Sect. 9.5.
The tests requiring OBC redundancy and certain reconfiguration functionality
of the PCDU were performed exclusively in the FM FlatSat setup of the project
(see Figs. 1.17 and 9.12), since some prerequisite features were not yet available
in the PCDU EM.
Functional verification matrices in the tables presented on the following pages
provide a brief overview of all tests and show which kinds of tests were performed
in which particular setup.
Test-types:
Q = Qualification by the unit supplier
BB = Bread Board Model test at IRS
EM = EM Qualification test at IRS
FM = FM Qualification tests at IRS
Test-setups:
• Test at supplier premises: supplier specific test infrastructure
• Satellite Test Bed (STB) in configuration 1 and 2 as described in Sect. 9.5
• FlatSat as described in Sect. 9.5
• Test configuration for Thermal Vacuum (TV) Chamber
• Test configuration for vibration test on Shaker
The Power Control and Distribution Unit was completely qualified on supplier
side. To prove compatibility with the CDPI System, an extensive series of com-
munication tests with the OBC was performed at IRS (Table 9.1). Since the PCDU
is an essential element in the overall concept of the CDPI system it was also
significantly involved in reconfiguration tests for the entire system—see Table 9.7.
The FM Processor-Boards were tested during the FlatSat campaign, whilst the EM
was used to run test software with the connected satellite simulation
environment—the STB. The FM boards were operated exclusively under cleanroom
conditions for the FlatSat tests (Table 9.2). It is envisaged to use the Satellite Test
Bed with the Processor-Board EM later for system simulations on ground during
Phase E of the mission, as well as for pretests of OBSW patches before uplinking
them to the spacecraft in orbit.
Thermal qualification: performed by the unit supplier (Q).
Mechanical qualification: FM qualification in the frame of the overall
satellite vibration and shock test campaign.
Tests for the Power-Boards were focused on verifying the requirements to be met
by each Power-Board. The most critical phase regarding voltage control is during
the startup process of the connected OBC data handling boards. More details on
this topic are provided in Sect. 5.2. Further tests were conducted to verify the signal
conversion of the GPS PPS and STR PPS signals and their routing to an external
OBC connector. Please see Chap. 5. Dedicated unit shaker tests were performed
only with the EM in order to avoid damaging flight hardware (Table 9.3).
Shaker test: performed with the EM.
Thermal qualification: FM qualification in the frame of the
overall satellite TV test campaign.
The PCBs for I/O-Boards and CCSDS-Boards are based on the same design and
the hardware of both board types is manufactured by 4Links Ltd. Therefore, basic
hardware qualification for both boards was performed on supplier side except for
environmental tests. This environmental qualification was performed by IRS in
the frame of the overall mechanical and thermal spacecraft test campaign
(Tables 9.4, 9.5).
Mechanical qualification: FM qualification in the frame
of the overall satellite vibration and shock test campaign.
Thermal qualification: FM qualification in the frame
of the overall satellite TV test campaign.
Mechanical qualification: FM qualification in the frame
of the overall satellite vibration and shock test campaign.
Thermal qualification: FM qualification in the frame of the
overall satellite TV test campaign.
For preliminary tests Breadboard Models were used while assembling the
Satellite Test Bed infrastructure. The BBM for the CCSDS-Board was provided by
Aeroflex Gaisler. 4Links provided the BBM for the I/O-Board in the form of a
partially populated PCB. Later in the project a full I/O-Board EM was supplied by
4Links.
The OBC subsystem was assembled from the diverse delivered boards plus the
OBC internal harness in the cleanroom at IRS premises. Therefore, it also had to
be electrically and functionally qualified by the IRS team. This qualification was
performed within the FlatSat test campaign. Board and unit thermal qualification
could also be performed in-house, while vibration and shock tests were performed
only externally, in the frame of the shaker tests of the completely integrated
satellite, to reduce stress on the CDPI components (Table 9.6).
The CDPI reconfiguration test series was targeted at proving the redundancy
management of the CDPI system. The reconfiguration functionality is controlled by the
CDPI Common-Controller, which in this implementation is the processor of the
PCDU. The test series included basic switching between single nominal and
redundant boards by externally commanded HPCs as well as by automatic routines
of the PCDU. The final test was an artificially created, critical outage of one
Processor-Board. To pass the test, the system needed to recover automatically back
to an operational configuration within specified parameters (Table 9.7).
For the overall test campaign a distinction between Engineering Model tests
(EM tests) and flight model tests (FM tests) had to be made. Both groups of tests
involved different hardware and had completely different objectives.
The objective of the EM tests was to functionally verify the system itself. This
covers interface tests and the first assembly of all EM parts, where complexity was
subsequently added until one non-redundant OBC EM was complete and all
interfaces were tested and working as desired. This EM of the OBC is powered by an
external power supply—not via Power-Boards—and therefore does not represent the
complete system as it is used within the satellite. Such limitations were the reason
why only functional tests could be performed on the STB. This meant leaving out
all tests regarding power supply and redundancy. After these tests had passed, the EM
was used to conduct simulation tests with the real-time simulator, i.e. attitude control
test scenarios, and will also be used during the mission to verify software updates.
The focus for FM tests was on reliability aspects and CDPI redundancy
management. Compared to EM tests, the group of tests for the FM followed a different path.
The main objective was to qualify the FM system for space. This means the entire
system including Power-Boards had to be assembled and tested thoroughly under
cleanroom conditions. In this setup, tests like OBC reconfiguration between nominal
and redundant Processor-Board and I/O-Board were performed interactively with the
PCDU. The FM OBC nevertheless had to undergo interface tests similar to those
conducted for the EM, but for the FM additional safety aspects came into play.
The Satellite Test Bed described in Sect. 9.2 is schematically depicted in Figs. 9.2
and 9.3. The first line in Fig. 9.3 represents the command/control infrastructure,
which is representative of what will later be used in the satellite ground station.
All parts of the OBC are depicted in the second line. The third line shows a
simulation environment for OBSW verification.
One part of the command/control infrastructure is the ESA Mission Control
System SCOS-2000, shown in the middle of the top line. It supports telecommand/
telemetry data handling, visualization and packet processing and is applied as a
standard tool in ESA and DLR satellite project ground infrastructures. In order to
convert TC and TM packets into formats suitable for lossless transmission,
a Telemetry and Telecommand Frontend is implemented which
performs the transformation of TC packets into Command Link Transmission Units
(CLTU) and, vice versa, of Channel Access Data Units (CADU) back to TM
packets for the ground.
Fig. 9.2 STB with complete OBC in the loop: the S/C simulator (simulator kernel with system
models for environment/dynamics and several equipment models), the OBC with power and
TM/TC frontends, the control console, the simulator I/O, and the testbench LAN. Jens
Eickhoff [9]
• The power supply for the testbench electronic components in the rack had to be
independent from primary power fluctuations. This comprised both blackouts
and overvoltages. A non-interruptible power supply was integrated to act as a
buffer between primary network power and consumers.
• Due to the OBC Processor-Boards falling under ITAR regulations, access to OBC
components had to be restricted to authorized personnel. However, full setup
configuration for all non-ITAR elements of the testbench was required, as was
quick access for installation and de-installation of both OBC EM components
and equipment.
All requirements together led to a rack-based design solution. Based on a UPS
and a remote power switch, the power supply fulfills all requirements. The rack
itself provides a star point for correct grounding. The rack can be locked. As the
rack frontend consists only of one plug for mains power supply and one Ethernet
connector, it is simple to move. In order to keep the temperature at an acceptable
level, a temperature-controlled ventilation is installed at the top side of the rack.
The testbed in Fig. 1.16 used for the performed EM tests is, in its full deployment,
called a Satellite Test Bed or STB (see also [9]). In an initial configuration, CDPI
component breadboard versions were tested (Fig. 9.4). In an upgraded setup 2, the EM
hardware was tested (still non-redundant). Full CDPI redundancy and reconfiguration
testing was performed later on the FLP satellite FlatSat setup—see Sect. 9.6.
Fig. 9.4 EFM testbed with S/C equipment hardware connected to the OBC: the S/C simulator
(simulator kernel with system models and equipment models), power and TM/TC frontends,
control console, simulator I/O, equipment hardware, and the testbench LAN. Jens Eickhoff [9]
Both the I/O-Boards and the CCSDS-Boards of the OBC are FPGA based. The
IP Core of each board was updated several times during the test program in order to
install new or corrected functionality. The FPGAs are programmable via
a JTAG interface. The JTAG pins are located on the 100-pin Micro-D connector E
of each board, as described in Sect. 3.8 and in the corresponding annex tables.
For stage 1, the connection between the OBC Processor-Board EM and the
Realtime Simulator (RTS) was established. For this purpose, a SpaceWire Router
manufactured by 4Links was used. The Realtime Simulator (RTS) uses a standard Ethernet
connection for communication with the 4Links SpaceWire Router. On the other side,
the SpaceWire Router was connected to the same SpaceWire port of the OBC
Processor-Board that the I/O-Board normally would use. This is why either the RTS or
the real I/O-Board can be used—not both in parallel. The RTS holds a simulated
Front-End which acts as a replacement for the actual I/O-Board (Fig. 9.5).
Fig. 9.5 Connecting the Processor-Board and the S/C simulator: the onboard computer is linked
to the Real-Time Simulator (RTS) via a SpaceWire Router (SpWR). IRS, University of Stuttgart
After connecting the RTS to the OBC Processor-Board it was possible to conduct
simulation tests focusing on the device handling part of the OBSW. However, to
reasonably control and monitor the OBSW, telecommanding and telemetry
handling are required. Therefore, as a next step, the CCSDS-Board was taken into
operation (Fig. 9.6).
The preliminary OBSW used for this stage had to configure CCSDS-Board
parameters such as link encoding and decoding settings, Spacecraft Identifier,
symbol-rates and clock dividers. Also TC reception buffers were configured by this
OBSW release.
The test infrastructure concept was to conduct all tests with a command/control
infrastructure that will also be used during the mission in flight. The only hardware
part that cannot be used within a laboratory environment is the Radio Frequency
(RF) equipment. To bridge the RF link in the test setup, a bypass is used. The
uplink input for the bypass line is the clock-driven synchronous signal that will
later be the input for the baseband up- and down-converters. This signal
corresponds to the line signal from the onboard receiver to the OBC’s CCSDS-Boards.
Similarly, the downlink signal on the bypass line is the encoded output of the
CCSDS-Boards which normally would be routed to the spacecraft’s transmitters.
In the bridge it is directly connected to the telemetry decoding frontend in the
ground infrastructure. The connection is physically realized via an RS422 interface.
This chain test served for:
• Testing High Priority Command reception and routing (RS422)
• Testing command reception using OBSW
• Testing telemetry generation by the OBSW
• Testing TM Idle Frame generation and reception on ground
Figure 9.7 shows all involved components and CCSDS layer conversions [24].
The ground control system, i.e. SCOS-2000, operates on the packetization layer.
These packets are forwarded to and returned from the TM/TC-Frontend via
TCP/IP using the facility LAN. The TM/TC-Frontend can be operated either alone
or in combination with radio frequency up- and down-converters as an RF-SCOE for
RF tests. For this stage of EM tests, only the TM/TC-Frontend was used. It handles
the conversions between packetization layer and coding layer. On the onboard
side, these conversions are partly done by the CCSDS-Board and the OBSW.
Fig. 9.8 Entire command chain bridging the RF link: SCOS-2000 and the TM/TC-Frontend/
RF-SCOE on ground, connected via NRZ-L line signals to the CCSDS-Board decoder/encoder,
the OBC Processor-Board (SBC board), and the Real-Time Simulator via SpaceWire Router.
IRS, University of Stuttgart
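The packetization-to-coding-layer conversion performed by the TM/TC-Frontend can be illustrated structurally. This sketch only shows the CLTU framing (start sequence, codeblocks with a parity byte, tail sequence); the real parity is a BCH code defined by the CCSDS TC coding standard, so a placeholder XOR parity is used here purely to make the structure visible, and all names are assumptions.

```python
# Illustrative CLTU framing on the uplink coding layer: the TC Transfer
# Frame is cut into 7-byte codeblocks, each extended by one parity byte
# (placeholder XOR here, NOT the real CCSDS BCH code), and wrapped in the
# CLTU start and tail sequences.
START_SEQUENCE = bytes([0xEB, 0x90])
TAIL_SEQUENCE = bytes([0xC5] * 7 + [0x79])
FILL_BYTE = 0x55

def build_cltu(frame: bytes) -> bytes:
    cltu = bytearray(START_SEQUENCE)
    for off in range(0, len(frame), 7):
        block = bytearray(frame[off:off + 7])
        block.extend([FILL_BYTE] * (7 - len(block)))  # pad the last codeblock
        parity = 0
        for b in block:
            parity ^= b                               # placeholder parity
        cltu += block + bytes([parity])
    cltu += TAIL_SEQUENCE
    return bytes(cltu)
```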
High Priority Commands (HPCs) are an essential means in the safety concept of
the spacecraft system design. With HPC type 1 commands, certain emergency
shutdown and reconfiguration functions can be commanded from ground without
an OBSW running on board the spacecraft. Therefore, the CCSDS-Board
functionalities which directly forward HPC1 packets to the CDPI Common-Controller
in the PCDU were tested. The command concept in this implementation foresees
the identification of HPCs by Virtual Channels (see Fig. 1.11) and not by MAP-ID
(see also [10]).
Received HPCs are submitted directly from the CCSDS-Board to the PCDU.
This is why for this test an EM of the PCDU was required. The PCDU EM was
placed in the hardware test frame right next to the STB rack (see Fig. 1.16). It was
directly connected to the HPC UART ports of the CCSDS-Board (Fig. 9.9).
For commanding actual hardware devices, the preliminary version of the OBSW
needed to provide at least rudimentary device handling capability. Once this
requirement was met, the I/O-Board could be integrated into the STB followed by
a series of tests that firstly verified the proper SpaceWire address mapping of each
device. Furthermore, these tests served to check whether the I/O-Board input and
output signals were interpreted and generated correctly. In the STB, each
device interface was then tested by connecting an engineering model of the
respective spacecraft equipment hardware to the corresponding I/O-Board
interface.
With the possibility of connecting actual EM devices to the I/O-Board, the
assembly of the STB was completed. The OBC EM now could be operated from
the Mission Control System while the OBC either was connected to the RTS for
simulation of scenarios or to real EM hardware for interface tests. In summary the
following types of tests were performed:
9 CDPI System Testing 191
The final STB setup is depicted in Fig. 9.10. This setup is referred to as ‘‘STB
Configuration 2’’ in the test matrices in Sect. 9.3 and will remain unchanged until
the end of the mission.
Fig. 9.10 Commanding simulated S/C equipment and hardware in the loop. IRS, University
of Stuttgart
A further type of test concerns early performance tests, which were run to
evaluate the CPU load induced by the Onboard Software in realistic load cases,
more complex than mere channel access tests, boot, or debug test cases. The
tests were run on STB with the EM OBC connected to the spacecraft simulator
running attitude control scenarios for the FLP target satellite. The Attitude Control
System (ACS) software implements preliminary control functions which were
designed in Simulink, converted to C++ and integrated into the OBSW
framework.
All tests showed good results, apart from minor control precision errors due
to the early stage of the OBSW. The attitude control system provides a good
example for numeric load tasks which the system has to cope with during oper-
ation. The ACS controller software represents one application module in the
OBSW framework and accesses diverse equipment handlers for all the ACS
sensors and actuators. In addition to pure data handling functions like TC handling
and TM generation in OBSW the ACS is also performing significant floating-point
calculations and thus complements the scope of used processor functionality of the
LEON3FT, namely the extensive use of the processor’s floating point unit.
After successful debugging of the ACS test scenario, a performance analysis
was conducted in order to estimate margins of available processing time. For this
test, the CPU time of the ACS task was measured. A design guideline for the
overall system was to keep the CPU time for ACS below 33 % of the overall CPU
load. Figure 9.11 shows a CPU load result summary from multiple test scenarios
with the ACS subsystem controlling the satellite in its different operational modes
(x-Axis). As can be seen, even normal target pointing mode consumes only a bit
more than 20 % of CPU load thanks to the high CPU performance of the
LEON3FT. Only with an additional numeric filter activated is the limit reached in
the worst case. The blue bars for device handling represent the CPU load for accessing the
ACS sensor and actuator equipment. Since I/O access timings are fixed in a
common polling sequence table over all operational modes of the software, this
load is not influenced by the complexity of the ACS computations.
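The fixed polling scheme described above can be sketched as follows. The slot times, the major cycle length, and the device names in the table are illustrative assumptions, not values from the FLP OBSW; the point is only that the I/O schedule is identical in every operational mode.

```python
# Sketch: a fixed polling sequence table — I/O accesses are scheduled at
# fixed slots regardless of ACS mode, so device-handling CPU load stays
# constant. Slot assignments and cycle length are illustrative only.

POLLING_SEQUENCE = [   # (slot offset in ms, device handler name)
    (0,  "MGM"), (10, "FOG"), (20, "STR"),
    (30, "GPS"), (40, "RWL"), (50, "MGT"),
]
CYCLE_MS = 100  # hypothetical major cycle length

def handlers_due(t_ms: int):
    """Return the device handlers scheduled at time t within the cycle."""
    phase = t_ms % CYCLE_MS
    return [name for slot, name in POLLING_SEQUENCE if slot == phase]

print(handlers_due(120))  # 120 ms -> phase 20 -> ['STR']
```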
All FM components have to be operated and stored under clean room conditions.
To test these components, a so-called FlatSat assembly was built up in the IRS
clean room. This assembly in its full deployment consists of all FM components of
the satellite. For the FlatSat assembly, all satellite components are electrically
connected together on a table to check the proper operability of the complete
satellite system. It serves as a final interface test environment before equipment is
integrated into the structure of the satellite. The FlatSat starts out with the CDPI
system units, the OBC and the PCDU plus a power supply replacing the Solar
Panels, and expands until the complete S/C system is assembled.
Figure 9.12 provides an outline of the FlatSat assembly of the FLP project. For
a better overview, the figure illustrates only the subset of the overall equipment
assembly that is relevant for the CDPI testing, including the
functional connections of the main technical units. The Command and Data
Handling subsystem and the Power Subsystem are shown with their distinct
subunits. All remaining subsystems that are composed of multiple component
boxes are illustrated in an abstracted view. An additional simplification is achieved
by the illustration of only one side of the redundant component sets and conse-
quently also by leaving out cross-strappings.
Fig. 9.12 Functional overview of the FlatSat assembly. IRS, University of Stuttgart
The EGSE that is necessary for the control of the simulator and the FlatSat
assembly incorporates control applications for:
• Flight procedure execution which again was realized by a second instance of
MOIS supplied by RHEA Group
Chapter 10
The Research Target Satellite

10.1 Introduction
The FLP is the first in a series of planned satellites implemented in the frame of the
SmallSat Program of the IRS at the University of Stuttgart, Germany. It is being
developed and built primarily by PhD and graduate students, funded by the major
industry partner Astrium (Astrium Satellites GmbH and its subsidiary TESAT
Spacecom), by space agency and research facilities (German Aerospace Center,
DLR), and by other contributing universities. The project is financed by the federal
state of Baden-Württemberg, the university, and industry partners. The current
development status is assembly Phase D, with a functional test program on the
FlatSat testbench and the qualification of a structural/thermal model of the flight
structure. The flight hardware units have been built at IRS or procured; unit,
system, and OBSW tests are still ongoing.
A main project goal for the industry partners is to qualify electronic components
for space, in particular the elements of the innovative, integrated OBC/PCDU
infrastructure CDPI. For the university it is to establish the expertise and infra-
structure for development, integration, test, and operations of satellites at the IRS.
The project also improves the education of students by providing an opportunity for
hands-on experience within a challenging space project. Once in orbit, the satellite
shall be used to demonstrate new technologies and to perform Earth observation.
The FLP is three-axis stabilized and features target pointing capabilities. The
total pointing error during one pass is less than 150 arcsec and the pointing
knowledge is better than 7 arcsec. To achieve these values, state of the art star
trackers, magnetometers and fiberoptic gyros as well as GPS receivers are used to
measure the attitude. Reaction wheels and magnetotorquers are used as actuators.
As an asset, the star trackers can also be used in camera mode to search for both
Inner Earth Asteroids and Near Earth Asteroids. The FLP is not equipped with any
means for propulsion or orbit control (Table 10.1).
The satellite has been designed for a circular Sun-Synchronous Orbit (SSO) with a
Local Time of Descending Node (LTDN) between 9:30 and 11 h. As the opera-
tional lifetime of the satellite is targeted to be two years and the satellite should not
stay in orbit for more than 25 years after end of life (considering the European Code
of Conduct on Space Debris Mitigation), the desired orbital altitude is between 500
and 650 km. For de-orbiting after operational use the satellite is equipped with an
experimental De-Orbiting Mechanism (DOM) from Tohoku University, Japan.
The FLP is a cuboid with two deployable solar panels. It has an estimated total
mass of less than 130 kg. The exact mass cannot be provided before completion of
the flight harness manufacturing. Figure 10.1 shows the satellite configuration with
10 The Research Target Satellite 197
Fig. 10.1 FLP in-orbit simulation and mechanical configuration. IRS, University of Stuttgart
deployed solar panels. Its main dimensions during launch with undeployed solar
panels are 600 by 702 by 866 mm as depicted in Fig. 10.2. An adapter ring to the
launcher adapter is installed on the satellite. The depicted variant was designed to
be compliant with the PSLV piggy-back separation adapter. For alternative launch
vehicles this can still be adapted accordingly. It should be noted that the
deployable De-Orbiting Mechanism (DOM) is located inside the launch adapter
ring.
The structure of the FLP is designed to be a hybrid structure. The lower part is
made of integral aluminum parts and the upper part, where the optical payloads are
installed, consists of carbon-fiber reinforced sandwich structures which provide a
more stable alignment of the cameras due to their low thermal expansion. The
Thermal Control System (TCS) of the satellite consists of several temperature
sensors and heaters inside the satellite as well as Multi Layer Insulation and
radiators on the outside. No active cooling system is used.
[Fig. 10.2: launch configuration dimensions in mm (865.95 overall height, 600 × 702
footprint); the De-Orbiting Mechanism is plunged into the launch adapter]
The Attitude Control System (ACS) of the FLP satellite and its algorithms are
fully defined and developed, and were tested in a Matlab/Simulink environment
as described in [89, 90]. The ACS key features are:
• Damping of rotational rates after separation from the launch vehicle or in
case of emergency.
• A safe-mode in which power supply is ensured by utilizing reliable equipment
only.
• Coarse attitude determination by using reliable sensors.
• The capability of pointing the satellite to any given target with an absolute
pointing accuracy of 150 arcsec.
• A Kalman Filter for increased accuracy of rate and attitude measurements.
• Propagation of orbit and magnetic field models.
• State estimation and rating of sensor data.
Table 10.2 lists all sensors of the FLP satellite that are used for the attitude
control system: two redundant magnetometer systems (MGM), four fiberoptic
gyros (FOG), eight sun sensors (SUS) distributed all over the satellite, a GPS
system with three receivers and antennas, and a high precision star tracker (STR)
system with two camera units.
Table 10.3 gives the key parameters of the FLP satellite’s actuator system
which is composed of three magnetotorquers (MGT) and four reaction wheels
(RWL).
The FLP satellite control system can be operated in six different modes (see
Figs. 10.3 and 10.4).
For the communication with the ground stations, the TT&C system uses
omnidirectional antennas to receive telecommands and transmit telemetry in the
commercial S-Band. All TM/TC packets are encoded in standard protocols
according to the Consultative Committee for Space Data Systems (CCSDS).
Payload data downlink is also handled in the S-Band range, on an amateur radio
frequency with a separate onboard transmitter.
Fig. 10.5 FLP Satellite electrical block diagram. IRS, University of Stuttgart
204 H.-P. Röser
The satellite is powered by three solar panels equipped with triple junction GaInP2/
GaAs/Ge solar cells. On the center panel, there is a further string of more advanced
triple junction cells, of which the on-orbit performance shall be verified.
The maximum achievable power is estimated to be about 230 W. The power
system is controlled by a Power Control and Distribution Unit (PCDU), which
includes the Battery Charge Regulators and provides an unregulated bus voltage of
19–25 V to the instruments. A battery assembled from off-the-shelf Lithium-Iron-
Phosphate cells is used to provide electrical power during eclipse periods
(Fig. 10.5).
To deploy the solar panels after launch, no pyrotechnics are used. Instead, a
melting wire mechanism has been developed by IRS. For this mechanism, the bolts
retaining the solar panels during launch are attached to the satellite using a split
pod, which is held together by the melting wire. After the separation from the
upper stage of the launcher, the wire is melted by heat resistors and the panels are
deployed.
The De-Orbiting Mechanism is stowed within the launch adapter ring until its
deployment at the end of the satellite mission. It is a flat square sail and is
deployed using a non-explosive bi-metal switch.
For the launch supplier it is essential that the satellite is designed to be launched
in off-mode and boots only after the launcher separation straps are cut. The Launch
and Early Orbit Phase (LEOP) autosequence will then be performed automatically.
Thus during the launch the satellite will not emit any RF signal.
Chapter 11
CDPI Assembly Annexes and Data Sheets
Overview
For debug and test of the OBC Processor-Boards—both EM and FM models—a
small PCB card, the DSU/Ethernet Interface (DEI) card, was designed by Aeroflex
Colorado Springs to interface to the LEON3FT DSU and Ethernet ports. The DEI
card is also valuable for engineers to verify that the Processor-Boards have arrived
unharmed at the end user's premises.
The DEI card is used for essentially three purposes:
• Resetting the SBC:
During software development, code running on the processor is typically not
mature, and therefore crashes can and do occur quite often. The external reset is
valuable as it allows the user to reset the processor without cycling power.
This feature was extensively used during OBSW testing with the OBC EM on
the Satellite Test Bed. Please refer to Figs. 1.16 and 9.3.
• DSU Interface:
The LEON3FT has a Debug Support Unit that gives the user the ability to
modify registers, load programs and generally interface to the LEON3FT using
simple commands running out of a DOS shell.
This functionality has been used with both EM Processor-Boards in the STB as
well as with the complete OBC FM in the FlatSat environment. Please refer to
Figs. 1.16 and 9.12.
• Ethernet Interface:
The single-ended Ethernet signals are routed to the 44 pin connector on the SBC,
where they are connected to an Ethernet PHY device on the DEI. The
Ethernet connector on the DEI is then used to connect to the Ethernet port on the
LEON3FT of the SBC.
DEI Connectors
There are four connectors on the DEI card. Type and function are as follows
(please also refer to Fig. 11.1):
• 44 Pin D-Sub Female:
Used to interface to the SBC and contains all of the signals to power, reset and
interface to Ethernet and the DSU.
• Ethernet Connector:
Interface to any 10/100 Mb Ethernet network.
• DSU Molex:
Standard 14 pin connector used to interface to the LEON3FT DSU using a
Xilinx JTAG Interface pod.
• Nine pin D-Sub Female Connector:
3.3 V Power and Ground input from any bench power supply.
Block Diagram and Dimensions
Figure 11.1 shows the general layout of the DEI card as well as the dimensions.
The card is designed to connect to the SBC with or without a cable.
[Fig. 11.1: DEI card layout with DSU Molex connector, external reset switch, and
9 pin D-Sub connector; card dimensions 63.56 mm × 76.27 mm]
Power Connector
The DEI card gets power through a 9 pin D-sub connector. These are readily
available through many electronics suppliers (Table 11.2).
[Figure: DEI test setup: 3.3 V ±5 % input via the 9 pin D-Sub connector to the
DSU/Ethernet Interface Card, which connects to the Stuttgart SBC EM through the
44 pin D-Sub connector; DSU Molex, Ethernet, and USB links run to a personal
computer]
• The most significant bit of an array is located to the left, carrying index number
zero, and is transmitted first.
• An octet comprises eight bits.
The applicable standards [17, 25] specify a Reed-Solomon E = 16 (255, 223) code
resulting in the frame lengths and code block sizes listed in Table 11.10.
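Assuming the standard RS(255,223) E = 16 parameters cited above, the codeblock sizes scale with the interleave depth as in this small sketch (the values for Table 11.10 itself are not reproduced here; the function name is illustrative):

```python
# Sketch: Reed-Solomon (255,223) E=16 codeblock sizing as specified in
# CCSDS 131.0-B / ECSS-E-ST-50-01C. For interleave depth I, a codeblock
# carries I*223 information octets and I*32 check octets.

def rs_codeblock(interleave_depth: int) -> dict:
    k, n = 223, 255          # RS(255,223): 223 data symbols, 32 check symbols
    data = k * interleave_depth
    check = (n - k) * interleave_depth
    return {"frame_octets": data,
            "check_octets": check,
            "codeblock_octets": data + check}

for depth in (1, 5):
    print(depth, rs_codeblock(depth))
# depth 1 -> 223 data + 32 check = 255 octets
# depth 5 -> 1115 data + 160 check = 1275 octets
```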
The Command Link Control Word (CLCW) can be transmitted as part of the
Operation Control Field (OCF) in a Transfer Frame Trailer. The CLCW is spec-
ified in [20, 35] and is listed in Table 11.16.
The Space Packet is defined in the CCSDS [27, 28] recommendations and is listed
in Table 11.17.
Table 11.17 layout (Space Packet):
• Primary Header:
  – Packet Version Number, bits 0:2 (3 bits)
  – Packet Type, bit 3 (1 bit)
  – Secondary Header Flag, bit 4 (1 bit)
  – Application Process Id, bits 5:15 (11 bits)
  – Sequence Flags, bits 16:17 (2 bits)
  – Sequence Count, bits 18:31 (14 bits)
  – Data Length, bits 32:47 (16 bits)
• Packet Data Field:
  – Packet Secondary Header (optional, variable)
  – User Data (variable)
  – Packet Error Control (optional, variable)
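The primary header fields tabulated above can be packed and parsed, for example, as follows. This is a sketch based on CCSDS 133.0-B; the helper names are illustrative, and note that the Data Length field conventionally holds the number of data-field octets minus one.

```python
# Sketch: packing and parsing the 6-octet CCSDS Space Packet primary header
# with the field widths from Table 11.17 (CCSDS 133.0-B).
import struct

def pack_primary_header(version, pkt_type, sec_hdr, apid,
                        seq_flags, seq_count, data_len):
    """data_len is the CCSDS Packet Data Length field: octets in the
    packet data field minus one."""
    word0 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | apid
    word1 = (seq_flags << 14) | seq_count
    return struct.pack(">HHH", word0, word1, data_len)

def parse_primary_header(hdr: bytes) -> dict:
    word0, word1, data_len = struct.unpack(">HHH", hdr[:6])
    return {
        "version": word0 >> 13,
        "type": (word0 >> 12) & 1,
        "sec_hdr_flag": (word0 >> 11) & 1,
        "apid": word0 & 0x7FF,          # 11-bit Application Process Id
        "seq_flags": word1 >> 14,
        "seq_count": word1 & 0x3FFF,    # 14-bit sequence count
        "data_length": data_len,
    }

hdr = pack_primary_header(0, 1, 0, 0x42, 0b11, 17, 9)
print(parse_primary_header(hdr))
```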
The asynchronous bit serial interface complies with the data format defined in
EIA-232. It also complies with the data format and waveform shown in Table 11.18
and Fig. 11.3. The interface is independent of the transmitted data contents.
Positive logic is used for the data bits. The number of stop bits can optionally be
either one or two. The parity bit can optionally be included.
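The framing of one character on such an interface can be sketched bit by bit: one start bit, eight data bits transmitted LSB first, an optional parity bit, and one or two stop bits. The function below is an illustration of this general EIA-232-style framing, not of the specific waveform in Fig. 11.3.

```python
# Sketch: bit-level framing of one character on an asynchronous serial
# interface: start bit ('0'), eight data bits LSB first, optional parity,
# and one or two stop bits ('1', the idle line level).

def frame_octet(octet: int, parity: str = "none", stop_bits: int = 1) -> list:
    data = [(octet >> i) & 1 for i in range(8)]   # LSB transmitted first
    bits = [0] + data                             # start bit, then data
    if parity == "even":
        bits.append(sum(data) % 2)                # make total ones even
    elif parity == "odd":
        bits.append(1 - sum(data) % 2)
    bits += [1] * stop_bits
    return bits

print(frame_octet(0x55, parity="even", stop_bits=2))
# 0x55 -> start 0, data 1,0,1,0,1,0,1,0, parity 0, stop 1,1
```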
Table 11.19 CCSDS space packet, in RMAP write command, in SpaceWire packet:
• SpaceWire Packet: Destination Address | Cargo | EOP
• RMAP Write Command (carried as cargo):
  – Target SpaceWire Address (optional, variable)
  – Target Logical Address (1 byte)
  – Protocol Identifier (1 byte)
  – Instruction (1 byte)
  – Key (1 byte)
  – Reply Address (optional, variable)
  – Initiator Logical Address (1 byte)
  – Transaction Identifier (2 bytes)
  – Extended Address (1 byte)
  – Address (4 bytes)
  – Data Length (3 bytes)
  – Header CRC (1 byte)
  – Data (variable; carries the CCSDS Space Packet)
  – Data CRC (1 byte)
  – EOP (token)
The design receives and generates the waveform formats as shown in Fig. 11.3.
[Fig. 11.3: serial waveform with break, start/stop delimiters, and clock]
See Tables 11.21, 11.22, 11.23, 11.24, 11.25, 11.26, 11.27, and 11.28.
31:23 RESERVED
22 Operational Control Field Bypass (OCFB)—CLCW implemented externally, no OCF
register
21 Encryption/Cipher Interface (CIF)—interface between protocol and channel coding sub-
layers
20 Advanced Orbiting Systems (AOS)—AOS Transfer Frame generation implemented
19 Frame Header Error Control (FHEC)—frame header error control implemented, only if
AOS also set
18 Insert Zone (IZ)—insert zone implemented, only if AOS also set
17 Master Channel Generation (MCG)—master channel counter generation implemented
16 Frame Secondary Header (FSH)—frame secondary header implemented
15 Idle Frame Generation (IDLE)—idle frame generation implemented
14 Extended VC Cntr (EVC)—extended virtual channel counter implemented (ECSS)
13 Operational Control Field (OCF)—CLCW implemented internally, OCF register
12 Frame Error Control Field (FECF)—Transfer Frame CRC implemented
11 Alternative ASM (AASM)—alternative attached synchronization marker implemented
10:9 Reed-Solomon (RS)—Reed-Solomon encoder implemented, ‘‘01’’ E = 16, ‘‘10’’ E = 8,
‘‘11’’ E = 16 & 8
8:6 Reed-Solomon Depth (RSDEPTH)—Reed-Solomon interleave depth -1 implemented
5 Turbo Encoder (TE)—turbo encoder implemented (reserved)
4 Pseudo-Randomizer (PSR)—Pseudo-Randomizer implemented
3 Non-Return-to-Zero (NRZ)—non-return-to-zero—mark encoding implemented
2 Convolutional Encoding (CE)—convolutional encoding implemented
1 Split-Phase Level (SP)—split-phase level modulation implemented
0 Sub Carrier (SC)—sub carrier modulation implemented
31:20 RESERVED
19 Encryption/Cipher Interface (CIF)—enable external encryption/cipher interface between
sub-layers
18:17 Clock Selection (CSEL)—selection of external telemetry clock source (application
specific)
16 Alternative ASM (AASM)—alternative attached synchronization marker enable. When
enabled the value from the GRTM Attached Synchronization Marker register is used, else
the standardized ASM value 0x1ACFFC1D is used
15 Reed-Solomon (RS)—Reed-Solomon encoder enable
14:12 Reed-Solomon Depth (RSDEPTH)—Reed-Solomon interleave depth -1
11 Reed-Solomon Rate (RS8)—‘0’ E = 16, ‘1’ E = 8
10:8 RESERVED
7 Pseudo-Randomizer (PSR)—Pseudo-Randomizer enable
6 Non-Return-to-Zero (NRZ)—non-return-to-zero—mark encoding enable
5 Convolutional Encoding (CE)—convolutional encoding enable
4:2 Convolutional Encoding Rate (CERATE):
"00-" rate 1/2, no puncturing
"01-" rate 1/2, punctured
"100" rate 2/3, punctured
"101" rate 3/4, punctured
"110" rate 5/6, punctured
"111" rate 7/8, punctured
1 Split-Phase Level (SP)—split-phase level modulation enable
0 Sub Carrier (SC)—sub carrier modulation enable
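A configuration word for this register can be assembled from the bit positions listed above, for example as in this sketch (the function name and default values are illustrative, not part of the data sheet):

```python
# Sketch: assembling the telemetry encoder configuration word from the bit
# positions listed above (RS enable at bit 15, interleave depth-1 at 14:12,
# RS8 at 11, PSR at 7, NRZ at 6, CE at 5, CERATE at 4:2, SP at 1, SC at 0).

def tm_config(rs=0, rs_depth=1, rs8=0, psr=0, nrz=0,
              ce=0, cerate=0, sp=0, sc=0) -> int:
    word = 0
    word |= (rs & 1) << 15
    word |= ((rs_depth - 1) & 0x7) << 12   # field stores depth - 1
    word |= (rs8 & 1) << 11
    word |= (psr & 1) << 7
    word |= (nrz & 1) << 6
    word |= (ce & 1) << 5
    word |= (cerate & 0x7) << 2
    word |= (sp & 1) << 1
    word |= sc & 1
    return word

# RS(255,223) E=16, depth 5, pseudo-randomizer, rate-1/2 convolutional coding
print(hex(tm_config(rs=1, rs_depth=5, psr=1, ce=1, cerate=0b000)))  # 0xc0a0
```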
CWTY VNUM STAF CIE VCI RESERVED NRFA NBLO LOUT WAIT RTMI FBCO RTYPE RVAL
31 CWTY (Control Word Type)
30:29 VNUM (CLCW Version Number)
28:26 STAF (Status Fields)
25:24 CIE (COP In Effect)
23:18 VCI (Virtual Channel Identifier)
17:16 Reserved (PSS/ECSS requires ‘‘00’’)
15 NRFA (No RF Available)
Write: Don’t care
Read: Based on discrete inputs
14 NBLO (No Bit Lock)
Write: Don’t care
Read: Based on discrete inputs.
13 LOUT (Lock Out)
12 WAIT (Wait)
11 RTMI (Retransmit)
10:9 FBCO (FARM-B Counter)
8 RTYPE (Report Type)
7:0 RVAL (Report Value)
Power-up default: 0x00000000
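The CLCW bit layout listed above can be decoded mechanically from the register word, as in this sketch (a generic illustration of the field extraction, not the flight software):

```python
# Sketch: decoding the 32-bit CLCW register using the bit layout above.

CLCW_FIELDS = [            # (name, msb, lsb) per the table above
    ("CWTY", 31, 31), ("VNUM", 30, 29), ("STAF", 28, 26), ("CIE", 25, 24),
    ("VCI", 23, 18), ("RESERVED", 17, 16), ("NRFA", 15, 15), ("NBLO", 14, 14),
    ("LOUT", 13, 13), ("WAIT", 12, 12), ("RTMI", 11, 11), ("FBCO", 10, 9),
    ("RTYPE", 8, 8), ("RVAL", 7, 0),
]

def decode_clcw(word: int) -> dict:
    return {name: (word >> lsb) & ((1 << (msb - lsb + 1)) - 1)
            for name, msb, lsb in CLCW_FIELDS}

# Example word: virtual channel 2, lockout set, report value 0x2A
clcw = (2 << 18) | (1 << 13) | 0x2A
print(decode_clcw(clcw))
```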
The following register provides to the OBSW the RF-Available flag and the
Bit-Lock flag as coming from the receivers.
The following register is essential for the OBSW to detect overrun and FIFO
full errors:
The following register must be initialized by the OBSW with a memory address
representing the start address of the TC buffer:
31 24 23 0
RxRdPtrUpper RxRdPtrLower
31:24 10-bit upper address pointer
Write: Don’t care
Read: This pointer = ASR[31..24]
23:0 24-bit lower address pointer
This pointer contains the current RX read address. This register is to be incremented with
the actual amount of bytes read
Power-up default: 0x00000000
31 24 23 0
RxWrPtrUpper RxWrPtrLower
31:24 10-bit upper address pointer
Write: Don’t care
Read: This pointer = ASR[31..24]
23:0 24-bit lower address pointer
This pointer contains the current RX write address. This register is incremented with the
actual amount of bytes written
Power-up default: 0x00000000
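From the read and write pointers above, the OBSW can derive how many received bytes are pending. The sketch below assumes the TC buffer behaves as a circular buffer starting at the ASR base address; the buffer size and function name are illustrative assumptions.

```python
# Sketch: deriving the RX fill level from the read and write pointers.
# Assumes a circular TC buffer of buf_size bytes starting at the ASR base
# address; both pointers hold absolute addresses in that region.

def rx_bytes_pending(rd_ptr: int, wr_ptr: int, base: int, buf_size: int) -> int:
    rd = (rd_ptr - base) % buf_size
    wr = (wr_ptr - base) % buf_size
    return (wr - rd) % buf_size    # 0 means the buffer is empty

BASE, SIZE = 0x400000, 0x1000      # hypothetical buffer base and size
print(rx_bytes_pending(BASE + 0xFF0, BASE + 0x010, BASE, SIZE))  # wrapped: 32
```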
Legend:
[1]. The global system reset caused by the SRST-bit in the GRR-register results in
the following actions:
• Initiated by writing a '1'; the bit gives '0' on read-back when the reset was
successful.
• No need to write a ‘0’ to remove the reset.
• Unconditional, meaning there is no need to check or disable anything in
order for this reset function to execute correctly.
• Could of course lead to data-corruption coming/going from/to the reset core.
• Resets the complete core (all logic, buffers & register values).
• Behaviour is similar to a power-up.
Note that the above actions require that the HRESET signal is fed back
inverted to HRESETn, and the CRESET signal is fed back inverted to
CRESETn.
• The Coding Layer is not reset.
[2]. The FAR register supports the CCSDS/ECSS standard frame lengths
(1024 octets), requiring an 8 bit CAC field instead of the 6 bits specified for
PSS. The two most significant bits of the CAC will thus spill over into the
‘‘LEGAL/ILLEGAL’’ FRAME QUALIFIER field, Bit [26:25]. This is only
the case when the PSS bit is set to ‘0’.
[3]. Only inputs 0 through 3 are implemented.
[4]. The channel reset caused by the CRST-bit in the COR-register results in the
following actions:
• Initiated by writing a '1'; the bit gives '0' on read-back when the reset was
successful.
• No need to write a ‘0’ to remove the reset.
• All other bits in the COR are ignored when the CRST-bit is set during a
write, meaning that the value of these bits has no impact on the
register value after the reset.
• Unconditional, meaning there is no need to check or disable anything in
order for this reset function to execute correctly.
• Could of course lead to data-corruption coming/going from/to the reset
channel.
• Resets the complete channel (all logic, buffers & register values),
except the ASR-register of that channel, which retains its value.
• All read- and write-pointers are automatically re-initialized and point to the
start of the ASR-address.
• All registers of the channel (except the ones described above) get their
power-up value.
• This reset shall not cause any spurious interrupts.
Note that the above actions require that the CRESET signal is fed back
inverted to CRESETn.
• The Coding Layer is not reset.
[5]. These bits are sticky bits which means that they remain present until the
register is read and that they are cleared automatically by reading the register.
[6]. The value of the pointers depends on the content of the corresponding
Address Space Register (ASR).
During a system reset, a channel reset or a change of the ASR register, the
pointers are recalculated based on the values in the ASR register.
The software has to take care (when programming the ASR register) that the
pointers never have to cross a 16MByte boundary (because this would cause
an overflow of the 24-bit pointers).
It is not possible to write an out of range value to the RRP register. Such
access will be ignored with an HERROR.
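The 16 MByte constraint on the ASR-programmed buffer can be checked on the software side before writing the register, as in this sketch (the function name is illustrative; the boundary follows from the 24-bit pointer width described above):

```python
# Sketch: software-side check that a TC buffer programmed into the ASR
# never makes the 24-bit pointers cross a 16 MByte boundary.

BOUNDARY = 1 << 24  # 16 MByte, the span addressable by the 24-bit pointers

def asr_region_ok(start: int, length: int) -> bool:
    """True if [start, start+length) stays within one 16 MByte page."""
    return start // BOUNDARY == (start + length - 1) // BOUNDARY

print(asr_region_ok(0x00FFF000, 0x8000))   # crosses 0x01000000 -> False
print(asr_region_ok(0x01000000, 0x8000))   # entirely in second page -> True
```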
11.6 OBC Unit CAD Drawing
See Fig. 11.4.
[Fig. 11.4: OBC unit CAD drawing showing the unit dimensions and the locations
of the external connectors J1–J12]
I/O-Board Connector D:
• I/O-Board internal nomenclature D
• External instantiations J5/J11 according to annex Sect. 11.6
Generic Pin Allocations (IF Type)
See Fig. 11.5.
Fig. 11.6 I/O-Board connector D—target satellite specific interfaces. 4Links Ltd.
I/O-Board Connector E
• I/O-Board internal nomenclature E
• External instantiations J6/J12 according to annex Sect. 11.6
Generic Pin Allocations (IF Type)
See Fig. 11.7.
Fig. 11.8 I/O-Board connector E—target satellite specific interfaces. 4Links Ltd.
The lines routed via external connector J1 (see Fig. 11.2) connect to OBC
Processor-Board N. The lines routed via external connector J7 connect to OBC
Processor-Board R.
Power-Board Power Connector
• Internal nomenclature J3
• External instantiations J2/J8 according to annex Sect. 11.6 Table 11.47.
The lines routed via external connector J2 connect to OBC Boards N. The lines
routed via external connector J8 connect to OBC Boards R.
Power-Board PPS Signal Connector
• Internal nomenclature J4
• External instantiations J3/J9 according to annex Sect. 11.6.
The lines routed via external connector J3 connect to OBC Processor-Board N.
The lines routed via external connector J9 connect to OBC Processor-Board R.
The connector pin assignments of the PCDU are highly mission specific, driven
by the spacecraft design. Therefore, only an overview of which PCDU connector
routes which type of signals is provided here. Only for the connectors J11 and J12
are the detailed assignments given further below, since they
route
The design of the PCDU includes a total of 27 fuses and 77 power switches plus 2
special switches for high-power loads. Table 11.51 provides an
overview of the assignment of the fuses and switches to the equipment of the FLP
target spacecraft.
01 02 OBC Processor-Board R
03
02 04 OBC I/O-Board N
05
03 06 OBC I/O-Board R
07
04 08 OBC CCSDS-Board N
09
05 10 OBC CCSDS-Board R
11
06 12 TC receiver 0
07 13 TC receiver 1
11 26 RWL 3
27 STR R
28 FOG 3
12 29 FOG 0
30 RWL 0
13 31 FOG 1
32 RW 1
14 33 FOG 2
34 RW 2
15 35 MGM 0
16 36 MGM 1
17 37 Camera payload: PAMCAM
38 MGT unit R
39 GPS electronics 1
18 40 GPS electronics 0
41 MGT unit N
42 GPS electronics 2
24 67 SA retaining mechanism N
68
25 69 SA retaining mechanism R
70
26 71 De-orbiting mechanism
72
73 Payload: AIS antenna
74
75 Payload: AIS receiver
76
References
7. Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sandi A.; Witt, Rouven; Röser, Hans-Peter:
Common board design for the OBC I/O unit and the OBC CCSDS unit of the Stuttgart
University Satellite "Flying Laptop", Data Systems in Aerospace, DASIA 2011 Conference,
17–20 May 2011, San Anton, Malta
8. Eickhoff, Jens; Stevenson, Dave; Habinc, Sandi; Röser, Hans-Peter: University Satellite
featuring latest OBC Core & Payload Data Processing Technologies, Data Systems in
Aerospace, DASIA 2010 Conference, Budapest, Hungary, June 2010
11. http://www.spacewire.esa.int/content/Home/HomeIntro.php
12. ECSS-E-ST-50-12C (31 July 2008) SpaceWire - Links, nodes, routers and networks
13. ECSS-E-ST-50-51C (5 February 2010) SpaceWire protocol Identification
14. ECSS-E-ST-50-52C (5 February 2010) SpaceWire - Remote memory access protocol
15. ECSS-E-50-12C SpaceWire cabling
16. ECSS-E-ST-50C Communications
17. ECSS-E-ST-50-01C Space data links - Telemetry synchronization and channel coding
18. ECSS-E-ST-50-02C Ranging and Doppler tracking
19. ECSS-E-ST-50-03C Space data links - Telemetry Transfer Frame protocol
20. ECSS-E-ST-50-04C Space data links - Telecommand protocols, synchronization and channel
coding
21. ECSS-E-ST-50-05C Radio frequency and modulation
22. ECSS-E-70-41A Ground systems and operations - Telemetry and telecommand packet
utilization
23. Consultative Committee for Space Data Systems: CCSDS Recommended Standards, Blue
Books, available online at http://public.ccsds.org/publications/BlueBooks.aspx
24. CCSDS 130.0-G-2 CCSDS layer conversions
25. CCSDS-131.0-B-1 TM Synchronization and Channel Coding
26. CCSDS-132.0-B-1 TM Space Data Link Protocol
27. CCSDS 133.0-B-1 Space Packet Protocol
28. CCSDS-133.0-B-1-C1 Encapsulation Service Technical Corrigendum 1
29. CCSDS-135.0-B-3 Space Link Identifiers
30. CCSDS-201.0 Telecommand - Part 1 - Channel Service, CCSDS 201.0-B-3, June 2000
31. CCSDS-202.0 Telecommand - Part 2 - Data Routing Service, CCSDS 202.0-B-3, June 2001
32. CCSDS-202.1 Telecommand - Part 2.1 - Command Operation Procedures, CCSDS 202.1-B-
2, June 2001
33. CCSDS-203.0 Telecommand - Part 3 - Data Management Service, CCSDS 203.0-B-2, June
2001
59. 4Links Ltd.: SpaceWire Interface Unit for Interfacing to Avionics, Payloads, and TM/TC
units, User Manual for FLP IO, FM SIU B-012-PPFLPIO
60. Everspin MRAM Brochure: http://everspin.com/PDF/MSG-14349_MRAM_Sales_Bro.pdf
61. Everspin MR4A16b Data Sheet: http://everspin.com/PDF/EST_MR4A16B_prod.pdf
62. Aeroflex Gaisler AB: CCSDS TM / TC and SpaceWire FPGA Data Sheet and User’s Manual
GR-TMTC-0004 July 2012, Version 1.2
63. 4Links Ltd.: SpaceWire Interface Unit for Interfacing to Avionics, Payloads, and TM/TC
units, User Manual for FLP CCSDS, FM SIU B-012-PPFLPCCSDS
64. GRLIB IP Library User’s Manual, Aeroflex Gaisler http://www.aeroflex.com/gaisler
65. GRLIB IP Core User’s Manual, Aeroflex Gaisler http://www.aeroflex.com/gaisler
66. Spacecraft Data Handling IP Core User’s Manual, Aeroflex Gaisler http://www.aeroflex.
com/gaisler
67. AMBA Specification, Rev 2.0, ARM IHI 0011A, Issue A, ARM Limited
68. Radiation-Tolerant ProASIC3 Low Power Space-Flight Flash FPGAs, 51700107-1/11.09,
Revision 1, November 2009, Actel Corp
69. ProASIC3L Low Power Flash FPGAs, 51700100-9/2.09, February 2009, Actel Corp
70. ProASIC3E Flash Family FPGAs, 51700098-9/8.09, August 2009, Actel Corp
72. Manufacturing Data Package for IRS OBC internal harness HEMA Kabeltechnik GmbH &
Co. KG, 2012
73. Schuh: Konstruktion und Analyse eines Struktur-Thermal Modells des Onboard-Computers
für den Kleinsatelliten Flying Laptop, Study Thesis, IRS, 2011
74. Ley: Handbuch der Raumfahrttechnik Hanser Verlag, 2008
75. http://www.mincoglobal.de/uploadedFiles/Products/Thermofoil_Heaters/Kapton_Heaters/hs202b-
hk.pdf
78. Gaget1-ID/160-8040 data sheet: RWE Space Solar Power GmbH, Gaget1-ID/160-8040 Data
Sheet, HNR 0002160-00, 2007
79. Battery data sheet: A123 Systems, Inc.: Nanophosphate High Power Lithium Ion Cell
ANR26650M1B, MD100113-01, 2011
80. Test String data sheet: RWE Space Solar Power GmbH, RWE3G-ID2*/150-8040 Data Sheet,
2005
81. NASA radiation: PD-ED-1258: Space Radiation Effects on Electronic Components in Low-
Earth Orbit, April 1996, NASA - Johnson Space Center (JSC)
82. Wertz, J. R.; Larson, W. J.: Space Mission Analysis and Design, 3rd ed., Microcosm Press,
1999, ISBN 978-1881883104
83. Uryu, A. N.: Development of a Multifunctional Power Supply System and an Adapted
Qualification Approach for a University Small Satellite, Dissertation, University of Stuttgart,
Stuttgart, Germany, Institute of Space Systems, 2012
84. PCDU Microcontroller data sheet: RENESAS Electronics, Renesas 32-Bit RISC
Microcomputer, SuperH RISC engine Family/SH7040 Series, hardware manual, Issue 6.0,
2003
85. VECTRONIC Aerospace GmbH: Interface Control Document & Operation Manual for
Power Control and Distribution Unit Type VPCDU-1, Project IRS-FLP TD-VAS-PCDU-
FLP-ICD16.doc Issue 6, 12.12.2011
86. Brandt, Alexander; Kossev, Ivan; Falke, Albert; Eickhoff, Jens; Röser, Hans-Peter:
Preliminary System Simulation Environment of the University Micro-Satellite Flying
Laptop, 6th IAA Symposium on Small Satellites for Earth Observation, German Aerospace
Center (DLR), 23–26 April 2007, Berlin, Germany
87. Fritz, Michael: Hardware und Software Kompatibilitätstests für den Bordrechner eines
Kleinsatelliten, PhD thesis, Institute of Space Systems, 2012
88. http://www.egos.esa.int/portal/egos-web/products/MCS/SCOS2000/
89. Grillmayer, Georg: An FPGA based Attitude Control System for the Micro-Satellite Flying
Laptop. PhD thesis, Institute of Space Systems, 2008
90. Zeile, Oliver: Entwicklung einer Simulationsumgebung und robuster Algorithmen für das
Lage- und Orbitkontrollsystem der Kleinsatelliten Flying Laptop und PERSEUS. PhD thesis,
Institute of Space Systems, 2012
Index
A
A3PE3000L FPGA, 21, 44, 58
ADuM14xx Isolator, 51
ADuM54xx Isolator, 51
AMBA bus, 15
Analog data handling, 152
Analog RIU, 160
Analog-to-Digital Converter, 160
ARM, 15
ASIC, 9
Assembly, Integration and Tests, 176
Attitude Control System, 192
Authentication Unit, 85
Autonomous reconfiguration, 163

B
Backplane, 4, 119
Bandwidth allocation, 75
Base plate, 136
Baseplate, 135, 140
Battery, 152
Battery Charge Regulator, 157
Battery power, 156
Battery status, 162
Battery survival heater, 157
BCR, 157
Bi-stable relay, 157
Bleeder resistor, 107
Board identity, 47
Boot-up sequence, 156
Breadboard Model, 18, 174, 180
Buffer chips, 47

C
CAD software, 138
CAN bus, 15
CATIA, 138
CCSDS, 5, 65, 71, 208
CCSDS protocol, 175
Channel Acquisition Data Unit (CADU), 18, 60, 62, 183
Chip Enable, 34, 36, 38
CLCW, 63, 68, 70, 75, 84, 85, 93, 97, 98, 212
Cleanroom, 175
Clock, 6, 45
Clock divider, 77
Clock signal, 188
Clock strobe, 104
CLTU, 17, 60, 62, 86, 165, 183
CMOS, 57
Codeblock decoding, 87
Codeblock rejection, 87
Coding Layer, 87
Cold redundancy, 144, 162
Combined Data and Power Management Infrastructure, 1, 2, 14, 60, 86, 151, 161, 182, 199
Combined-Controller, 9, 11, 18, 22, 44, 141, 161, 162
Command Link Control Word, 75, 84, 85, 93, 97, 212
Command Link Transmission Unit, 86, 165, 183
Command Pulse Decoding Unit, 8, 9, 85, 164, 165
Communication tests, 175
Compact PCI, 15
Conductive coupling, 142, 144
Connectors, 56
Consultative Committee for Space Data Systems, 65, 71, 208
Control loop, 160, 162
Convolutional encoding, 60, 63, 76
CPDU, 86, 164, 165
CPU load, 191
E
ECSS, 65, 71
EDAC, 7, 32, 35, 36, 70
EEPROM, 31
Eigenfrequency, 139
Electromagnetic Interference, 51
EMC, 124, 134
Engineering Model, 174, 182
EPPL, 148
Error Detection and Correction, 36
ESA Procedures, Standards and Specifications, 66, 71
ESATAN-TMS, 141
Ethernet, 15, 28, 32, 40, 206
European Code of Conduct on Space Debris Mitigation, 196
European Cooperation on Space Standardization, 65, 71
European Preferred Parts List, 148
External reset, 40

F
Failure Detection, Isolation and Recovery, 2, 6, 174
Failure tolerance, 104
FDIR, 65, 164, 174
FEM model, 138
FEM simulation, 138
Field Programmable Gate Array, 31
FIFO, 89
Flash-memory, 31
FlatSat, 179, 181, 185, 193

H
Hardware command, 63, 64, 69, 96, 99
Heater circuits, 114
Heaters, 127, 140, 146
Heat-up duration, 149
HF influences, 135, 137
High Priority Command, 5, 7, 11, 18, 22, 44, 60, 61, 96, 155, 162, 164, 165, 187, 189
History log function, 168
HK5591 Heater, 148
Hot redundancy, 144, 162, 164
Housekeeping data, 5
Housekeeping telemetry, 46
HPC command sequence, 166
HPC command structure, 166
HPC frame header, 165

I
Idle Frame, 62, 66, 75, 187
IIC, 4, 45, 50, 51, 55
Inrush current, 110
Insulation test, 129
Inter-board harness, 119
International Telecommunication Union, 168
International Traffic in Arms Regulations, 27, 176, 185
Isolated group, 51, 52, 57

J
JTAG Interface, 4, 53, 57, 100, 104, 115
O
OBC CCSDS-Board, 12
OBC heaters, 104
OBC housing, 104, 114, 148

Q
Quasi-static analysis, 139
Quasi-static design load, 137
R
Radiation tolerance, 4, 31, 169
Radiator, 140
RAM, 6
Random vibration, 140
Real Time Operating System, 2, 29, 33
Realtime Simulator, 186
Reconfiguration, 8, 11, 152, 161, 163, 177, 182
Reconfiguration Unit, 7–9, 161
Redundancy, 6, 23, 161, 182, 183
Redundant heaters, 147
Reed-Solomon encoding, 60, 63, 76, 210
Reflection, 124
Reliability, 183
Remote Interface Unit, 5, 21, 44, 160, 167
Remote Memory Access Protocol, 18, 43, 45–47, 60, 61, 70, 213
Resistance test, 130
Retention test, 129
RF available indicator, 69
RMAP addresses, 50
RMAP memory address, 45
RMAP packet, 55
RMAP Verify Bit, 50
RS422, 18, 32, 39, 41, 45, 51, 61, 65, 155, 165
RS485, 45, 51
RTEMS, 17, 18, 60

S
Safeguard Memory, 6
Safe Mode, 9, 156, 160, 163
Safety, 183
Satellite resonance frequency, 139
Satellite Test Bed, 23, 177–179, 183, 185
Schmitt trigger, 51
SCOS, 175
SDRAM, 31
Segment, 98
Segment Data Field, 69
Senior engineer, 177
Sensor data, 161
Separation detection, 167
Serial differential interface, 51
Service Interface, 4, 104, 115
SH7045 microcontroller, 153
Shaker, 178
Shunt resistor, 109
Sine vibration, 140
Single Board Computer, 4, 14, 28, 29
Single Event Latch-up, 58
Single-Event Upset, 8, 44, 58
Skin connector, 4, 115, 117
Sleep Bit, 35
Software-based decoder, 59
Solar panel, 152, 157, 162
Solar panel deployment, 167
Space Packet, 61, 62, 69, 74, 98, 212
Space Packet Protocol layer, 69
Spacecraft Identifier, 68, 187
Spacecraft status, 46
SpaceWire, 2, 4, 15, 18, 21, 30, 32, 36, 37, 41, 43, 44, 45, 47, 60–62, 66, 100, 213
SpaceWire Clock, 40
SpaceWire harness, 127
SpaceWire link, 56
SpaceWire packet, 213
SpaceWire port, 37
SpaceWire Router, 186
SPARC, 30, 59
SRAM, 31–33, 35, 36, 38, 45, 46
Stacked die, 38
Standard grounded group, 57
Star tracker, 104, 113
Start Sequence, 87
State of Charge, 160
Static Random Access Memory, 31
Storage temperature range, 58
Sun-Synchronous Orbit, 196
Switch, 154, 157, 162, 167, 236
Synchronization, 87
Synchronization and Channel Coding, 66
Synchronization and Coding Sub-layer, 68
Synchronous clock and data recovery, 44
System chain tests, 175
System clock, 40
System tests, 175

T
Technical Assistance Agreement, 27
Telecommand, 5
Telecommand active signal, 69
Telecommand bit clock active, 69
Telecommand Decoder, 17, 59, 69, 84, 85
Telecommand Frame, 18
Telecommand input interface, 62
Telecommand Source Packet Header, 166
Telecommand Transfer Frame, 63, 99, 165, 211
Telemetry, 5, 155
Telemetry downlink, 61
Telemetry Encoder, 18, 59, 65, 67, 69, 71, 84, 93
Telemetry Encoder Descriptor, 82
Telemetry interface, 62
Telemetry Transfer Frame, 70, 209
Temperature limit, 159