Final Thesis
Martin Fri
Jon Börjesson
LIU-IDA/LITH-EX-A--10/010--SE
2010-05-10
The Integrated Modular Avionics (IMA) architecture provides means for running multiple safety-critical applications on the same hardware. ARINC 653 is a specification for this kind of architecture: it specifies space and time partitioning in safety-critical real-time operating systems to ensure each application's integrity. This Master's thesis describes how databases can be implemented and used in an ARINC 653 system. The addressed issues are interpartition communication, deadlocks and database storage. Two alternative embedded databases are integrated in an IMA system to be accessed by multiple clients from different partitions. Performance benchmarking was used to study the differences in terms of throughput, number of simultaneous clients, and scheduling. The databases implemented and benchmarked are SQLite and Raima. The studies indicated a clear speed advantage in favor of SQLite when Raima was integrated using the ODBC interface. Both databases perform quite well and seem to be good enough for use in embedded systems. However, since neither SQLite nor Raima has any real-time support, their use in safety-critical systems is limited. The testing was performed in a simulated environment, which makes the results somewhat unreliable. To validate the benchmark results, further studies must be performed, preferably in a real target environment.
Keywords: ARINC 653, Integrated Modular Avionics, embedded databases, safety-critical, real-time operating system, VxWorks
Contents
1 Introduction
1.1 Background
1.2 Purpose
1.3 Problem description
1.3.1 Objectives
1.3.2 Method
1.3.3 Limitations
1.4 Document structure
2 Background
2.1 Safety-critical airplane systems
2.1.1 DO-178B
2.2 Avionics architecture
2.2.1 Federated architecture
2.2.2 IMA architecture
2.3 ARINC 653
2.3.1 Part 1 - Required Services
2.3.2 Part 2 and 3 - Extended Services and Test Compliance
2.4 VxWorks
2.4.1 Configuration and building
2.4.2 Configuration record
2.4.3 System image
2.4.4 Memory
2.4.5 Partitions and partition OSes
2.4.6 Port protocol
2.4.7 Simulator
2.5 Databases
2.5.1 ODBC
2.5.2 MySQL
2.5.3 SQLite
2.5.4 Mimer SQL
2.5.5 Raima
4 Benchmarking
4.1 Environment
4.1.1 Simulator
4.1.2 Variables
4.1.3 Measurement
4.2 Benchmark graphs
4.2.1 SQLite Insert
4.2.2 SQLite task scheduling
4.2.3 SQLite select
4.2.4 Raima select
4.3 Benchmark graphs analysis
4.3.1 Deviation
4.3.2 Average calculation issues
4.3.3 Five clients top
4.3.4 Scaling
A Benchmark graphs
A.1 Variables
A.2 SQLite
A.2.1 Insert
A.2.2 Update
A.2.3 Select
A.2.4 Alternate task scheduling
A.2.5 No primary key
A.2.6 Large response sizes
A.3 Raima
A.3.1 Insert
A.3.2 Update
A.3.3 Select
A.3.4 Alternate task scheduling
A.3.5 No primary key
A.3.6 Large response sizes
List of Figures
4.1 Average inserts processed during one timeslot for different number of client partitions.
4.2 Average number of inserts processed during one timeslot of various length.
4.3 Average selects processed during one timeslot for different numbers of client partitions and timeslot lengths. Task scheduling used is Yield only.
4.4 Average selects processed during one timeslot for different numbers of client partitions.
4.5 Average selects processed during one timeslot for different number of client partitions. The lines represent the average processed queries using different timeslot lengths.
4.6 With one client, the server manages to process all 1024 queries in one time frame.
4.7 With two clients, the server cannot process 2*1024 queries in two timeslots due to context switches. An extra time frame is required.
4.8 The average processing speed is faster than 1024 queries per timeslot, but it is not fast enough to earn an entire timeslot.
A.1 Average inserts processed during one timeslot for different number of client partitions.
A.2 Average no. inserts processed during one timeslot of various length.
A.3 Maximum inserts processed during one timeslot for different number of client partitions.
A.4 Maximum inserts processed for various timeslot lengths.
A.5 Average updates processed during one timeslot for different number of client partitions.
A.6 Average no. updates processed during one timeslot of various length.
A.7 Maximum updates processed during one timeslot for different number of client partitions.
A.8 Maximum updates processed for various timeslot lengths.
A.9 Average selects processed during one timeslot for different numbers of client partitions.
A.10 Average no. selects processed during one timeslot of various length.
A.11 Maximum selects processed during one timeslot for different numbers of client partitions.
A.12 Maximum no. selects processed during one timeslot of various length.
A.13 Average selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.14 Maximum selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.15 Average selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.16 Average no. selects processed during one timeslot of various length.
A.17 Maximum selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.18 Maximum no. selects processed during one timeslot of various length.
A.19 Average and maximum processed select queries. These selects ask for 128 rows. No sorting is applied.
A.20 Average and maximum processed selects are displayed. Each query asks for a 128-row response. Results are sorted in ascending order by an unindexed column.
A.21 Average inserts processed during one timeslot for different number of client partitions.
A.22 Average inserts processed for various timeslot lengths.
A.23 Maximum inserts processed during one timeslot for different number of client partitions.
A.24 Maximum inserts processed for various timeslot lengths.
A.25 Average number of updates processed during one timeslot for different number of client partitions.
A.26 Average number of updates processed for various timeslot lengths.
A.27 Maximum updates processed during one timeslot for different number of client partitions.
A.28 Maximum updates processed for various timeslot lengths.
A.29 Average selects processed during one timeslot for different number of client partitions.
A.30 Average selects for various timeslot lengths.
A.31 Maximum selects processed during one timeslot for different number of client partitions.
A.32 Maximum selects processed for various timeslot lengths.
A.33 Average selects processed during one timeslot for different number of client partitions.
A.34 Maximum selects processed during one timeslot for different number of client partitions. The lines represent the maximum processed queries using different timeslot lengths.
A.35 Average selects processed during one timeslot for different number of client partitions. No primary key is used.
A.36 Average selects processed for various timeslot lengths. No primary key is used.
A.37 Maximum selects processed during one timeslot for different number of client partitions. No primary key is used.
A.38 Maximum selects processed for various timeslot lengths. No primary key is used.
A.39 Average and maximum processed selects are displayed. Each query asks for 128 rows.
A.40 Average and maximum processed selects are displayed. Each query asks for 128 rows. Results are sorted in ascending order by a non-indexed column.
Chapter 1
Introduction
This document is a Master's thesis report written by two final-year Computer Science and Engineering students. It corresponds to 60 ECTS credits, 30 each. The work has been carried out at Saab Aerosystems in Linköping and examined at the Department of Computer and Information Science at Linköping University.
The report starts with a description of the thesis and some background information. Following this, the system design and the results of the work are described. Finally, the document ends with an analysis, a discussion, and a conclusion.
1.1 Background
Safety-critical aircraft systems often run on a single computer to prevent memory inconsistency and to ensure that real-time deadlines are met. If multiple applications run on the same hardware, they may affect each other's memory or time constraints. However, a need for multiple applications to share hardware has arisen. This is mostly because modern aircraft are full of electronics; no space is left for new systems. One solution to this problem is to use an Integrated Modular Avionics (IMA) architecture. The IMA architecture provides means for running multiple safety-critical applications on the same hardware. The use of Integrated Modular Avionics is an increasing trend among aircraft manufacturers, and Saab Aerosystems is no exception.
ARINC 653 is a software specification for space and time partitioning in the IMA architecture. Each application runs inside its own partition, which isolates it from other applications. Real-time operating systems implementing this standard are able to cope with the memory and real-time constraints. This increases flexibility but also introduces problems, such as how to communicate between partitions and how the partition scheduling will influence the design. A brief overview of an ARINC 653 system can be seen in figure 1.1.
1.2 Purpose
As described in the previous section, the trend in avionics software has shifted towards Integrated Modular Avionics systems. ARINC 653 is a specification for this kind of system and is used in many modern aircraft [8]. Saab is interested
in knowing how databases can be used for sharing data among partitions of an ARINC 653 system. Therefore, the purpose of this Master's thesis is to implement different databases in an ARINC 653-compliant system and study their behavior.
1.3 Problem description
1.3.1 Objectives
Goals and objectives for this Master's thesis:
• Port and integrate alternative databases within an ARINC 653-partition.
1.3.2 Method
The workload will be divided into two parts since there are two participating
students in this thesis.
Martin's part focuses on ARINC 653 and interpartition communication. The goals for this part are to abstract the underlying ARINC 653 layer and provide a database API to an application developer. The application developer shall not notice that the system is partitioned; he or she should be able to use the database as usual. This part is responsible for the communication between the application and the database.
Jon's part is to study databases and how to port them to the ARINC 653 system. This includes choosing two databases and motivating the choices. One of the databases shall be an open source database. The goals of Jon's part are to port the chosen databases and make them work in an ARINC 653-compatible real-time operating system. This requires, for instance, an implementation of a file system and one database adapter per database.
Testing and benchmarking of the implemented system has been done by both
Martin and Jon.
1.3.3 Limitations
Limitations for this project are:
• VxWorks shall be the ARINC 653-compatible real-time operating system. Saab already uses this operating system in some of its projects.
Chapter 2
Background
This section will provide the reader with essential background information.
2.1 Safety-critical airplane systems
2.1.1 DO-178B
DO-178B, Software Considerations in Airborne Systems and Equipment Certification, is industry-accepted guidance on how to ensure the airworthiness of aircraft. It focuses only on the software engineering methods and processes used when developing aircraft software, not on the actual result. This means that if an airplane is certified to DO-178B, you know that the process for developing an airworthy aircraft has been followed, but you do not really know whether the airplane can actually fly. A system with a poor requirement specification can go through the entire product development life cycle and fulfill all of the DO-178B requirements. However, the only thing you
know about the result is that it fulfills this poor requirement specification. In other words, bad input gives bad, but certified, output. [3] [5]
Failure categories
Failures in an airborne system can be categorized into five different types according to the DO-178B document:
Catastrophic A failure that will prevent continued safe flight and landing. Results of a catastrophic failure are multiple fatalities among the occupants and probably loss of the aircraft.
Hazardous/Severe-Major A failure that would reduce the capabilities of the aircraft or the ability of the crew to cope with adverse operating conditions to the extent of:
• Physical distress or a higher workload such that the crew could not be relied upon to perform their tasks accurately or completely.
Major A failure that would reduce the capabilities of the aircraft and the ability of the crew to do their work, to the extent of:
• Reduction of safety margins or functional capabilities.
Minor A failure that would not significantly reduce aircraft safety, and the crew's workload remains within their capabilities. Minor failures may include a slight reduction of safety margins and functional capabilities, a slight increase in workload, or some inconvenience for the occupants.
No Effect A failure of this type would not affect the operational capabilities
of the aircraft or increase crew workload. [3] [4]
Software levels
DO-178B also defines five software levels, based on the worst failure category the software can cause:
Level A Software of level A that does not work as intended may cause a failure of Catastrophic type.
Level B Software of level B that does not work as intended may cause a failure of Hazardous/Severe-Major type.
Level C Software of level C that does not work as intended may cause a failure of Major type.
Level D Software of level D that does not work as intended may cause a failure of Minor type.
Level E Software of level E that does not work as intended may cause a failure of No Effect type. [3] [4]
[Figure: example of a federated architecture, with separate computers (flight/mission computers, air data computers, a management computer and the FMS) connected by a data bus.]
with the other disadvantages of the federated approach forced the developers to
find a new solution: Integrated Modular Avionics.
Advantages of a federated architecture are:
• Traditionally used
• "Easy" to certify
• Partition management
• Process management
• Time management
• Memory allocation
• Interpartition communication
• Intrapartition communication
There is also a section about the fault handler called Health Monitor. [1]
Partition management
Partitions are an important part of the ARINC 653 specification. They are used to separate the memory space of applications so each application has its own "private" memory space.
Figure 2.2: One cycle using the round robin partition scheduling. [6]
Modes A partition can be in four different modes: IDLE, NORMAL, COLD START and WARM START.
IDLE When in this mode, the partition is not initialized and it is not executing
any processes. However, the partition’s assigned timeslots are unchanged.
NORMAL All processes have been created and are ready to run.
Process management
An application resides inside each partition and consists of one or more processes. A process has its own stack, priority, and deadline. The processes in a partition run concurrently and can be scheduled both periodically and aperiodically. The partition OS is responsible for controlling the processes inside its partition.
Dormant Cannot receive resources. Processes are in this state before they are
started and after they have been terminated.
Waiting Not allowed to use resources because the process is waiting for an
event. E.g. waiting on a delay.
Time management
Time management is extremely important in ARINC 653 systems. One of the
main points of ARINC 653 is that systems can be constructed so applications
will be able to run before their deadlines. This is possible since partition schedul-
ing is, as already mentioned, time deterministic. A time deterministic scheduling
means that the time each partition will be assigned to a CPU is already known.
This knowledge can be used to predict the system’s behavior and thereby create
systems that will fulfill their deadline requirements. [1]
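Time-deterministic scheduling can be illustrated with a small sketch (the struct and function names below are our own invention, not part of ARINC 653): given the configured round-robin schedule, the partition that owns the CPU at any instant can be computed ahead of time.

```c
#include <assert.h>

/* One entry of the fixed round-robin schedule (the major frame). */
struct slot { int partition; int length_ms; };

/* Return the partition that owns the CPU at time t_ms. Because the
 * schedule is fixed at configuration time, this answer is known in
 * advance for every instant -- the essence of time determinism. */
static int partition_at(const struct slot *sched, int nslots, int t_ms)
{
    int cycle = 0, offset, i;
    for (i = 0; i < nslots; i++)
        cycle += sched[i].length_ms;
    offset = t_ms % cycle;            /* the schedule repeats every cycle */
    for (i = 0; i < nslots; i++) {
        if (offset < sched[i].length_ms)
            return sched[i].partition;
        offset -= sched[i].length_ms;
    }
    return -1;                        /* unreachable for a valid schedule */
}
```

For example, with a schedule of 20 ms for partition 1 followed by 30 ms for partition 2, the owner at t = 25 ms is always partition 2, in every cycle.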
Memory allocation
An application can only use memory in its own partition. This memory allocation is defined during configuration and initialized during start-up. There are no memory routines specified in the core OS that can be called during runtime. [1]
Interpartition communication
The interpartition communication category contains definitions for how to com-
municate between different partitions in the same core module. Communication
between different core modules and to external devices is also covered.
Interpartition communication is performed by message passing: a message of finite length is sent from a single source to one or more destinations. This is done through ports and channels.
Ports and Channels A channel is a logical link between a source and one or
more destinations. The channel also defines the transfer mode of the messages.
To access a channel, partitions need to use ports which work as access points.
Ports can be of source or destination type and they allow partitions to send or
receive messages to/from another partition through the channel. A source port
can be connected to one or more destination ports. Each port has its own buffer and a message queue, both of predefined sizes. Data that is larger than this buffer size must be fragmented before sending and then merged when received.
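The fragment-and-merge step can be sketched as follows. This is a simplified illustration with invented names and an assumed buffer size; a real ARINC 653 port would move each fragment with the APEX send/receive services and a configured buffer size.

```c
#include <assert.h>
#include <string.h>

#define PORT_BUF_SIZE 8   /* assumed per-port buffer size for the sketch */

/* Sender side: split msg into PORT_BUF_SIZE-byte fragments.
 * Returns the fragment count; flens[i] holds fragment i's length. */
static int fragment(const char *msg, int len,
                    char frags[][PORT_BUF_SIZE], int *flens)
{
    int n = 0, off = 0;
    while (off < len) {
        int chunk = len - off < PORT_BUF_SIZE ? len - off : PORT_BUF_SIZE;
        memcpy(frags[n], msg + off, chunk);
        flens[n] = chunk;
        off += chunk;
        n++;
    }
    return n;
}

/* Receiver side: merge the fragments back into one message.
 * Returns the total message length. */
static int merge(char frags[][PORT_BUF_SIZE], const int *flens,
                 int n, char *out)
{
    int i, off = 0;
    for (i = 0; i < n; i++) {
        memcpy(out + off, frags[i], flens[i]);
        off += flens[i];
    }
    return off;
}
```

A 17-byte message over an 8-byte buffer thus travels as three fragments of 8, 8 and 1 bytes.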
All channels and all ports must be configured by the system integrator before
execution. It is not possible to change these during runtime.
Transfer modes There are two transfer modes available to choose from when configuring a channel: sampling mode and queuing mode.
Intrapartition communication
Intrapartition communication is about how to communicate within a partition.
This can also be called interprocess communication because it is about how
processes communicate with each other. Mechanisms defined here are buffers,
blackboards, semaphores and events.
Semaphores and Events Semaphores and events are used for process syn-
chronization. Semaphores control access to shared resources while events coor-
dinate the control flow between processes.
Semaphores in ARINC 653 are counting semaphores and they are used to
control partition resources. The count represents the number of resources avail-
able. A process accessing a resource has to wait on a semaphore before accessing
it and when finished the semaphore must be released. If multiple processes wait
for the same semaphore, they will be queued in FIFO or priority order depending
on the configuration.
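The counting behavior and the FIFO wait queue can be sketched in a single-threaded simulation (names invented for the sketch; the actual ARINC 653 services are WAIT_SEMAPHORE and SIGNAL_SEMAPHORE, which also handle real blocking and timeouts):

```c
#include <assert.h>

#define MAX_WAITERS 8

/* A counting semaphore whose blocked processes queue in FIFO order. */
struct semaphore {
    int count;                 /* resources currently available */
    int queue[MAX_WAITERS];    /* process ids blocked, oldest first */
    int head, tail;
};

/* Process pid tries to take the semaphore.
 * Returns 1 on success, 0 if pid had to be queued. */
static int sem_wait(struct semaphore *s, int pid)
{
    if (s->count > 0) {
        s->count--;
        return 1;
    }
    s->queue[s->tail % MAX_WAITERS] = pid;   /* join the FIFO queue */
    s->tail++;
    return 0;
}

/* Release one resource. If processes are waiting, the resource is
 * handed directly to the oldest waiter (whose pid is returned);
 * otherwise the count is incremented and -1 is returned. */
static int sem_signal(struct semaphore *s)
{
    if (s->head != s->tail) {
        int pid = s->queue[s->head % MAX_WAITERS];
        s->head++;
        return pid;
    }
    s->count++;
    return -1;
}
```

With one resource and three processes, the first takes the semaphore immediately and the other two are released in the order they arrived, which is the FIFO policy described above.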
Events are used to notify other processes of special occurrences, and they consist of a bi-valued state variable and a set of waiting processes. The state variable can be either "up" or "down". A process setting the event "up" will put all processes in the waiting processes set into the ready mode. A process calling the event "down" will be put into the waiting processes set. [1]
Health Monitor
Fault handling in ARINC 653 is performed by an OS function called the Health Monitor (HM). The Health Monitor is responsible for finding and reporting faults and failures.
Errors that are found have different levels depending on where they occurred.
The levels are process level, partition level and OS level. Responses and actions
taken are different depending on which error level the failure has and what has
been set in the configuration. [1] [10]
• Logbook System
The only relevant topic among these is file systems. However, this file system specification will not be used in this Master's thesis; see 3.4.1 for more information. [1] [2]
Part 3 of the ARINC 653 specification defines how to test part 1, the required services. This is out of scope for this Master's thesis.
2.4 VxWorks
VxWorks 653 is an ARINC 653-compatible real-time operating system. This section mostly covers VxWorks' configuration. The details below are described in the VxWorks 653 Configuration and Build Guide 2.2 [11].
• To build applications, stub files for shared libraries used by the applica-
tions must be created. This is done as a part of the shared library build.
2.4.4 Memory
The memory configuration is made up of two parts: the physical memory configuration and the virtual memory configuration. An example of the memory organization can be seen in figure 2.3.
[Figure 2.3: memory organization — ROM and RAM layouts containing the applications (App-1, App-2), partition OSes (POS-1, POS-2), the SDR-Blackboard shared data region, the configuration record (ConfigRecord), and the core OS (COS).]
Physical memory
The physical memory is made up of the read-only memory, ROM, and the
random-access memory, RAM.
As figure 2.3 illustrates, applications and the core OS consume more memory in RAM than in ROM. This is because each application requires additional memory for its stack and heap. This also applies to the core OS. Each application has its own stack and heap, since no memory sharing is allowed between applications. If an application uses any shared libraries, it also needs to set aside memory for the libraries' stacks and heaps. How much memory each application is allocated is specified in the configuration record. Partition OSes (POS) and shared libraries (SL) require no additional space in RAM, because the application that uses the POS/SL is responsible for the stack and heap space.
SDR-Blackboard is a shared data region (SDR): a memory area set aside for two or more applications as a place to exchange data. App-1 and App-2, seen in figure 2.3, are loaded into separate RAM areas. They share no memory except for the SDR areas.
Virtual memory
Every component, except for the applications, has a fixed, unique address in the
virtual memory. All applications have the same address. This makes it possible
to configure an application as if it were the only application in the system. Each application exists in a partition, which is a virtual container. The partition configuration controls which resources are available to the application.
vThreads
VxWorks 653 comes with a partition OS called vThreads. vThreads is based on VxWorks 5.5 and provides multithreading. It consists of a kernel and
a subset of the libraries supported in VxWorks 5.5. vThreads runs under the core OS in user mode.
The threads in vThreads are scheduled by the partition scheduler; the core OS is not involved in this scheduling. To communicate with other vThreads domains, threads make system calls to the core OS.
vThreads gets its memory heap from the core OS during startup. The heap is used by vThreads to manage memory allocations for its objects. This heap is the only memory available to vThreads; it is unable to access any other memory. All attempts to access memory outside the given range will be trapped by the core OS.
2.4.7 Simulator
VxWorks comes with a simulator that makes it possible to run VxWorks 653 applications on a host computer. Since it is a simulator, not an emulator, there are some limitations compared to the target environment. The simulator's performance is affected by the host hardware and by other software running on the host.
The only clock available in the simulator is the host system clock. The simulator's internal tick counter is updated at either 60 Hz or 100 Hz, which gives very low-resolution measurements. 60 Hz implies that partitions cannot be scheduled with a timeslot shorter than 16 ms; with 100 Hz it is possible to schedule timeslots as short as 10 ms.
One feature that is not available in the simulator is the performance monitor, which is used for monitoring CPU usage.
2.5 Databases
Today Saab uses its own custom storage solutions in its avionics software. These are often specialized for a particular application and not very general. A more general solution would save both time and money, since it would be easier to reuse.
This chapter contains information about the databases that have been evaluated for implementation in the system. It also contains some general information that is useful for a better understanding of the system implementation.
2.5.1 ODBC
Open Database Connectivity, ODBC, is an interface specification for accessing data in databases. It was created by Microsoft to make it easier for companies to develop Database Management System (DBMS) independent applications. Applications call functions in the ODBC interface, which are implemented in DBMS-specific modules called drivers.
ODBC is designed to expose a database's capabilities, not to supplement them. Using an ODBC interface to access a simple database does not transform that database into a fully featured relational database engine. If the driver is made for a DBMS that does not use SQL, the developer of the driver must implement at least some minimal SQL functionality. [17]
2.5.2 MySQL
MySQL is one of the world's most popular open source databases. It is a high-performance, reliable relational database with powerful transactional support. It includes complete ACID (atomicity, consistency, isolation, durability) transaction support, unlimited row-level locking, and multi-version transaction support. The latter means that readers never block writers and vice versa.
The embedded version of MySQL has a small footprint with preserved functionality. It supports stored procedures, triggers, functions, ANSI-standard SQL and more. Even though MySQL is open source, the embedded version is released under a commercial license. [16]
2.5.3 SQLite
SQLite is an open source embedded database that is reliable, small and fast. These three qualities follow from SQLite's main goal: to be a simple database. One can look at SQLite as a replacement for fopen() and other file-system functions, almost an abstraction of the file system. There is no need for any configuration or any server to start or stop.
SQLite is serverless, which means it needs no communication between processes. Every process that needs to read from or write to the database opens the database file and reads/writes directly from/to it. One disadvantage of a serverless database is that it does not allow more complicated and finer-grained locking methods.
SQLite supports in-memory databases. However, it is not possible to open an in-memory database more than once, since a new database is created at every opening. This means it is not possible to have two separate sessions to one in-memory database.
CHAPTER 2. BACKGROUND
The database is written entirely in ANSI C and makes use of a very small subset of the standard C library. It is very well tested, with tests that achieve 100% branch coverage. The source has over 65,000 lines of code, while the test code and test scripts have over 45,000,000 lines.
It is possible to get the source code as a single C file, which makes it very easy to compile and link into an application. When compiled, SQLite occupies only about 300 kB of memory. By compiling it without some features, it can be made as small as about 190 kB.
Data storage
All data is stored in a single database file. As long as the file is readable by the process, SQLite can perform lookups in the database; if the file is also writable, SQLite can store or change things in it. The database file format is cross-platform, which means that the database can be moved among different hardware and software systems. Many other DBMSs require that the database be dumped and restored when moved to another system.
SQLite uses manifest typing, not the static typing used by most other DBMSs. This means that any type of data can be stored in any column, except for an integer primary key column. The data type is a property of the data value itself.
Records have variable lengths, so only the amount of disk space actually needed to store the data in a record is used. Many other DBMSs have fixed-length records, which means that a text column that can store up to 100 bytes always takes 100 bytes of disk space.
One feature that is still experimental but could be useful is the ability to specify which memory allocation routines SQLite should use. This is probably necessary if one wants to certify a system that uses SQLite.
Locking technique
A single database file can be in five different locking states. To keep track of the locks, SQLite uses a dedicated page of the database file: standard file locks are set on different bytes of this page, one byte for each locking state. The lock page is never read or written by the database itself. On a POSIX system, the locks are set via fcntl() calls. [13]
write-set, there is a read-set. It records the state of the database with intended
changes. If there is a conflict between the database state and the read-set, a
rollback will be performed and the conflict reported. This could happen, for example, if one user deletes a row that is about to be updated by another user. It is up to the application to handle this. [14]
2.5.5 Raima
Raima is designed to be an embedded database. It supports both in-memory and on-disk storage. It also has native support for VxWorks and should therefore need less effort to get up and running in VxWorks 653. It is widely used; its users include the aircraft manufacturer Boeing.
Raima has both an SQL API and a native C API. The SQL API conforms
to a subset of ODBC 3.5.
Raima uses a data definition language, DDL, to specify the database design.
Each database needs its own DDL file. The DDL file is parsed with a command
line tool that generates a database file and a C header file. The header file
contains C struct definitions for the database records defined in the DDL and
constants for the field names. These structs and constants are used with Raima’s
C API. The DDL parsing tool also generates index files for keys. Each index
file must be specified in the DDL.
There are two types of DDL in Raima: one with a C-like syntax, DDL, and one with an SQL-like syntax, SDDL. Both types must be parsed by a special tool. It is not possible to create databases or tables at runtime with SQL queries. Fortunately, it is possible to link Raima's DDL/SDDL parsers into applications, which allows DDL/SDDL files created at runtime to be parsed by the linked code.
Raima stores data in fixed-length records. The record length is determined per file, not per table: if there are multiple tables in the same file, the record size will be the size of the largest record needed by any of the tables. This means that a lot of space may be wasted, but it should result in better performance. [15]
Chapter 3
System Design and Implementation
Partition scheduling Both client and server partitions are scheduled with
a cyclic schedule that resembles round robin, i.e. all timeslots are of the same
length and each partition has only one timeslot per major cycle. See Partition
Management in section 2.3.1 for more information about partition scheduling
in ARINC 653.
The benefit of this design is application independence. If, for example, one port buffer overflows, only that port's connection is affected; the other connections can continue to function. Another benefit is that the server knows which client sent a message, because all channels are configured before the system is run.
The drawback of this approach is that many ports and channels must be configured, which requires a lot of memory.
Database API
The database API follows the ODBC standard interface. However, not all routines specified in the ODBC interface are implemented, only a subset that provides enough functionality to get the system to work. The names and short descriptions of the implemented routines are listed in table 3.1.
Design
ODBC specifies three different data types: environment, connection, and statement. See figure 3.3 for how these data types relate to each other.
Before a query can be sent, an environment must be created. Inside this environment, multiple connections can be created, and inside every connection, multiple statements can be created. A statement will contain the query and, when the response comes from the server, a rowset.
Send queries When a statement has been created, it can be loaded with a query and then sent by calling SQLExecDirect with the specific statement as a parameter.
Efficiency would be greatly diminished if the execute/send routine were of a blocking type, i.e. if execution halted while waiting for a response to return. In that case, every client would only be able to execute one query per timeslot, because a query answer cannot be returned to the sender application before the query has been executed in the database. The database partition must therefore be assigned the CPU and process the query before a response can be sent back. Since the partitions are scheduled round robin with only one timeslot per partition, the sender application would get the response at its next scheduled timeslot at the earliest.
Our design does not use a blocking execute. Instead, it works as ODBC describes: a handle is passed to the execute routine, which sends the query and then continues its execution. The handle is later needed by the fetch/receive routine to know which answer should be fetched. This approach supports sending multiple queries during one timeslot; however, the fetch routine becomes a bit more complicated.
A short summary: all queries whose results are stored earlier in the inport queue, in front of the matching query, get their results moved into their statements for faster access later. The result belonging to the specified query is moved into the statement's bound columns and made available to layer three applications. All queries whose results are stored behind the specified query in the inport queue do not get their results moved into their statements; these results stay in the port queue until SQLFetch is called again, at which point these operations are repeated.
Multitasking
The server module must be able to connect to multiple client applications. The server process therefore spawns additional tasks to handle them: each server-client connection runs inside its own task. This prevents a connection from occupying too much time and keeps the database from being locked down in case of a connection deadlock.
A task is similar to a thread. Tasks are stored in the partition memory and can access most of the resources inside the partition.
Each task handles one and only one connection. Its job is to manage this connection, i.e. receive queries, process them and return answers until the task is terminated. See figure 3.5 for a task job flowchart.
The advantage of a threaded environment is that a deadlock in one connection does not affect any other connections. Also, the threads can be scheduled to make the server process queries in different ways depending on the system's purpose. One disadvantage of threading is that the system becomes more complex; added issues include synchronization and multitasking design.
Scheduling
In ARINC 653, tasks are scheduled in a preemptive, priority-based way. In the implemented system the priority is set to the same value for all tasks, which means the tasks are scheduled in first-in-first-out, FIFO, order.
There are two schedules of interest here: Round robin with yield and Yield only.
Round robin with yield works like round robin scheduling, but a task can release the CPU earlier on its own initiative. This scheduling does not force a task to occupy its entire timeslot; instead the task automatically relinquishes the processor at send and receive timeouts. Timeslot lengths are determined by the number of ticks set by the kernelTimeSlice routine. Since all tasks have the same priority, all tasks get about the same execution time. If the partition timeslot is large enough and the kernel timeslice is small enough, all clients will get some responses every cycle.
The other available scheduling is Yield only. This scheduling lets a task run until it relinquishes the CPU itself; tasks have no maximum continuous execution time. When used in this system, the CPU is released only at send and receive timeouts.
Header
The header contains an id, a statement pointer, a data size and a command type, as shown in table 3.2.
The statement pointer is used to distinguish which statement a query belongs to, so that database responses are moved into the correct statements. An id is also required because there are some special cases where the statement pointer is not enough to determine which statement is the correct one. The id is a sequence number that is incremented and added to the header at every send performed by SQLExecDirect in the client application.
NONE is set when nothing should be sent, i.e. the module will skip sending
messages of this type.
QUERY indicates that the package contains a SQL query. Messages of this
type will always generate a response.
3.3.1 Ports
The implementation of our test system uses ports of the queuing type. The reason is that no messages are allowed to be lost, as might happen with sampling-type ports. The transmission module must also be able to transfer large data, which likewise requires queuing ports, since large data must be fragmented to fit inside port buffers; this is impossible with sampling ports.
Fragmentation
Buffer size and queue length are two important settings for a queuing port. The total size required by a port is the buffer size times the queue length:

    port size = buffer size × queue length
Large data streams that cannot fit inside one port buffer must be split into several smaller parts. There are three different cases that can occur during the fragmentation:
Case 1 Message fits in one buffer and occupies a single queue spot.
Case 2 Message does not fit in one buffer but fits in several. The message occupies several queue spots, but the queue is not full.
Case 3 Message is too large and fills all queue buffers. The queue is full and the port is overflowed. The port's behavior in this case depends on which port protocol is used.
Note that several large messages can be fragmented into the queue at the same time, which increases the risk of port overflow.
Port protocols
As mentioned before in section 2.4.6, VxWorks supports two different port protocols: Sender Block and Receiver Discard.
The transmission module uses the Sender Block port protocol. The reason is that it removes the issue of retransmissions entirely for our test system. Another reason is that if the Receiver Discard port protocol were used, some kind of blocking functionality would still be necessary. This is because
Receiver Discard drops messages when port queues are full. Since no data is allowed to be lost, retransmissions must occur, but as the receiver's buffer queue is full, the data cannot be resent; the partition would therefore be blocked from sending messages to the receiver anyway. Even though the partition could continue doing other work, the best argument for using the Receiver Discard port protocol is thereby diminished.
Deadlocks
Deadlocks affecting the entire server's uptime have been taken care of by design and configuration at another level: the multithreaded server and the channel design prevent one client from blocking the entire server, because only a single thread becomes locked. See section 3.1.1 and section 3.2.2 for more information. However, deadlocks inside a single database thread might still occur. This kind of deadlock prevents the specific database thread and its corresponding application from executing successfully.
This deadlock type occurs when the client application executes many queries and thereby overflows the server thread's port buffer. The client application then blocks itself, waiting for the occupied spots in the database port queue to diminish. When the server partition is switched in, it receives and executes a few of the queries in the database and sends the results back to the client application. If the database responses are large enough to overflow the client application's port buffer, the server becomes blocked too. When the client application is switched back in, it is no longer blocked, since the database did some receives before getting blocked. The client application therefore resumes its execution where it stopped, i.e. it continues sending queries, and is soon blocked again. At this point both the client application and the server thread are blocked, since both of their inbound ports are overflowed. See figure 3.6.
To handle single-thread deadlocks as described above, there are three options: prevent, avoid, or detect and recover. In this system, deadlock prevention has been used.
too much data too often. This is an effective but not very flexible solution, and large margins are needed to make it work safely.
With this deadlock solution the receive routine works as follows: it receives data from the ports with the amount of data to receive given as a parameter, i.e. it reads from the port, merges messages and then returns a data stream of the specified length.
Figure 3.6: Free and occupied port buffer space as the client and server partitions are switched in. As the server's inport queue is not full anymore, the client can resume its execution where it halted, i.e. continue sending queries.
3.4 Databases
SQLite and Raima have been implemented and benchmarked. Unfortunately, it was impossible to get Mimer up and running, since only pre-compiled binaries for standard VxWorks were available, and linking these binaries with an application produced many linking errors because of missing libraries in VxWorks 653. MySQL was ruled out since the embedded version only exists under a commercial license.
3.4.1 Filesystem
The ARINC 653 specification part 2 specifies a filesystem, but it is not implemented in VxWorks 653. There is, however, a filesystem implemented in VxWorks. That filesystem is not used in this project. One reason is to avoid too much coupling to VxWorks 653-specific implementations. Another reason is the VxWorks simulator: since it is a simulator, not an emulator, the benchmarking could become even more inaccurate if disk reads and writes were done via the host operating system.
The filesystem used instead is an in-RAM filesystem. It has been implemented by overloading filesystem-related functions like open() and close(). The overloading is possible because VxWorks lets you specify which functions should be linked from shared libraries and which should not.
Filesystem usage
Before the filesystem can be used it must be initialized. The initialization needs an array of pointers to memory regions, one region per file, together with the size of each region. These memory regions can be of arbitrary sizes and are configured by the application developer.
When a file is created, using open() or creat(), it is assigned the first unused memory region given during initialization. This means that files must be opened in the correct order to make sure each file gets the intended memory region.
For example, if an application needs two files, 1MB.txt and 1kB.txt, the filesystem must be initialized with two pointers in the initialization array. If the first pointer points to a 1 MB memory region and the second to a 1 kB region, the file 1MB.txt must be created first, since the 1 MB region is the first element of the pointer array.
Filesystem structure
To keep track of open files, three different structs are used, as seen in figure 3.7. The first one, file_t, is the actual file struct. It has a pointer to where in memory the file data is located, and it also holds all locks, the filename, and the capacity and current size of the file. The second struct is ofile_t, where the o stands for open; it keeps track of access mode flags and the current file pointer position. The last struct, fd_t, is the file descriptor struct. It holds information about operation mode flags.
The file descriptor struct and the open file struct could be merged into one, but then it would be impossible to implement file descriptor duplication functionality.
The filesystem has fixed maximum numbers of files, open files and file descriptors. Since each struct has a fixed maximum count, the structs are allocated in fixed-size C arrays.
To avoid unnecessary performance issues, stacks are used to keep track of free file descriptors and free files. With stacks, a free descriptor can be found in constant time, since there is no need to iterate over the descriptor array. The filesystem uses files with static sizes, which makes reading and writing fast, since all data is stored in one big chunk rather than spread over blocks. However, this may lead to unused memory or full files, so the filesystem must be configured carefully.
File locking
At first, file-level locking was implemented: an entire file could be locked, but not parts of it. However, it turned out that SQLite needs byte-level locking, and the ability to use both read and write locks. All locking-related operations are done via the fcntl() and flock() functions.
3.5.1 Interface
The interface for the adapters is very simple. It basically provides support for running queries and fetching results.
dba_affected_rows() Records how many rows were affected by the last query.
ready to be fetched. If the status code is SQLITE_DONE, there is nothing more to be done with this statement.
The status code is used to determine whether the query is a select or not. If the return value is SQLITE_ROW, the query must have been a select, or something else that resulted in one or more rows. However, if the query is a select that returns no rows, it is impossible to determine whether the query was a select or not. Luckily, the database module only needs to know whether there are any rows, not what type of query was just executed.
Transactions
SQLite uses pessimistic concurrency control. This, together with its locking techniques, makes transactions inefficient when the database is used by multiple connections.
When a query that writes to the database is executed inside a transaction, the transaction locks the entire database with an exclusive lock until it is committed. This means that all other transactions have to wait for the lock to be released before they can execute any more queries.
If there is one transaction per task in the database module and one of these transactions holds an exclusive lock when its task is suddenly switched out, the other tasks cannot perform any database queries, which decreases the database module's performance. A solution could be to use mutexes and semaphores in the database module, or to disable task switching, to prevent nested transactions.
Result handling
When a select query has been executed, the result should be stored in a rowset. To build the rowset, the result from the database is iterated; for each iteration a row is created and added to the rowset. To add data to the rowset, the size of the data must be known, which is determined by checking the data type of each column in the database.
Initialization
Since Raima cannot create tables via normal SQL queries, a specialized initialization function must be used. This initialization function takes a pointer to a pointer to a string containing Raima SDDL statements. The SDDL cannot be parsed right away; it has to be written to a file first.
Normally, this file would be parsed with Raima's command-line SDDL tool, which is not possible in VxWorks since exec() cannot be used to execute other applications. However, it is possible to link Raima's binaries with your own and call the SDDL tool's main function, sddl_main().
When the SDDL has been parsed, the database must be initialized. This is also usually done with a command-line tool, but it is possible to link it into the application too and call its main function, initdb_main(). The result of the parsing is the database files specified by the SDDL.
Files
Raima uses more files than SQLite, so the filesystem requires more planning than with SQLite. At least three files are used at runtime once the database has been created: database, index and log. During database creation the SDDL file is needed, among others.
Result handling
Fetching the result of a query in Raima is very similar to how it is done in SQLite: the result must be iterated, and the column value types checked to get the size needed in the row, etc.
Chapter 4
Benchmarking
4.1 Environment
The benchmarking is performed in a VxWorks 653 simulator on a Windows desktop computer. This section describes how measurements are performed and what default values are used for the benchmarking variables.
4.1.1 Simulator
The simulator simulates the VxWorks 653 real-time operating system, i.e. partitions, scheduling and interpartition communication.
The low resolution of the simulator's tick counter forces us to mostly use long timeslots. We use 50 ms, 100 ms and 150 ms as our standard lengths, but in a real system these intervals are very long; a real system often uses partition timeslots of 5 ms or shorter. Such short timeslots cannot be simulated, but have to be run on target hardware.
4.1.2 Variables
Table 4.1 shows the default values for the variables used during benchmarking.
If nothing else is mentioned in each test case, these were the values that were
used.
Number of client partitions describes how many client partitions are scheduled by the round-robin partition scheduling in the core OS. Timeslot length is the length in milliseconds of each partition window in that schedule. Task scheduling describes the process scheduling inside a partition. Primary key indicates whether the query uses the primary key column for selecting single rows. Port buffer size and Port queue length are port attributes needed for communication between partitions. Table size is the initialization size of the table used in the benchmarks. Query response size is the size of the resulting rowset that is sent back to the client. Sort result indicates whether the resulting rowset is sorted. Client send quota is the number of queries each client will send.
Table 4.1: Default values for the benchmark variables.

Variable                      Value
Number of client partitions   1 .. 8
Timeslot length               50, 100, 150 ms
Task scheduling               Round robin
Primary key                   Yes
Port buffer size              1024 bytes
Port queue length             512
Table size                    1000 rows, 16 columns
Query response size           1 row (16 columns)
Sort result                   No
Client send quota             1024
4.1.3 Measurement
The benchmarks are created by letting different numbers of clients send a predefined number of queries to the server. The server processes these queries as fast as it can and measures the average and maximum number of processed queries for each of its timeslots. The minimum value was skipped, since it would almost always show the value from the last timeslot, where the final remaining queries are processed.
The number of queries sent by each client is 1024. This value is large enough to cause full load on the database, but low enough not to need too much memory.
Benchmarks in this thesis measure the server's throughput, that is, the number of queries fully processed by the server during one timeslot. A fully processed query is a query that has been received from the in port, executed in the database and had its result sent to the out port. A query that is being processed in the database when a context switch occurs is recorded in the next timeslot. The average number of processed queries is calculated at the end of a test run by dividing the total number of queries sent to the server by the total number of timeslots the server required to finish, see equation (4.1):

    average queries per timeslot = total queries sent / timeslots required    (4.1)

This measurement approach was chosen because of the simulator's inaccurate time measurement, and because queries processed is an easily understandable unit.
the database adapter. The differences between the adapters are very small, adding almost no overhead to the database performance, and should not affect the results.
Figure 4.1: Average inserts processed during one timeslot for different numbers of client partitions.
Figure 4.2: Average number of inserts processed during one timeslot of various lengths.
Figure 4.3: Average selects processed during one timeslot for different numbers of
client partitions and timeslot lengths. Task scheduling used is Yield only.
Figure 4.4: Average selects processed during one timeslot for different numbers of
client partitions.
Figure 4.5: Average selects processed during one timeslot for different numbers of client partitions. The lines represent the average processed queries using different timeslot lengths.
4.3.1 Deviation
Each configuration was run five times to ensure better values. The number of partition switches required in the benchmarks was almost always the same across the five runs; when the values did differ, it was only by one partition switch, and this only occurred when many partitions were scheduled.
The worst deviation that occurred during the benchmarks was that three test runs returned the value 11 while two test runs gave 12 partition switches. The mean value for this case is 11.4 partition switches, which gives the standard deviation 0.49; the relative standard deviation is 4.3%.
The maximum performance benchmarks do not rely on partition switches. Instead they use the mean value of the maximum number of queries processed in one timeslot. The relative standard deviation here is lower than 5.3%.
Dip at beginning
The "dip at beginning" appearance occurs in the SQLite insert, update and select graphs (see figures A.1, A.5 and A.9). This strange curve is explained by the average calculation method, see equation (4.1) on page 47. Since this method divides the number of sent queries by the number of required timeslots, the calculation is very dependent on the timeslot length: a longer timeslot lets more queries be processed per timeslot, so fewer partition switches are needed. With few clients, a small change in the number of required partition switches has a huge effect on the calculation result.
If the average number of processed queries is less than a client's total send quota, set to 1024 in these benchmarks, the server needs more than one timeslot per client to finish. The denominator in the average calculation then increases, and the computed average performance is reduced.
With only one client running, the server manages to perform all of the client's 1024 queries in one timeslot. With more than one client, context switches occur in the server, adding some overhead. In the case of two clients, this overhead makes the server unable to perform all 2 * 1024 queries in two timeslots, so an extra timeslot is needed and the average performance is heavily decreased. The calculations for the two cases are 1024/1 and (2 * 1024)/3. See figure 4.6 and figure 4.7.
For every additional client, the effect of the extra required partition switch becomes smaller and smaller.
Figure 4.6: With one client, the server manages to process all 1024 queries in one
time frame.
Figure 4.7: With two clients, the server cannot process 2*1024 in two timeslots due
to context switches. An extra time frame is required.
Always max
The average calculation can also explain some other 150 ms curves. For example, figure 4.3 shows select with Yield only scheduling; its 150 ms curve is a horizontal straight line fixed at the value 1024.
In this case, the real average is slightly above the calculated value of 1024, which means the server finishes its processing faster. However, it is not fast enough to gain an entire timeslot and skip the last partition switch. The number of partition switches is therefore the same, which leads to an erroneously calculated average value. See figure 4.8.
Figure 4.8: The average processing speed is faster than 1024 queries per timeslot, but not fast enough to earn an entire timeslot.
to the task scheduling in the simulator, which seems to work better with five threads in combination with the partition timeslot length. This is only a guess on our part, since we have no verified explanation.
4.3.4 Scaling
Looking at the insert and update curves, it can be seen how they scale with
the timeslot length. Generally, they appear to be linear between 50 ms and
150 ms. However, unlike Raima, SQLite does not double the number of processed
queries when the timeslot length is doubled.
Chapter 5
Comparisons between
SQLite and Raima
This chapter contains comparison graphs between SQLite and Raima. The
150 ms curves have been omitted, since they are often affected by the average
calculation issue described in section 4.1.3 and section 4.3.2, i.e. the 150 ms
curves are too unreliable. Mostly, only the 50 ms curves are shown in the
figures, since too many curves make the graphs hard to decipher. Also, 50 ms is
the length that most closely resembles a real timeslot. Only average values are
presented, since they are more interesting than the less informative maximum
values.
[Graph: Insert comparison. Queries per timeslot vs. number of clients (1-8).
Series: Raima 50 ms, Raima 100 ms, SQLite 50 ms, SQLite 100 ms]
Figure 5.1: Comparison between average insert values in SQLite and Raima.
Timeslots used in the graphs are 50 ms and 100 ms.
[Graph: Update comparison. Queries per timeslot vs. number of clients (1-8).
Series: Raima 50 ms, Raima 100 ms, SQLite 50 ms, SQLite 100 ms]
Figure 5.2: Comparison between average update values in SQLite and Raima.
Timeslot lengths are 50 ms and 100 ms.
[Graph: Select comparison. Queries per timeslot vs. number of clients (1-8).
Series: Raima 50 ms, Raima 100 ms, SQLite 50 ms, SQLite 100 ms]
Figure 5.3: Comparison between average select values in SQLite and Raima.
Timeslot lengths are 50 ms and 100 ms.
Figure 5.4: Comparison between average select values in SQLite and Raima with
and without primary key. Timeslot length is 50 ms.
Figure 5.5: Comparison between average select queries using different task schedules
in SQLite and Raima. Timeslot length is 50 ms.
Figure 5.6: Comparison between average selects with and without sorting in SQLite
and Raima. The resulting rowsets are of size 128 rows and timeslot length is 50 ms.
Figure 5.7 compares the number of rows fetched from the database for single-row
queries and 128-row queries. As expected, both databases can fetch many
more rows during one timeslot when larger rowsets are requested as results.
The database fetches rows faster and the interpartition communication becomes
more efficient, since fewer sends mean less overhead.
[Graph: Large responses comparison. Rows per timeslot vs. number of clients
(1-8). Series: SQLite 1 row per query, Raima 1 row per query, SQLite 128 rows
per query, Raima 128 rows per query]
Figure 5.7: Comparison between single-row and 128-row select queries in SQLite
and Raima. Average values are shown in the graph, with a timeslot length of
50 ms.
5.7 Summary
In this section we summarize our observations for the selected timeslots and
scheduling approaches given the interpartition communication protocols and
adapters implemented.
• In general, the system appears to perform at its best with five clients,
both for SQLite and Raima, given the selected client loads and timeslots.
• The 150 ms measurements are often strange and unexpected. This can be
explained by the average metric used (equation 4.1), which depends on the
length of the timeslot used. The 150 ms curves are therefore less accurate
than the 50 ms and 100 ms curves.
• Most graphs are linear, but a doubled timeslot does not double the number
of processed queries. The select performance graph is an exception.
• SQLite is overall much faster than Raima. Raima can only match SQLite
in the update performance benchmark.
Chapter 6
Discussion and conclusion
6.1 Performance
The relation between performance and scalability, database comparisons, and
simulation details are discussed here.
6.1.2 Scalability
Most benchmarking graphs are quite linear, which is good since it makes the
results possible to estimate. If we extrapolate the curves, we notice that the
curves in the insert and update graphs will not cross the origin. This implies
that the linearity will not hold for low timeslot values. If the linearity did
hold, it would be possible
to use timeslots of zero length and still be able to process queries. This is
obviously impossible. Raima does not exhibit this behavior at all, which further
strengthens our conclusion that Raima's performance is easier to estimate.
It should be noted that only a few different timeslot lengths were measured,
which makes the above discussion somewhat uncertain. To validate these results,
more timeslot lengths must be tested. The three timeslot lengths used in the
benchmarks were chosen because they are large enough to work with the
simulator's low resolution, while being short enough to avoid inappropriately
large port buffers. The timeslot lengths are also multiples of 50 ms, which
makes it easy to see scalability issues.
As seen in most of the graphs, there is a peak at five clients. Even though
it is not verified, this peak might be due to the combination of the timeslot
lengths in the task and partition scheduling and the VxWorks simulator. To
clarify this, new benchmarks in a target environment are needed to obtain more
trustworthy values.
Bibliography
[12] Alan Burns, Andy Wellings. 2001. Real-Time Systems and Programming
Languages. Third edition. Pages 486-488.
[13] SQLite Documentation. Available at http://www.sqlite.org/docs.html
[viewed 2010-01-15].
Glossary
Appendix A
Benchmark graphs
A.1 Variables
Table A.1 lists the default values for the variables used during benchmarking.
If nothing else is mentioned in a test case, these were the values used.
Variable                     Value
Number of client partitions  1-8
Timeslot length              50, 100, 150 ms
Task scheduling              Round robin
Primary key                  Yes
Port buffer size             1024 bytes
Port queue length            512
Table size                   1000 rows, 16 columns
Query response size          1 row (16 cols)
Sort result                  No
Client send quota            1024
A.2 SQLite
This section shows benchmarks concerning SQLite. They are divided into six
categories: Insert, Update, Select, No primary key, Alternate task scheduling
and Large response sizes.
A.2.1 Insert
Insert query benchmarks are discussed here. Table A.2 shows non-default vari-
able values for these benchmarks.
Variable Value
Query type Insert
Query response size 0
Average performance
This benchmark shows the average performance regarding inserts. In figure A.1
the lines represent the average processed queries for one timeslot using different
timeslot lengths. In figure A.2 the lines represent the average processed queries
for different number of clients.
Figure A.1: Average inserts processed during one timeslot for different number of
client partitions.
Figure A.2: Average no. inserts processed during one timeslot of various length.
Maximum performance
This benchmark shows the maximum performance regarding the insert queries.
In figure A.3 the lines represent the maximum processed queries for one timeslot
using different timeslot lengths. In figure A.4 the lines represent the maximum
processed queries during one timeslot for a specific number of clients.
Figure A.3: Maximum inserts processed during one timeslot for different number of
client partitions.
[Figure A.4: Maximum no. inserts processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
A.2.2 Update
The update query benchmarks are showed here. Table A.3 shows non-default
variable settings for these benchmarks.
Variable Value
Query type Update
Query response size 0
Average performance
This benchmark shows the average performance regarding updates. In figure A.5
the lines represent the average processed queries for one timeslot using different
timeslot lengths. In figure A.6 the lines represent the average processed queries
during one timeslot for a specific number of clients.
Figure A.5: Average updates processed during one timeslot for different number of
client partitions.
Figure A.6: Average no. updates processed during one timeslot of various length.
Maximum performance
This benchmark shows the maximum performance regarding update queries. In
figure A.7 the lines represent the maximum processed queries for one timeslot
using different timeslot lengths. In figure A.8 the lines represent the maximum
processed queries during one timeslot for a specific number of clients.
Figure A.7: Maximum updates processed during one timeslot for different number
of client partitions.
[Figure A.8: Maximum no. updates processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
A.2.3 Select
The select query benchmarks are showed here. Table A.4 lists the non-default
variable values for these benchmarks.
Variable Value
Query type Select
Average performance
This benchmark shows the average performance regarding selects. In figure
A.9 the lines represent average processed queries for one timeslot using different
timeslot lengths. In figure A.10 the lines represent the average processed queries
during one timeslot for a specific number of clients.
Figure A.9: Average selects processed during one timeslot for different numbers of
client partitions.
Figure A.10: Average no. selects processed during one timeslot of various length.
Maximum performance
This benchmark shows the maximum performance regarding selects. In figure
A.11 the lines represent maximum processed queries using different timeslot
lengths. In figure A.12 the lines represent the maximum processed queries during
one timeslot for a specific number of clients.
Figure A.11: Maximum selects processed during one timeslot for different numbers
of client partitions.
Figure A.12: Maximum no. selects processed during one timeslot of various length.
Variable Value
Query type Select
Task scheduling Yield only
Figure A.13: Average selects processed during one timeslot for different numbers of
client partitions and timeslot lengths.
Figure A.14: Maximum selects processed during one timeslot for different numbers
of client partitions and timeslot lengths.
Variable Value
Query type Select
Primary key No
Average performance
This benchmark shows the average performance obtained when using selects
without a primary key. See figure A.15 and figure A.16 for the results.
Figure A.15: Average selects processed during one timeslot for different numbers of
client partitions and timeslot lengths.
Figure A.16: Average no. selects processed during one timeslot of various length.
Maximum performance
The figures A.17 and A.18 displays the maximum numbers of queries processed
in the server during one timeslot.
Figure A.17: Maximum selects processed during one timeslot for different numbers
of client partitions and timeslot lengths.
Figure A.18: Maximum no. selects processed during one timeslot of various length.
Without sort
This benchmark shows average and maximum performance regarding large response
sizes. No sorting is applied to the results in the database. See table A.7
for non-default variable values and figure A.19 for the benchmark result.
Variable Value
Query type Select
Query response size 128 rows
Figure A.19: Average and maximum processed select queries. These selects ask
for 128 rows. No sorting is applied.
With sort
This benchmark shows average and maximum performance regarding large response
sizes when sorting is applied; see table A.8 for non-default variable values.
Resulting rows are sorted in ascending order by an unindexed column. See figure
A.20 for the benchmark result.
Variable Value
Query type Select
Query response size 128 rows
Sort result Yes
Figure A.20: Average and maximum processed selects are displayed. Each query
asks for a 128-row response. Results are sorted in ascending order by an
unindexed column.
A.3 Raima
This section contains benchmark results for Raima. The benchmarks are di-
vided into six categories: Insert, Update, Select, No primary key, Alternate
task scheduling and Large response sizes.
A.3.1 Insert
The benchmarks in this section shows Raima’s performance with respect to
insert queries. Table A.9 shows the non-default values for variables used during
the insert benchmarks.
Variable Value
Query type Insert
Query response size 0
Average performance
This benchmark show the average performance regarding inserts. In figure A.21
the lines represent the average processed queries for one timeslot using different
timeslot lengths. In figure A.22 the lines represent the average number of queries
processed during one timeslot for a specific number of clients.
Figure A.21: Average inserts processed during one timeslot for different number of
client partitions.
[Figure A.22: Average no. inserts processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
Maximum performance
This benchmark shows the maximum performance regarding inserts. In figure
A.23 the lines represent the maximum processed queries using different timeslot
lengths. In figure A.24 the lines represent the maximum number of queries
processed during one timeslot for a specific number of clients.
Figure A.23: Maximum inserts processed during one timeslot for different number
of client partitions.
[Figure A.24: Maximum no. inserts processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
A.3.2 Update
In this section update queries has been benchmarked. Table A.10 shows non-
default values for variables used during the update benchmarks.
Variable Value
Query type Update
Query response size 0
Average performance
This benchmark shows the average performance regarding updates. In figure A.25
the lines represent the average processed queries for one timeslot using
different timeslot lengths. In figure A.26 the lines represent the average
number of queries processed during one timeslot for a specific number of
clients.
Figure A.25: Average number of updates processed during one timeslot for different
number of client partitions.
Figure A.26: Average number of updates processed for various timeslot lengths.
Maximum performance
This benchmark shows the maximum performance regarding updates. In figure
A.27 the lines represent the maximum processed queries for one timeslot using
different timeslot lengths. In figure A.28 the lines represent the maximum
number of queries processed during one timeslot for a specific number of
clients.
Figure A.27: Maximum updates processed during one timeslot for different number
of client partitions.
[Figure A.28: Maximum no. updates processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
A.3.3 Select
In this section the select performance has been benchmarked. It is performed
with the non default variable values seen in table A.11.
Variable Value
Query type Select
Average performance
This benchmark shows the average performance regarding selects. In figure A.29
the lines represent the average processed queries for one timeslot using
different timeslot lengths. In figure A.30 the lines represent the number of
select queries processed during one timeslot for different numbers of clients.
Figure A.29: Average selects processed during one timeslot for different number of
client partitions.
[Figure A.30: Average no. selects processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
Maximum performance
This benchmark shows the maximum performance regarding selects. In figure
A.31 the lines represent the maximum processed queries for one timeslot us-
ing different timeslot lengths. In figure A.32 the lines represent the maximum
processed queries for one timeslot using different number of clients.
Figure A.31: Maximum selects processed during one timeslot for different number
of client partitions.
[Figure A.32: Maximum no. selects processed during one timeslot of various
lengths. Series: 1-8 clients; timeslot lengths 50-150 ms]
Variable Value
Query type Select
Task scheduling Yield only
Figure A.33: Average selects processed during one timeslot for different number of
client partitions.
Figure A.34: Maximum selects processed during one timeslot for different number
of client partitions. The lines represent the maximum processed queries using different
timeslot lengths.
Variable Value
Query type Select
Primary key No
Average performance
This benchmark shows the average number of queries processed when no primary
key is used. In figure A.35 the lines represent the average processed queries
for one timeslot using different timeslot lengths. In figure A.36 the lines
represent the number of queries processed during one timeslot for different
numbers of clients.
Figure A.35: Average selects processed during one timeslot for different number of
client partitions. No primary key is used.
Figure A.36: Average selects processed for various timeslot lengths. No primary key
is used.
Maximum performance
This benchmark shows the maximum performance regarding selects when no
primary key is used. In figure A.37 the lines represent the maximum processed
queries for one timeslot using different timeslot lengths. In figure A.38 the lines
represent the maximum number of queries processed during one timeslot for
different number of clients.
Figure A.37: Maximum selects processed during one timeslot for different number
of client partitions. No primary key is used.
Figure A.38: Maximum select processed for various timeslot lengths. No primary
key is used.
Without sort
This benchmark shows average and maximum performance regarding large response
sizes. No sorting is applied; see figure A.39. Table A.14 shows the non-default
variables used in this benchmark.
Variable Value
Query type Select
Query response size 128 Rows (16 cols)
Figure A.39: Average and maximum processed selects are displayed. Each query
asks for 128 rows.
With sort
This benchmark shows average and maximum performance regarding large response
sizes when sorting is applied; see table A.15 for non-default variable values.
Resulting rows are sorted in ascending order by a non-indexed column. See
figure A.40 for the benchmark result.
Variable Value
Query type Select
Query response size 128 Rows (16 cols)
Sort result Yes
Figure A.40: Average and maximum processed selects are displayed. Each query
asks for 128 rows. Results are sorted in ascending order by a non-indexed
column.
The publishers will keep this document online on the Internet - or its possible
replacement - for a considerable time from the date of publication barring
exceptional circumstances.
The online availability of the document implies a permanent permission for
anyone to read, to download, to print out single copies for your own use and to
use it unchanged for any non-commercial research and educational purpose.
Subsequent transfers of copyright cannot revoke this permission. All other uses
of the document are conditional on the consent of the copyright owner. The
publisher has taken technical and administrative measures to assure authenticity,
security and accessibility.
According to intellectual property law the author has the right to be
mentioned when his/her work is accessed as described above and to be protected
against infringement.
For additional information about the Linköping University Electronic Press
and its procedures for publication and for assurance of document integrity,
please refer to its WWW home page: http://www.ep.liu.se/
Language: English
Type of publication: Master thesis (Examensarbete)
ISRN: LIU-IDA/LITH-EX-A--10/010--SE
Number of pages: 105
Title: Usage of databases in ARINC 653-compatible real-time system
Authors: Martin Fri and Jon Börjesson