
Mini Project Report

Operating Systems and Security


Course Code: IR-ETRO-11802
John Mark Ssebunnya
December 2008
SAFETY CRITICAL CPU SCHEDULING ALGORITHMS FOR REAL-TIME OPERATING SYSTEMS
© 2008 John Mark Ssebunnya
This report is based on a mini project submitted in December 2008 by the author for the course Operating Systems and Security (6 study points), as part of the requirements for the degree of Master of Science in Applied Engineering: Applied Computer Science of the Vrije Universiteit Brussel.
The report is available via the Internet:
http://ems.vub.ac.be/
Abstract
Catastrophes that lead to the loss of human life can result from software failure in cars, planes, household appliances and so on. To reduce such catastrophes, there is a need for safety critical scheduling algorithms for real-time operating systems. This project investigates operating system scheduling. We evaluate how scheduling algorithms have been implemented in some existing operating systems, with eCos, LynxOS, RTLinux and VxWorks as our case studies. Using POSIX (Cygwin), we install eCos on the Windows XP platform. We use virtualization to patch a Red Hat 7.1 kernel with the RTLinux kernel, which implements the Earliest Deadline First and Rate Monotonic algorithms. These algorithms are commonly used for scheduling in safety critical real-time systems.
Contents
1. Introduction
   1.1. Overview of Operating Systems (OS)
      1.1.1. Operating System Definition
      1.1.2. Operating System as Resource Manager
      1.1.3. Operating System as Control Program
   1.2. Types of Operating Systems
   1.3. General Purpose Systems vs Embedded Systems
      1.3.1. General Purpose Systems
      1.3.2. Embedded Systems (EMS)
   1.4. Real-Time EMS
   1.5. Hard Real-Time vs Soft Real-Time Operating Systems
2. Scheduling
   2.1. Overview of Scheduling
   2.2. Types of Operating System Schedulers
      2.2.1. Long-term Scheduler
      2.2.2. Mid-term Scheduler
      2.2.3. Short-term Scheduler
   2.3. How Scheduling Works
   2.4. Scheduling Algorithms
      2.4.1. Scheduling Algorithms for Real-Time OS
   2.5. Safety Critical Scheduling Algorithms
      2.5.1. Rate Monotonic Algorithm (RMA)
      2.5.2. Earliest Deadline First (EDF)
3. Case Study A - eCos
   3.1. Overview of eCos
   3.2. eCos Implementation of CPU Scheduling
      3.2.1. Bitmap scheduling
      3.2.2. MLQ scheduling
   3.3. Is eCos a Safety Critical OS?
   3.4. eCos Installation using POSIX on Windows XP
      3.4.1. POSIX
      3.4.2. Cygwin
      3.4.3. eCos on Windows XP Platform
      3.4.4. eCos Configuration Tool
      3.4.5. Selecting the Kernel Scheduler
   3.5. Industry deployment of eCos
4. Case Study B - RTLinux and VxWorks
   4.1. Scheduling Reviewed
   4.2. RTLinux
      4.2.1. Scheduling in RTLinux
   4.3. VxWorks
      4.3.1. Scheduling in VxWorks
      4.3.2. Industry deployment of VxWorks
   4.4. RTLinux Kernel using Virtualisation
      4.4.1. Red Hat 7.1 under VMware
      4.4.2. Installing VMware Tools
      4.4.3. RTLinux Kernel
5. Case Study C - LynxOS
   5.1. Overview of LynxOS
   5.2. LynxOS Implementation of CPU Scheduling
   5.3. Industry deployment of LynxOS
6. Conclusion
Appendix A: References
Table of Figures
Figure 1: Layer Structure of a computer system
Figure 2: Task Usefulness after Deadline
Figure 3: Selecting Cygwin Packages
Figure 4: Using Cygwin to download and install eCos
Figure 5: eCos Configuration Tool start under Windows XP
Figure 6: Building the eCos Template for Kernel
Figure 7: Specifying Pentium CPU features
Figure 8: eCos Kernel
Figure 9: NASA Mars Reconnaissance Orbiter (Wikipedia photo)
Figure 10: VMware 2.0 with web access
Figure 11: Using ISO image for Red Hat Virtual Machine
Figure 12: Notice to install plugin for Remote Console
Figure 13: Graphical installation of Red Hat with VMware Remote Console
Figure 14: Starting Red Hat in Remote Console using text mode
Figure 15: Installing VMware Tools under Red Hat guest
Figure 16: Using wget for RTLinux on Red Hat Virtual Machine
Figure 17: Unpacking files for RTLinux and Linux kernel 2.4
Figure 18: Patching Linux kernel with RTLinux
1. Introduction

1.1. Overview of Operating Systems (OS)

With growing technological advancement, nearly everything we work with has a computing element. From baby toys to bank ATMs, we witness the continued deployment of technologies in our everyday life. Most of these technologies contain some sort of operating system to ensure that they operate as expected, though this fact is not always known to the users of these systems. For example, most people are unaware that cellular phones contain operating systems such as the Nokia Symbian OS and Java-based embedded operating systems. Avionic systems that help control aircraft, artificial satellites and spacecraft contain more complex operating systems such as LynxOS. These operating systems ensure that the communication and navigation management features of avionic systems work as expected, to avoid potentially calamitous consequences. The first aim of this mini project is to understand how operating systems work. Thereafter, we focus on gaining a detailed understanding of CPU scheduling algorithms for safety critical systems such as those used in avionics. We install eCos on the Windows XP platform and, using virtualisation, we patch a Red Hat 7.1 kernel with RTLinux.
1.1.1. Operating System Definition

Different people provide varying definitions of the concept of an operating system. Silberschatz and Galvin (2004) define an operating system as the program that is running at all times on the computer (usually called the kernel). According to Wikipedia, an operating system is a software component of a computer system responsible for the management and coordination of activities and the sharing of the limited resources of the computer. The basic definition above cites keywords such as 'software' and 'computer system'. Computer systems divide software into three major classes: system software, programming software and application software. Although the distinction is arbitrary and often blurred, operating systems fall into the category of system software.
Having considered the OS definitions above, we next focus on their functionality.
1.1.2. Operating System as Resource Manager

A computer system is made of different components and resources. The OS lies between the hardware resources and the application program resources in a four-layer structure.
Figure 1: Layer Structure of a computer system
Given its position in the layered structure, the OS serves as the resource manager. According to Tanenbaum (2007), the operating system provides for an orderly and controlled allocation of processors, memories, and I/O devices among the various programs competing for system resources. Thus, its primary tasks include keeping track of resource usage, granting resource requests, accounting for usage and mediating conflicting requests from different programs and users.
1.1.3. Operating System as Control Program

As a control program, the OS ensures that programs execute without errors and prevents improper use of the computer. A single CPU can handle only one task at a time, yet in general purpose computer systems users may want to perform several tasks within the same span of time. Concurrent user tasks could include sending email, playing music and downloading software. For multitasking to be achieved, the OS controls which tasks, processes and threads are executed by the CPU at a given time. This is scheduling, which we shall look at in more detail in the following sections.
1.2. Types of Operating Systems

HowStuffWorks (2007) classifies operating systems based on the types of computers they control and the sort of applications they support.

Single-user, single-task
This OS is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-task operating system. The Palm OS has a simple, single-tasking environment that allows launching of full-screen applications with a basic, common GUI set.

Single-user, multi-tasking
This is the type of OS most people use on their desktop and laptop computers today. Microsoft's Windows and Apple's MacOS platforms are examples of operating systems that let a single user have several programs in operation at the same time. For example, a Windows user may be writing a note in a word processor while downloading an Internet file and printing a document.

Multi-user
A multi-user OS allows many different users to take advantage of the computer's resources simultaneously. The OS must ensure that the requirements of the various users are balanced, and that each of the programs they use has sufficient and separate resources, so that a problem with one user does not affect the entire community. Unix, VMS and mainframe operating systems, such as MVS, are examples of multi-user operating systems.
Real-Time Operating System (RTOS)
A real-time OS is a multitasking operating system intended for real-time applications. Such applications include embedded systems (programmable thermostats, household appliance controllers, mobile telephones), industrial robots, spacecraft, and scientific research equipment. We shall explore real-time operating systems in detail later.

Other classifications: GUI vs CLI
Several Internet resources classify operating systems into Graphical User Interface (GUI) OS, such as Windows, and Command Line Interface (CLI) OS, such as typical Unix. Wikipedia states that "while technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions."
Many computer operating systems allow users to install or create any interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, GNU/Linux, Minix) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
As stated above, an OS can allow a user to install or create any interface they desire. However, if a million users created their own interfaces, it would be hard to classify operating systems based on interfaces. In addition, if we follow the definition of an operating system as the program that is running at all times on the computer (the kernel), then a GUI is not a kernel and thus not an operating system.
1.3. General Purpose Systems vs Embedded Systems

1.3.1. General Purpose Systems

Timmerman (2007) states that general purpose computers contain a central processing unit and memory (be it volatile, fixed, on chips, on a disk, or a diskette), and run a stored program (software). General purpose computer peripherals include equipment such as monitors, keyboards, mice, docking stations, printers, scanners, disk drives, tape drives, modems, wireless cards and web cameras. These computers can be programmed to do different kinds of tasks, unlike special purpose computers, which are limited by design to a specific task. General purpose computers include mainframes, servers and microcomputers (desktops and laptops).
1.3.2. Embedded Systems (EMS)

Timmerman (2007) quotes Javid's definition of an EMS: "an embedded system is a computer that does not look like a computer." This is a pretty simple and interesting definition of an embedded system. It does not look like a computer, so what does it look like? To give this more scientific light, the following definition of an EMS is adopted in the same publication:
"An embedded system (EMS) is an electronic system with dedicated functionality built into its hardware and software. The hardware is microprocessor based and uses some memory to keep the software and data, and provides an interface to the world or system it is part of."
EMS differ from general purpose systems because (for cost and size reasons) they mostly have less processing capability, use limited memory resources and are much more power aware. Some examples are aircraft flight control systems, car cruise control, traffic light control, car entertainment systems, factory automation systems, cell phones, iPods, tablets and kiosks.
1.4. Real-Time EMS

A real-time system responds in a (timely) predictable way to all individual (unpredictable) arrivals of external stimuli. The most important word is predictability. The system should respond to each individual external event in a predictable way; this means before the deadline defined in the system requirements. It is important to note that average performance is not the issue! (Timmerman 2007)

Real-time system types
An embedded system does not necessarily need to have predictable behaviour, and in that case it is not a real-time system. In a well-designed RT system, each individual deadline should be met. Deadlines are the maximum time limit for any event. There are different types of real-time systems:
Hard real-time (1): missing an individual deadline results in catastrophic failure of the system.
Hard real-time (2): missing a deadline entails an unacceptable quality reduction.
Soft real-time: deadlines may be missed and can be recovered from.
General purpose computers do not have to meet specific deadlines. Instead, the system's requirements are expressed in terms of average performance.
1.5. Hard Real-Time vs Soft Real-Time Operating Systems

Silberschatz and Galvin (2004) state that a soft real-time system gives priority to real-time tasks over non-real-time tasks, while a hard real-time system guarantees that real-time tasks are completed within their required deadlines.
For hard real-time systems one can say that the deadline must be met; for soft real-time systems one says that the deadline should be met (Timmerman 2007).
A safety-critical system is a real-time system with catastrophic results in case of failure. Figure 2: Task Usefulness after Deadline represents the usefulness of a task after a deadline is missed. The curve clearly indicates that the task usefulness drops to 0 if a hard real-time deadline is not met.
Figure 2: Task Usefulness after Deadline
The achievement of safety and a high quality of service in hard real-time systems results from a combination of many hardware and software design factors. In the following sections we look at scheduling, with an emphasis on safety critical scheduling algorithms for hard real-time operating systems.
2. Scheduling

2.1. Overview of Scheduling

CPU scheduling is a way of determining which processes run when there are multiple runnable processes, in some cases with priorities. Scheduling is a key concept in computer multitasking, multiprocessing and real-time operating system design. Scheduling is handled by the kernel scheduler. In section 1.2 we looked at multitasking and multiprocessing operating systems, which are increasingly in demand. Computer systems are increasingly moving from one processor to multiple processors, given the power advantage multiple processors have over single processors. Multitasking and multiprocessing demand effective CPU task scheduling.
As stated above, scheduling is carried out by the operating system scheduler. The scheduler is concerned mainly with:
CPU utilisation,
Throughput - the number of processes that complete their execution per time unit,
Turnaround time - the amount of time to execute a particular process,
Waiting time - the time a process has been waiting in the ready queue, and
Response time - the time from when a request is submitted until the first response is produced.
In real-time environments, such as mobile devices and automatic control in industry (for example robotics), the scheduler must also ensure that processes can meet deadlines. This is crucial for keeping the system stable and safe.
2.2. Types of Operating System Schedulers

Stallings (2004) identifies different types of schedulers: long-term, mid-term (or medium-term) and short-term schedulers.

2.2.1. Long-term Scheduler

The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue. Typically, for a desktop computer there is no long-term scheduler as such, and processes are admitted to the system automatically. However, this type of scheduling is crucial for a real-time operating system, as the system's ability to meet process deadlines may be compromised by the slowdowns and contention resulting from the admission of more processes than the system can safely handle.
2.2.2. Mid-term Scheduler

The mid-term scheduler is present in all systems with virtual memory. It temporarily removes processes from main memory and places them on secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in").
2.2.3. Short-term Scheduler

The short-term scheduler (dispatcher) decides which of the ready, in-memory processes is to be executed next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive, in which case the scheduler is unable to "force" processes off the CPU.
2.3. How Scheduling Works

A task can have three states: running, ready, blocked. Only one task per CPU is running at a time. Most tasks are blocked most of the time, and the others are in the ready state. Schedulers use list data structures; the choice of data structure depends on the maximum number of tasks that can be on the ready list.
The data structure of the ready list in the scheduler is designed to minimise the worst-case length of time spent in the scheduler's critical section, during which pre-emption is inhibited and, in some cases, all interrupts are disabled.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest-priority task.
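To make the ready-list description concrete, the sketch below shows a minimal task structure and a pick loop in C. It is illustrative only: the task structure, names and linked-list layout are assumptions made for this example, not taken from any of the case-study kernels.

#include <stddef.h>

/* Illustrative task states, mirroring the running/ready/blocked states above. */
typedef enum { TASK_RUNNING, TASK_READY, TASK_BLOCKED } task_state_t;

typedef struct task {
    int          priority;   /* lower number = higher priority, as in eCos */
    task_state_t state;
    struct task *next;       /* singly linked ready list */
} task_t;

/* Pick the highest-priority READY task by scanning the ready list.
 * A real kernel keeps this scan short and runs it with pre-emption
 * inhibited, which is why the list layout matters for worst-case latency. */
static task_t *pick_next(task_t *ready_list)
{
    task_t *best = NULL;
    for (task_t *t = ready_list; t != NULL; t = t->next) {
        if (t->state == TASK_READY && (best == NULL || t->priority < best->priority))
            best = t;
    }
    return best;
}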
2.4. Scheduling Algorithms

A scheduling algorithm is the method by which threads, processes or data flows are given access to system resources (e.g. processor time, communications bandwidth). The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking (executing more than one process at a time) and multiplexing (transmitting multiple flows simultaneously). The main purposes of scheduling algorithms are to minimise resource starvation and to ensure fairness amongst the parties utilising the resources. In safety critical systems, scheduling algorithms must in addition ensure that tight deadlines are met.
2.4.1. Scheduling Algorithms for Real-Time OS

The algorithms discussed in the following section have been deployed by real-time systems like VxWorks and LynxOS. Sections 3 to 5 cover case studies of real-time operating systems that have implemented these and other algorithms.

Round-robin scheduling
Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system. It assigns time slices to each process in equal portions and in order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free.
First-in-first-out (FIFO) scheduling
The scheduler runs the task with the highest priority first. If two or more tasks share the same priority level, they are scheduled in order of their arrival, completing the first-arrived task before continuing with the next one. Each task occupies the CPU until it finishes or a task with higher priority arrives.
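POSIX, which we return to in section 3.4.1, exposes this policy as SCHED_FIFO. The minimal sketch below shows how a thread might request it through the standard pthread attribute calls; the priority value is illustrative, and on most systems setting a real-time policy requires sufficient privileges.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Illustrative: create a thread under the POSIX SCHED_FIFO policy, i.e.
 * fixed priority, run-to-completion unless a higher-priority thread arrives. */
static void *work(void *arg) { (void)arg; /* real-time work would go here */ return NULL; }

int start_fifo_thread(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 10 };  /* illustrative priority */
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    if (pthread_create(&tid, &attr, work, NULL) != 0) {
        perror("pthread_create");   /* typically fails without sufficient privileges */
        return -1;
    }
    return pthread_join(tid, NULL);
}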
2.5. Safety Critical Scheduling Algorithms

A hard real-time system guarantees that real-time tasks are completed within their required deadlines. A safety-critical system is a real-time system with catastrophic results in case of failure. Most hard real-time systems are safety critical. Safety critical scheduling algorithms are implemented in a pre-emptive scheduler.
Pre-emptive scheduling: pre-emption is the act of temporarily interrupting a task being carried out by a computer system, without requiring its cooperation, and with the intention of resuming the task at a later time. Such a change is known as a context switch.
Pre-emptive time slicing: the period of time for which a process is allowed to run in a pre-emptive multitasking system is generally called the "time slice". The scheduler is run once every time slice to choose the next process to run. If the time slice is too short, the scheduler itself will consume too much processing time.
Below we look at the different algorithms that address the hard deadline requirements of hard real-time, safety critical systems.
2.5.1. Rate Monotonic Algorithm (RMA)

RMA is a procedure for assigning fixed (static) priorities to tasks to maximize their schedulability. A task set is considered schedulable and deterministic if all tasks meet all deadlines all the time.

How it works
Assign the priority of each task according to its period, so that the shorter the period, the higher the priority. If the period of Task 1 is shorter than the period of Task 2, the higher priority is assigned to Task 1.
RMA is an optimal static-priority algorithm: if a task set cannot be scheduled using the RMA algorithm, it cannot be scheduled using any other static-priority algorithm.
One major limitation of fixed-priority scheduling such as RMA is that it is not always possible to fully utilise the CPU. Even though RMA is the optimal fixed-priority scheme, it has a worst-case schedulable bound of W(n) = n * (2^(1/n) - 1), where n is the number of tasks in the system. As you would expect, the worst-case schedulable bound for one task is 100%. But as the number of tasks increases, the schedulable bound decreases, eventually approaching its limit of about 69.3% (ln 2).
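The bound can be applied directly as a sufficient schedulability test: accept the task set only if its total utilisation stays at or below W(n). A minimal sketch in C is shown below; the task model (worst-case execution time and period) and the sample task set are assumptions made for this example.

#include <math.h>
#include <stdio.h>

/* Sufficient test for RMA: a set of n periodic tasks is schedulable
 * if the total CPU utilisation does not exceed n * (2^(1/n) - 1). */
struct task { double wcet; double period; };   /* illustrative task model */

static int rma_schedulable(const struct task *set, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += set[i].wcet / set[i].period;
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f\n", u, bound);
    return u <= bound;   /* for 1 task the bound is 1.000; it tends to ln 2 = 0.693 */
}

int main(void)
{
    struct task set[] = { {1.0, 4.0}, {2.0, 8.0}, {1.0, 10.0} };  /* U = 0.6 */
    printf("RMA schedulable: %s\n", rma_schedulable(set, 3) ? "yes" : "no");
    return 0;
}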
2.5.2. Earliest Deadline First (EDF)

EDF is a dynamic scheduling algorithm used in real-time operating systems. It places processes in a priority queue such that when a scheduling event occurs (a task finishes, a new task is released, etc.) the queue is searched for the process closest to its deadline, and that process is executed next.
On preemptive uniprocessors, EDF is an optimal scheduling algorithm: if a collection of independent jobs, each characterised by an arrival time, an execution requirement and a deadline, can be scheduled such that all the jobs complete by their deadlines, then EDF will schedule this collection of jobs such that they all complete by their deadlines.
When periodic processes have deadlines equal to their periods, EDF has a utilisation bound of 100%, guaranteeing that all deadlines are met provided that the total CPU utilisation is not more than 100%.
However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable (it is a function of the exact deadlines and the time at which the overload occurs). For this reason, most real-time computer systems use fixed-priority scheduling (usually rate-monotonic scheduling). With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss deadlines, while the highest-priority process will still meet its deadline.
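The core of EDF is the selection step: at every scheduling event, dispatch the ready job with the nearest absolute deadline. The sketch below illustrates that selection in C; the job structure and names are assumptions made for the example, not any particular RTOS API.

#include <stddef.h>

/* Illustrative EDF dispatch: on every scheduling event, run the ready job
 * whose absolute deadline is nearest. RTLinux, for example, exposes EDF
 * through its own scheduler module rather than code like this. */
struct job { long abs_deadline; int ready; };

static int edf_pick(const struct job *jobs, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!jobs[i].ready)
            continue;
        if (best < 0 || jobs[i].abs_deadline < jobs[best].abs_deadline)
            best = i;
    }
    return best;   /* index of the job to dispatch, or -1 if none is ready */
}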
3. Case Study A - eCos

3.1. Overview of eCos

eCos (embedded configurable operating system) is an open source, royalty-free, real-time operating system (ecos.sourceware 2008). eCos is designed to be customisable to application requirements of run-time performance and hardware needs. It is programmed in the C programming language, and has compatibility layers and APIs for POSIX.
eCos was designed for devices with memory sizes in the tens to hundreds of kilobytes, or with real-time requirements. It can be used on hardware with too little RAM to support embedded Linux, which currently needs a minimum of about 2 MB of RAM, not including application and service needs.
3.2. eCos Implementation of CPU Scheduling

eCos implements two kinds of schedulers: bitmap and MLQ (Multi-Level Queue) scheduling. Both of them support preemption, and it is possible to extend the eCos kernel to handle other schedulers as well. Both schedulers use numerical priority levels that range from 0 to 31, where 0 is the highest priority. According to Salenby and Lundgren (2006), MLQ scheduling also supports SMP (Symmetric Multiprocessing). Symmetric multiprocessing involves a multiprocessor computer architecture where two or more identical processors connect to a single shared main memory.
3.2.1. Bitmap scheduling

Bitmap scheduling consists of a queue of threads that are loaded into memory. Each thread in the queue has an associated priority. There are 32 different priority levels available and each priority level can be associated with only one thread. This limits the total number of queued threads in the system to 31 (one being the idle thread). The scheduler always runs the thread with the highest current priority. The scheduler can be configured with or without preemption. If preemption is enabled and a thread with a higher priority than the currently running thread enters the queue, the scheduler will preempt and run the thread with the highest priority.

Analysis of bitmap scheduling
Because of the simplistic design of the scheduler (consisting only of a simple priority queue), finding the highest-priority thread and switching to it is fast, and thus the dispatch latency is low. The simplistic design also makes it easy to predict system behaviour, opening the possibility of a deterministic system overall. As noted above, the scheduler is constrained to 31 threads in the system.
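The low, predictable dispatch latency comes from representing the 32 priority levels as bits in a single machine word, so the scheduler only has to find the lowest set bit. The sketch below illustrates the idea in C; it is not the eCos source, just a minimal model of a bitmap ready map.

#include <stdint.h>

/* Illustration of bitmap scheduling: bit p of 'ready_map' is set when the
 * single thread at priority p (0 = highest) is ready to run. */
static uint32_t ready_map;

static void mark_ready(int prio)   { ready_map |=  (1u << prio); }
static void mark_blocked(int prio) { ready_map &= ~(1u << prio); }

/* Find the highest-priority ready thread: the lowest set bit.
 * The cost is a handful of instructions regardless of how many threads
 * exist, which is what keeps the dispatch latency low and predictable. */
static int highest_ready(void)
{
    if (ready_map == 0)
        return -1;                       /* only the idle thread can run */
    int prio = 0;
    while ((ready_map & (1u << prio)) == 0)
        prio++;
    return prio;
}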
3.2.2. MLQ scheduling

MLQ scheduling is a bit more complicated than bitmap scheduling. Instead of just having one queue for all threads and all priorities, this scheduler uses a set of queues, where every queue contains a number of threads, all with the same priority. Each queue has its own scheduling algorithm, which by default is a time-slicing algorithm that shares the CPU equally among the threads in the queue. The different priority levels may have different schedulers. These queues are themselves scheduled by a normal priority queue, where higher-priority queues are scheduled first. The threads are divided among the different priorities based on some property of the thread. This scheduler also supports preemption. MLQ also has support for multiple processors with SMP (Symmetric Multiprocessing), where each processor uses its own scheduler.
Analysis of MLQ scheduling
A more complicated scheduling algorithm makes the dispatch latency slightly higher than with bitmap scheduling. The MLQ scheduler contains more advanced features than the bitmap scheduler. This allows the developer to build systems with a distinct separation of background and foreground tasks, where tasks have different time requirements. Since this scheduler uses a more advanced algorithm, it is harder to predict system behaviour, which makes this scheduler hard to use when trying to create a deterministic system. In contrast to the bitmap scheduler, MLQ scheduling can handle as many threads as can fit into memory.
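The MLQ structure can be pictured as an array of per-priority run queues consulted from the highest priority downward, with time slicing rotating threads within a level. The following sketch is a simplified illustration of that structure, not the eCos implementation.

#include <stddef.h>

#define NUM_PRIO 32                       /* eCos-style 0..31, 0 = highest */

struct thread { struct thread *next; };

/* One FIFO run queue per priority level; time slicing rotates within a level. */
static struct thread *run_queue[NUM_PRIO];

/* Dispatch: take the first thread from the highest non-empty queue. */
static struct thread *mlq_pick(void)
{
    for (int p = 0; p < NUM_PRIO; p++) {
        if (run_queue[p] != NULL) {
            struct thread *t = run_queue[p];
            run_queue[p] = t->next;       /* the caller re-queues t at the tail
                                             of its level when its slice expires */
            return t;
        }
    }
    return NULL;                          /* nothing ready: idle */
}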
3.3. Is eCos a Safety Critical OS?

With a limited number of tasks, eCos should be configured with the bitmap scheduler. With this choice the system can be made deterministic and, therefore, be suitable for hard real-time systems. It would thus qualify as safety critical.
With a large number of tasks with different time requirements, eCos MLQ is good at handling large amounts of diverse tasks. Systems handling large amounts of diverse tasks are usually not considered hard real-time systems, but in the event that such a system is hard real-time, eCos may not guarantee predictability. Thus, it would not be safety critical.
3.4. eCos Installation using POSIX on Windows XP

3.4.1. POSIX

Portable Operating System Interface (POSIX) is the collective name of a family of related standards specified by the IEEE to define the application programming interface (API), along with shell and utilities interfaces, for software compatible with variants of the Unix operating system (IEEE 2001). However, the standard can apply to any operating system.
3.4.2. Cygwin

Cygwin is a Linux-like environment for Windows. It consists of two parts: a DLL (cygwin1.dll), which acts as a Linux API emulation layer providing substantial Linux API functionality, and a collection of tools which provide a Linux look and feel. The Cygwin DLL currently works with all recent, commercially released x86 32-bit and 64-bit versions of Windows, with the exception of Windows CE (Cygwin 2007).
3.4.3. eCos on Windows XP Platform

In order to install eCos under Windows XP, the POSIX environment provided by Cygwin is a requirement, so Cygwin was installed first. Detailed steps to install Cygwin and eCos are available from http://www.cygwin.com/setup.exe. Note: when installing Cygwin, the additional packages gcc, make, sharutils, tcltk and wget must be selected if Cygwin is going to be used for eCos. To select these packages see Figure 3: Selecting Cygwin Packages.
Figure 3: Selecting Cygwin Packages
Installing eCos
Under the Cygwin bash shell, use the command below to download the installer:
wget --passive-ftp ftp://ecos.sourceware.org/pub/ecos/ecos-install.tcl
See Figure 4: Using Cygwin to download and install eCos.
Figure 4: Using Cygwin to download and install eCos
3.4.4. eCos Configuration Tool

The eCos Configuration Tool is a graphical tool that helps configure and build a custom version of the eCos operating system. In setting up the eCos Configuration Tool there were a couple of hardships. After several attempts to install the configuration tool downloaded from http://ecos.sourceware.org, it was discovered that the tool did not work on Windows XP: when started, it would pop up a screen that disappeared after a few seconds, i.e. the window crashed. A new tool was downloaded from www.ecoscentric.com/. This new tool, when started, popped up error messages about missing Cygwin DLLs. To solve this issue, the Cygwin DLLs were copied into the Windows folder (C:/Windows), the root folder of Cygwin (C:/cygwin) and the root folder of eCos (C:/cygwin/opt/ecos). The errors stopped popping up and the configuration tool screen was finally obtained. See Figure 5: eCos Configuration Tool start under Windows XP.
Figure 5: eCos Configuration Tool start under Windows XP
Configuration Template
Building the configuration template for a particular hardware platform enables you to specify which features of eCos to add. The configuration template was developed with the eCos kernel. See Figure 6: Building the eCos Template for Kernel.
Figure 6: Building the eCos Template for Kernel
Pentium processors
Using the i386 architecture, you can choose to enable features for Pentium CPUs. The eCos installation was done on a Pentium dual core, so it was important to specify features for Pentium processors. See Figure 7: Specifying Pentium CPU features.
Figure 7: Specifying Pentium CPU features
3.4.5. Selecting the Kernel Scheduler

As discussed earlier, eCos implements the bitmap scheduler and the Multi-Level Queue scheduler. You can select which scheduler to work with. See Figure 8: eCos Kernel.
Figure 8: eCos Kernel
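Once a kernel with the chosen scheduler has been built, application threads are created through the eCos kernel C API and given priorities that the selected scheduler interprets as described in section 3.2. The sketch below is a minimal example assuming the standard kapi.h interface; the priority, stack size, tick delay and thread name are illustrative values.

#include <cyg/kernel/kapi.h>   /* eCos kernel C API */

#define STACK_SIZE 4096

static unsigned char stack[STACK_SIZE];
static cyg_thread    thread_data;
static cyg_handle_t  thread_handle;

/* Thread entry: in a real system this would be the periodic control task. */
static void control_task(cyg_addrword_t data)
{
    (void)data;
    for (;;) {
        /* do work, then sleep until the next activation */
        cyg_thread_delay(10);           /* delay measured in kernel ticks */
    }
}

void start_threads(void)
{
    /* Priority 5 (0 is highest).  Under the bitmap scheduler this priority
     * must be unique; under MLQ several threads may share it. */
    cyg_thread_create(5, control_task, 0, "control",
                      stack, STACK_SIZE, &thread_handle, &thread_data);
    cyg_thread_resume(thread_handle);   /* threads are created suspended */
}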
3.5. Industry deployment of eCos

Numerous companies are using eCos, and many successful products have been launched running eCos, including the Brother HL-2400 CeN network color laser printer, the Delphi Communiport, and the Iomega HipZip digital audio player.
4. Case Study B - RTLinux and VxWorks

4.1. Scheduling Reviewed

In section 2, we discussed scheduling and different scheduling algorithms for hard real-time systems, including Earliest Deadline First and Rate Monotonic scheduling. Our eCos case study does not have a direct implementation of these algorithms, although it is possible to modify eCos to use EDF scheduling. RTLinux and VxWorks are among the successful hard real-time operating systems on the market today. Therefore, they offer good examples of how efficient scheduling can be implemented using these algorithms.
4.2. RTLinux

RTLinux (or Real-Time Linux) is an extension of Linux into a real-time operating system. RTLinux supports hard real-time (deterministic) operation through interrupt control between the hardware and the operating system. Interrupts needed for deterministic processing are processed by the real-time core, while other interrupts are forwarded to the non-real-time operating system. The operating system (Linux) runs as a low-priority thread. First-In-First-Out (FIFO) pipes or shared memory can be used to share data between the operating system and the real-time core.
4.2.1. Scheduling in RTLinux

RTLinux allows users to write their own scheduler to be used at runtime, but gives the user a choice of three standard algorithms (a minimal periodic-task sketch follows the list):
1. Simple priority-driven scheduler,
2. Rate Monotonic (RM) scheduler, and
3. Earliest Deadline First (EDF) scheduler.
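As an illustration of how these schedulers are used, the sketch below follows the usual RTLinux V3 pattern of a kernel module that creates one periodic real-time thread. The period and the exact calls are an assumption based on the RTLinux 3.x examples, so treat this as indicative rather than a tested module.

#include <rtl.h>
#include <time.h>
#include <pthread.h>

static pthread_t task;

static void *periodic_task(void *arg)
{
    (void)arg;
    /* Make this thread periodic: run every 1 ms under the RT scheduler. */
    pthread_make_periodic_np(pthread_self(), gethrtime(), 1000000);
    while (1) {
        pthread_wait_np();          /* block until the next period */
        /* hard real-time work goes here */
    }
    return NULL;
}

int init_module(void)
{
    /* pthread_create here creates a thread in the RTLinux real-time core,
     * not an ordinary Linux user-space thread. */
    return pthread_create(&task, NULL, periodic_task, NULL);
}

void cleanup_module(void)
{
    pthread_cancel(task);
    pthread_join(task, NULL);
}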
4.3. VxWorks

VxWorks is designed for use in embedded systems. Unlike "native" systems such as Unix, VxWorks development is done on a "host" machine running Unix or Windows, cross-compiling target software to run on various "target" CPU architectures. The key features of the current VxWorks OS are a multitasking kernel with preemptive and round-robin scheduling, and fast interrupt response.
4.3.1. Scheduling in VxWorks

All work carried out in VxWorks is in the form of tasks. Each task can be in one of four different states: ready, delayed, pending and suspended. "Ready" tasks are available for running. "Delayed" tasks are sleeping for a set amount of time. "Pending" tasks are waiting for a resource to become available. The "suspended" state is what newly created tasks are set to until they are activated; since activation is usually done when the task is created, this state is primarily used for debugging purposes. VxWorks uses a priority-based preemptive round-robin scheduling algorithm. Each task has a priority level between 0 and 255, where 0 is the highest priority and 255 the lowest. If a task with higher priority than the task currently running on the CPU becomes ready, the scheduler suspends the first task and sets the CPU to run the second task. If the second task has the very same priority level as the first task, round-robin scheduling is provided. As mentioned earlier, a task can also move into the pending state, for example if a resource for that task is not available. The scheduler then swaps back to a task with lower priority until the resource becomes available again. VxWorks also supports the POSIX API for real-time threads, which makes available both the round-robin scheduling described above and FIFO scheduling. VxWorks scheduling is thus flexible and adapts easily to the needs of the customer.
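As a concrete illustration, the sketch below spawns a task at an assumed priority and enables round-robin time slicing using the classic VxWorks task API; the task name, priority, stack size and slice length are illustrative values rather than recommendations.

#include <vxWorks.h>
#include <taskLib.h>
#include <kernelLib.h>

/* Entry point for a task; VxWorks passes up to ten integer arguments. */
static int sensorTask(int a1, int a2, int a3, int a4, int a5,
                      int a6, int a7, int a8, int a9, int a10)
{
    for (;;) {
        /* sample sensors, then yield until the next tick */
        taskDelay(1);
    }
    return OK;
}

void startSensorTask(void)
{
    /* Enable round-robin among equal-priority tasks: one clock tick per slice. */
    kernelTimeSlice(1);

    /* Priority 100 out of 0..255, where 0 is the highest priority. */
    taskSpawn("tSensor", 100, 0, 8192,
              (FUNCPTR)sensorTask, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}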
4.3.2. Industry deployment of VxWorks

VxWorks has been deployed in a number of products, including the Honda robot ASIMO, the Airbus A400M airlifter, the Boeing 787 airliner, the Mars Reconnaissance Orbiter, and the Phoenix Mars Lander. See Figure 9: NASA Mars Reconnaissance Orbiter.
Figure 9: NASA Mars Reconnaissance Orbiter (Wikipedia photo)
4.4. RTLinux Kernel Using Virtualisation

As discussed in section 4.2, RTLinux is a hard real-time operating system that implements Earliest Deadline First and Rate Monotonic scheduling. In this section we use virtualization (VMware Server 2.0) to install Red Hat Linux 7.1 and configure the RTLinux kernel.
Virtualization refers to the abstraction of computer resources: platform virtualization separates an operating system from the underlying platform resources.
VMware is a virtualization tool. VMware Server 2.0 provides a web interface using the Tomcat web server.
VMware was installed under a Windows XP Professional base operating system. See Figure 10: VMware 2.0 with web access. A new virtual machine for Red Hat 7.1 was created using ISO images that were placed in the virtual CD/DVD drive. It is also possible to install Red Hat Linux 7.1 in a virtual machine using the standard Red Hat distribution CDs. As you go through the steps to create a Red Hat virtual machine, you specify memory and processors, hard disk, CD, and network properties (Bridged).
4.4.1. Red Hat 7.1 under VMware

1. A Red Hat virtual machine was created using the VMware management interface. The detailed steps at the link below can be of great help:
http://www.virtuatopia.com/index.php/Creating_VMware_Server_2.0_Virtual_Machines
Figure 10: VMware 2.0 with web access
Note: at the stage of configuring the CD/DVD, an ISO image was chosen and added to the virtual CD drive. Figure 11: Using ISO image for Red Hat Virtual Machine shows the different items that have to be specified while setting up the virtual machine.
Figure 11: Using ISO image for Red Hat Virtual Machine
2. After creating the Red Hat virtual machine, a browser plug-in was added to Firefox to be able to access the remote console of the machine; otherwise VMware gives the message below when you try to use the console in a browser that lacks the plugin. (See Figure 12: Notice to install plugin for Remote Console.)
Figure 12: Notice to install plugin for Remote Console
3. The VMware management interface was used to verify that the virtual machine's devices were set up as expected before starting the installation. If you install the guest operating system from a physical CD-ROM disc, the CD-ROM drive is connected to the virtual machine. In this project, instead, the first ISO image was inserted in the virtual CD. The virtual machine should start booting from the virtual or physical CD, and the installation process will begin.
While going through this installation, a couple of error messages appeared due to compatibility issues with the infrastructure on which this was running. The challenge was to figure out a solution to the several errors; many were related to the virtual server database inventory.
Red Hat Linux 7.1 was installed using the graphical mode installer (see Figure 13: Graphical installation of Red Hat with VMware Remote Console), which you may choose when you first boot the installer. There is also the option of a text mode installer. At the Red Hat Linux 7.1 boot prompt, you are offered the following choices:
To install or upgrade a system in graphical mode;
To install or upgrade a system in text mode, type: text <ENTER>.
Note: when installing Red Hat under virtualization, the X Window System should be skipped until the VMware Tools are installed in the guest operating system. When the X Window System was installed without VMware Tools, the remote console flickered on the screen and disappeared every two seconds, making it impossible to do anything.
Figure 13: Graphical installation of Red Hat with VMware Remote Console
4. After the first steps, the installation proceeded as it would on a physical machine. Some important choices made included:
Packages - it is very important that when installing Red Hat you perform a Custom install and select Development, Kernel Development, Utilities, and Select Individual Packages. At the Individual Package Selection screen, go to Development --> Languages and select compat-egcs. Next, go to Development --> Libraries and select compat-glibc. Select any other packages you wish to install and continue the installation. To be sure of the selection, in this project all development packages were selected. Note: selecting ALL comes with greater storage requirements.
In the video card selection, Generic VGA compatible was chosen; Generic VGA is recommended if you are not sure of the VGA hardware of your machine. This installation was done on a laptop.
At one stage, near the end of the configuration, Generic Standard VGA was chosen.
The X Window installation was skipped. As mentioned earlier, the VMware guest tools are needed if X Window is to be used.
After completing the installation of the Red Hat Linux 7.1 guest operating system, the virtual machine was started using the remote console. (See Figure 14: Starting Red Hat in Remote Console using text mode.)
Figure 14: Starting Red Hat in Remote Console using text mode
4.4.2. Installing VMware Tools

Though the configuration of the RTLinux kernel does not require a graphical interface for a typical Unix user, a graphical interface is handy for non-Unix users. To have the graphical interface, you need to install VMware Tools. As noted earlier, during the tests in this project, whenever Red Hat was started in graphical mode it could not load the graphical interface; instead it loaded the command line, which flickered on the screen every few seconds. This was a result of the missing VMware Tools that enable the X Window System on Unix platforms.
Note: with a Red Hat Linux 7.1 guest, VMware Tools are installed from the Linux console.
The following steps came close to achieving the graphical interface, but may need to be adjusted from one platform to another.
1. Download the VMware Tools for Linux guest operating systems. The file for the VMware Tools is an ISO image, which was added to the virtual CD and mounted. Note that the VMware Tools ISO may by default already be placed in (C:/Program Files/VMware); in that case no fresh download is required.
2. Use these commands under the root account on Red Hat 7.1 (see Figure 15: Installing VMware Tools under Red Hat guest):
mount /dev/cdrom /mnt
cd /tmp
tar zxf /mnt/VMware-linux-tools.tar.gz
umount /mnt
Figure 15: Installing VMware Tools under Red Hat guest
3. Proceed to the VMware web interface and use the VMware Tools installer. Then proceed to the console and run the commands:
cd vmware-tools-distrib
./vmware-install.pl
On running ./vmware-install.pl, the installer asks a few questions about the directory paths where files such as binaries should be placed. The default paths were used in this project, though there was a challenge with the installation aborting at some stage in the process. After successful installation of the VMware Tools you can run startx, and your graphical environment is expected to work, though this is not guaranteed!
4.4.3. RTLinux Kernel

After successful installation of the Red Hat 7.1 guest OS under VMware Server, the RTLinux kernel was configured.
1. Download RTLinux 3.1 and Linux 2.4.4 using wget. Wget is a package for retrieving files using HTTP, HTTPS and FTP. It is a non-interactive command line tool, so it may easily be called from scripts, cron jobs, or terminals without X Window support. (See Figure 16: Using wget for RTLinux on Red Hat Virtual Machine.)
Download RTLinux 3.1: rtlinux-3.1.tar.gz
http://seg.ee.upatras.gr/ootcp/RTLinux%203.1/rtlinux-3.1.tar.gz
Download the Linux 2.4.4 kernel: linux-2.4.4.tar.gz
http://seg.ee.upatras.gr/ootcp/RTLinux%203.1/linux-2.4.4.tar.gz
Figure 16: Using wget for RTLinux on Red Hat Virtual Machine
2. Unpack the files - unzip/untar the downloaded files (see Figure 17: Unpacking files for RTLinux and Linux kernel 2.4).
Use the commands:
rm -rf /usr/src/rtlinux
mkdir /usr/src/rtlinux
cd /usr/src/rtlinux
tar -xzf /var/tmp/linux-2.4.4.tar.gz
tar -xzf /var/tmp/rtlinux-3.1.tar.gz
Figure 17: Unpacking files for RTLinux and Linux kernel 2.4
3. Patch the kernel (see Figure 18: Patching Linux kernel with RTLinux):
cd linux
patch -p1 < /usr/src/rtlinux/rtlinux-3.1/kernel_patch-2.4.4
Figure 18: Patching Linux kernel with RTLinux
5. Case Study C - LynxOS

5.1. Overview of LynxOS

LynxOS RTOS is a Unix-like real-time operating system from LynuxWorks (formerly "Lynx Real-Time Systems"). Sometimes known as the Lynx Operating System, LynxOS features full Portable Operating System Interface (POSIX) conformance and, more recently, Linux compatibility. LynxOS is mostly used in real-time embedded systems, in applications for avionics, aerospace, the military, industrial process control and telecommunications (Wikipedia 2007).
LynxOS components are designed for absolute determinism (hard real-time performance), which means that they respond within a known period of time. Predictable response times are ensured even in the presence of heavy I/O, due to the kernel's unique threading model, which enables interrupt routines to be extremely short and fast.
LynuxWorks, the developers of LynxOS, hold a patent on the technology that LynxOS uses to maintain hard real-time performance. Patent #5,469,571 was granted to LynuxWorks on November 21, 1995: "Operating System Architecture using Multiple Priority Light Weight Kernel Task-based Interrupt Handling."
5.2. LynxOS Implementation of CPU Scheduling

Meeting hard real-time performance requirements is a difficult challenge faced by designers of embedded systems. Many multiple input-output (I/O) devices and system processes give rise to complexities in timing and can degrade system performance. As discussed above, the eCos bitmap scheduler works well as a hard real-time OS with deterministic behaviour when handling few tasks; however, as the number of tasks increases, the deterministic behaviour is lost with the MLQ scheduler in eCos.
LynuxWorks developed "priority based quantum", a new way of managing kernel threads and priority processing, and was awarded a patent (US Patent 5,469,571) for this technology, which is part of the LynxOS real-time operating system.
Scheduling policies used by LynxOS are:
1. FIFO (First-In, First-Out),
2. The standard POSIX FIFO policy - a preemptible fixed-priority scheduler,
3. Round robin, and
4. The proprietary Lynx scheduling policy named "priority based quantum".
"Priority based quantum" is similar to the round robin policy, except that a configurable time quantum is defined for each level of priority. Put differently, the length of the time slice is not fixed but is a variable for each priority level.
The scheduler works with a total of 512 priority levels: 256 for user tasks and 256 for kernel threads.
5.3. Industry deployment of LynxOS

LynxOS is mostly used in real-time embedded systems, in applications for avionics, aerospace, the military, industrial process control and telecommunications. One recent deployment of LynxOS is for Airbus A380 superjumbo flight test and simulation.
The Airbus A380 superjumbo jet is a double-deck, wide-body, four-engine airliner, the largest Airbus yet, seating 555 passengers.
6. Conclusion

The importance of software in every aspect of life is growing. As discussed in this mini project report, some of the systems we interface with are safety-critical; failure of software could cause catastrophic consequences for human life in such systems. It is hard to imagine the Airbus A380 crashing with a full load on board! Imagine the anti-lock brake system (ABS) in your car failing at the time you need it most. We re-emphasize the quote in Embedded Systems: Definitions - Taxonomies, where Prof. Dr. Martin Timmerman quotes Javid's definition of an EMS: "an embedded system is a computer that does not look like a computer." In addition, we re-emphasize that for hard real-time systems the deadline must be met, while for soft real-time systems the deadline should be met. How the deadlines are met is the result of many design factors, among which is the scheduler.
Rate Monotonic and Earliest Deadline First are two scheduling algorithms popular with hard real-time systems. Using our case studies, we found that eCos can meet hard real-time requirements with its bitmap scheduler but cannot remain deterministic with MLQ scheduling.
RTLinux is a typical demonstration of a deployment of EDF and Rate Monotonic scheduling. RTLinux supports hard real-time (deterministic) operation through interrupt control between the hardware and the operating system. VxWorks deploys a multitasking kernel with pre-emptive and round-robin scheduling and fast interrupt response. LynxOS, a safety critical (hard real-time) OS, has largely been defined as hard deterministic; it meets hard real-time requirements with its patented approach to scheduling.
Appendix A: References
1. Andrew Tanenbaum (2001), Modern Operating Systems, 2nd Edition.
2. Silberschatz, Galvin (2004), Operating System Concepts, 7th Edition.
3. Wikipedia.
4. HowStuffWorks (2007), www.computer.howstuffworks.com/operating-system3.htm
5. Prof. Dr. Martin Timmerman (2007), Embedded Systems: Definitions - Taxonomies.
6. Anthony J. Massa (2002), Embedded Software Development with eCos, http://www.phptr.com
7. http://www.cotsjournalonline.com/home/article.php?id=100164&pg=1
8. Gustav Salenby and Daniel Lundgren (2006), Comparison of scheduling in FreeRTOS and eCos.
9. eCos Glossary: Cheap36.com
10. Cygwin - http://www.cygwin.com/
11. eCosCentric - http://www.ecoscentric.com/
12. Free eCos - http://www.ecos.sourceware.org/
13. Daniel Forsberg, Magnus Nilsson (2006), Comparison between scheduling algorithms in RTLinux and VxWorks.
14. RTLinux - http://www.rtlinuxfree.com/
15. Virtualization - http://www.virtuatopia.com/index.php/Creating_VMware_Server_2.0_Virtual_Machines
16. Benjamin Ip (2006), Performance Analysis of VxWorks and RTLinux, COMS W4995-2, Languages of Embedded Systems, Department of Computer Science, Columbia University, NY.
17. http://www.lynuxworks.com
18. Stallings, William (2004), Operating Systems: Internals and Design Principles (fifth international edition), Prentice Hall. ISBN 0-13-147954-7.
19. IEEE POSIX Certification Authority (2001).
20. "POSIX 1003.1 FAQ version 1.12" (2006), OpenGroup - http://www.opengroup.org/austin/papers/posix_faq.html
