Chapter 4
Real-Time Operating Systems
Destaw M
Outline
Introduction
Context switching mechanisms
Scheduling policies
Message passing and shared memory communications
Inter-process communication
Introduction
An operating system is a program that:
Provides an “abstraction” of the physical machine
Provides a simple interface to the machine
An OS is also a resource manager:
Provides access to the physical resources of a machine
Provides abstract resources (for example, a file, a virtual page in memory, etc.)
Introduction
OS tasks
1. Process Management
Process creation
Process loading
Process execution control
Interaction of the process with signal events
Process monitoring
CPU allocation
Process termination
Introduction
2. Inter-process Communication
Synchronization and coordination
Deadlock detection
Process Protection
Data Exchange Mechanisms
3. File Management
Services for file creation, deletion, repositioning, and protection
4. Input/Output Management
Handles request and release subroutines for a variety of peripherals, and read, write, and reposition operations
Introduction
What is a real-time operating system (RTOS)?
From its name, an RTOS has two components:
Real-time
Operating system
First of all, it is an operating system which is:
Suitable for embedded system applications
Constrained by timing (deadline) requirements
A program that:
Schedules execution in a timely manner,
Manages system resources, and
Provides a consistent foundation for developing real-time embedded system applications
Cont.
Real-time systems are those in which the correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced
They are time-bounded systems
If the timing constraints of the system are not met, system failure is said to have occurred
Introduction
Classifications of RTOS
1. Hard Real-Time System
Failure to meet deadlines is fatal
Example: flight control system
2. Soft Real-Time System
Late completion of jobs is undesirable but not fatal
System performance degrades as more and more jobs miss deadlines
Example: online databases
RTOS Architecture
Usually it is just a kernel
But for complex systems it includes modules like:
Networking protocol stacks
Debugging facilities
Device I/O
RTOS Architecture
The kernel acts as an abstraction layer between the hardware and the applications
It is the core program that runs at all times on the computer
The kernel provides:
An interrupt handler
A task scheduler
Resource sharing flags, and
Memory management
Context switching mechanisms
A context switch
Also called a process switch or a task switch
The switching of the CPU from one process/task to another
An essential feature of multitasking operating systems
How does multitasking work with a single CPU?
In a multitasking operating system
The CPU seemingly executes multiple tasks simultaneously
This is an illusion of concurrency
It is achieved by means of context switching
Context switching
Task
An application is decomposed into small, schedulable, and sequential program units called tasks
Each task is governed by three time-critical properties:
Release time refers to the point in time from which the task can be executed
Deadline is the point in time by which the task must complete
Execution time denotes the time the task takes to execute
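The three timing properties above can be captured in a small record; the following is an illustrative Python sketch (the class and field names are hypothetical, not any particular RTOS API):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    release_time: int    # earliest instant the task may start (e.g. in ms)
    deadline: int        # absolute instant by which the task must finish
    execution_time: int  # worst-case time the task needs on the CPU

    def latest_start(self) -> int:
        # To meet its deadline, the task must begin no later than
        # deadline - execution_time.
        return self.deadline - self.execution_time

t = Task("sensor_read", release_time=0, deadline=10, execution_time=3)
print(t.latest_start())  # 7
```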
Context switching
Each task may exist in one of the following states:
Context switching
Dormant: task doesn’t require CPU time
Ready: task is ready to enter the active state, waiting for processor time
Active: task is running
Suspended: task put on hold temporarily
Pending: task waiting for a resource
Context switching
What happens during switching?
The context of the to-be-suspended task is saved
The context of the to-be-executed task is retrieved
Task Control Block (TCB):
Each task uses a TCB to remember its context
Accessible only by the RTOS
Information in a TCB typically includes:
Task_ID
Task_State
Task_Priority
Task_Stack_Pointer
Task_Prog_Counter
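The TCB fields and the save/restore step of a context switch can be sketched as follows; this is a simplified Python model (real kernels do this in assembly on actual registers), with made-up field values:

```python
from dataclasses import dataclass

@dataclass
class TCB:
    task_id: int
    state: str          # "dormant", "ready", "active", "suspended", or "pending"
    priority: int
    stack_pointer: int  # saved stack pointer register value
    prog_counter: int   # saved program counter register value

def context_switch(outgoing: TCB, incoming: TCB, cpu: dict) -> None:
    # Save the context of the to-be-suspended task into its TCB...
    outgoing.stack_pointer, outgoing.prog_counter = cpu["sp"], cpu["pc"]
    outgoing.state = "ready"
    # ...then retrieve the context of the to-be-executed task from its TCB.
    cpu["sp"], cpu["pc"] = incoming.stack_pointer, incoming.prog_counter
    incoming.state = "active"
```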
Scheduling policies
The scheduler keeps a record of the state of each task and allocates the CPU to one of them
More information about the tasks is required:
Number of tasks
Resource requirements
Execution time
Deadlines
Scheduling algorithms
Clock Driven Scheduling
Weighted Round Robin Scheduling
Priority Scheduling
Scheduling Algorithms
Clock Driven
All parameters about jobs (execution time/deadline) are known in advance
The schedule can be computed at regular time instants
Minimal runtime overhead
Not suitable for many applications
Scheduling Algorithms
Weighted Round Robin
Jobs are scheduled in FIFO manner
The time quantum given to a job is proportional to its weight
Example use: high-speed switching networks
Not suitable for precedence-constrained jobs (e.g., Job A can run only after Job B)
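The weighted round-robin idea can be sketched in a few lines of Python; this is an illustrative simulation (the quantum and job tuples are made up), not a real kernel scheduler:

```python
from collections import deque

def weighted_round_robin(jobs, base_quantum=1):
    """jobs: list of (name, weight, remaining_time) tuples, served FIFO.
    Each job receives a time slice proportional to its weight."""
    queue = deque(jobs)
    schedule = []  # (name, slice_length) pairs in execution order
    while queue:
        name, weight, remaining = queue.popleft()
        slice_len = min(base_quantum * weight, remaining)
        schedule.append((name, slice_len))
        remaining -= slice_len
        if remaining > 0:
            queue.append((name, weight, remaining))  # back of the FIFO queue
    return schedule

# "A" has twice the weight of "B", so it gets twice the quantum per turn.
print(weighted_round_robin([("A", 2, 4), ("B", 1, 3)]))
# [('A', 2), ('B', 1), ('A', 2), ('B', 1), ('B', 1)]
```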
Scheduling Algorithms
Priority Scheduling
The processor is never left idle when there are ready tasks
The processor is allocated to tasks according to their priorities
Priorities can be:
Static - assigned at design time
Dynamic - assigned at runtime
Priority Scheduling
Earliest Deadline First (EDF)
The process with the earliest deadline is given the highest priority
Least Slack Time First (LSF)
slack = relative deadline − remaining execution time
Rate Monotonic Scheduling (RMS)
A task’s priority is inversely proportional to its period
For periodic tasks
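The three priority rules can be compared side by side; this is an illustrative Python sketch with made-up task parameters (the dictionary keys are assumptions, not an RTOS API):

```python
def edf_pick(ready):
    # EDF: the task with the earliest absolute deadline has the highest priority.
    return min(ready, key=lambda t: t["deadline"])

def lsf_pick(ready, now=0):
    # LSF: slack = (deadline - now) - remaining execution time; least slack wins.
    return min(ready, key=lambda t: (t["deadline"] - now) - t["exec_left"])

def rms_pick(ready):
    # RMS: priority inversely proportional to period, so shortest period wins.
    return min(ready, key=lambda t: t["period"])

tasks = [
    {"name": "A", "deadline": 10, "exec_left": 1, "period": 20},
    {"name": "B", "deadline": 12, "exec_left": 11, "period": 5},
]
print(edf_pick(tasks)["name"])  # A: deadline 10 < 12
print(lsf_pick(tasks)["name"])  # B: slack 12-11=1 < 10-1=9
print(rms_pick(tasks)["name"])  # B: period 5 < 20
```

Note that EDF and LSF can disagree, as here: A is due first, but B has almost no slack left.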
Schedulers (Dispatchers)
Schedulers are the parts of the kernel responsible for determining which task runs next
Most real-time kernels use priority-based scheduling
Each task is assigned a priority based on its importance
The priority is application-specific
Priority-Based Kernels
Non-preemptive
Preemptive
Non-Preemptive Kernels
Perform “cooperative multitasking”
Each task must explicitly give up control of the CPU
This must be done frequently to maintain the illusion of
concurrency
Asynchronous events are still handled by ISRs
ISRs can make a higher-priority task ready to run
But ISRs always return to the interrupted task
Non-Preemptive Kernels
Advantages
Interrupt latency is typically low
Task-level response time is determined by the duration of the longest task
Less need to guard shared data
Disadvantage
Responsiveness
A higher priority task might have to wait for a long
time
Response time is nondeterministic
Preemptive Kernels
The highest-priority task ready to run is always given
control of the CPU
If an ISR makes a higher-priority task ready, the higher-
priority task is resumed (instead of the interrupted task)
Execution of the highest-priority task is deterministic
Task-level response time is minimized
Message passing and shared memory communications
Message Passing
Message passing is a mechanism used for inter-process communication
Communication is made by sending messages to recipients
Each process should be able to name the other processes
The producer typically uses the send() system call to send messages, and the consumer uses the receive() system call to receive messages
It can be synchronous or asynchronous
It can take place between processes running on a single machine, or over networked machines
Message passing and shared memory communications
Message Queue
Kernels provide an object called a message queue
Used to hold the messages that tasks send and receive
A buffer-like object through which tasks and ISRs send and receive messages to communicate and synchronize with each other
Message queues have:
An associated message queue control block (QCB),
A name,
A unique ID,
Memory buffers,
A message queue length,
A maximum message length
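The attributes above can be modeled in a minimal sketch; this is an illustrative Python class (the names, limits, and return conventions are assumptions), not a real RTOS message-queue API:

```python
from collections import deque

class MessageQueue:
    """Illustrative sketch of a kernel message queue object."""
    def __init__(self, name, queue_length, max_msg_length):
        self.name = name
        self.queue_length = queue_length      # max number of buffered messages
        self.max_msg_length = max_msg_length  # max size of one message
        self.buffer = deque()                 # the memory buffers

    def send(self, msg) -> bool:
        # Reject when the queue is full or the message exceeds the size limit.
        if len(self.buffer) >= self.queue_length or len(msg) > self.max_msg_length:
            return False
        self.buffer.append(msg)
        return True

    def receive(self):
        # Return the oldest message, or None when the queue is empty.
        return self.buffer.popleft() if self.buffer else None

q = MessageQueue("sensor_q", queue_length=4, max_msg_length=16)
q.send(b"temp=21")
print(q.receive())  # b'temp=21'
```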
Message passing and shared memory communications
Shared Memory Communication
An OS-provided abstraction that allows a memory region to be simultaneously accessed by multiple programs
One process creates an area in RAM which other processes can access
Since the processes can access the shared memory area like regular working memory, this is a very fast way of communicating
It is less flexible, however: for example, the communicating processes must be running on the same machine
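Python’s standard library exposes this abstraction directly; the sketch below creates a named region and attaches to it by name, as a second process would (for brevity both ends run in one process here):

```python
from multiprocessing import shared_memory

# One process creates a named area in RAM...
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# ...another process could attach to the same region by name and read it.
peer = shared_memory.SharedMemory(name=shm.name)
data = bytes(peer.buf[:5])
print(data)  # b'hello'

peer.close()
shm.close()
shm.unlink()  # free the region once all processes are done with it
```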
Inter process communication
Tasks usually need to communicate and synchronize with
each other for different reasons such as:
Accessing a shared resource
To signal the occurrence of events to each other
An RTOS provides built-in inter-task primitives, which are kernel objects that facilitate this synchronization and communication
Examples of such objects include:
Semaphores
Message queues
Signals, pipes, and so on
Semaphores
A semaphore is a kernel object that one or more tasks can acquire or release for:
Mutual exclusion
Signaling the occurrence of an event
Synchronizing activities among tasks
Semaphores have:
An associated semaphore control block (SCB),
A unique ID,
A user-assigned value (binary or a count), and
A task-waiting list
Semaphores (cont.)
There are two types:
Binary semaphores
Value is 0 or 1
If value = 0, the semaphore is not available
If value = 1, the semaphore is available
Counting semaphores
Value >= 0
Uses a count to allow the semaphore to be acquired or released multiple times
Semaphore Operations
Initialize (or create)
Value must be provided
Waiting list is initially empty
Wait (or pend)
Used for acquiring the semaphore
If the semaphore is available (the semaphore value is positive), the
value is decremented, and the task is not blocked
Otherwise, the task is blocked and placed in the waiting list
Most kernels allow you to specify a timeout
If the timeout occurs, the task will be unblocked and an error code
will be returned to the task
Semaphore Operations (cont.)
Signal (or post)
Used for releasing the semaphore
If no task is waiting, the semaphore value is incremented
Otherwise, make one of the waiting tasks ready to run
but the value is not incremented
Which waiting task should receive the semaphore?
Highest-priority waiting task
First waiting task
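The wait/signal semantics described above can be modeled in a short sketch; this is an illustrative single-threaded Python model (blocking is represented by a return value, and the waiting list is FIFO), not a real kernel semaphore:

```python
from collections import deque

class Semaphore:
    """Sketch of the kernel object: a count plus a task-waiting list."""
    def __init__(self, value=1):
        self.value = value
        self.waiting = deque()  # tasks blocked on this semaphore, FIFO order

    def wait(self, task) -> bool:
        # Pend: if available, decrement the count and continue running.
        if self.value > 0:
            self.value -= 1
            return True
        # Otherwise the task is blocked and placed in the waiting list.
        self.waiting.append(task)
        return False

    def signal(self):
        # Post: if a task is waiting, make it ready; the count is NOT incremented.
        if self.waiting:
            return self.waiting.popleft()
        # Otherwise just increment the count.
        self.value += 1
        return None

s = Semaphore(value=1)
s.wait("task_A")   # acquired; count drops to 0
s.wait("task_B")   # unavailable; task_B joins the waiting list
print(s.signal())  # task_B (woken; count stays 0)
```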
Sharing I/O Devices
Sharing I/O Device (cont.)
In the example, each task must know about the
semaphore in order to access the device
A better solution:
Encapsulate the semaphore
Encapsulating a Semaphore
Applications of Counting Semaphores
A counting semaphore is used when a resource can be
used by more than one task at the same time
Example:
Managing a buffer pool of 10 buffers
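The buffer-pool example can be sketched with Python’s standard `threading.Semaphore` as the counting semaphore; the pool size and helper names are illustrative assumptions:

```python
import threading

POOL_SIZE = 10
pool_sem = threading.Semaphore(POOL_SIZE)  # counting semaphore, count starts at 10
buffers = list(range(POOL_SIZE))           # the pool of free buffer IDs
lock = threading.Lock()                    # protects the free list itself

def get_buffer():
    pool_sem.acquire()       # blocks when all 10 buffers are already in use
    with lock:
        return buffers.pop()

def release_buffer(buf):
    with lock:
        buffers.append(buf)
    pool_sem.release()       # count goes back up; one blocked task may proceed

b = get_buffer()
# ... use the buffer ...
release_buffer(b)
```

Up to ten tasks can hold a buffer at once; the eleventh request blocks until some task releases one.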
RTOS need to provide
Multitasking Capabilities:
An RT application is divided into multiple tasks
The separation into tasks helps to keep the CPU busy
Short Interrupt Latency:
Interrupt latency = hardware delay to get the interrupt signal to the processor + time to complete the current instruction + time spent executing system code in preparation for transferring execution to the device’s interrupt handler
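The latency definition above is just a sum of three delays; the figures below are hypothetical, purely to illustrate the arithmetic:

```python
def interrupt_latency(hw_delay, current_instr, kernel_prep):
    # Latency = hardware signal delay + time to finish the current instruction
    #           + system code run before entering the interrupt handler.
    return hw_delay + current_instr + kernel_prep

# Hypothetical figures in nanoseconds:
print(interrupt_latency(200, 500, 1800))  # 2500
```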
RTOS need to provide
Fast Context Switch :
The time between the OS recognizing that the awaited event has arrived and the beginning of the waiting task is called context switch time (dispatch latency)
This switching time should be minimal
Control of Memory Management:
The OS should provide a way for a task to lock its code and data into real memory so that it can guarantee a predictable response to an interrupt
RTOS need to provide
Proper Scheduling:
The OS must provide facilities to properly schedule time-constrained tasks
Fine-Granularity Timer Services:
Millisecond resolution is the bare minimum
Microsecond resolution is required in some cases
Rich Set of Inter-Task Communication Mechanisms:
Message queues,
Shared memory,
Synchronization: semaphores, event flags
Examples of embedded system applications using an RTOS
TCP/IP network streaming using the VxWorks RTOS
Automatic Chocolate Vending Machine (ACVM) using the µC/OS-II RTOS
Digital camera
An Adaptive Cruise Control (ACC) system in a car
Smart card for security
Robotics
Mobile phone hardware and software
What to look for in a good RTOS
Determinism – does the RTOS have published numbers on the minimum,
average and maximum cycles its functions require?
Documented and minimum Interrupt latency
Minimum context switch time
Available plug-ins – USB Host, USB Device, TCP/IP, Bluetooth, SSH,
SSL, etc.
Compatibility of RTOS with your chosen tool chain (processor,
programming language and so on)
Overall cost of RTOS:
initial cost,
procuring source code of RTOS,
support costs,
royalties,
maintenance costs
THANK YOU!
Quiz (5 M)
What to look for in a good RTOS? (2 M)
What are the two types of RTOS? (1 M)
What are the functions of the kernel? (2 M)