
Two-level scheduling:

 Low level (CPU) scheduler uses multiple queues to select the next
process, out of the processes in memory, to get a time quantum.
 High level (memory) scheduler moves processes between memory and
disk, so that every process gets its share of CPU time
 Low-level scheduler keeps queues for each priority
 Processes in user mode have positive priorities
 Processes in kernel mode have negative priorities (lower is higher)

Unix low-level Scheduling Algorithm:


 Pick process from highest (non-empty) priority queue
 Run for 1 quantum (usually 100 ms.), or until it blocks
 Increment CPU usage count every clock tick
 Every second, recalculate priorities:
o Divide cpu usage by 2
o New priority = base + cpu_usage + nice
o Base is negative if the process is released from waiting in
kernel mode
 Use round robin for each queue (separately)

Unix low-level scheduling Algorithm - I/O:


 Blocked processes are removed from the queue; when the
blocking event occurs, they are placed in a high-priority queue
 The negative priorities are meant to release processes quickly
from the kernel
 Negative priorities are hardwired in the system, for example, -5
for Disk I/O is meant to give high priority to a process released
from disk I/O
 Interactive processes get good service, CPU bound processes get
whatever service is left...
Priority Calculation in Unix:
Pj(i) = Basej + CPUj(i−1)/2 + GCPUk(i−1)/(4 × Wk)
CPUj(i) = Uj(i−1)/2 + CPUj(i−1)/2
GCPUk(i) = GUk(i−1)/2 + GCPUk(i−1)/2
Pj(i) = Priority of process j at the beginning of interval i

Basej = Base priority of process j

Uj(i) = Processor utilization of process j in interval i

GUk(i) = Total processor utilization of all processes in group k during
interval i

CPUj(i) = Exponentially weighted average processor utilization by
process j through interval i

GCPUk(i) = Exponentially weighted average total processor utilization of
group k through interval i

 Wk = Weighting assigned to group k, with the constraint that

0 ≤ Wk ≤ 1 and ΣWk = 1 (summed over all groups)
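To make the recurrence concrete, here is a minimal numeric sketch in C that applies the three formulas above to a single process j in group k, assuming Basej = 60, Wk = 0.5, and 60 clock ticks of utilization per interval; the names and values are illustrative, not taken from any real kernel.

/* Illustrative fair-share recalculation for one process (not actual kernel code). */
#include <stdio.h>

int main(void)
{
    double base = 60.0, w_k = 0.5;     /* assumed Basej and group weight Wk */
    double cpu = 0.0, gcpu = 0.0;      /* decayed per-process and per-group usage */
    double u = 60.0, gu = 60.0;        /* ticks consumed in the previous interval */

    for (int i = 1; i <= 3; i++) {
        cpu  = u / 2.0  + cpu / 2.0;   /* CPUj(i)  = Uj(i-1)/2  + CPUj(i-1)/2  */
        gcpu = gu / 2.0 + gcpu / 2.0;  /* GCPUk(i) = GUk(i-1)/2 + GCPUk(i-1)/2 */
        double p = base + cpu / 2.0 + gcpu / (4.0 * w_k);
        printf("interval %d: priority = %.1f\n", i, p);   /* higher value = lower priority */
    }
    return 0;
}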

Unix Scheduling Priorities:


Unix processes have an associated system nice value which is used by the
kernel to determine when they should be scheduled to run. Lowering this value
makes a process execute more quickly, whereas raising it makes the process
execute more slowly so that it does not interfere with other system
activities.

The process scheduler, which is part of the Unix kernel, keeps the CPU busy
by allocating it to the highest priority process. The nice value of a process is
used to calculate the scheduling priority of a process. Other factors that are
taken into account when calculating the scheduling priority for a process
include the recent CPU usage and its process state, for example "waiting for
I/O" or "ready to run".

Normally, processes inherit the system nice value of their parent process. At
system initialization time, the system executes the init process with a system
nice value of 20, which is the system default priority. All processes inherit
this priority unless the value is modified with the nice command. A nice
value of 0 establishes an extremely high priority, whereas a value of 39
indicates a very low priority on SVR4-derived systems. On BSD-derived
systems, scheduling priorities range from 0 to 127. The higher the value, the
lower the priority, and the lower the value, the higher the priority.

Nice:
is a program found on Unix and Unix-like operating systems such as Linux.
It directly maps to a kernel call of the same name. nice is used to invoke a
utility or shell script with a particular priority, thus giving the process more
or less CPU time than other processes. A niceness of −20 is the highest
priority and 19 is the lowest. The default niceness of a process is
inherited from its parent process and is usually 0.
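As a minimal sketch (assuming a POSIX system), a process can lower its own priority through the nice(2) call that the nice utility wraps; the increment of 10 below is an arbitrary illustrative choice.

/* Add 10 to this process's nice value, i.e. lower its scheduling priority. */
#include <stdio.h>
#include <unistd.h>
#include <errno.h>

int main(void)
{
    errno = 0;
    int new_nice = nice(10);            /* returns the new nice value, or -1 with errno set */
    if (new_nice == -1 && errno != 0)
        perror("nice");
    else
        printf("new nice value: %d\n", new_nice);
    /* ... any CPU-bound work here now runs at reduced priority ... */
    return 0;
}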
Unix scheduler:

Process Scheduling in Unix:


 Based on multi-level feedback queues

 Priorities range from -64 to 63 (lower number means higher priority)

 Negative numbers reserved for processes waiting in kernel mode


(that is, just woken up by interrupt handlers)
 Time quantum = 1/10 sec (empirically found to be the longest
quantum that could be used without loss of the desired response for
interactive jobs such as editors)

 short time quantum means better interactive response

 long time quantum means higher overall system throughput


since there is less context-switch overhead and less processor cache
flushing.

 Priority dynamically adjusted to reflect

 resource requirement (e.g., blocked awaiting an event)

 resource consumption (e.g., CPU time)

Unix CPU Scheduler:


 Two values in the PCB

 p_cpu: an estimate of the recent CPU use

 p_nice: a user/OS settable weighting factor (-20..20) for


flexibility;
default = 0; negative increases priority; positive decreases
priority

 A process' priority is calculated periodically

priority = base + p_cpu + p_nice

and the process is moved to the appropriate ready queue

 CPU utilization, p_cpu, is incremented each time the system clock


ticks and the process is found to be executing.

 p_cpu is adjusted once every second (time decay)

 Possible adjustment: divide by 2 (that is, shift right)

 Motivation: Recent usage penalizes more than past usage


 Precise details differ in different versions (e.g. 4.3 BSD uses
current load (number of ready processes) also in the adjustment
formula).
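A commonly cited form of that load-dependent adjustment is sketched below; this is only an approximation of the 4.3BSD rule (the exact constants and scaling differ between kernel versions), with load taken to be the number of ready processes.

/* Hedged sketch of a 4.3BSD-style once-per-second p_cpu decay (not the real kernel code). */
#include <stdio.h>

double decay_p_cpu(double p_cpu, double p_nice, double load)
{
    /* The higher the load, the more slowly past CPU usage is forgotten. */
    double decay = (2.0 * load) / (2.0 * load + 1.0);
    return decay * p_cpu + p_nice;
}

int main(void)
{
    double p_cpu = 40.0;
    for (int sec = 1; sec <= 3; sec++) {
        p_cpu = decay_p_cpu(p_cpu, 0.0, 2.0);   /* p_nice = 0, load = 2 ready processes */
        printf("after second %d: p_cpu = %.2f\n", sec, p_cpu);
    }
    return 0;
}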

Example of process scheduling:

– Processes A, B, and C are created at the same time with base


priorities of 60

– Clock interrupts the system 60 times a second and increments the
usage counter for the running process
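A minimal simulation of this example, using the simplified recalculation rule from these notes (divide the usage count by 2, then priority = base + cpu_usage, with nice = 0 and lower numbers meaning higher priority); the printed trace is only as accurate as that simplified rule.

/* Simulate processes A, B, C with base priority 60 and 60 clock ticks per second. */
#include <stdio.h>

int main(void)
{
    int cpu[3]  = {0, 0, 0};         /* accumulated usage counters for A, B, C */
    int prio[3] = {60, 60, 60};      /* lower number = higher priority */

    for (int second = 1; second <= 5; second++) {
        /* Pick the process with the best (lowest) priority and run it for one second. */
        int run = 0;
        for (int j = 1; j < 3; j++)
            if (prio[j] < prio[run])
                run = j;
        cpu[run] += 60;              /* 60 clock ticks while it is running */

        /* Once-per-second recalculation for every process. */
        for (int j = 0; j < 3; j++) {
            cpu[j] /= 2;
            prio[j] = 60 + cpu[j];   /* base + cpu_usage, nice = 0 */
        }
        printf("after second %d: ran %c, priorities A=%d B=%d C=%d\n",
               second, 'A' + run, prio[0], prio[1], prio[2]);
    }
    return 0;
}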
RT Communication System Needs:
 Predictable communication service for real-time data:
• Determinism
• Timeliness
• Low complexity
• Testing
• Active redundancy (e.g., TMR)
• Certification
 Multicast: independent, non-intrusive observation; TMR
 Uni-directionality: separation of communication and computation
 Flexible best-effort communication service for the transmission
of non-real-time data coming from an open environment
 Support for streaming data
 Dependability
Limits in RT-Protocol Design:
 Temporal guarantees
 Synchronization domain
 Error containment
 Consistent ordering of events

Communication-Channel Characteristics:


 Bandwidth
 Propagation delay
 Bit length
 Protocol efficiency
Bandwidth:
Number of bits that can traverse the channel in a unit of time
Depends on:
 Physical characteristics of the channel (e.g., single wire, twisted
pair, shielding, optical fiber)
 Environment (disturbances)
Example:
Bandwidth limitation in cars due to EMI
(10Kbit/s for single wire, 1Mbit/s for unshielded twisted pair)
Propagation Delay:
time it takes a bit to travel from one end of the communication
channel to the other.
Determined by:
 the transmission speed of the electromagnetic wave
 the length of the channel
Bit Length:
 Number of bits that can traverse a channel during the
propagation delay
 Describes how many bits can “travel” simultaneously

Example:
Bandwidth of channel: b = 100Mbit/s
Length of channel: l = 1000m
➭ Bit length of channel: bl = b / cc × l, where cc ≈ 2 × 10^8 m/s is the
propagation speed on the channel:
bl = 10^8 bit/s / (2 × 10^8 m/s) × 1000 m = 500 bit
In embedded systems:
 Data flows :
• from the sensors and control panels to the central cluster of
processors
• between processors in the central cluster
• from processors to the actuators and output displays
 Communication overhead adds to the computer response time
In hard & soft RTS:
 Hard: use communication protocols that allow the communication
overhead to be bounded
 Soft: (e.g., multimedia & video conferencing)
• excessive delays in message delivery can significantly degrade
the quality of service
• occasional failure to meet a message-delivery deadline is not fatal
Comm. key performance:
 Traditional → system throughput
• How much data can be transferred over the network in one unit
of time from source to destination
 RTS → probability of delivering a message by a certain deadline
• Lost message = infinite delivery time
• Measures:
• the speed with which messages are delivered
• the probability of losing messages
Overheads causing delay:
 Formatting and / or packetizing the message
 Queueing the message, as it waits for access to the communication
medium
 Sending the message from the source to the destination
 Deformatting the message
Real-Time Traffic:
 Typically classified by:
• Its deadline
• Arrival pattern
• Priority
 In hard RTS (e.g., embedded applications): the deadline of the traffic is
related to the deadline of the task to which that communication
belongs
 In soft RTS (e.g., multimedia applications): the deadline is related
directly to the application
 Priority is based on the importance of that message class to the
application
 If there is a traffic overload, message priority can be used to
determine which messages are dropped, to ensure that the more
important traffic is delivered in a timely fashion
RT Traffic Rates:
 Constant rate: fixed-size packets are generated at periodic intervals
• Many sensors produce such traffic
• Smooth and not bursty → easy to handle, small buffer
 Variable rate: fixed-size packets generated at irregular intervals
(e.g., voice talkspurts) or variable-sized packets generated at
regular intervals (e.g., video)
• Bursty traffic → greater demands on buffer space
Communications media:
 Three most important media:
• Electrical
• Optical
• Wireless
 Each medium has a distinct set of properties:
• Bandwidth
• Distance
• Fault / interference etc.

Network Topologies:
Network topology is the arrangement of the various elements of a computer
network.
 Must be carefully chosen, since it affects the system response time
and reliability
 Broadly classified: point-to-point & shared
 Popular topologies: bus, ring, dual-ring, star, n-dimensional
hypercube, multistage network
 Physical topology vs logical topology
Choosing one topology over another can impact :
 Type of equipment a network needs
 Capabilities of the equipment
 Network growth
 Way a network is managed
CHOOSING A TOPOLOGY:
 BUS
– network is small
– network will not be frequently reconfigured
– least expensive solution is required
– network is not expected to grow much
 STAR
– it must be easy to add/remove PCs
– it must be easy to troubleshoot
– network is large
– network is expected to grow in the future
 RING
– network must operate reasonably under heavy load
– higher speed network is required
– network will not be frequently reconfigured
Important features of topology:
 Diameter: max distance (number of hops) between any two nodes
 Node degree: number of edges adjacent to each node → determines the
number of I/O ports per node and the number of links in the system
 Fault-tolerance: measures the extent to which the network can
withstand the failure of individual links and nodes while still
remaining functional
Sending message:
 Packet switching
 Circuit switching
 Wormhole routing → requires less node buffering
• Pipelining packet transmission in a multihop network
• Each packet is broken down into a train of flits, each about one
or two bytes long
• The sender transmits one flit per unit time, and the flits are
forwarded from node to node until they reach their destination
• Only the header flit in a train has the destination information;
each node simply forwards the next flit to the same node to which
it sent the previous flit in the train
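A minimal sketch of the flit-forwarding idea described above, assuming a single input port and a made-up routing table; the types and values are illustrative only, not part of any real interconnect.

/* Illustrative wormhole-routing node logic (not a real protocol implementation). */
#include <stdio.h>

enum flit_type { HEADER, BODY, TAIL };

struct flit {
    enum flit_type type;
    int dest;                 /* meaningful only in the header flit */
    char payload[2];          /* flits are about one or two bytes long */
};

static int routing_table[4] = {0, 1, 1, 2};   /* destination -> output port (made up) */
static int train_port = -1;                   /* port chosen when the header flit arrived */

int forward(struct flit f)
{
    if (f.type == HEADER)
        train_port = routing_table[f.dest];   /* only the header carries the destination */
    int out = train_port;                     /* body/tail flits follow the header's path */
    if (f.type == TAIL)
        train_port = -1;                      /* the tail releases the path for the next train */
    return out;
}

int main(void)
{
    struct flit train[3] = { {HEADER, 3, {0}}, {BODY, 3, {'h', 'i'}}, {TAIL, 3, {0}} };
    for (int i = 0; i < 3; i++)
        printf("flit %d -> output port %d\n", i, forward(train[i]));
    return 0;
}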

Protocols:
 Contention-based protocols
• Virtual-Time Carrier-Sensed Multiple Access (VTCSMA)
• Window protocol
 Token-based protocols
• Timed-token protocol
• Token-Ring protocol (IEEE 802.5)
 Stop-and-Go Multihop protocol
 Polled bus protocol
 Hierarchical round-robin protocol
 Deadline-based protocols
CSMA:
 CSMA is an efficient communication scheme when the end-to-end
propagation delay is much less than the average time to transmit a
packet and when the load is not very high
 CSMA is a truly distributed algorithm: each node decides for itself
when it will transmit.
Exploiting CSMA:
 Facts:
• Nodes do see a consistent time if their clocks are synchronized
• Nodes observe the same channel
 Each node has information about:
• The state of the channel
• The priorities of the packets waiting in its transmission buffer
to be transmitted over the network
• The time according to the synchronized clock
 A node has no idea of the priorities of any packets that may
be awaiting transmission at the other nodes
 Simply using the state of the channel and the priorities of its own
packets is not sufficient; the time information must also be used
Fault tolerance:
Fault-tolerance is the ability of a system to maintain its functionality even in
the presence of faults. The three basic notions are fault, error, and failure.
A fault is a defect or flaw that occurs in some hardware or software
component.
An error is a manifestation of a fault.
A failure is a departure of a system from the required service.

Consider for instance a system running on a multi-processor architecture: a


fault in one processor might cause it to crash (i.e., a failure), which will be
seen as a fault of the system. Therefore, the ability of the system to function
even in the presence of the failure of one processor will be regarded as fault-
tolerance instead of failure-tolerance.

Not all faults cause immediate failure: faults may be latent (activated but not
apparent at the service level), and later become effective. Fault-tolerant
systems attempt to detect and correct latent errors before they become
effective.

Faults are classified according to the following criteria:

 by their nature: accidental or intentional;


 by their origin: physical, human, internal, external, conception,
operational;
 by their persistence: transient or permanent.

Failures are classified according to the following criteria:

 by their domain: value failures and/or timing failures;


 by their perception by the user;
 by their consequences on the environment.

The means for fault-tolerance are either:

 error processing (to remove errors from the system's state), which
can be carried out either with recovery (rolling back to a previous
correct state) or with compensation (masking errors using the
internal redundancy of the system).
 fault treatment (to prevent faults from being activated again), which
is carried out in two steps: diagnosis (determining the cause,
location, and nature of the error) and then passivation (preventing
the fault from being activated again).

The goal of fault tolerance is to reduce the effects of errors, if they
appear, so as to eliminate or delay failures.

Type of faults:
 Transient faults that occur once and then disappear
 Intermittent faults that occur, disappear, and then reappear
 Permanent faults continue to exist until the system is repaired

FAULT TOLERANCE

Error processing: error removal, before failure occurs

Fault treatment: preventing fault(s) from being activated again

FAULT TREATMENT

Fault diagnosis
determination of error causes

Fault isolation
removing faulty components from the
subsequent execution process
→ the system is no longer able to
deliver the same service

Reconfiguration
modification of the system structure, such that the non-
failed components deliver a degraded service
Reconfiguration:
is the process of eliminating a faulty component from a system and restoring
the system to some operational state.

Fault Tolerant Strategies:


 Fault tolerance in a computer system is achieved through redundancy in
hardware, software, information, and/or time.
Such redundancy can be implemented in static, dynamic, or hybrid
configurations.
 Fault tolerance can be achieved by many techniques:
– Fault masking is any process that prevents faults in a system
from introducing errors. Examples: error-correcting memories
and majority voting (a voting sketch follows this list).

– Reconfiguration is the process of eliminating a faulty component
from a system and restoring the system to some operational state.
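As a minimal sketch of the fault-masking technique above, the following majority voter implements triple modular redundancy (TMR); the function name and values are illustrative.

/* Majority voter for triple modular redundancy (TMR) -- illustrative sketch. */
#include <stdio.h>

int tmr_vote(int a, int b, int c)
{
    /* Return the value produced by at least two of the three replicated modules;
       a single faulty module is thereby masked. */
    if (a == b || a == c)
        return a;
    return b;   /* either b == c, or all three disagree (fault not maskable) */
}

int main(void)
{
    /* The third module is faulty; the voter masks its wrong output. */
    printf("voted output: %d\n", tmr_vote(7, 7, 9));
    return 0;
}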

Redundancy:
Fault tolerance requires some form of redundancy:

 Time redundancy

 Information redundancy

 Hardware redundancy

There are two approaches that may be used to implement fault
tolerance in software:

1- Defensive programming:
It is an approach to program development where
programmers assume that there may be undetected faults or
inconsistencies in their program (a small sketch follows this list).

2- Fault tolerance architectures:
These are hardware and software system architectures that provide
explicit support for fault tolerance.
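A minimal sketch of approach 1 (defensive programming) in C, assuming a routine that must tolerate bad input instead of trusting its caller; the names and checks are illustrative.

/* Defensive division: validate inputs rather than assume they are correct. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

bool safe_divide(double num, double den, double *result)
{
    assert(result != NULL);   /* a NULL result pointer is a programming error */
    if (den == 0.0)
        return false;         /* anticipated inconsistency: refuse rather than crash */
    *result = num / den;
    return true;
}

int main(void)
{
    double r;
    if (safe_divide(10.0, 0.0, &r))
        printf("result = %f\n", r);
    else
        printf("division rejected: divisor is zero\n");
    return 0;
}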

Real time protocol:


RTP is the Internet-standard protocol for the transport of real-time data,
including audio and video. It can be used for media-on-demand as well as
interactive services such as Internet telephony.
RTP consists of a data and a control part. The latter is called RTCP.
The data part of RTP is a thin protocol providing support for applications
with real-time properties such as
• continuous media (e.g., audio and video),
• timing reconstruction,
• loss detection,
• security and content identification.
RTCP provides support for real-time conferencing of groups of any size
within an internet. This support includes:
• Source identification and support for gateways like audio and video
bridges as well as multicast-to-unicast translators.
• Quality-of-service feedback from receivers to the multicast group
• Support for the synchronization of different media streams.
RTP & RTCP Overview:
• Standardized packet format for delivering audio and video over IP
networks.
• Used in communication and entertainment systems that involve
streaming media, such as telephony, video teleconference applications,
television services and web-based push-to-talk features.
• RTP is used in conjunction with the RTP Control Protocol (RTCP).
While RTP carries the media streams (e.g., audio and video), RTCP
is used to monitor transmission statistics and quality of service (QoS)
and aids synchronization of multiple streams.
The RTP header contains the following:
• Sequence number
– used for packet-loss detection
• Timestamp
– Timing information
– synchronization of media streams
• Payload type
– Identifies the media codec of the payload
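As a minimal sketch, the RTP fixed header defined in RFC 3550 can be pictured with the C struct below; it is shown only to locate the three fields listed above, and real code parses the two leading octets (version, padding, extension, CSRC count, marker, payload type) bit by bit rather than relying on struct layout.

/* Illustrative view of the RTP fixed header (RFC 3550); not wire-format parsing code. */
#include <stdint.h>
#include <stdio.h>

struct rtp_header {
    uint8_t  v_p_x_cc;         /* version (2 bits), padding, extension, CSRC count */
    uint8_t  m_pt;             /* marker bit + 7-bit payload type (identifies the codec) */
    uint16_t sequence_number;  /* increments per packet; used for packet-loss detection */
    uint32_t timestamp;        /* sampling instant; used for timing reconstruction and
                                  synchronization of media streams */
    uint32_t ssrc;             /* synchronization source identifier */
};

int main(void)
{
    printf("RTP fixed header: %zu bytes in this sketch (12 octets on the wire)\n",
           sizeof(struct rtp_header));
    return 0;
}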
Unix Scheduling
Real Time Communication And Network Topology
Fault Tolerance And Real Time Protocol
UNIVERSITY OF BAHRI

COLLEGE OF ENGINEERING AND ARCHITECTURE

ELECTRICAL ENGINEERING (Control)

5th year – 9th semester

ASSIGNMENT (1)

Presented by:
Shayma Ali Abdallah
Supervisor:
Dr. Zeinab Mahmoud