

B.Tech. Sem. (I mid Term Test 2018)
Duration: 1:15 Hrs Subject: Digital Image Processing (8CS2A)
Branch: Computer Science Engineering Maximum marks-20

Q.1 What are the fundamental components of a general-purpose image processing system? (8)

Q.1 What is image sampling and quantization? What are the various applications of digital image processing?

Q.2 What do you mean by a histogram? Explain histogram equalization of an image. (8)
Q.2 What is spatial filtering? Define spatial correlation and convolution with an example.

Q.3 Explain image restoration model in brief. (4)

Q.3 Differentiate between order statistic and adaptive filters.
Ans 1 :- As recently as the mid-1980s, numerous models of image processing systems being sold
throughout the world were rather substantial peripheral devices that attached to equally
substantial host computers. Late in the 1980s and early in the 1990s, the market shifted to image
processing hardware in the form of single boards designed to be compatible with industry
standard buses and to fit into engineering workstation cabinets and personal computers. In
addition to lowering costs, this market shift also served as a catalyst for a significant number of
new companies whose specialty is the development of software written specifically for image processing.

Although large-scale image processing systems still are being sold for massive
imaging applications, such as processing of satellite images, the trend continues toward
miniaturizing and blending of general-purpose small computers with specialized image
processing hardware. Fig. 1 shows the basic components comprising a typical general-purpose
system used for digital image processing. The function of each component is discussed in the
following paragraphs, starting with image sensing.

With reference to sensing, two elements are required to acquire digital images. The first is a
physical device that is sensitive to the energy radiated by the object we wish to image. The
second, called a digitizer, is a device for converting the output of the physical sensing device into
digital form. For instance, in a digital video camera, the sensors produce an electrical output
proportional to light intensity. The digitizer converts these outputs to digital data.

Specialized image processing hardware usually consists of the digitizer just mentioned, plus
hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which
performs arithmetic and logical operations in parallel on entire images. One example of how an
ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise
reduction. This type of hardware sometimes is called a front-end subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs functions that require
fast data throughputs (e.g., digitizing and averaging video images at 30 frames/s) that the typical
main computer cannot handle.
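The frame-averaging operation such a front-end performs can be sketched in Python/NumPy. This is a hypothetical simulation (the scene, noise level, and frame count are arbitrary choices), not actual front-end code:

```python
import numpy as np

def average_frames(frames):
    """Average a sequence of noisy frames of the same static scene.

    With N frames of independent zero-mean noise, averaging reduces the
    noise standard deviation by a factor of sqrt(N).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate 30 digitized frames of a flat scene with Gaussian sensor noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(30)]
avg = average_frames(frames)
print(round(float(np.abs(avg - scene).std()), 2))  # roughly 10/sqrt(30), i.e. about 1.8
```

A hardware front-end does the same arithmetic in parallel on whole frames at video rate; the software version only illustrates the noise-reduction effect.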
Fig.1. Components of a general purpose Image Processing System

The computer in an image processing system is a general-purpose computer and can range from
a PC to a supercomputer. In dedicated applications, sometimes specially designed computers are
used to achieve a required level of performance, but our interest here is on general-purpose
image processing systems. In these systems, almost any well-equipped PC-type machine is
suitable for offline image processing tasks.

Software for image processing consists of specialized modules that perform specific tasks. A
well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules. More sophisticated software packages allow the integration of
those modules and general-purpose software commands from at least one computer language.


Ans 1:- In order to become suitable for digital processing, an image function f(x,y) must be
digitized both spatially and in amplitude. Typically, a frame grabber or digitizer is used to
sample and quantize the analogue video signal. Hence, in order to create a digital image, we need
to convert continuous data into digital form. This is done in two steps:

• Sampling
• Quantization

The sampling rate determines the spatial resolution of the digitized image, while the quantization
level determines the number of grey levels in the digitized image. The magnitude of the sampled
image is expressed as a digital value in image processing. The transition between continuous
values of the image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of fine shading
details in the image. The occurrence of false contours is the main problem in an image which has
been quantized with an insufficient number of brightness levels.
In this lecture we will talk about two key stages in digital image processing. Sampling and
quantization will be defined properly. Spatial and grey-level resolutions will be introduced and
examples will be provided. An introduction on implementing the shown examples in MATLAB
will be also given in this lecture.
Sampling in Digital Image Processing:

• In sampling we digitize the x-axis, i.e. the spatial coordinates.

• There are some variations in the sampled signal which are random in nature. These
variations are due to noise.
• We can reduce this noise by taking more samples. More samples mean collecting
more data, i.e. more pixels (in the case of an image), which will eventually result in better
image quality with less noise present.
Quantization in Digital Image Processing:

• It is the opposite of sampling: sampling is done on the x-axis, while quantization is
done on the y-axis.
• Digitizing the amplitudes is quantization. In this, we divide the signal amplitude into
quanta (partitions).
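The two steps can be illustrated with a short Python/NumPy sketch. The signal, sample count N, and grey-level count L below are arbitrary choices for illustration:

```python
import numpy as np

def f(x):
    """A continuous 1-D 'scan line': intensity varies between 20 and 180."""
    return 100 + 80 * np.sin(2 * np.pi * x)

# Sampling: evaluate f at N evenly spaced points (digitizing the x-axis).
N = 16
x = np.arange(N) / N
samples = f(x)

# Quantization: map each amplitude onto one of L grey levels (the y-axis).
L = 8
lo, hi = 0.0, 255.0
codes = np.clip(np.round((samples - lo) / (hi - lo) * (L - 1)), 0, L - 1)
quantized = codes / (L - 1) * (hi - lo) + lo   # reconstruction levels

print(len(np.unique(quantized)))  # at most L = 8 distinct levels survive
```

Increasing N improves spatial resolution; increasing L adds grey levels and suppresses false contours.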

Applications of Digital Image Processing:

The field of digital image processing has experienced continuous and significant expansion in
recent years. The usefulness of this technology is apparent in many different disciplines covering
medicine through remote sensing. The advances and wide availability of image processing
hardware has further enhanced the usefulness of image processing.

• medical applications
• restorations and enhancements
• digital cinema
• image transmission and coding
• color processing
• remote sensing
• robot vision
• hybrid techniques
• facsimile
• pattern recognition
• registration techniques
• multidimensional image processing
• image processing architectures and workstations
• video processing
• programmable DSPs for video coding
• high-resolution display
• high-quality color representation
• super-high-definition image processing
• impact of standardization on image processing.

Ans 2:- Histogram equalization is used to enhance contrast. It is not necessary that contrast will
always be increased by it; there are cases where histogram equalization can make an image worse,
decreasing the contrast.

Let us start histogram equalization by taking a simple low-contrast image as an example; its
histogram is concentrated in a narrow range of grey levels. We now perform histogram
equalization on it.

First we have to calculate the PMF (probability mass function) of all the pixels in this image,
i.e. the normalized histogram of the grey levels.

Our next step involves calculation of the CDF (cumulative distribution function), obtained by
accumulating the PMF. Finally, each grey level r is mapped to the new level round((L-1)·CDF(r)),
where L is the number of grey levels; this spreads the occupied levels over the full available range.
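The PMF/CDF procedure can be sketched in Python with NumPy. The low-contrast input image here is hypothetical random data; the mapping r → round((L-1)·CDF(r)) is the standard equalization transform:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization via the PMF and CDF of the grey levels."""
    hist = np.bincount(img.ravel(), minlength=levels)
    pmf = hist / img.size                    # probability mass function
    cdf = np.cumsum(pmf)                     # cumulative distribution function
    lut = np.round((levels - 1) * cdf).astype(np.uint8)   # r -> (L-1)*CDF(r)
    return lut[img]

rng = np.random.default_rng(1)
dark = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)  # low-contrast input
eq = equalize(dark)
print(int(dark.max()), int(eq.max()))  # the equalized levels reach up to 255
```

Because the CDF at the brightest occupied level is always 1, the equalized image always reaches the top grey level, which is exactly the contrast stretch described above.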


Ans 2:- Basics of Spatial Filtering

The concept of filtering has its roots in the use of the Fourier transform for signal processing
in the so-called frequency domain.
The term spatial filtering refers to filtering operations that are performed directly on the pixels of an
image. Filtering is a technique for modifying or enhancing an image. In a spatial domain operation or
filter, the processed value for the current pixel depends
on both the pixel itself and the surrounding pixels. Hence filtering is a neighborhood operation, in which the
value of any given pixel in the output image is determined by applying some algorithm to the
values of the pixels in the neighborhood of the corresponding input pixel. A pixel's neighborhood
is some set of pixels, defined by their locations relative to that pixel.
In this lecture we will talk about spatial domain operations. Mask or filters will be defined. The
general process of convolution and correlation will be introduced via an example. Also
smoothing linear filters such as box and weighted average filters will be introduced.
Spatial Correlation and Convolution:
Correlation and Convolution are basic operations that we will perform to extract
information from images. They are in some sense the simplest operations that we can
perform on an image, but they are extremely useful. Moreover, because they are simple,
they can be analyzed and understood very well, and they are also easy to implement and
can be computed very efficiently. Our main goal is to understand exactly what
correlation and convolution do, and why they are useful. We will also touch on some of
their interesting theoretical properties; though developing a full understanding of them
would take more time than we have.
These operations have two key features: they are shift-invariant, and they are linear.
Shift-invariant means that we perform the same operation at every point in the image.
Linear means that this operation is linear, that is, we replace every pixel with a linear
combination of its neighbors. These two properties make these operations very simple;
it’s simpler if we do the same thing everywhere, and linear operations are always the
simplest ones.
We will first consider the easiest versions of these operations, and then generalize. We’ll
make things easier in a couple of ways. First, convolution and correlation are almost
identical operations, but students seem to find convolution more confusing. So we will
begin by only speaking of correlation, and then later describe convolution. Second, we
will start out by discussing 1D images. We can think of a 1D image as just a single row
of pixels. Sometimes things become much more complicated in 2D than 1D, but luckily,
correlation and convolution do not change much with the dimension of the image, so
understanding things in 1D will help a lot. Also, later we will find that in some cases it is
enlightening to think of an image as a continuous function, but we will begin by
considering an image as discrete, meaning as composed of a collection of pixels.
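In 1D the difference between the two operations is easy to see with NumPy; the impulse "image" and the asymmetric mask below are illustrative:

```python
import numpy as np

img = np.array([0, 0, 1, 0, 0], dtype=float)  # 1-D "image": a unit impulse
w = np.array([1, 2, 3], dtype=float)          # asymmetric filter mask

corr = np.correlate(img, w, mode='same')  # slide the mask as-is
conv = np.convolve(img, w, mode='same')   # flip the mask, then slide

print(corr)  # [0. 3. 2. 1. 0.] -- the flipped mask appears
print(conv)  # [0. 1. 2. 3. 0.] -- the mask itself appears
```

Applying either operation to a unit impulse reveals the difference directly: correlation stamps the mask reversed, convolution stamps it as-is. For symmetric masks the two coincide.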

Ans 3:- Image Restoration is the operation of taking a corrupt/noisy image and estimating the
clean, original image. Corruption may come in many forms such as motion blur, noise and
camera mis-focus.[1] Image restoration is performed by reversing the process that blurred the
image; this is done by imaging a point source and using the point-source image, which is
called the Point Spread Function (PSF), to restore the image information lost to the blurring process.
Image restoration is different from image enhancement in that the latter is designed to emphasize
features of the image that make the image more pleasing to the observer, but not necessarily to
produce realistic data from a scientific point of view. Image enhancement techniques
(like contrast stretching or de-blurring by a nearest neighbor procedure) provided by imaging
packages use no a priori model of the process that created the image.
With image enhancement noise can effectively be removed by sacrificing some resolution, but
this is not acceptable in many applications. In a fluorescence microscope, resolution in the z-
direction is bad as it is. More advanced image processing techniques must be applied to recover
the object.
The objective of image restoration techniques is to reduce noise and recover resolution loss.
Image processing techniques are performed either in the image domain or the frequency domain.
The most straightforward and conventional technique for image restoration is deconvolution,
which is performed in the frequency domain: after computing the Fourier transform of both
the image and the PSF, the division undoes the resolution loss caused by the blurring factors. This
deconvolution technique, because of its direct inversion of the PSF which typically has poor
matrix condition number, amplifies noise and creates an imperfect deblurred image. Also,
conventionally the blurring process is assumed to be shift-invariant. Hence more sophisticated
techniques, such as regularized deblurring, have been developed to offer robust recovery under
different types of noises and blurring functions.
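A minimal 1-D sketch of the two approaches, assuming a Gaussian PSF and a small regularization constant k (both arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = np.zeros(n)
x[20:30] = 1.0                                     # the "true" 1-D object
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf /= psf.sum()                                   # unit-area Gaussian blur
H = np.fft.fft(np.fft.ifftshift(psf))              # transfer function of the blur
y = np.real(np.fft.ifft(np.fft.fft(x) * H))        # blurred object
y += rng.normal(0.0, 1e-3, n)                      # a little sensor noise

# Direct inverse filtering: dividing by H amplifies noise where |H| is tiny.
naive = np.real(np.fft.ifft(np.fft.fft(y) / H))
# Regularized (Wiener-like) deconvolution damps those frequencies instead.
k = 1e-4
wiener = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + k)))

print(np.abs(naive - x).max() > np.abs(wiener - x).max())  # True
```

Even with very small noise, the direct inversion explodes at frequencies where the PSF's transfer function is nearly zero, which is exactly the poor-conditioning problem noted above; the regularized estimate stays close to the true object.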


ORDER STATISTIC:- In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest
value.[1] Together with rank statistics, order statistics are among the most fundamental tools
in non-parametric statistics and inference.
Important special cases of the order statistics are the minimum and maximum value of a sample,
and (with some qualifications discussed below) the sample median and other sample quantiles.
When using probability theory to analyze order statistics of random samples from a continuous
distribution, the cumulative distribution function is used to reduce the analysis to the case of
order statistics of the uniform distribution.
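In image processing, an order-statistic filter replaces each pixel by an order statistic of its neighborhood. A small NumPy sketch (the signal and window size are illustrative); choosing k as the middle rank gives the familiar median filter, which removes impulse noise:

```python
import numpy as np

def order_statistic_filter(signal, k, size=3):
    """Replace each sample by the k-th smallest value (1-based) in its
    size-wide neighborhood; k = (size + 1) // 2 gives the median filter."""
    pad = size // 2
    padded = np.pad(signal, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, size)
    return np.sort(windows, axis=1)[:, k - 1]

noisy = np.array([10, 10, 200, 10, 10, 0, 10, 10], dtype=float)  # impulses
median = order_statistic_filter(noisy, k=2, size=3)  # rank 2 of 3 = median
print(median)  # the impulses (200 and 0) are gone: every value is 10
```

Choosing k = 1 or k = size instead gives the min and max filters, the other important special cases mentioned above.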

ADAPTIVE FILTER:- An adaptive filter is a system with a linear filter that has a transfer
function controlled by variable parameters and a means to adjust those parameters according to
an optimization algorithm. Because of the complexity of the optimization algorithms, almost all
adaptive filters are digital filters. Adaptive filters are required for some applications because
some parameters of the desired processing operation (for instance, the locations of reflective
surfaces in a reverberant space) are not known in advance or are changing. The closed loop
adaptive filter uses feedback in the form of an error signal to refine its transfer function.
Generally speaking, the closed loop adaptive process involves the use of a cost function, which is
a criterion for optimum performance of the filter, to feed an algorithm, which determines how to
modify filter transfer function to minimize the cost on the next iteration. The most common cost
function is the mean square of the error signal.
As the power of digital signal processors has increased, adaptive filters have become much more
common and are now routinely used in devices such as mobile phones and other communication
devices, camcorders and digital cameras, and medical monitoring equipment.
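The closed-loop adaptation described above can be sketched with the LMS (least-mean-squares) update, which minimizes the mean-square error; the unknown channel h, step size mu, and tap count are illustrative choices:

```python
import numpy as np

def lms(x, d, taps=4, mu=0.05):
    """Least-mean-squares adaptive filter.

    Adjusts the weights w to minimize the mean square of the error
    e[n] = d[n] - w . [x[n], x[n-1], ...] (the most common cost function),
    using the error itself as the feedback signal."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u                  # error signal (the feedback)
        w += mu * e * u                   # gradient-descent weight update
    return w

# System identification: learn an unknown FIR channel from input/output data.
rng = np.random.default_rng(3)
h = np.array([0.8, -0.4, 0.2, 0.1])        # the "unknown" system
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)]             # desired signal = channel output
w = lms(x, d)
print(np.round(w, 2))  # should be close to h = [0.8, -0.4, 0.2, 0.1]
```

This is exactly the case where the parameters of the desired operation are not known in advance: the filter discovers them from the error feedback alone.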
B. Tech. IV Yr. VIII Sem. (I Mid Term Solution 2017-2018)
Subject: Distributed System Solution (8CS3A)  Branch: CSE
Max marks: 20

Q.1 (8)
(A) What is a distributed system? Explain design issues of distributed operating system.
Sol: Distributed computing is a field of computer science that studies distributed systems. A
distributed system is a model in which components located on networked computers
communicate and coordinate their actions by passing messages.

1. Openness
The openness of a computer system is the characteristic that determines whether the system can
be extended and re-implemented in various ways. The openness of distributed systems is
determined primarily by the degree to which new resource-sharing services can be added and be
made available for use by a variety of client programs.
2. Security
Many of the information resources that are made available and maintained in distributed systems
have a high intrinsic value to their users. Their security is therefore of considerable importance.
Security for information resources has three components: confidentiality, integrity, and availability.
3. Scalability
Distributed systems operate effectively and efficiently at many different scales, ranging from a
small intranet to the Internet. A system is described as scalable if it will remain effective when
there is a significant increase in the number of resources and the number of users.
4. Failure handling
Computer systems sometimes fail. When faults occur in hardware or software, programs may
produce incorrect results or may stop before they have completed the intended computation.
Failures in a distributed system are partial – that is, some components fail while others continue
to function. Therefore the handling of failures is particularly difficult.
5. Concurrency
Both services and applications provide resources that can be shared by clients in a distributed
system. There is therefore a possibility that several clients will attempt to access a shared
resource at the same time. An object that represents a shared resource in a distributed system must
be responsible for ensuring that it operates correctly in a concurrent environment. This applies
not only to servers but also to objects in applications. Therefore any programmer who takes an
implementation of an object that was not intended for use in a distributed system must do
whatever is necessary to make it safe in a concurrent environment.
6. Transparency
Transparency can be achieved at two different levels. The easiest is to hide the distribution
from the users. The concept of transparency can be applied to several aspects of a distributed
system:
a) Location transparency: The users cannot tell where resources are located.
b) Migration transparency: Resources can move at will without changing their names
c) Replication transparency: The users cannot tell how many copies exist.
d) Concurrency transparency: Multiple users can share resources automatically.
e) Parallelism transparency: Activities can happen in parallel without users knowing.
7. Quality of service
Once users are provided with the functionality that they require of a service, such as the file
service in a distributed system, we can go on to ask about the quality of the service provided. The
main nonfunctional properties of systems that affect the quality of the service experienced by
clients and users are reliability, security and performance. Adaptability to meet changing system
configurations and resource availability has been recognized as a further important aspect of
service quality.
8. Reliability
One of the original goals of building distributed systems was to make them more reliable than
single-processor systems. The idea is that if a machine goes down, some other machine takes
over the job. A highly reliable system must be highly available, but that is not enough. Data
entrusted to the system must not be lost or garbled in any way, and if files are stored redundantly
on multiple servers, all the copies must be kept consistent. In general, the more copies that are
kept, the better the availability, but the greater the chance that they will be inconsistent,
especially if updates are frequent.
9. Performance
Always lurking in the background is the issue of performance. Building a transparent,
flexible, reliable distributed system is of little value if it performs poorly. In particular, when
running a particular application on a distributed system, it should not be appreciably worse than
running the same application on a single processor. Unfortunately, achieving this is easier said
than done.

(B) Explain Chandy - Lamport algorithm for consistent state recording.
Sol: Chandy-Lamport algorithm
The Chandy-Lamport algorithm uses a control message, called a marker
• Whose role in a FIFO system is to separate messages in the channels.
• After a site has recorded its snapshot, it sends a marker along all of its outgoing channels
before sending out any more messages.
• A marker separates the messages in the channel into those to be included in the snapshot
and those not to be recorded in the snapshot.
• A process must record its snapshot no later than when it receives a marker on any of its
incoming channels.

The Global State Recording ALGORITHM

• Sender (Process p):
1. Record the state of p.
2. For each outgoing channel c incident to p, send a marker before sending ANY
other messages.
• Receiver (Process q receives a marker on channel c1):
o If q has not yet recorded its state:
1. Record the state of q.
2. Record the state of c1 as null.
3. For each outgoing channel c incident to q, send a marker before
sending ANY other messages.
o If q has already recorded its state:
1. Record the state of c1 as all messages received since the last time the
state of q was recorded.
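The recording rules can be exercised with a toy Python simulation; the two-process bank-transfer scenario and all names here are hypothetical, with a deque standing in for each FIFO channel:

```python
from collections import deque

MARKER = "MARKER"

class Process:
    """One site in the Chandy-Lamport algorithm (channels are FIFO deques)."""
    def __init__(self, name, state):
        self.name, self.state = name, state
        self.recorded = None        # local snapshot (None until recorded)
        self.chan_rec = {}          # recorded state of incoming channels
        self.open = set()           # incoming channels still being recorded
        self.out = {}               # peer name -> outgoing FIFO channel

    def start_snapshot(self, incoming):
        """Record local state, then send a marker on every outgoing channel
        before any further messages (the sender rule)."""
        self.recorded = self.state
        self.open = set(incoming)
        for chan in self.out.values():
            chan.append(MARKER)

    def on_message(self, src, msg, incoming):
        if msg == MARKER:
            if self.recorded is None:
                self.start_snapshot(incoming)   # receiver rule, first marker
                self.chan_rec[src] = []         # channel state recorded as null
            self.open.discard(src)              # stop recording this channel
        else:
            self.state += msg                   # apply the application message
            if self.recorded is not None and src in self.open:
                self.chan_rec.setdefault(src, []).append(msg)

# Two bank accounts: p holds 100, q holds 50; a transfer of 10 is in flight.
p, q = Process("p", 100), Process("q", 50)
pq = deque()
p.out["q"] = pq
p.state -= 10
pq.append(10)                       # the 10 is now in the channel p -> q
q.start_snapshot(incoming=["p"])    # q initiates the snapshot first
p.start_snapshot(incoming=[])       # p records; its marker follows the 10
while pq:
    q.on_message("p", pq.popleft(), incoming=["p"])

in_flight = sum(sum(v) for v in q.chan_rec.values())
total = p.recorded + q.recorded + in_flight
print(p.recorded, q.recorded, in_flight, total)  # 90 50 10 150
```

The snapshot totals 150, exactly the system's real total: the in-flight 10 is captured as the state of the channel p → q, which is the whole point of the marker rule.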

Q.2 (8)
(A)Where do you need RPC? Explain with a suitable example.
Sol: What is RPC:
RPC is a powerful technique for constructing distributed, client-server based applications. It is
based on extending the notion of conventional or local procedure calling, so that the called
procedure need not exist in the same address space as the calling procedure. The two processes
may be on the same system, or they may be on different systems with a network connecting
them. By using RPC, programmers of distributed applications avoid the details of the interface
with the network. The transport independence of RPC isolates the application from the physical
and logical elements of the data communications mechanism and allows the application to use a
variety of transports.

RPC makes the client/server model of computing more powerful and easier to program. When
combined with the ONC RPCGEN protocol compiler (Chapter 33) clients transparently make
remote calls through a local procedure interface.

How RPC Works

An RPC is analogous to a function call. Like a function call, when an RPC is made, the calling
arguments are passed to the remote procedure and the caller waits for a response to be returned
from the remote procedure. Figure 32.1 shows the flow of activity that takes place during an
RPC call between two networked systems. The client makes a procedure call that sends a request
to the server and waits. The thread is blocked from processing until either a reply is received, or
it times out. When the request arrives, the server calls a dispatch routine that performs the
requested service, and sends the reply to the client. After the RPC call is completed, the client
program continues. RPC specifically supports network applications.
Remote Procedure Calling Mechanism

A remote procedure is uniquely identified by the triple
(program number, version number, procedure number). The program number identifies a group of
related remote procedures, each of which has a unique procedure number. A program may
consist of one or more versions; version numbers enable multiple versions of an RPC protocol to
be available simultaneously. Each version consists of a collection of procedures which are
available to be called remotely, and each procedure has a procedure number.
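A minimal runnable illustration of the RPC idea, using Python's standard xmlrpc modules rather than ONC RPC; the add procedure and the loopback address are illustrative:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure so remote clients may call it.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]          # port 0 above = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call reads like a local procedure call, but add() runs in
# the server process; the calling thread blocks until the reply arrives.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5

server.shutdown()
```

The client never touches sockets or message formats directly; the proxy marshals the arguments, sends the request, and blocks until the reply, which is exactly the call/dispatch/reply flow described above.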

(B) Explain DCE architecture model and its components in detail.

Sol: Distributed Computing Environment
It is defined by the Open Software Foundation (OSF). It is an architecture, a set of standard services,
and application programs, built on top of the existing operating system, which hides the
differences among individual machines. It is used to support the development and usage of
distributed applications in a single distributed system. It uses the client/server model.

DCE can run on many different computers, operating systems (Unix, OS/2, VMS, Windows) and
networks in a distributed system. It provides a coherent, seamless platform for running distributed
applications; a mechanism for synchronizing clocks on different machines; tools
to make it easier to write distributed applications in which multiple users at multiple locations
can work together; and extensive tools for authentication and access protection.

DCE architecture
DCE cell: the basic unit of operation in the DCE. A cell is a group of users, systems, and
resources that are typically centered around a common purpose and that share common DCE
services. It is an administrative domain that allows users, machines, and resources to be
managed through functions distributed within the network in which they reside. Members
working on the same project in an organization are likely to belong to the same cell.
Distributed services provided by the DCE
• Thread services;
• RPC;
• Time service;
• Directory services;
• Security service.
Advantages of DCE
• The services provided by DCE are much easier to use than the ones found in other
computer networking environments; e.g., the DCE remote procedure call provides a much
simpler way of communicating between software modules running on different systems
than using socket calls.
• The DCE security service provides a reliable way to determine if a user in a distributed
system should be allowed to perform a certain action.
• Supports portability and interoperability by hiding differences among the various
hardware, software, and networking elements in a large network.
• Supports a distributed file service, which means files present on a workstation in a network
are available throughout the network.
Q.3 (4)
(A) Give the model of static process scheduling?
Sol: Static Process Scheduling
• Given a set of partially ordered tasks, define a mapping of processes to processors before the
execution of the processes.
• Cost model: CPU cost and communication cost, both of which should be specified in advance.
• Minimize the overall finish time (make-span) on a non-preemptive multiprocessor system (of
identical processors).
• Except for some very restricted cases, scheduling to optimize the make-span is NP-hard.
• Heuristic solutions are usually proposed.
Precedence Process and Communication

(B) Why do you need dynamic load sharing and balancing? Explain.
Sol: Distributed vs. Traditional scheduling differences
• Communication overhead is non-negligible.
• Effects of the underlying hardware cannot be ignored.
• Dynamic behavior of the system must be addressed.
• Transfer policy: When does a node become the sender?
• Selection policy: How does the sender choose a process for transfer?
• Location policy: Which node should be the target receiver?
Code: 8CS4.2 A
B.Tech. VIII Sem. (I mid Term Test 2018)
Duration: 1:15 Hrs Subject: Real Time System Branch: CSE
Maximum marks-20

Q.1 (8)
What is Real Time System? Explain radar signal processing system with Diagram.
Explain precedence graph with suitable example. A system contains four tasks; the
periods and execution times of the four periodic tasks are T1(4,1), T2(5,1.8), T3(20,1) and
T4(20,2). Calculate the individual utilization of each task and the total utilization.

Q.2 (8)
Differentiate between Online & Offline Scheduling. Explain the Priority-Driven Approach for
Preemptive and Non-Preemptive scheduling.
Explain precedence graph with suitable example & describe weighted round robin
approach to real time scheduling.

Q.3 (4)
Explain Rate-Monotonic algorithms with example.
Explain Deadline-Monotonic algorithms with example.

Q.1 What is Real Time System? Explain radar signal processing system with Diagram.

Ans . 1 A real-time system is one in which the correctness of the computations not only depends
on their logical correctness, but also on the time at which the result is produced. That is, a late
answer is a wrong answer. For example, many embedded systems are referred to as real-time
systems. Cruise control, telecommunications, flight control and electronic engines are some of
the popular real-time system applications, whereas computer simulation, user interfaces and
Internet video are categorized as non-real time applications.

Types of Real-time Systems

An application or system can be classified as either real-time or non-real-time. For a non-
real-time system, there are no important deadlines; that is, all of its deadlines can be missed.

A real-time system can be further divided into soft and hard real-time systems on the
basis of the severity of meeting its deadlines. A hard real-time system cannot afford to miss even a
single deadline; that is, it has to meet all the deadlines to be branded as a hard one. A
soft real-time system, in contrast, takes the middle path between a non-real-time system and a
hard real-time system; that is, it can allow occasional misses.

Radar signal processing system

A radar system has a transmitter that emits radio waves called radar signals in predetermined
directions. When these come into contact with an object they are usually reflected or scattered in
many directions, but some of the energy is absorbed by and penetrates into the target to some degree. Radar
signals are reflected especially well by materials of considerable electrical conductivity—
especially by most metals, by seawater and by wet ground. Some of these make the use of radar
altimeters possible. The radar signals that are reflected back towards the transmitter are the
desirable ones that make radar work. If the object is moving either toward or away from the
transmitter, there is a slight change in the frequency of the radio waves, caused by
the Doppler effect.
Radar receivers are usually, but not always, in the same location as the transmitter. Although the
reflected radar signals captured by the receiving antenna are usually very weak, they can be
strengthened by electronic amplifiers. More sophisticated methods of signal processing are also
used in order to recover useful radar signals.
The weak absorption of radio waves by the medium through which they pass is what enables
radar sets to detect objects at relatively long ranges—ranges at which other electromagnetic
wavelengths, such as visible light, infrared light, and ultraviolet light, are too strongly attenuated.
Such weather phenomena as fog, clouds, rain, falling snow, and sleet that block visible light are
usually transparent to radio waves. Certain radio frequencies that are absorbed or scattered by
water vapour, raindrops, or atmospheric gases (especially oxygen) are avoided in designing
radars, except when their detection is intended.
Q.1 (b) Explain precedence graph with suitable example. A system contains four tasks; the
periods and execution times of the four periodic tasks are T1(4,1), T2(5,1.8), T3(20,1) and
T4(20,2). Calculate the individual utilization of each task and the total utilization.

Ans.- A precedence graph, also named conflict graph and serializability graph, is used in the
context of concurrency control in databases.
The precedence graph for a schedule S contains:

• A node for each committed transaction in S

• An arc from Ti to Tj if an action of Ti precedes and conflicts with one of Tj's actions.

A system contains four tasks; the periods and execution times of the four periodic tasks are
T1(4,1), T2(5,1.8), T3(20,1) and T4(20,2).
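The requested utilizations follow directly from u_i = e_i / p_i; a short Python check:

```python
# Each task is (period p, execution time e); utilization u = e / p.
tasks = {"T1": (4, 1), "T2": (5, 1.8), "T3": (20, 1), "T4": (20, 2)}

total = 0.0
for name, (p, e) in tasks.items():
    u = e / p
    total += u
    print(f"{name}: u = {e}/{p} = {u:.2f}")
print(f"Total utilization U = {total:.2f}")
# T1: 0.25, T2: 0.36, T3: 0.05, T4: 0.10; U = 0.76
```

The total utilization U = 0.76 is below 1, so the task set does not overload a single processor.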





Q.2(a)Differentiate between Online & Offline Scheduling. Explain Priority Driven Approach for
Preemptive and Non-Preemptive.

Ans.- Offline Scheduling

A clock driven scheduler makes use of a pre-computed schedule of all hard real time jobs. This schedule
is computed offline before the system begins to execute and the computation is based on the knowledge
of the release times and processor-time/resource requirement of all jobs for all times. When the operation
mode of the system changes, the new schedule specifying when each job in the new mode executes is also
pre-computed and stored for use. In this case we say that scheduling is done offline.

Offline scheduling has the disadvantage of inflexibility, because it is possible only when the
system is deterministic, meaning that the system provides some fixed set(s) of functions and that
the release times and processor time/resource demands of all its jobs are known and do not vary
or vary only slightly.

For deterministic system, offline scheduling has many advantages, the deterministic timing
behavior of the resultant system being one of them. Because the computation of the schedules is
done off-line, the complexity of scheduling algorithm used for this purpose is not important.

Online scheduling
Scheduling is done online when we use an online scheduling algorithm: the scheduler makes each scheduling decision without knowledge of the jobs that will be released in the future. The parameters of each job become known to the online scheduler only after the job is released. Online scheduling is the only option in a system whose future workload is unpredictable.

Online scheduling can accommodate dynamic variation in user demands and resource availability, but it cannot make the best use of system resources: without prior knowledge of future jobs, the scheduler cannot make optimal scheduling decisions.

Priority-Driven Scheduling

• the top-priority job runs until completion

• the processor is never idle while jobs are waiting

• can be preemptive or non-preemptive
• can be applied to single or multiple processors

This approach is also known as list, greedy, or work-conserving scheduling.
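The properties above can be sketched as a small non-preemptive priority-driven scheduler on one processor. The jobs, given as (release time, priority, name, execution time) with a lower number meaning higher priority, are illustrative values, not from the question:

```python
import heapq

# Non-preemptive priority-driven scheduling: whenever the processor is free,
# it runs the highest-priority ready job; it idles only when no job waits.
jobs = [(0, 2, "J1", 3), (1, 1, "J2", 2), (2, 3, "J3", 1)]  # illustrative

time, ready, schedule = 0, [], []
pending = sorted(jobs)                    # ordered by release time
while pending or ready:
    while pending and pending[0][0] <= time:
        _, prio, name, e = pending.pop(0)
        heapq.heappush(ready, (prio, name, e))
    if not ready:                         # idle only when nothing is waiting
        time = pending[0][0]
        continue
    prio, name, e = heapq.heappop(ready)
    schedule.append((time, name))         # record (start time, job)
    time += e                             # run to completion (non-preemptive)

print(schedule)  # [(0, 'J1'), (3, 'J2'), (5, 'J3')]
```

J1 starts first because it is the only ready job at time 0; once it completes, the higher-priority J2 runs before J3.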


Precedence Constraints, Effective Release Times & Deadlines

• effective release time ri of job i =

o ri, if i has no predecessors
o max{ri, rj + ej | j is a predecessor of i}, otherwise
• effective deadline Di of job i =
o Di, if i has no successors
o min{Di, Dj - ej | j is a successor of i}, otherwise

(where rj and Dj denote the effective release time and effective deadline of the predecessor/successor job j, and ej its execution time)
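These two recursive definitions can be sketched directly in Python. The three-job DAG and its parameters below are illustrative, not from the question:

```python
# Effective release times and deadlines under precedence constraints.
# Each job maps to (release time, deadline, execution time); preds lists
# the predecessors of each job. The DAG is an illustrative example.
jobs = {"A": (0, 10, 2), "B": (1, 12, 3), "C": (0, 9, 1)}
preds = {"A": [], "B": ["A"], "C": ["A", "B"]}
succs = {j: [k for k in jobs if j in preds[k]] for j in jobs}

def eff_release(j):
    r, _, _ = jobs[j]
    if not preds[j]:
        return r
    # max of own release time and (effective release + execution) of predecessors
    return max(r, max(eff_release(p) + jobs[p][2] for p in preds[j]))

def eff_deadline(j):
    _, d, _ = jobs[j]
    if not succs[j]:
        return d
    # min of own deadline and (effective deadline - execution) of successors
    return min(d, min(eff_deadline(s) - jobs[s][2] for s in succs[j]))

print(eff_release("C"), eff_deadline("A"))  # 5 5
```

For example, C cannot effectively be released before B completes, so its effective release time is max(0, 2 + 3) = 5.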

Q.2 (b) Explain precedence graph with suitable example & describe weighted round robin
approach to real time scheduling.

Ans.- A precedence graph, also named conflict graph and serializability graph, is used in the
context of concurrency control in databases.
The precedence graph for a schedule S contains:

• A node for each committed transaction in S

• An arc from Ti to Tj if an action of Ti precedes and conflicts with one of Tj's actions.

Weighted Round-Robin Scheduling

Job executions are interleaved:

• jobs are kept on a FIFO queue

• when the job at the head of the queue has used its time slice, it goes around to the end of the queue
• can approximate n processors
• time slices may vary, simulating processors of different speeds
• delays the completion of all jobs
• can be good for pipelined jobs
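The weighted round-robin behaviour can be sketched as follows, where each job's weight is the number of unit time slices it receives per round before going to the back of the queue (job names, weights, and execution times are illustrative):

```python
from collections import deque

# Weighted round-robin: each entry is [name, remaining time, weight].
# A job runs up to `weight` unit slices per round, then rejoins the queue.
queue = deque([["J1", 4, 2], ["J2", 3, 1]])  # illustrative values
timeline = []

while queue:
    job = queue.popleft()
    name, remaining, weight = job
    run = min(weight, remaining)       # slices granted this round
    timeline.extend([name] * run)
    job[1] -= run
    if job[1] > 0:
        queue.append(job)              # back to the end of the FIFO queue

print("".join(timeline))  # J1J1J2J1J1J2J2
```

With weight 2 versus 1, J1 effectively sees a processor twice as fast as J2, but both jobs finish later than they would running alone.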


Q.3 (a) Explain Rate-Monotonic algorithms with example.

Ans. - The Rate Monotonic Scheduling Algorithm (RMS) is important to real-time systems
designers because it allows one to guarantee that a set of tasks is schedulable. A set of tasks is
said to be schedulable if all of the tasks can meet their deadlines. RMS provides a set of rules
which can be used to perform a guaranteed schedulability analysis for a task set. This analysis
determines whether a task set is schedulable under worst-case conditions and emphasizes the
predictability of the system's behavior.

RMS is an optimal static-priority algorithm for scheduling independent, preemptible, periodic tasks on a single processor.

RMS is optimal in the sense that if a set of tasks can be scheduled by any static-priority
algorithm, then RMS will also be able to schedule that task set. RMS bases its schedulability
analysis on the processor utilization level below which all deadlines can be met.

RMS calls for the static assignment of task priorities based upon their period. The shorter a task's
period, the higher its priority. For example, a task with a 1 millisecond period has higher priority
than a task with a 100 millisecond period. If two tasks have the same period, then RMS does not
distinguish between the tasks. However, RTEMS specifies that when given tasks of equal
priority, the task which has been ready longest will execute first. RMS's priority assignment
scheme does not provide one with exact numeric values for task priorities. For example, consider
the following task set and priority assignments:

Task   Period (ms)   Priority

1      100           Low
2      50            Medium
3      50            Medium
4      25            High

RMS only calls for task 1 to have the lowest priority, task 4 to have the highest priority, and
tasks 2 and 3 to have an equal priority between that of tasks 1 and 4.
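As a sketch, RMS priority assignment plus the classical Liu and Layland utilization bound U <= n(2^(1/n) - 1) can be applied to the task set from Q.1(b). Note the bound test is sufficient but not necessary; here U = 0.76 slightly exceeds the bound for n = 4 (about 0.757), so this particular test is inconclusive:

```python
# RMS: the shorter a task's period, the higher its priority.
# Liu & Layland bound: n tasks are schedulable under RMS if
# U <= n * (2**(1/n) - 1). Sufficient, not necessary.
tasks = {"T1": (4, 1), "T2": (5, 1.8), "T3": (20, 1), "T4": (20, 2)}

by_priority = sorted(tasks, key=lambda t: tasks[t][0])  # ties: either order
n = len(tasks)
U = sum(e / p for p, e in tasks.values())
bound = n * (2 ** (1 / n) - 1)

print(by_priority)  # highest priority first
print(f"U = {U:.2f}, bound = {bound:.3f}, bound test passes = {U <= bound}")
```

Since the utilization bound is inconclusive here, an exact analysis (e.g. response-time analysis) would be needed to decide schedulability.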
Q.3 (b) Explain Deadline-Monotonic algorithms with example.
Ans. Deadline-monotonic priority assignment is a priority assignment policy used with fixed
priority pre-emptive scheduling.

With deadline-monotonic priority assignment, tasks are assigned priorities according to their
deadlines; the task with the shortest deadline is assigned the highest priority.[1]
This priority assignment policy is optimal for a set of periodic or sporadic tasks which comply
with the following restrictive system model:

1. All tasks have deadlines less than or equal to their minimum inter-arrival times (or periods).
2. All tasks have worst-case execution times (WCET) less than or equal to their deadlines.
3. All tasks are independent and so do not block each other's execution (for example by
accessing mutually exclusive shared resources).
4. No task voluntarily suspends itself.
5. There is some point in time, referred to as a critical instant, where all of the tasks become
ready to execute simultaneously.
6. Scheduling overheads (switching from one task to another) are zero.
7. All tasks have zero release jitter (the time from the task arriving to it becoming ready to
execute).
If restriction 7 is lifted, then "deadline minus jitter" monotonic priority assignment is optimal.
If restriction 1 is lifted, allowing deadlines greater than periods, then Audsley's optimal priority
assignment algorithm may be used to find the optimal priority assignment.
Deadline-monotonic priority assignment is not optimal for fixed-priority non-pre-emptive scheduling.
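The assignment itself is a one-line sort by relative deadline. The task parameters below, given as (period, WCET, relative deadline) with deadline less than or equal to period per restriction 1, are illustrative:

```python
# DM: the task with the shortest relative deadline gets the highest priority.
# Each task: (period, wcet, relative deadline) -- illustrative values.
tasks = {"A": (20, 3, 7), "B": (15, 3, 4), "C": (10, 2, 10)}

by_priority = sorted(tasks, key=lambda t: tasks[t][2])
print(by_priority)  # highest priority first: ['B', 'A', 'C']
```

Note that C has the shortest period but the longest deadline, so unlike RMS, DM gives it the lowest priority.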