
CSIT123                                    Enroll. No. A85204920022
[ET]
END SEMESTER EXAMINATION: MAY - 2021
OPERATING SYSTEM CONCEPTS
Time: 3 Hrs                                Maximum Marks: 70

Note: Attempt questions from all sections as directed.

Section - A: Attempt any five questions out of six. Each question carries 06 marks. [30 Marks]
Q 1. Assuming a 1 KB page size, what are the page numbers and offsets for the following address
references (provided as decimal numbers):
a. 2375
b. 19366
c. 30000
d. 256
e. 16385
f. 7328
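The split is plain integer division: the page number is the address divided by the page size, and the offset is the remainder. A quick Python sketch for checking one's working:

```python
# Page number = address // page_size; offset = address % page_size.
PAGE_SIZE = 1024  # 1 KB page size, as given in the question

def page_and_offset(address):
    """Split a decimal byte address into (page number, offset)."""
    return divmod(address, PAGE_SIZE)

for addr in (2375, 19366, 30000, 256, 16385, 7328):
    page, offset = page_and_offset(addr)
    print(f"{addr}: page {page}, offset {offset}")
# → 2375: page 2, offset 327
# → 19366: page 18, offset 934
# → 30000: page 29, offset 304
# → 256: page 0, offset 256
# → 16385: page 16, offset 1
# → 7328: page 7, offset 160
```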
Q 2. Multi-programming (or multi-tasking) enables more than a single process to apparently execute
simultaneously. How is this achieved on a uniprocessor?
Q 3. In a variable partition scheme, the operating system has to keep track of allocated and free
space. Suggest a means of achieving this. Describe the effects of new allocations and process
terminations in your suggested scheme.
Q4. One of the decisions to be made in operating system design is whether to include the file system
as part of the core kernel or not. Give at least 2 reasons why making the file system part of the OS
would be a good idea, as well as at least 1 reason why implementing the file system outside the OS
might make sense.
Q 5. If the operating system were to know that a certain application is going to access the file data in
a sequential manner, how could it exploit this information to improve performance?
Q 6. A file system design decision is to choose between "typed" and "untyped" files. Explain what is
meant by each of these terms. Which approach is used in the Linux file system?

Section - B: Attempt any two questions out of three. Each question carries 10 marks. [20 Marks]
Q 7. Consider the following information about resources in a system.
• There are two classes of allocatable resource, labelled R1 and R2
• There are two instances of each resource
• There are four processes, labelled P1 through P4
• Some resource instances are already allocated to processes, as follows:
  ➢ One instance of R1 held by P2, another held by P3
  ➢ One instance of R2 held by P1, another held by P4
• Some processes have requested additional resources, as follows:
  ➢ P1 wants one instance of R1
  ➢ P3 wants one instance of R2
a) Draw the resource allocation graph for this system.
b) What is the state (runnable, waiting) of each process? For each process that is waiting, indicate
what it is waiting for.
c) Is this system deadlocked? If so, state which processes are involved. If not, give an execution
sequence that eventually ends, showing resource acquisition and release at each step.

Q 8. Consider the following set of processes, with the length of the CPU burst time given in ms:
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, Priority and RR
(quantum=2) scheduling.
ii. Also calculate the waiting time and turnaround time for each scheduling algorithm.

Q 9. Consider a file system on a disk that has both logical and physical block sizes of 512 bytes.
Assume that the information about each file is already in memory. For each of the three allocation
strategies (contiguous, linked, and indexed), answer these questions:
a. How is the logical-to-physical address mapping accomplished in this system? (For the indexed
allocation, assume that a file is always less than 512 blocks long.)
b. If we are currently at logical block 10 (the last block accessed was block 10) and want to access
logical block 4, how many physical blocks must be read from the disk?

Section -C: Compulsory question [20 Marks]


Q10. a) The filesystem buffer cache does both buffering and caching. Describe why buffering is
needed. Describe how buffering can improve performance (potentially to the detriment of file system
robustness). Describe how the caching component of the buffer cache improves performance. [8]
b) The traditional UNIX scheduler is a priority-based round robin scheduler (also called a multi-level
round robin scheduler). How does the scheduler go about favoring I/O-bound jobs over long-running
CPU-bound jobs? [6]
c) Multi-programming (or multi-tasking) enables more than a single process to apparently execute
simultaneously. How is this achieved on a uniprocessor? Explain with suitable example. [6]
*******

Section - A:

A2. Multi-programming (or multi-tasking) enables more than a single process to apparently
execute simultaneously. How is this achieved on a uniprocessor?

Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the
same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous
execution of different programs. Instead, the operating system executes part of one program, then part of
another, and so on. To the user it appears that all programs are executing at the same time.

If the machine has the capability of causing an interrupt after a specified time interval, then the operating
system will execute each program for a given length of time, regain control, and then execute another
program for a given length of time, and so on in round-robin fashion. Sometimes a priority may be
assigned to a given program that increases its time slice. In the absence of this mechanism, the operating
system has no choice but to begin to execute a program with the expectation, but not the certainty, that
the program will eventually return control to the operating system.
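The round-robin interleaving described above can be sketched as a toy scheduler loop (process names and burst lengths here are purely illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Interleave processes on one CPU: run each for at most `quantum`
    ticks, then preempt it and move it to the back of the ready queue."""
    ready = deque(bursts.items())
    schedule = []                       # (process, ticks run) in dispatch order
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        schedule.append((name, ran))
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # preempted: requeue
    return schedule

# Two "simultaneous" programs sharing one processor:
print(round_robin({"A": 3, "B": 2}, quantum=1))
# → [('A', 1), ('B', 1), ('A', 1), ('B', 1), ('A', 1)]
```

Because the switches happen far faster than a user can perceive, A and B appear to run at the same time even though the CPU only ever executes one of them.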

A3. In a variable partition scheme, the operating system has to keep track of allocated and free
space. Suggest a means of achieving this. Describe the effects of new allocations and process
terminations in your suggested scheme.
In operating systems, memory management is the function responsible for allocating and
managing the computer's main memory. It keeps track of the status of each memory location,
either allocated or free, to ensure effective and efficient use of primary memory.

There are two memory management techniques: contiguous and non-contiguous. In the
contiguous technique, an executing process must be loaded entirely into main memory. The
contiguous technique can be divided into:

• Fixed (or static) partitioning
• Variable (or dynamic) partitioning

Variable Partitioning -
It is a part of the contiguous allocation technique, used to alleviate the problems of fixed
partitioning. In contrast with fixed partitioning, partitions are not made before execution or at
system configuration time. Features associated with variable partitioning:

• Initially RAM is empty; partitions are made at run time according to each process's need
rather than at system configuration time.
• The size of a partition is equal to the size of the incoming process.
• Because the partition size matches the process, internal fragmentation is avoided and RAM
is utilized efficiently.
• The number of partitions in RAM is not fixed; it depends on the number of incoming
processes and the size of main memory.

A simple means of tracking allocated and free space is a list (or bitmap) of regions, each recorded
as a start address, a size, and a status. On a new allocation, a free region large enough for the
process is found (e.g. first fit or best fit), the process is placed at its start, and any leftover space
remains as a smaller free hole. On process termination, the partition is marked free and merged
with any adjacent free regions, which limits external fragmentation.
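One concrete scheme, a sorted free list with first-fit allocation and coalescing of adjacent holes on termination, can be sketched as follows (class and method names are hypothetical):

```python
class VariablePartitionMemory:
    """Track free holes as (start, size) pairs kept sorted by address."""
    def __init__(self, size):
        self.free = [(0, size)]          # initially one big hole
        self.allocated = {}              # pid -> (start, size)

    def allocate(self, pid, size):
        """First fit: carve the request out of the first large-enough hole."""
        for i, (start, hole) in enumerate(self.free):
            if hole >= size:
                self.allocated[pid] = (start, size)
                if hole == size:
                    del self.free[i]
                else:
                    self.free[i] = (start + size, hole - size)
                return start
        return None                      # no hole big enough

    def terminate(self, pid):
        """Return the partition to the free list, merging adjacent holes."""
        start, size = self.allocated.pop(pid)
        self.free.append((start, size))
        self.free.sort()
        merged = [self.free[0]]
        for s, sz in self.free[1:]:
            ps, psz = merged[-1]
            if ps + psz == s:            # adjacent holes: coalesce
                merged[-1] = (ps, psz + sz)
            else:
                merged.append((s, sz))
        self.free = merged

mem = VariablePartitionMemory(100)
mem.allocate("P1", 30)    # remaining hole: (30, 70)
mem.allocate("P2", 20)    # remaining hole: (50, 50)
mem.terminate("P1")       # (0, 30) freed; not adjacent to (50, 50)
print(mem.free)           # → [(0, 30), (50, 50)]
```

A new allocation shrinks or consumes a hole; a termination creates a hole that is coalesced with its neighbours when they are also free.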

A4. One of the decisions to be made in operating system design is whether to include the file
system as part of the core kernel or not. Give at least 2 reasons why making the file system part
of the OS would be a good idea, as well as at least 1 reason why implementing the file system
outside the OS might make sense.

Inside the kernel: the file system is faster (no user/kernel boundary crossings on each operation),
it benefits from the kernel's protection and security mechanisms, it can share the kernel's caching
and buffering machinery, and it can couple closely to the storage system for I/O optimization.
Outside the kernel: a user-level file system offers flexibility, portability, and OS independence.

A5. If the operating system were to know that a certain application is going to access the file
data in a sequential manner, how could it exploit this information to improve performance?

When a block is accessed, the file system could prefetch the subsequent blocks in anticipation of future
requests to these blocks. This prefetching optimization would reduce the waiting time experienced by the
process for future requests.
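A minimal read-ahead sketch of this idea (the block-read callback and cache layout are hypothetical): on each read, the next block is also fetched into the cache before it is requested, so a sequential reader finds it already in memory.

```python
def read_with_prefetch(read_block, cache, n, ahead=1):
    """Return block n, prefetching the next `ahead` blocks for sequential readers."""
    if n not in cache:
        cache[n] = read_block(n)            # demand fetch (slow path)
    for k in range(n + 1, n + 1 + ahead):
        cache.setdefault(k, read_block(k))  # prefetch in anticipation
    return cache[n]

disk_reads = []
def fake_read(n):                           # stand-in for a real device read
    disk_reads.append(n)
    return f"block-{n}"

cache = {}
for n in range(3):                          # sequential access pattern
    read_with_prefetch(fake_read, cache, n)
print(disk_reads)   # → [0, 1, 2, 3]  (blocks 1 and 2 were cached before being asked for)
```

In a real system the prefetch would be issued asynchronously so the device works while the process computes; here it is done inline only to keep the sketch short.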

A6. A file system design decision is to choose between "typed" and "untyped" files. Explain what
is meant by each of these terms. Which approach is used in the Linux file system?

Typed files are files of a strictly defined type, most often files consisting of records; they are used
to build various databases. A typed file is a file whose components are all of the same type,
specified when declaring the file variable. The components are stored on disk in an internal
(binary) format and are numbered from 0. If you open such a file in a text editor, only the
character data is recognizable; in place of the numbers there are spaces or pseudo-graphic
characters.
Untyped files are files declared without specifying the type of their components. Read and write
operations on such files are performed in blocks. The absence of a component type makes these
files compatible with any other, and block I/O allows high-speed data exchange between the disk
and memory. Untyped files, like typed files, allow direct access. (This terminology comes from
Pascal, where an untyped file is declared with the reserved word File alone.)
Linux takes the untyped approach: a file is simply a sequence of bytes, and any structure or
interpretation is left to the application. Linux also uses a single hierarchical directory structure:
everything starts from the root directory, represented by /, and expands into sub-directories,
instead of having separate 'drives' (C:, D:, etc.) as in the Windows environment.
Section - B:
A8. Consider the following set of processes, with the length of the CPU burst time given in ms
(burst times: P1 = 2, P2 = 1, P3 = 8, P4 = 4, P5 = 5; priority order, highest first: P3, P5, P1, P4, P2):
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, Priority
and RR (quantum=2) scheduling.
ii. Also calculate the waiting time and turnaround time for each scheduling algorithm.

i.

FCFS
| P1 | P2 |      P3      |   P4   |   P5   |
0    2    3              11       15       20

SJF
| P2 | P1 |   P4   |   P5   |      P3      |
0    1    3        7        12             20

Priority
|      P3      |   P5   | P1 |   P4   | P2 |
0              8        13   15       19   20

RR (quantum = 2)
| P1 | P2 | P3 | P4 | P5 | P3 | P4 | P5 | P3 | P5 | P3 |
0    2    3    5    7    9    11   13   15   17   18   20
ii.

Turnaround time:

      FCFS   SJF   Priority   RR
P1      2      3        15      2
P2      3      1        20      3
P3     11     20         8     20
P4     15      7        19     13
P5     20     12        13     18

Waiting time (turnaround time minus burst time):

      FCFS   SJF   Priority   RR
P1      0      1        13      0
P2      2      0        19      2
P3      3     12         0     12
P4     11      3        15      9
P5     15      7         8     13

SJF has the shortest average waiting time.
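The non-preemptive cases can be checked mechanically. A sketch for FCFS and SJF (burst times as above: P1 = 2, P2 = 1, P3 = 8, P4 = 4, P5 = 5; all processes arrive at time 0):

```python
bursts = {"P1": 2, "P2": 1, "P3": 8, "P4": 4, "P5": 5}

def turnaround(order, bursts):
    """Completion time of each process run back-to-back in `order`.
    With all arrivals at t=0, turnaround time == completion time."""
    t, out = 0, {}
    for p in order:
        t += bursts[p]
        out[p] = t
    return out

fcfs = turnaround(["P1", "P2", "P3", "P4", "P5"], bursts)
sjf = turnaround(sorted(bursts, key=bursts.get), bursts)  # shortest burst first
print(fcfs)   # → {'P1': 2, 'P2': 3, 'P3': 11, 'P4': 15, 'P5': 20}
print(sjf)    # → {'P2': 1, 'P1': 3, 'P4': 7, 'P5': 12, 'P3': 20}

waiting = {p: fcfs[p] - bursts[p] for p in bursts}  # FCFS waiting times
print(waiting)  # → {'P1': 0, 'P2': 2, 'P3': 3, 'P4': 11, 'P5': 15}
```

The printed values match the FCFS and SJF columns of the tables above; Priority scheduling is the same computation with the order taken from the priority ranking, while RR needs the queue simulation shown in the Gantt chart.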


A9. Consider a file system on a disk that has both logical and physical block sizes of 512 bytes.
Assume that the information about each file is already in memory. For each of the three
allocation strategies (contiguous, linked, and indexed), answer these questions:
a. How is the logical-to-physical address mapping accomplished in this system? (For the
indexed allocation, assume that a file is always less than 512 blocks long.)
b. If we are currently at logical block 10 (the last block accessed was block 10) and want to
access logical block 4, how many physical blocks must be read from the disk?

(a) How is the logical-to-physical address mapping accomplished in this system? (For the
indexed allocation, assume that a file is always less than 512 blocks long.)
Answer: Let Z be the starting file address (block number).
– Contiguous. Divide the logical address by 512 with X and Y the resulting quotient
and remainder respectively. Add X to Z to obtain the physical block number. Y is the
displacement into that block.
– Linked. Divide the logical address by 511 with X and Y the resulting quotient and remainder
respectively. Chase down the linked list (reading X + 1 blocks).
Y + 1 is the displacement into the last physical block.
– Indexed. Divide the logical address by 512 with X and Y the resulting quotient and
remainder respectively. Get the index block into memory. Physical block address
is contained in the index block at location X. Y is the displacement into the desired
physical block.

(b) If we are currently at logical block 10 (the last block accessed was block 10) and want to
access logical block 4, how many physical blocks must be read from the disk?
Answer:
– Contiguous. 1
– Linked. 4
– Indexed. 2
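The three mappings can be sketched directly (block size 512 bytes; for the linked scheme, as in the answer above, each block is assumed to hold 511 data bytes plus a pointer, and one block must be read per link followed):

```python
BLOCK = 512

def contiguous(start, logical_addr):
    """Physical block number and offset under contiguous allocation."""
    x, y = divmod(logical_addr, BLOCK)
    return start + x, y

def linked(logical_addr):
    """Linked allocation: how many blocks to read chasing the list,
    and the displacement within the final block (511 data bytes/block)."""
    x, y = divmod(logical_addr, 511)
    return x + 1, y + 1

def indexed(index_block, logical_addr):
    """Indexed allocation: look up the physical block in the in-memory
    index block (file assumed < 512 blocks, so one index block suffices)."""
    x, y = divmod(logical_addr, BLOCK)
    return index_block[x], y

print(contiguous(100, 1024))      # → (102, 0)   start block 100, third block
print(linked(1024))               # → (3, 3)     1024 = 2*511 + 2
print(indexed([7, 9, 42], 1024))  # → (42, 0)    third entry of a sample index block
```

The `start` value and the sample index block contents are illustrative; in a real system they come from the file's directory entry or inode, which the question says is already in memory.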

Section -C:
A10 a) The filesystem buffer cache does both buffering and caching. Describe why buffering is
needed. Describe how buffering can improve performance (potentially to the detriment of file system
robustness). Describe how the caching component of the buffer cache improves performance. [8]
b) The traditional UNIX scheduler is a priority-based round robin scheduler (also called a multi-level
round robin scheduler). How does the scheduler go about favoring I/O-bound jobs over long-running
CPU-bound jobs? [6]
c) Multi-programming (or multi-tasking) enables more than a single process to apparently execute
simultaneously. How is this achieved on a uniprocessor? Explain with suitable example. [6]

a) Buffering is required when the unit of transfer/update differs between two entities (e.g. updating 1
byte in a disk block requires buffering the whole disk block in memory). Buffering can improve
performance because writes can be buffered and flushed to disk in the background; this does, however,
reduce file system robustness to power failures and similar events. Caching means keeping recently used
data in the buffer cache, so that subsequent reads of the same data are served from memory without
accessing the disk. The speed-up follows from the principle of locality: if data is used, nearby data will
likely be used soon, so the application runs much faster when that data is in RAM rather than requiring a
disk seek, which is several orders of magnitude slower.
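A toy write-back buffer cache illustrating both points (hypothetical structure; a real cache also handles eviction, block alignment, and concurrent access): writes only dirty the in-memory copy and are written back later, which is exactly why a crash before the flush loses data.

```python
class BufferCache:
    """Toy write-back cache over a dict standing in for the disk."""
    def __init__(self, disk):
        self.disk = disk
        self.cache = {}      # block number -> in-memory copy
        self.dirty = set()   # blocks modified but not yet written back

    def read(self, block):
        if block not in self.cache:      # miss: one real disk access
            self.cache[block] = self.disk[block]
        return self.cache[block]         # hit: served from memory

    def write(self, block, data):
        self.cache[block] = data         # buffered: disk is NOT touched yet
        self.dirty.add(block)

    def flush(self):
        for block in self.dirty:         # deferred, "background" write-back
            self.disk[block] = self.cache[block]
        self.dirty.clear()

disk = {0: "old"}
bc = BufferCache(disk)
bc.write(0, "new")
print(disk[0])   # → 'old'  (update only in memory: fast, but lost on a crash)
bc.flush()
print(disk[0])   # → 'new'
```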
b) In general, a scheduler will want to favor I/O-bound jobs over CPU-bound jobs. An I/O-bound job is
one that does little actual computation and spends most of its time waiting for I/O; a CPU-bound job is
the opposite: it rarely does I/O and will hardly ever give up the CPU before the scheduler forces it off.
Wanting to maximize the use of the hardware leads to an overlap of I/O and computation, because I/O
devices can be busy while other jobs use the CPU. The traditional UNIX scheduler achieves this by
periodically recomputing each process's priority from its recently accumulated CPU usage and decaying
that usage over time: a job that blocks on I/O accumulates little CPU time, so it keeps a high priority and
is dispatched promptly when it wakes, while a long-running CPU-bound job's accumulated usage pushes
it down to a lower-priority queue.
Long-term Scheduler
The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready
queue and how many processes should be admitted into the ready queue. This controls the total number of
jobs which can be running within the system. In practice, this limit is generally not reached, but if a
process attempts to fork off a large number of processes it will eventually reach an OS defined limit
where it will prevent any further processes from being created. This type of scheduling is very important
for a real-time operating system, as the system’s ability to meet process deadlines may be compromised
by the slowdowns and contention resulting from the admission of more processes than the system can
safely handle.

Short-term Scheduler
The short-term scheduler (also known as the dispatcher) decides which of the ready, in-memory processes
are to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, or an operating
system call. Thus, the short-term scheduler makes scheduling decisions much more frequently than the
long-term schedulers - a scheduling decision will at a minimum have to be made after every time slice,
which can be as often as every few milliseconds. This scheduler can be preemptive, implying that it is
capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another
process, or non-preemptive, in which case the scheduler is unable to "force" processes off the CPU.
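The priority-recomputation mechanism can be sketched with a simplified decay model (the constants are illustrative, not the exact 4.3BSD values; lower numbers mean higher priority):

```python
def recompute_priority(base, cpu_usage):
    """Lower number = higher priority; recent CPU use worsens priority."""
    return base + cpu_usage // 2

def decay(cpu_usage):
    """Periodic decay: accumulated usage 'leaks away' while a job waits."""
    return cpu_usage // 2

cpu_bound, io_bound = 0, 0
for _ in range(4):                 # four priority-recomputation intervals
    cpu_bound += 100               # CPU-bound job burns its whole allotment
    io_bound += 5                  # I/O-bound job blocks almost immediately
    cpu_bound = decay(cpu_bound)
    io_bound = decay(io_bound)

print(recompute_priority(50, cpu_bound))  # CPU hog: numerically large (worse) priority
print(recompute_priority(50, io_bound))   # I/O job: stays near base, so it is favored
```

Because the I/O-bound job accumulates almost no usage, its recomputed priority stays close to the base and it runs as soon as it becomes ready, which is the behavior the answer above describes.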

c) Multi-programming (or multi-tasking) enables more than a single process to apparently execute
simultaneously. How is this achieved on a uniprocessor? Multiprogramming is achieved on a
uniprocessor by time sharing: each process's execution is divided into short runs on the CPU, called
time slices. When a process's time slice expires, the operating system performs a context switch to a
different ready process. Time slices are typically on the order of milliseconds, so to the user the
processor appears to be running the processes concurrently. The ultimate goal is to keep the system
responsive while maximizing the processor's utilization. For example, while a text editor is blocked
waiting for the next keystroke, the CPU can run a compile job, switching back the moment the editor
becomes runnable again. The above scenario is known as pre-emptive multitasking. An alternative
scheme is cooperative multitasking, where each process occasionally yields the CPU to another process
so that it may run.
