2.
(i) Differentiate between multiprogramming and time-sharing systems.
Answer :
Multiprogramming keeps several jobs in memory so that the CPU always has something
to execute, which increases CPU utilization; its goal is throughput, and a job keeps the
CPU until it blocks or finishes. Time sharing extends multiprogramming by switching the
CPU between jobs so frequently that several users can interact with their programs at
the same time; its goal is fast response time.
OPERATING SYSTEM EXAM PAPER 2016
3.
(i) What is paging? Write the advantages and disadvantages of paging.
Answer :
Paging:
Paging is a memory management scheme where the operating system divides physical
memory into fixed-size blocks (frames) and logical memory into corresponding-sized
blocks (pages).
Advantages:
No external fragmentation.
A process's memory need not be contiguous in physical memory.
Simple, fixed-size allocation of frames.
Supports virtual memory and demand paging.
Disadvantages:
Introduces overhead.
May lead to page faults.
Complexity of implementation.
Possibility of thrashing.
Fixed page size limitation.
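The translation that paging performs can be sketched in a few lines of Python (the page size, page table contents, and function name here are made up for illustration):

```python
# Sketch of paging address translation; page size and table contents are made up.
PAGE_SIZE = 4096  # bytes per page (and per frame)

# Hypothetical page table for one process: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split the address into (page, offset) and relocate it to a frame."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Note how a reference to a page missing from the table surfaces as a page fault, the overhead mentioned in the disadvantages above.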
(ii) What is the need of paging the segment table (paged segmentation)?
Answer :The need for combining paging with segmented memory management arises
from the desire to have the flexibility of logical segmentation and the efficiency of
fixed-size page allocation. This approach avoids external fragmentation, supports virtual
memory, provides efficient memory protection, and allows better utilization of system
memory. Paged segmentation optimally balances the advantages of both segmentation
and paging in an operating system.
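The two-level lookup this describes can be sketched as follows (segment numbers, page size, and table contents are invented for illustration):

```python
# Sketch of paged segmentation; sizes and table contents are invented.
PAGE_SIZE = 256  # small page size to keep the numbers readable

# Segment table: each segment points to its own page table (page -> frame).
segment_table = {
    0: {0: 3, 1: 9},  # e.g. code segment
    1: {0: 4},        # e.g. data segment
}

def translate(segment, offset_in_segment):
    """Two-level lookup: segment table first, then that segment's page table."""
    page, offset = divmod(offset_in_segment, PAGE_SIZE)
    page_table = segment_table[segment]  # KeyError -> invalid segment
    frame = page_table[page]             # KeyError -> page fault
    return frame * PAGE_SIZE + offset

print(translate(0, 300))  # segment 0, page 1, offset 44 -> frame 9 -> 2348
```

Because each segment is allocated in fixed-size pages, the segments themselves need not be contiguous, which is how the scheme avoids external fragmentation.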
4.
(i) Difference between preemptive and non-preemptive scheduling.
Answer :
In preemptive scheduling, the operating system can interrupt a running process and
reallocate the CPU to another process (e.g., Round Robin, SRTF). In non-preemptive
scheduling, a process keeps the CPU until it terminates or voluntarily blocks (e.g.,
FCFS, SJF).
(ii) Difference between logical and physical address space.
Answer :
Logical Address Space:
Definition:
● The set of all possible addresses generated by a program during its
execution is known as the logical address space.
Visibility:
● The logical address space is the view seen by the CPU or the program
itself.
Size:
● Can be larger than the physical address space when virtual memory is
used.
Handling by OS:
● The operating system manages the mapping of logical addresses to
physical addresses through techniques like paging or segmentation.
Physical Address Space:
Definition:
● The set of all addresses corresponding to the actual physical memory
locations in the hardware is the physical address space.
Visibility:
● The physical address space is what the memory hardware understands
and uses.
Size:
● Limited by the actual size of the physical memory (RAM) installed in the
computer.
Handling by OS:
● The operating system is responsible for mapping the logical addresses to
physical addresses to facilitate the proper execution of programs.
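The simplest form of this mapping uses base and limit registers; a minimal sketch, with made-up register values:

```python
# Minimal sketch of logical-to-physical mapping with base and limit registers.
# The register values are made up for illustration.
BASE = 0x4000   # where the process's partition starts in physical memory
LIMIT = 0x1000  # size of the partition in bytes

def map_address(logical):
    """Protection check against the limit, then relocation by the base."""
    if not 0 <= logical < LIMIT:
        raise MemoryError(f"protection fault: address {logical:#x} out of range")
    return BASE + logical

print(hex(map_address(0x0FF0)))  # 0x4ff0
```

Paging and segmentation generalize this idea: instead of one (base, limit) pair per process, there is one mapping entry per page or per segment.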
Process Control Block (PCB):
A PCB stores the following information for each process:
Process State:
● Indicates whether the process is ready, running, or waiting.
Program Counter (PC):
● Keeps track of the address of the next instruction to be executed.
CPU Registers:
● Stores the values of CPU registers, including the accumulator and
general-purpose registers.
CPU Scheduling Information:
● Contains data related to priority, scheduling algorithm details, and other
parameters influencing process scheduling.
Memory Management Information:
● Includes base and limit registers, indicating the process's memory
location in the physical memory.
Accounting Information:
● Tracks resource usage, CPU time, and other statistics for the process.
I/O Status Information:
● Records the status of I/O operations, such as open files and pending I/O
requests.
Process Identifier:
● A unique identifier assigned to each process.
Link to Next PCB:
● Pointer to the next PCB, used to link PCBs together in a scheduling
queue.
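The fields above can be sketched as a Python dataclass (the field names and defaults are illustrative, not any particular OS's layout):

```python
from dataclasses import dataclass, field
from typing import Optional

# The PCB fields above sketched as a dataclass; names and defaults are illustrative.
@dataclass
class PCB:
    pid: int                        # process identifier
    state: str = "new"              # ready, running, waiting, ...
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0               # CPU scheduling information
    base: int = 0                   # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
    next_pcb: Optional["PCB"] = None                # link to next PCB

p = PCB(pid=1, priority=5)
p.state = "ready"                   # state transition: new -> ready
print(p.pid, p.state)
```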
Scheduling Queues:
Scheduling queues are data structures used by the operating system to organize
processes based on their current state and priority. These queues play a crucial role in
determining which process gets access to the CPU. Commonly used scheduling queues
include:
Job Queue:
● Contains all processes in the system, regardless of their current state.
Ready Queue:
● Holds processes that are ready to execute. The scheduler selects a
process from this queue for CPU allocation.
Waiting Queue:
● Stores processes waiting for a particular event or resource, such as I/O
completion.
Suspended Queue:
● Contains processes that are temporarily moved to secondary storage to
free up main memory.
Terminated Queue:
● Holds processes that have completed their execution or have been
terminated.
6. Enqueue Operation:
a. Adds a process to the appropriate queue when it transitions from one
state to another (e.g., from new to ready, or from running to waiting).
7. Dequeue Operation:
a. Removes a process from a queue when it transitions to a different state
(e.g., from ready to running).
8. Priority Scheduling:
a. Some queues may be further divided based on priority levels, allowing the
scheduler to give preference to higher-priority processes.
9. Dispatcher:
a. The dispatcher is responsible for transferring a process from the ready
queue to the running state, based on the scheduling algorithm's decision.
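The enqueue, dequeue, and dispatch operations above can be modeled with ordinary FIFO queues (the process names are made up):

```python
from collections import deque

# Toy model of ready/waiting queues and the dispatcher; process names are made up.
ready_queue = deque()
waiting_queue = deque()

def enqueue(queue, pid):
    queue.append(pid)              # add at the tail on a state transition

def dispatch():
    """Move the process at the head of the ready queue to the running state."""
    return ready_queue.popleft() if ready_queue else None

enqueue(ready_queue, "P1")         # new -> ready
enqueue(ready_queue, "P2")
running = dispatch()               # P1 is dispatched to the CPU
enqueue(waiting_queue, running)    # P1 blocks on I/O: running -> waiting
print(running, list(ready_queue), list(waiting_queue))  # P1 ['P2'] ['P1']
```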
Importance of CPU Scheduling:
Resource Utilization:
● Efficient CPU scheduling ensures that the CPU is continuously utilized,
maximizing the system's resource usage.
Response Time:
● Quick response times to user interactions are crucial for a responsive and
interactive computing experience. Effective CPU scheduling helps
minimize response times.
Throughput:
● CPU scheduling influences the system's throughput, which is the number
of processes completed within a given time period.
Fairness:
● Fair scheduling ensures that all processes get a fair share of CPU time,
preventing any single process from monopolizing the CPU.
Priority Management:
● Priority-based scheduling allows the system to allocate CPU time based
on the importance or priority assigned to each process.
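Two of these criteria, average waiting time and throughput, can be computed for a small first-come-first-served run (the burst times are made up):

```python
# Average waiting time and throughput for a tiny FCFS run; burst times are made up.
bursts = {"P1": 24, "P2": 3, "P3": 3}  # CPU bursts in ms, all arriving at t = 0

elapsed = 0
waiting = {}
for pid, burst in bursts.items():  # FCFS: serve in arrival order
    waiting[pid] = elapsed         # time the process waited before running
    elapsed += burst

avg_wait = sum(waiting.values()) / len(waiting)   # (0 + 24 + 27) / 3
throughput = len(bursts) / elapsed                # processes per ms
print(avg_wait, throughput)  # 17.0 0.1
```

Running the short jobs first would cut the average waiting time sharply, which is exactly the kind of trade-off these criteria are used to compare.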
Static Priority Scheduling:
In static priority scheduling, each process is assigned a fixed priority that does not
change during its execution. The process with the highest priority gets the CPU first. If
multiple processes share the same priority, other scheduling criteria, such as
first-come-first-served, may be used.
Priority Assignment:
● Each process is assigned a static priority value, often based on factors like
process type, user priority, or system requirements.
Queue Formation:
● Processes are organized into priority queues, with each queue
representing a different priority level. Higher-priority queues are given
preference.
Selection of Process:
● The scheduler selects the process with the highest priority from the
highest-priority non-empty queue.
CPU Allocation:
● The selected process is allocated the CPU for execution.
Priority Adjustment:
● After a process completes its time slice or yields the CPU, its priority may
be adjusted based on predefined criteria.
Repeat the Process:
● The selection, allocation, and adjustment steps above are repeated to
continuously select and execute processes based on their static priorities.
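The selection step can be sketched with a min-heap ordered by static priority (here a lower number means higher priority; the processes and priorities are invented):

```python
import heapq

# Static-priority selection with a min-heap; lower number = higher priority.
# Process names and priorities are invented for illustration.
ready = [(2, "P1"), (0, "P3"), (1, "P2")]  # (static priority, pid)
heapq.heapify(ready)

order = []
while ready:                     # repeat: pick the highest-priority process
    priority, pid = heapq.heappop(ready)
    order.append(pid)

print(order)  # ['P3', 'P2', 'P1']
```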
7. What is a semaphore? Define the wait operation wait(s) and the wakeup
operation signal(s). Give an explanation and an example of when and how they
are used.
Answer : Semaphore:
A semaphore is a synchronization primitive used in concurrent programming to
control access to a shared resource. It is a variable that is used to signal or
coordinate processes or threads in a multi-process or multi-threaded
environment. Semaphores can be used to enforce mutual exclusion, control
access to critical sections, and synchronize activities between different
processes or threads.
Wait Operation (Wait or P Operation):
The wait operation, often denoted as wait(s) or P(s), decrements the value of the
semaphore s. If the resulting value is negative, the process or thread performing the
wait operation is blocked, and if it's non-negative, the process continues. The wait
operation is used to acquire or "wait" for a resource, and it ensures that only one
process can enter a critical section at a time.
Signal Operation (Signal or V Operation):
The signal operation, often denoted as signal(s) or V(s), increments the value of the
semaphore s. If the value after the increment is still zero or negative, at least one
process is blocked on the semaphore, and one waiting process is unblocked. The signal
operation is used to release or "signal" that a resource is available for use.
Consider a scenario where two processes need to share a printer. A semaphore can be
used to ensure that only one process can access the printer at a time:
# Process 1 (Printing)
wait(printer_semaphore) # Attempt to acquire the printer
# Critical section: Print a document
signal(printer_semaphore) # Release the printer

# Process 2 (Printing)
wait(printer_semaphore) # Attempt to acquire the printer
# Critical section: Print another document
signal(printer_semaphore) # Release the printer
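The same printer scenario can be run as real code with Python's threading.Semaphore, whose acquire() and release() correspond to wait(s) and signal(s) (the thread and variable names are illustrative):

```python
import threading

# Runnable version of the printer example; names are illustrative.
printer_semaphore = threading.Semaphore(1)  # one printer -> count starts at 1
log = []

def print_job(name):
    printer_semaphore.acquire()         # wait(printer_semaphore)
    try:
        log.append(f"{name} printing")  # critical section: use the printer
    finally:
        printer_semaphore.release()     # signal(printer_semaphore)

threads = [threading.Thread(target=print_job, args=(f"Process {i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)
```

Initializing the semaphore to 1 makes it a binary semaphore (a mutex): at most one thread is ever inside the critical section.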
Answer :
(i) System Call:
A system call is a mechanism that allows a program to request services from the
operating system. It serves as an interface between user-level applications and the
kernel, enabling user processes to perform privileged operations and access system
resources. System calls provide a way for programs to interact with hardware, perform
I/O operations, manage memory, and execute privileged instructions. Examples of
system calls include opening or closing files, creating processes, allocating memory,
and performing I/O operations. Each operating system defines its set of system calls,
and they play a crucial role in facilitating the interaction between user applications and
the underlying system.
(ii) Deadlock:
Deadlocks are situations in a concurrent system where two or more processes are
unable to proceed because each is waiting for the other to release a resource. There are
four necessary conditions for a deadlock to occur, known as the Coffman conditions or
deadlock characterizations:
Mutual Exclusion:
● At least one resource must be held in a non-sharable mode, so that only
one process can use it at a time.
Hold and Wait:
● A process holding at least one resource is waiting to acquire additional
resources held by other processes.
No Preemption:
● Resources cannot be forcibly taken away from a process; they must be
explicitly released.
Circular Wait:
● There must exist a cycle in the resource allocation graph, where each
process in the cycle is waiting for a resource held by the next process.
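The circular-wait condition can be checked by looking for a cycle in a wait-for graph; a small sketch with an invented graph:

```python
# Checking the circular-wait condition on a toy wait-for graph.
# An edge "P1" -> "P2" means P1 is waiting for a resource held by P2.
def has_cycle(graph):
    """Depth-first search with a recursion stack to detect a cycle."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:                      # back edge -> circular wait
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": []}))                    # False
```

Deadlock detection algorithms in real systems use the same idea on the resource allocation graph; a cycle among single-instance resources implies deadlock.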
Memory Management Functions:
Address Binding:
● Assigning addresses to variables and instructions during compilation or
execution.
Memory Allocation:
● Allocating memory to processes, managing free memory space, and
dealing with memory fragmentation.
Memory Protection:
● Ensuring that processes do not interfere with each other's memory space
to maintain data integrity and system stability.
Virtual Memory:
● Using disk space as an extension of physical memory to allow for the
execution of processes larger than the available physical memory.
Memory Mapping:
● Mapping files into memory for efficient file I/O operations.
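Memory-mapped file I/O can be demonstrated with Python's mmap module (the file path here is a throwaway temporary file created for the example):

```python
import mmap
import os
import tempfile

# Demonstration of memory-mapped file I/O; the file is a throwaway temp file.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello memory mapping")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # map the entire file into memory
        first = bytes(mm[:5])             # reads go through the mapping
        mm[0:5] = b"HELLO"                # writes update the underlying file

with open(path, "rb") as f:
    contents = f.read()
print(first, contents)
```

Because the file's pages are mapped into the process's address space, reads and writes become ordinary memory accesses instead of read()/write() system calls.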