
OPERATING SYSTEM EXAM PAPER 2016

1. Explain in brief the following :


(i) Operating system
(ii) System
(iii) Process
(iv) Thread
(v) File concepts
(vi) Virtual memory
(vii) Swapping
(viii) Swap space
Answer :
Operating system : An operating system (OS) is software that manages all of the
hardware and software on a computer: it handles input and output, performs file,
memory, and process management, and controls peripheral devices such as printers
and disk drives.
System : A set of things working together as a mechanism or interconnecting network.
Process : A process is essentially a program in execution. The instructions of a
process must execute in a specific sequential order.
Thread : A thread refers to a single sequential flow of activities being executed in a
process; it is also known as the thread of execution or the thread of control.
File concept : The operating system creates a logical storage unit known as a file by
abstracting from the physical characteristics of its storage devices.
Virtual memory : A computer can temporarily move data from random access memory
(RAM) to disk storage through the use of virtual memory, which combines hardware and
software to make up for physical memory deficiencies.
Swapping : Swapping involves loading a process into memory for execution and,
after it has run for some time, temporarily copying it back to the disk.
Swap space : Swap space is the area of disk, frequently a separate disk partition,
used to hold swapped-out data and thereby increase the amount of memory that is
effectively available.

2.
(i) Differentiate between multiprogramming and time-sharing systems.
Answer :
Multiprogramming keeps several jobs in memory at once so that the CPU always has
something to execute, thus increasing CPU utilization. Time sharing, on the other
hand, is a logical extension of multiprogramming in which the CPU switches between
jobs so frequently that several users can share the computing resources and interact
with their programs simultaneously.

(ii) Justify the statement that "operating system services are essential".


Answer :
Operating system services are essential for the smooth and effective operation of a
computer system because they handle files and devices, manage resources, offer a
user interface, guarantee security, control processes, handle errors, facilitate
networking, and allow applications to communicate with hardware.

3.
(i) What is paging? Write the advantages and disadvantages of paging.
Answer : Paging:
Paging is a memory management scheme where the operating system divides physical
memory into fixed-size blocks (frames) and logical memory into corresponding-sized
blocks (pages).
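
For illustration, here is a minimal sketch of the address translation that paging
implies, assuming a 4 KB page size and a small hypothetical page table (neither is
taken from a real system):

PAGE_SIZE = 4096  # assumed page size

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE  # high-order bits
    offset = logical_address % PAGE_SIZE        # low-order bits
    frame_number = page_table[page_number]      # missing entry = page fault
    return frame_number * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196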

Advantages:

​ Simplifies memory management.


​ Allows flexible memory allocation.
​ Solves external fragmentation.
​ Supports virtual memory.

Disadvantages:

​ Introduces overhead.
​ May lead to page faults.
​ Complexity of implementation.
​ Possibility of thrashing.
​ Fixed page size limitation.

(ii) What is the need of paging the segment table (paged segmentation)?
Answer : The need for combining paging with segmented memory management arises
from the desire to have the flexibility of logical segmentation and the efficiency of
fixed-size page allocation. This approach avoids external fragmentation, supports virtual
memory, provides efficient memory protection, and allows better utilization of system
memory. Paged segmentation optimally balances the advantages of both segmentation
and paging in an operating system.
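
A minimal sketch of paged segmentation, with made-up tables and an assumed 4 KB
page size: each segment-table entry points to that segment's own page table, so
segments no longer need contiguous physical memory:

PAGE_SIZE = 4096  # assumed page size

# Hypothetical segment table: segment number -> that segment's page table
segment_table = {
    0: {0: 9, 1: 4},  # segment 0: page -> frame
    1: {0: 1},        # segment 1: page -> frame
}

def translate(segment_number, offset_in_segment):
    page_number = offset_in_segment // PAGE_SIZE
    page_offset = offset_in_segment % PAGE_SIZE
    page_table = segment_table[segment_number]  # missing entry: segment fault
    frame = page_table[page_number]             # missing entry: page fault
    return frame * PAGE_SIZE + page_offset

print(translate(0, 4096))  # segment 0, page 1 -> frame 4 -> 16384
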
4.
(i) Difference between preemptive and non-preemptive scheduling.
Answer : Preemptive Scheduling:

● Allows interruption of running processes.
● More frequent context switching.
● Better responsiveness.
● May have higher overhead.

Non-Preemptive Scheduling:

● Does not allow interruption of running processes.
● Less frequent context switching.
● Lower responsiveness.
● Tends to have lower overhead.

(ii) Difference between logical and physical address space.


Answer : Logical Address Space:

​ Definition:
● The set of all possible addresses generated by a program during its
execution is known as the logical address space.
​ Visibility:
● The logical address space is the view seen by the CPU or the program
itself.
​ Size:
● Can be larger than the physical address space (for example, when
virtual memory is used).
​ Handling by OS:
● The operating system manages the mapping of logical addresses to
physical addresses through techniques like paging or segmentation.

Physical Address Space:

​ Definition:
● The set of all addresses corresponding to the actual physical memory
locations in the hardware is the physical address space.
​ Visibility:
● The physical address space is what the memory hardware understands
and uses.
​ Size:
● Limited by the actual size of the physical memory (RAM) installed in the
computer.
​ Handling by OS:
● The operating system is responsible for mapping the logical addresses to
physical addresses to facilitate the proper execution of programs.
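
As a simple illustration of this mapping, here is a minimal sketch of the
relocation-register scheme used with contiguous allocation (the base and limit
values are made up); paging and segmentation replace the single base register with
tables, but the idea of translating every logical address is the same:

BASE = 14000   # made-up start of the process's partition in physical memory
LIMIT = 3000   # made-up size of the partition

def to_physical(logical_address):
    # The MMU checks the limit register, then adds the base (relocation) register.
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("trap: logical address out of bounds")
    return BASE + logical_address

print(to_physical(346))  # -> 14346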

5. What is a process control block (PCB) in CPU scheduling? Explain scheduling
queues in detail.
Answer : A Process Control Block (PCB) is a data structure used by the operating
system to store information about a currently running or recently terminated process. In
the context of CPU scheduling, the PCB contains crucial details needed for the effective
management and control of processes. It acts as a repository of information about a
process's state, resource usage, and other essential parameters (a minimal sketch
follows the component list below). Key components of a PCB include:

​ Process State:
● Indicates whether the process is ready, running, or waiting.
​ Program Counter (PC):
● Keeps track of the address of the next instruction to be executed.
​ CPU Registers:
● Stores the values of CPU registers, including the accumulator and
general-purpose registers.
​ CPU Scheduling Information:
● Contains data related to priority, scheduling algorithm details, and other
parameters influencing process scheduling.
​ Memory Management Information:
● Includes base and limit registers, indicating the process's memory
location in the physical memory.
​ Accounting Information:
● Tracks resource usage, CPU time, and other statistics for the process.
​ I/O Status Information:
● Records the status of I/O operations, such as open files and pending I/O
requests.
​ Process Identifier:
● A unique identifier assigned to each process.
​ Link to Next PCB:
● In a queue, PCBs are often linked together to facilitate efficient
management.
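
A minimal sketch modeling the PCB as a record of these components (the field names
are illustrative, not taken from any particular kernel):

from dataclasses import dataclass, field

@dataclass
class PCB:                       # illustrative field names
    pid: int                     # process identifier
    state: str = "new"           # new / ready / running / waiting / terminated
    program_counter: int = 0     # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0            # CPU scheduling information
    base: int = 0                # memory management: base register
    limit: int = 0               # memory management: limit register
    cpu_time_used: int = 0       # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, state="ready", priority=3)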

Scheduling Queues:

Scheduling queues are data structures used by the operating system to organize
processes based on their current state and priority. These queues play a crucial role in
determining which process gets access to the CPU. Commonly used scheduling queues
include:

​ Job Queue:
● Contains all processes in the system, regardless of their current state.
​ Ready Queue:
● Holds processes that are ready to execute. The scheduler selects a
process from this queue for CPU allocation.
​ Waiting Queue:
● Stores processes waiting for a particular event or resource, such as I/O
completion.
​ Suspended Queue:
● Contains processes that are temporarily moved to secondary storage to
free up main memory.
​ Terminated Queue:
● Holds processes that have completed their execution or have been
terminated.

Details about Scheduling Queues:

1. Enqueue Operation:
a. Adds a process to the appropriate queue when it transitions from one
state to another (e.g., from new to ready, or from running to waiting).
2. Dequeue Operation:
a. Removes a process from a queue when it transitions to a different state
(e.g., from ready to running). A small sketch of enqueue and dequeue
follows this list.
3. Priority Scheduling:
a. Some queues may be further divided based on priority levels, allowing the
scheduler to give preference to higher-priority processes.
4. Dispatcher:
a. The dispatcher is responsible for transferring a process from the ready
queue to the running state, based on the scheduling algorithm's decision.
5. Context Switching:
a. The information in the PCB is crucial for context switching when a process
is moved between the ready, running, and waiting states.
6. Algorithm-Specific Queues:
a. Some scheduling algorithms may introduce additional queues to
implement specific policies, such as multiple-level feedback queues in a
feedback scheduling algorithm.
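
A minimal sketch of the enqueue and dequeue operations on a FIFO ready queue,
using Python's collections.deque with placeholder PCB dictionaries:

from collections import deque

ready_queue = deque()

def enqueue(pcb):                 # e.g., on a new -> ready transition
    pcb["state"] = "ready"
    ready_queue.append(pcb)

def dequeue():                    # e.g., on a ready -> running transition
    pcb = ready_queue.popleft()
    pcb["state"] = "running"
    return pcb

enqueue({"pid": 1, "state": "new"})
enqueue({"pid": 2, "state": "new"})
print(dequeue()["pid"])  # -> 1 (FIFO order)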

6. What is CPU scheduling? Why is it important? Discuss how the CPU is allocated
to a process if static priority scheduling is used.

Answer : CPU scheduling is a crucial component of operating systems that deals
with the efficient allocation of the central processing unit (CPU) to multiple processes. It
involves selecting a process from the ready queue (a queue of processes ready to
execute) and allocating the CPU to that process for execution. The goal of CPU
scheduling is to enhance system performance by maximizing CPU utilization,
minimizing waiting time, and ensuring fairness among competing processes.

Importance of CPU Scheduling:

​ Resource Utilization:
● Efficient CPU scheduling ensures that the CPU is continuously utilized,
maximizing the system's resource usage.
​ Response Time:
● Quick response times to user interactions are crucial for a responsive and
interactive computing experience. Effective CPU scheduling helps
minimize response times.
​ Throughput:
● CPU scheduling influences the system's throughput, which is the number
of processes completed within a given time period.
​ Fairness:
● Fair scheduling ensures that all processes get a fair share of CPU time,
preventing any single process from monopolizing the CPU.
​ Priority Management:
● Priority-based scheduling allows the system to allocate CPU time based
on the importance or priority assigned to each process.

Static Priority Scheduling:

In static priority scheduling, each process is assigned a fixed priority that does not
change during its execution. The process with the highest priority gets the CPU first. If
multiple processes share the same priority, other scheduling criteria, such as
first-come-first-served, may be used.

CPU Allocation Process in Static Priority Scheduling:

​ Priority Assignment:
● Each process is assigned a static priority value, often based on factors like
process type, user priority, or system requirements.
​ Queue Formation:
● Processes are organized into priority queues, with each queue
representing a different priority level. Higher-priority queues are given
preference.
​ Selection of Process:
● The scheduler selects the process with the highest priority from the
highest-priority non-empty queue.
​ CPU Allocation:
● The selected process is allocated the CPU for execution.
​ Priority Adjustment:
● After a process completes its time slice or yields the CPU, its priority may
be adjusted based on predefined criteria.
​ Repeat the Process:
● Steps 3-5 are repeated to continuously select and execute processes
based on their static priorities. A minimal sketch of this loop follows.
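
A minimal sketch of this selection loop, using Python's heapq module as the priority
queue (the processes and priority values are made up; a lower number means a
higher priority here):

import heapq

ready = [(2, "B"), (1, "A"), (3, "C")]  # (priority, process) pairs
heapq.heapify(ready)

while ready:
    priority, process = heapq.heappop(ready)  # highest priority first
    print(f"dispatch {process} (priority {priority})")
    # ... the process runs until it terminates, blocks, or is preempted ...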

Benefits of Static Priority Scheduling:

● Allows for the explicit prioritization of processes based on importance or system
requirements.
● Ensures that high-priority tasks are addressed promptly.
● Can be useful in real-time systems where predictable response times are critical.

Drawbacks of Static Priority Scheduling:

● May lead to starvation if lower-priority processes are continuously overshadowed
by higher-priority ones.
● Does not adapt well to changing workloads or priorities.

7. What is a semaphore? Define the wait operation wait(s) and the wakeup operation
signal(s). Give an explanation and an example of when and how they are used.
Answer : Semaphore:
A semaphore is a synchronization primitive used in concurrent programming to
control access to a shared resource. It is a variable that is used to signal or
coordinate processes or threads in a multi-process or multi-threaded
environment. Semaphores can be used to enforce mutual exclusion, control
access to critical sections, and synchronize activities between different
processes or threads.
Wait Operation (Wait or P Operation):

The wait operation, often denoted as wait(s) or P(s), decrements the value of the
semaphore s. If the resulting value is negative, the process or thread performing the
wait operation is blocked, and if it's non-negative, the process continues. The wait
operation is used to acquire or "wait" for a resource, and it ensures that only one
process can enter a critical section at a time.

Signal Operation (Signal or V Operation):

The signal operation, often denoted as signal(s) or V(s), increments the value of the
semaphore s. If the resulting value is still less than or equal to zero, some process is
blocked waiting on the semaphore, and one of the waiting processes is unblocked;
otherwise the increment simply records that a resource is now available. The signal
operation is used to release or "signal" that a resource is available for use.
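
A minimal sketch of how a counting semaphore could implement these two operations
(illustrative only: processes are represented by names and printing stands in for the
kernel's block and wakeup mechanisms, and a real implementation would make each
operation atomic):

from collections import deque

class CountingSemaphore:
    def __init__(self, value):
        self.value = value        # when negative, |value| = number of waiters
        self.waiting = deque()    # queue of blocked processes

    def wait(self, process):      # P operation
        self.value -= 1
        if self.value < 0:
            self.waiting.append(process)
            print(process, "blocked")
        else:
            print(process, "proceeds")

    def signal(self):             # V operation
        self.value += 1
        if self.value <= 0:
            print(self.waiting.popleft(), "unblocked")

s = CountingSemaphore(1)
s.wait("P1")   # P1 proceeds (value becomes 0)
s.wait("P2")   # P2 blocked (value becomes -1)
s.signal()     # P2 unblocked (value returns to 0)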

Explanation and Example:

Consider a scenario where two processes need to share a printer. A semaphore can be
used to ensure that only one process accesses the printer at a time, sketched below
with Python's threading.Semaphore (wait and signal wrap acquire and release):

# Initialization of the semaphore with an initial value of 1
# (indicating the printer is initially available)
from threading import Semaphore

printer_semaphore = Semaphore(1)

def wait(s):      # P operation
    s.acquire()

def signal(s):    # V operation
    s.release()

# Process 1 (Printing)
wait(printer_semaphore)    # Attempt to acquire the printer
# Critical section: print a document
signal(printer_semaphore)  # Release the printer

# Process 2 (Printing)
wait(printer_semaphore)    # Attempt to acquire the printer
# Critical section: print another document
signal(printer_semaphore)  # Release the printer

8. Write short notes on :


(i) Critical section problem.
(ii) System call.
(iii) Deadlock characterizations.
(iv) Memory management.

Answer :
(i) Critical Section Problem:

The critical section problem is a classic synchronization issue in concurrent
programming, where multiple processes or threads share a common resource or
section of code. The critical section is the part of the program where the shared
resource is accessed, and it is essential to ensure that only one process can execute it
at a time. The critical section problem aims to find a solution to prevent race conditions,
mutual exclusion violations, and data inconsistency. Common solutions include the use
of synchronization primitives like semaphores and locks to enforce mutual exclusion
and ensure orderly access to the critical section.
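
A minimal sketch of enforcing mutual exclusion on a critical section with a lock
(using Python's threading module; the shared counter is just an example resource):

import threading

lock = threading.Lock()
counter = 0                        # shared resource

def increment():
    global counter
    with lock:                     # entry section: acquire the lock
        counter += 1               # critical section
    # exit section: the lock is released when the with-block ends

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 100: no updates are lost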

(ii) System Call:

A system call is a mechanism that allows a program to request services from the
operating system. It serves as an interface between user-level applications and the
kernel, enabling user processes to perform privileged operations and access system
resources. System calls provide a way for programs to interact with hardware, perform
I/O operations, manage memory, and execute privileged instructions. Examples of
system calls include opening or closing files, creating processes, allocating memory,
and performing I/O operations. Each operating system defines its set of system calls,
and they play a crucial role in facilitating the interaction between user applications and
the underlying system.
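
As an illustration, here is a minimal sketch using Python's os module, whose
functions are thin wrappers around the corresponding system calls on Unix-like
systems (the file name is arbitrary):

import os

# open(2), write(2), and close(2) invoked through their os wrappers
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"written via system calls\n")
os.close(fd)

print(os.getpid())  # getpid(2): ask the kernel for this process's ID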

(iii) Deadlock Characterizations:


Deadlocks are situations in a concurrent system where two or more processes are
unable to proceed because each is waiting for the other to release a resource. There are
four necessary conditions for a deadlock to occur, known as the Coffman conditions or
deadlock characterizations:

​ Mutual Exclusion:
● At least one resource must be held in a non-sharable mode, so that only
one process at a time can use it.
​ Hold and Wait:
● A process holding at least one resource is waiting to acquire additional
resources held by other processes.
​ No Preemption:
● Resources cannot be forcibly taken away from a process; they must be
explicitly released.
​ Circular Wait:
● There must exist a cycle in the resource allocation graph, where each
process in the cycle is waiting for a resource held by the next process.

Detecting and preventing deadlocks involves addressing these conditions through
techniques such as resource allocation graphs, deadlock prevention, and deadlock
recovery strategies.
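
A minimal sketch of detecting circular wait in a wait-for graph with a depth-first
search (the processes and edges are made up):

# Hypothetical wait-for graph: process -> processes it is waiting on
wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}

def has_cycle(graph):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.remove(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(wait_for))  # -> True: P1 -> P2 -> P3 -> P1 is a deadlock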

(iv) Memory Management:

Memory management is a crucial aspect of operating systems that involves the
organization and control of computer memory to support the execution of processes. It
includes several key functions:

​ Address Binding:
● Assigning addresses to variables and instructions during compilation or
execution.
​ Memory Allocation:
● Allocating memory to processes, managing free memory space, and
dealing with memory fragmentation.
​ Memory Protection:
● Ensuring that processes do not interfere with each other's memory space
to maintain data integrity and system stability.
​ Virtual Memory:
● Using disk space as an extension of physical memory to allow for the
execution of processes larger than the available physical memory. A
small page-replacement sketch follows this list.
​ Memory Mapping:
● Mapping files into memory for efficient file I/O operations.
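
As promised above, a minimal sketch of FIFO page replacement for virtual memory
(the reference string and frame count are made up; real systems usually use smarter
policies such as LRU approximations):

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                    # page fault: page not in memory
            if len(frames) == num_frames:  # memory full: evict the oldest page
                frames.remove(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 9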
