
NAME – SACHIN SHARMA
CLASS – B.TECH 5TH SEM
ROLL NO – 21EJCCS195
SECTION – C
PAPER SET – 2022

1. The kernel is the central component of an operating system: it manages the
operations of the computer and its hardware, most importantly memory and CPU
time. Acting as a bridge between applications and the data processing
performed at the hardware level, it communicates with them through
inter-process communication and system calls.

2. A thread is a single sequential flow of execution of tasks within a
process, so it is also known as a thread of execution or thread of control.
Every process contains at least one thread, and a process can contain more
than one thread, each executing within the process's address space.
3. A process in an operating system uses a resource in the following sequence:
it requests the resource, uses the resource, and then releases the resource.

A deadlock is a situation where a set of processes are blocked because each
process is holding a resource and waiting for another resource acquired by
some other process.
Consider two trains approaching each other on a single track: once they are
face to face, neither can move. A similar situation occurs in operating
systems when two or more processes each hold some resources and wait for
resources held by the other(s). For example, Process 1 holds Resource 1 and
waits for Resource 2, which is held by Process 2, while Process 2 in turn
waits for Resource 1.
4. Logical address: A logical address, also known as a virtual address, is an
address generated by the CPU during program execution. It is the address seen
by the process and is relative to the program's address space. The process
accesses memory using logical addresses, which are translated into physical
addresses.

Physical address: A physical address is the actual location in main memory
where data is stored, as opposed to a virtual address. The memory management
unit (MMU) performs the translation from logical addresses to physical
addresses at run time.
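The translation described above can be sketched in a few lines of Python. This is a simplified model, not real MMU hardware: the page size and the page-table contents below are hypothetical, and real systems use multi-level tables and a TLB.

```python
# Sketch of logical-to-physical translation, assuming 4 KiB pages and a
# hypothetical single-level page table (page number -> frame number).
PAGE_SIZE = 4096

page_table = {0: 5, 1: 2, 2: 7}  # hypothetical mappings

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high bits: page number
    offset = logical_address % PAGE_SIZE   # low bits: offset within the page
    frame = page_table[page]               # the MMU's table lookup
    return frame * PAGE_SIZE + offset      # physical address

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```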

5. A context switch is the procedure a computer's CPU (central processing
unit) follows to change from one task (or process) to another while ensuring
that the tasks do not conflict. Effective context switching is critical if a
computer is to provide user-friendly multitasking.
Context switching in an operating system involves saving the context, or
state, of the running process so that it can be restored later, and then
loading the context of another process and running it.
6. Difference between Paging and Swapping:

| Swapping | Paging |
|----------|--------|
| It is the procedure of copying out the entire process. | It is a technique of memory allocation. |
| Swapping occurs when the whole process is transferred to the disk. | Paging occurs when some part of the process is transferred to the disk. |
| A process is swapped temporarily from main memory to secondary memory. | A contiguous block of memory is made non-contiguous but of fixed size, called frames or pages. |
| Swapping can be performed without any memory management. | It is a concept used in non-contiguous memory management. |
| Swapping is done with inactive processes. | Only an active process performs paging. |
| Swapping copies the whole process, so it is relatively slower. | Paging is relatively faster than swapping. |

7. Characteristics of Operating Systems


Let us now discuss some of the important characteristic features of operating systems:

• Device Management: The operating system keeps track of all the devices. So, it is also
called the Input/Output controller that decides which process gets the device, when,
and for how much time.
• File Management: It allocates and de-allocates the resources and also decides who
gets the resource.
• Job Accounting: It keeps track of time and resources used by various jobs or users.
• Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
• Memory Management: It keeps track of the primary memory, like what part of it is in
use by whom, or what part is not in use, etc. and It also allocates the memory when a
process or program requests it.
• Processor Management: It allocates the processor to a process and then de-allocates
the processor when it is no longer required or the job is done.
• Control on System Performance: It records the delays between a request for a
service and the system's response.
• Security: It prevents unauthorized access to programs and data using passwords or
some kind of protection technique.
• Convenience: An OS makes a computer more convenient to use.
• Efficiency: An OS allows the computer system resources to be used efficiently.
• Ability to Evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions at the same time
without interfering with service.
• Throughput: An OS should be constructed so that It can give maximum
throughput (Number of tasks per unit time).

Question 6: Differentiate between pager and swapper.

Answer:
A pager is a memory management technique that divides a
process's virtual memory into pages and allocates them to
physical memory as needed. This allows the process to be larger
than the amount of physical memory available.
A swapper is a memory management technique that moves
entire processes between physical memory and secondary
storage (such as a hard disk). This allows the operating system
to run more processes than would fit in physical memory at once.
The main difference between a pager and a swapper is that a
pager deals with individual pages of a process, while a swapper
deals with entire processes.
Here is a table that summarizes the key differences between
paging and swapping:

| Feature | Paging | Swapping |
|---------|--------|----------|
| Granularity | Individual pages | Entire processes |
| Purpose | To allow a process to be larger than the amount of physical memory available | To allow the operating system to run more processes than would fit in physical memory at once |
| When it happens | When a page is accessed and not in physical memory | When the operating system needs to free up physical memory |
| Speed | Faster than swapping | Slower than paging |

Question 7: Explain the features of Operating System.

The features of an operating system can be broadly divided into
two categories: kernel features and user-level features.
Kernel features are those that are essential for the operation of
the operating system itself. These features include:
• Process management: The operating system is responsible for creating,
scheduling, and terminating processes.
• Memory management: The operating system is responsible for allocating and
managing memory for processes.
• File system management: The operating system provides a file system that
allows processes to store and retrieve data.
• Device management: The operating system provides a device driver interface
that allows processes to interact with hardware devices.
• Security: The operating system provides security features to protect system
resources and user data.
User-level features are those that are provided to users for their
convenience. These features include:
• Command-line interpreter: The command-line interpreter is a program that
allows users to interact with the operating system by typing commands.
• Graphical user interface: The graphical user interface (GUI) provides a
user-friendly way to interact with the operating system using icons, menus,
and other graphical elements.
• System utilities: The operating system provides a variety of system
utilities, such as text editors, file managers, and web browsers.

Question 8: What are frames?


A frame is a unit of physical memory in a paged memory system.
Frames are typically the same size as pages, but this is not
always the case.
When a page is accessed and not in physical memory, the
operating system must bring the page into memory from
secondary storage. This process is called paging in.
The operating system selects a frame for the page and copies
the page from secondary storage into the frame. The operating
system then updates the page table to reflect the new location of
the page in memory.

Question 9: What is thrashing?


Answer:
Thrashing is a condition in which the operating system spends
more time paging pages in and out of memory than executing
user processes.
Thrashing can occur when there are too many processes
running and not enough physical memory available. The
operating system must then constantly swap pages in and out of
memory, which can severely degrade performance.
Question 10: Explain 'valid' and 'invalid' bit in page table.
Answer:
The valid bit in a page table entry indicates whether the
corresponding page is in physical memory. If the valid bit is set,
the page is in physical memory and can be accessed. If the valid
bit is not set, the page is not in physical memory and must be
paged in before it can be accessed.
When the bit is set to invalid, the page is either not part of the process's
logical address space or is currently on disk. If a process tries to access
such a page, the hardware generates a page fault. The operating system then
handles the fault by paging the page back into memory (if the reference was
legal) or by terminating the process (if the reference was outside its
address space).
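The valid-bit check can be sketched as follows. The page-table entry layout and the `load_from_disk` callback are hypothetical; they only illustrate the fault-then-retry behaviour described above.

```python
# Hypothetical page table: page number -> {frame, valid bit}.
page_table = {
    0: {"frame": 3, "valid": True},      # resident in physical memory
    1: {"frame": None, "valid": False},  # on disk: must be paged in first
}

def access(page, load_from_disk):
    entry = page_table[page]
    if not entry["valid"]:                     # invalid bit -> page fault
        entry["frame"] = load_from_disk(page)  # page the data back in
        entry["valid"] = True
    return entry["frame"]

# Simulated pager that always loads the page into frame 9.
print(access(1, lambda p: 9))  # page fault handled, prints 9
print(access(0, lambda p: 9))  # valid bit already set, prints 3
```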
PART B

Q.1 Preemptive and Non-preemptive Scheduling:

- **Preemptive Scheduling:** In preemptive scheduling, the operating


system can suspend a currently running process to start or resume
another, higher-priority process. The decision to switch between
processes can occur at any time. Common preemptive scheduling
algorithms include Round Robin, Priority Scheduling, and Multilevel
Queue Scheduling.

- **Non-preemptive Scheduling:** In non-preemptive scheduling, a


running process cannot be interrupted; it must release the CPU
voluntarily. The CPU is allocated to a process until it completes its
execution or enters a waiting state. Common non-preemptive
scheduling algorithms include First Come First Serve (FCFS) and
Shortest Job Next (SJN).

**Process State Diagram:**

The process state diagram represents the various states a process can
be in during its lifecycle. The typical states are:

1. **New:** The process is being created.


2. **Ready:** The process is waiting to be assigned to a processor.
3. **Running:** The process is being executed.
4. **Blocked (or Waiting):** The process is waiting for some event to
occur.
5. **Terminated:** The process has finished execution.

Processes transition between these states based on events and


scheduling decisions made by the operating system. The transitions
are typically represented as arrows between the states, and events
trigger the movement between states.
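The state diagram above can be modelled as a small transition table. The event names (`admit`, `dispatch`, and so on) are illustrative labels chosen here, not standard terminology:

```python
# Allowed transitions of the five-state process model.
TRANSITIONS = {
    ("New", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "timeout"): "Ready",       # preempted by the scheduler
    ("Running", "wait_event"): "Blocked",  # e.g. waiting for I/O
    ("Blocked", "event_occurs"): "Ready",
    ("Running", "exit"): "Terminated",
}

def next_state(state, event):
    return TRANSITIONS[(state, event)]     # KeyError = illegal transition

print(next_state("Running", "wait_event"))  # Blocked
```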

Q.2 Necessary Conditions for Deadlock and Resource Graph Model:

**Necessary Conditions for Deadlock:**


1. **Mutual Exclusion:** At least one resource must be held in a non-
sharable mode.
2. **Hold and Wait:** A process is holding at least one resource and
waiting for resources that are currently held by other processes.
3. **No Preemption:** Resources cannot be forcibly taken away from
a process; they must be released voluntarily.
4. **Circular Wait:** A set of waiting processes must exist, such that
each process is waiting for a resource held by another process in the
set.
**Resource Graph Model:**
- The resource graph is a directed graph where nodes represent
processes and resource types, and edges represent resource requests
or allocations.
- Circular wait can be detected in the resource graph. If there is a cycle
in the graph, it indicates a potential deadlock.
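Cycle detection in such a graph can be sketched with a depth-first search. The graph below encodes the two-process example from question 3 (the process and resource names follow that example):

```python
# Detect a cycle (potential deadlock) in a resource-allocation graph,
# represented as an adjacency list of directed edges.
def has_cycle(graph):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:                  # back edge -> cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# P1 -> R2 (request), R2 -> P2 (allocation), P2 -> R1, R1 -> P1: a cycle.
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))  # True
```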

**Safe and Unsafe States:**


- **Safe State:** A state is safe if there exists a sequence of processes
such that each one can obtain its maximum resource requirements
and release all its resources, allowing the next process in the sequence
to do the same until all processes complete.
- **Unsafe State:** A state is unsafe if it is not safe. In an unsafe state,
there is a possibility of deadlock.

Example:
Consider three processes (P1, P2, P3) and three resource types (A, B,
C). If P1 holds resource A and requests resource B, P2 holds resource B
and requests resource C, and P3 holds resource C and requests
resource A, a circular wait occurs, leading to an unsafe state.
Q.3
(a) **Inter-process Communication (IPC):**
IPC involves mechanisms that allow processes to communicate and
synchronize with each other. Common IPC mechanisms include
message passing, shared memory, and semaphores.

(b) **Mutual Exclusion and Race Condition:**


- **Mutual Exclusion:** It ensures that only one process at a time can
access a critical section of code or a shared resource. This is often
implemented using locks or semaphores to prevent interference.
- **Race Condition:** It occurs when multiple processes access shared
data concurrently, and the final outcome depends on the order of
execution. Proper synchronization mechanisms are needed to avoid
race conditions.

(c) **Critical Section:**


A critical section is a part of a program where shared resources are
accessed and should be executed atomically. Mutual exclusion
mechanisms are used to ensure that only one process can enter the
critical section at a time to avoid race conditions.
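A minimal sketch of mutual exclusion using Python's `threading.Lock`: two threads increment a shared counter, and the `with lock:` block is the critical section. Without the lock, the read-modify-write on `counter` could interleave and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- no updates lost
```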
Q.4
**Demand Paging, Virtual Memory, and Page Fault:**

- **Demand Paging:** It is a memory management scheme where pages are brought
into memory only when they are demanded by the program during execution. This
helps in efficient usage of memory.

- **Virtual Memory:** It is a memory management technique that provides an
"idealized abstraction of the storage resources that are actually available
on a given machine", which creates an "illusion" to users of a very large
(main) memory.

- **Page Fault:** A page fault occurs when a program accesses a page in
virtual memory that is not currently in RAM. The operating system then
fetches the required page from secondary storage (e.g., disk) into RAM.

------------------------------------------------------------------------------------------
Q.5
**File Management:**

File management involves the organization and manipulation of data stored in
files. It includes creating, deleting, reading, and writing files.
Different types of file organizations include:

- **Sequential Files:** Data is stored in a linear sequence, and records are
accessed in order.
- **Indexed Files:** An index is maintained to access records quickly.
- **Random Access Files:** Direct access to any record using a unique
key.
- **Multilevel Index Files:** Indexes with multiple levels for efficient
access.

**File Structures:**
- **Heap (Unordered) File:** Records are stored in no particular order.
- **Sequential File:** Records are stored in order based on a key field.
- **Hashed File:** A hash function is used to determine the storage
location of each record.
- **Clustered File:** Records with similar values are grouped together.

Effective file management is crucial for efficient data storage, retrieval,
and manipulation in computer systems.

------------------------------------------------------------------------------
QUE 6
**Difference between Windows and Linux:**

1. **Source Code Access:**


- **Windows:** Closed source, proprietary operating system
developed by Microsoft.
- **Linux:** Open source, freely available, and developed
collaboratively by a community of developers.

2. **User Interface:**
- **Windows:** Typically has a graphical user interface (GUI) with
the Windows Desktop environment.
- **Linux:** Can have various desktop environments (e.g., GNOME,
KDE) and can also be operated through a command-line interface (CLI).
3. **File System:**
- **Windows:** Primarily uses file systems like NTFS and FAT.
- **Linux:** Supports various file systems, including ext4, Btrfs, and
XFS.

4. **Security Model:**
- **Windows:** Has a different security model with User Account
Control (UAC) and antivirus software often used.
- **Linux:** Follows a robust security model with user privileges and
permissions, and it is less susceptible to malware.

5. **Cost:**
- **Windows:** Typically requires a license fee for each installation.
- **Linux:** Generally free to use and distribute.

6. **Command Line vs. GUI:**


- **Windows:** Emphasizes GUI, but also provides a command-line
interface (Command Prompt, PowerShell).
- **Linux:** Provides powerful command-line tools, and many tasks
are performed using the terminal.

7. **Software Installation:**
- **Windows:** Software installation often involves executable (.exe)
files.
- **Linux:** Software is often installed through package managers,
which handle dependencies and updates.

8. **System Customization:**
- **Windows:** Customization options are somewhat limited
compared to Linux.
- **Linux:** Highly customizable, allowing users to modify the kernel
and other components.

9. **Multitasking and Stability:**


- **Windows:** Generally considered stable, but may require
periodic restarts.
- **Linux:** Known for stability and is often used in server
environments with long uptimes.
QUE 7

**Memory Management Unit (MMU):**

The Memory Management Unit (MMU) is a hardware component in a computer that
manages the mapping between virtual addresses used by programs and the
physical addresses in RAM. It plays a crucial role in the virtual memory
system.

**Memory Allocation Algorithms:**

1. **Best Fit:**
   - Allocates the smallest free block of memory that is large enough to
accommodate the requested size.
   - Pros: Wastes the least memory per allocation.
   - Cons: Leaves behind many small, often unusable gaps (external
fragmentation), and the entire free list must be searched.

2. **Worst Fit:**
   - Allocates the largest free block of memory available.
   - Pros: The leftover hole is large, so it is more likely to be usable for
later requests.
   - Cons: Large free blocks are consumed quickly, so subsequent large
requests may fail.

3. **Quick Fit:**
- Uses pre-defined block sizes for allocation.
- Allocates the block that is closest in size to the requested size.
- Pros: Reduces search time for a suitable block.
- Cons: May lead to fragmentation if block sizes are not chosen
carefully.

These algorithms are used by the operating system to manage memory and
allocate resources efficiently. The choice of algorithm depends on factors
such as the system's requirements, workload, and memory management goals.
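Best fit and worst fit can be sketched as searches over a free list. The list of `(start, size)` holes below is hypothetical:

```python
# Pick a hole from a free list of (start, size) pairs.
def best_fit(holes, request):
    fits = [h for h in holes if h[1] >= request]
    return min(fits, key=lambda h: h[1], default=None)  # smallest adequate hole

def worst_fit(holes, request):
    fits = [h for h in holes if h[1] >= request]
    return max(fits, key=lambda h: h[1], default=None)  # largest hole

holes = [(0, 100), (200, 500), (800, 250)]
print(best_fit(holes, 120))   # (800, 250)
print(worst_fit(holes, 120))  # (200, 500)
```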
PART C

**Page Replacement Algorithms (FIFO and LRU):**

Given page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

**Frame Size: 3**

1. **FIFO (First-In-First-Out):**
   - 1 → [1] (fault), 2 → [1, 2] (fault), 3 → [1, 2, 3] (fault)
   - 4 → [2, 3, 4] (fault), 1 → [3, 4, 1] (fault), 2 → [4, 1, 2] (fault)
   - 5 → [1, 2, 5] (fault), 1 → hit, 2 → hit
   - 3 → [2, 5, 3] (fault), 4 → [5, 3, 4] (fault), 5 → hit
   - Total: 9 page faults.

2. **LRU (Least Recently Used):**
   - The first seven references behave exactly as in FIFO: faults on 1, 2, 3,
4, 1, 2, 5, leaving frames [1, 2, 5]; then 1 and 2 hit.
   - 3 → [1, 2, 3] (fault, evicts 5, the least recently used), 4 → [2, 3, 4]
(fault, evicts 1), 5 → [3, 4, 5] (fault, evicts 2)
   - Total: 10 page faults.

**Frame Size: 4**

- FIFO: 1, 2, 3, 4 fault into [1, 2, 3, 4]; 1 and 2 hit; then 5, 1, 2, 3, 4,
5 all fault, each evicting the oldest page. Total: 10 page faults, more than
with 3 frames.
- LRU: 1, 2, 3, 4 fault; 1 and 2 hit; 5 faults (evicts 3); 1 and 2 hit; 3, 4,
5 fault. Total: 8 page faults.

**Belady's Anomaly:**

Belady's anomaly refers to the unexpected occurrence of an increase in the
number of page faults when the number of page frames in a system is
increased. In some cases, adding more page frames to the system may result in
more page faults, which is counterintuitive.
Belady's anomaly can occur with certain page replacement algorithms,
especially with FIFO. The anomaly challenges the common belief that
increasing the number of available page frames should always lead to
a reduction in page faults. It highlights the non-monotonic behavior of
page replacement algorithms in certain situations.
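The fault counts for the reference string above can be checked with a short simulation. With this string, FIFO produces more faults with 4 frames than with 3, which is exactly Belady's anomaly:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the oldest arrival
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 <- Belady's anomaly
print(lru_faults(refs, 3), lru_faults(refs, 4))    # 10 8
```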

**Schedulers:**

1. **Long-Term Scheduler (Job Scheduler):**


- Responsible for selecting processes from the job pool (secondary
storage) and loading them into the ready queue (main memory) for
execution.
- Aims to maintain a mix of I/O-bound and CPU-bound processes for
efficient system performance.

2. **Short-Term Scheduler (CPU Scheduler):**


- Executes frequently and selects a process from the ready queue to
run on the CPU.
- Prioritizes processes based on scheduling algorithms like Round
Robin, Priority Scheduling, etc.
- Manages the execution of processes in main memory.

3. **Medium-Term Scheduler:**
- Acts as an intermediate between the long-term and short-term
schedulers.
- Responsible for swapping processes in and out of the main
memory.
- Helps in avoiding overloading the memory by moving some
processes to the secondary storage (e.g., swapping to disk).

(b)
**Layered Approach of the Operating System:**

The layered approach to the operating system involves organizing the OS into
layers, where each layer provides specific functionality and services to the
layers above it. The layers are designed to be independent and communicate
with adjacent layers through well-defined interfaces. The typical layers in a
layered OS architecture are:
1. **Hardware Layer:** The lowest layer interacts directly with the
hardware, managing device drivers, interrupt handling, and basic I/O
operations.

2. **Kernel Layer:** This layer provides essential services such as process
management, memory management, file systems, and I/O operations. It acts as
an interface between the hardware and higher-level layers.

3. **System Call Layer:** It provides a set of system calls that allow
user-level processes to request services from the kernel. System calls serve
as the boundary between user-space and kernel-space.

4. **Library Layer:** This layer includes standard libraries and APIs that
provide higher-level functions and services to application programs. It acts
as a bridge between user applications and system calls.

5. **User Interface Layer:** The top layer interacts with users, providing a
user interface such as command-line interfaces (CLI) or graphical user
interfaces (GUI). This layer includes utilities and applications for
end-users.
The layered approach enhances modularity, maintainability, and
flexibility in operating system design. Each layer can be developed,
tested, and modified independently, making the overall system more
robust and adaptable to changes.

------------------------------------------------------------------------------

Ans 3

To find the average waiting time and turnaround time using Gantt
chart for Shortest Job First (SJF) and Priority Scheduling, we'll follow
these steps:

**SJF (Shortest Job First):**

1. Arrange the processes in ascending order based on burst time.


2. Execute the processes in the order obtained in step 1.
3. Calculate waiting time and turnaround time for each process.
4. Find the average waiting time and average turnaround time.
**Priority Scheduling:**

1. Arrange the processes in ascending order based on priority.


2. Execute the processes in the order obtained in step 1.
3. Calculate waiting time and turnaround time for each process.
4. Find the average waiting time and average turnaround time.

**Process Data:**

| Process | Burst Time (ms) | Priority |
|---------|-----------------|----------|
| P1      | 5               | 5        |
| P2      | 3               | 4        |
| P3      | 8               | 3        |
| P4      | 2               | 1        |
| P5      | 1               | 2        |

**Gantt Chart for SJF:**

| Process | P5 | P4 | P2 | P1 | P3 |
|---------|----|----|----|----|----|
| Start   | 0  | 1  | 3  | 6  | 11 |
| End     | 1  | 3  | 6  | 11 | 19 |

**Calculations for SJF:**

- Waiting Time:
- P1: 6 ms
- P2: 3 ms
- P3: 11 ms
- P4: 1 ms
- P5: 0 ms

- Turnaround Time:
- P1: 11 ms
- P2: 6 ms
- P3: 19 ms
- P4: 3 ms
- P5: 1 ms

- Average Waiting Time: (6 + 3 + 11 + 1 + 0) / 5 = 4.2 ms
- Average Turnaround Time: (11 + 6 + 19 + 3 + 1) / 5 = 8 ms

**Gantt Chart for Priority Scheduling (lower number = higher priority):**

| Process | P4 | P5 | P3 | P2 | P1 |
|---------|----|----|----|----|----|
| Start   | 0  | 2  | 3  | 11 | 14 |
| End     | 2  | 3  | 11 | 14 | 19 |

**Calculations for Priority Scheduling:**

- Waiting Time:
- P1: 14 ms
- P2: 11 ms
- P3: 3 ms
- P4: 0 ms
- P5: 2 ms

- Turnaround Time:
- P1: 19 ms
- P2: 14 ms
- P3: 11 ms
- P4: 2 ms
- P5: 3 ms

- Average Waiting Time: (14 + 11 + 3 + 0 + 2) / 5 = 6 ms
- Average Turnaround Time: (19 + 14 + 11 + 2 + 3) / 5 = 9.8 ms
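The schedules can be reproduced with a short simulation, assuming all five processes arrive at time 0 and a lower priority number means higher priority:

```python
# (burst time in ms, priority) per process, from the process table.
procs = {"P1": (5, 5), "P2": (3, 4), "P3": (8, 3), "P4": (2, 1), "P5": (1, 2)}

def schedule(order):
    t, wait, tat = 0, {}, {}
    for name in order:
        wait[name] = t         # waiting time = start time (arrival at 0)
        t += procs[name][0]    # run to completion (non-preemptive)
        tat[name] = t          # turnaround = completion time
    return wait, tat

sjf_order = sorted(procs, key=lambda p: procs[p][0])  # shortest burst first
pri_order = sorted(procs, key=lambda p: procs[p][1])  # highest priority first

for label, order in (("SJF", sjf_order), ("Priority", pri_order)):
    wait, tat = schedule(order)
    print(label, order,
          "avg WT =", sum(wait.values()) / 5,
          "avg TAT =", sum(tat.values()) / 5)
```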
Ans 4 Suppose a disk drive has 200 cylinders. The drive is initially at cylinder position 98. The queue of pending
I/O requests is for blocks on cylinders 86, 147, 91, 177, 94, 150, 102, 175, 130. What is the total head movement
needed to satisfy the requests under the SCAN and C-SCAN scheduling algorithms?

**Disk Scheduling Algorithms: SCAN and C-SCAN:**

Given the initial position of the disk drive at cylinder 98 and the queue
of requests:

\[ 86, 147, 91, 177, 94, 150, 102, 175, 130 \]

**SCAN (Elevator) Algorithm:**


1. Start at the current position (cylinder 98).
2. Move in one direction (either towards lower or higher cylinder
numbers), serving requests along the way until the end.
3. When reaching the end, reverse direction and serve remaining
requests in the opposite direction.
4. Continue until all requests are served.

**C-SCAN (Circular SCAN) Algorithm:**


1. Similar to SCAN, but when reaching the end, jump to the opposite
end without serving requests in reverse.
2. Essentially, it is like SCAN, but the disk arm does not go back to the
beginning immediately after reaching the end.

**Calculations (assuming the head first moves toward higher cylinder
numbers, and the disk spans cylinders 0-199):**

- SCAN: from cylinder 98 the head services 102, 130, 147, 150, 175, 177 on
the way up, travels to the end of the disk at cylinder 199, then reverses and
services 94, 91, 86.
Total head movement = (199 - 98) + (199 - 86) = 101 + 113 = 214 cylinders.
- C-SCAN: the head services the same requests up to cylinder 199, jumps to
cylinder 0 (a movement of 199 cylinders), then moves upward and services 86,
91, 94.
Total head movement = (199 - 98) + 199 + (94 - 0) = 101 + 199 + 94 = 394
cylinders.
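The totals can also be computed programmatically, under the same assumption that the head first moves toward higher cylinder numbers on a 0-199 cylinder disk:

```python
requests = [86, 147, 91, 177, 94, 150, 102, 175, 130]
head, end = 98, 199

def scan_movement(head, reqs, end):
    # up to the end of the disk, then back down to the lowest request
    lowest = min(r for r in reqs if r < head)
    return (end - head) + (end - lowest)

def c_scan_movement(head, reqs, end):
    # up to the end, jump to cylinder 0, then up to the highest request
    # that lies below the starting position
    highest_below = max(r for r in reqs if r < head)
    return (end - head) + end + highest_below

print(scan_movement(head, requests, end))    # 214
print(c_scan_movement(head, requests, end))  # 394
```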

Q 5 Explain the following: (a) Data Structure of Banker's Algorithm
(b) Segmentation (c) File Security

**Data Structure of Banker's Algorithm:**

(a) **Banker's Algorithm:**


- Banker's algorithm is used to avoid deadlock in resource allocation by
ensuring that the system remains in a safe state. The data structure
consists of the following:

1. **Available:** An array representing the number of available resources of
each type.
2. **Max:** A matrix indicating the maximum demand of each process
for each resource type.
3. **Allocation:** A matrix representing the number of resources of
each type currently allocated to each process.
4. **Need:** A matrix representing the remaining need of each
process for each resource type (Max - Allocation).
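The safety check over these four structures can be sketched as follows. The matrices below are hypothetical example values (a common five-process, three-resource-type textbook instance):

```python
# Banker's safety algorithm: find an order in which every process can finish.
def is_safe(available, allocation, need):
    work = available[:]                  # resources currently free
    finish = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # i releases all
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, []             # no process can proceed: unsafe
    return True, order

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(max_demand, allocation)]

safe, order = is_safe(available, allocation, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```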

**Segmentation:**

(b) **Segmentation:**
- Segmentation is a memory management technique where memory is
divided into variable-sized segments. Each segment corresponds to a
logical unit, such as a procedure, function, or data structure.

1. **Code Segment:** Contains the program's executable code.


2. **Data Segment:** Stores global and static variables.
3. **Stack Segment:** Manages function call information and local
variables.
4. **Heap Segment:** Dynamically allocated memory during program
execution.

**File Security:**

(c) **File Security:**


- File security involves protecting files and data from unauthorized
access, modification, and deletion. Key aspects include:

1. **Access Control Lists (ACLs):** Lists specifying the permissions (read,
write, execute) for users or groups.
2. **File Permissions:** Setting permissions for the owner, group, and
others (e.g., chmod in Unix/Linux).
3. **Encryption:** Encoding data to prevent unauthorized access.
4. **Authentication and Authorization:** Verifying user identity and
determining their access rights.
5. **Auditing:** Monitoring and logging file access for security
analysis.
6. **Firewalls and Intrusion Detection Systems (IDS):** Additional
measures to protect against external threats.

Effective file security ensures data integrity, confidentiality, and
availability while preventing unauthorized access or tampering.
