OS - Note (Soumya Sir)
***************************************************************************************
3. **Process Scheduling**:
- Process Scheduling is the mechanism by which an operating system manages and allocates CPU time to
different processes. It aims to ensure fair and efficient utilization of CPU resources by determining which
process should execute next and for how long. Various scheduling algorithms are used to achieve this.
4. **Inter Process Communication (IPC) using shared memory and message passing**:
- IPC is a set of mechanisms that allow processes to communicate and exchange data with each other in a
multi-process or multi-threaded environment. Two common IPC methods are:
- **Shared Memory**: In this method, processes can access a common area of memory to read and write
data, enabling fast communication but requiring synchronization.
- **Message Passing**: Processes exchange messages through the operating system, which provides a
reliable way to communicate but can be slower than shared memory.
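The contrast between the two methods can be sketched in Python, using the `multiprocessing` module as a stand-in for raw OS facilities (the worker-function names here are invented for the example):

```python
from multiprocessing import Process, Value, Queue

def shared_memory_worker(counter):
    # Shared memory: the child writes directly into memory visible to the
    # parent. Fast, but the Value's internal lock is what makes it safe.
    with counter.get_lock():
        counter.value += 1

def message_passing_worker(queue):
    # Message passing: data is copied through an OS-managed channel
    # rather than shared directly.
    queue.put("hello from child")

if __name__ == "__main__":
    counter = Value("i", 0)          # one shared integer
    p = Process(target=shared_memory_worker, args=(counter,))
    p.start(); p.join()
    print(counter.value)             # 1

    q = Queue()
    p = Process(target=message_passing_worker, args=(q,))
    p.start()
    print(q.get())                   # hello from child
    p.join()
```

Note the trade-off the text describes: the shared-memory path still needed explicit synchronization (`get_lock()`), while the queue handled that internally at the cost of copying the data.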
6. **Scheduling Algorithms: FCFS, SJF, RR (First-Come-First-Serve, Shortest Job First, Round Robin)**:
- **FCFS (First-Come-First-Serve)**: In this algorithm, processes are executed in the order they arrive in
the ready queue. It is simple to implement but non-preemptive: one long process at the head of the queue
forces every short process behind it to wait (the convoy effect), hurting average waiting time.
- **SJF (Shortest Job First)**: SJF schedules the process with the shortest burst time first. It is provably
optimal for average waiting time, but it requires burst times to be known (or estimated) in advance and can
starve long processes if short ones keep arriving.
- **RR (Round Robin)**: RR is a preemptive scheduling algorithm where each process is given a fixed
time slice (quantum) to execute. If a process doesn't finish within its quantum, it's moved to the back of the
queue. This ensures fairness and responsiveness.
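The three algorithms above can be compared with a small simulation. This is a minimal sketch assuming all processes arrive at time 0 (the function names are invented for the example); it uses the classic textbook workload of bursts 24, 3, 3:

```python
from collections import deque

def fcfs(bursts):
    # Run jobs in arrival order; each job waits for all jobs before it.
    waits, t = [], 0
    for b in bursts:
        waits.append(t)
        t += b
    return waits

def sjf(bursts):
    # Non-preemptive SJF: schedule in order of increasing burst time.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, t = [0] * len(bursts), 0
    for i in order:
        waits[i] = t
        t += bursts[i]
    return waits

def round_robin(bursts, quantum):
    # Each job runs for at most `quantum` units, then rejoins the queue.
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)     # not done: back of the queue
        else:
            finish[i] = t
    # Waiting time = turnaround - burst (all jobs arrive at t = 0).
    return [finish[i] - bursts[i] for i in range(len(bursts))]

bursts = [24, 3, 3]
print(fcfs(bursts))            # [0, 24, 27] -> average 17 (convoy effect)
print(sjf(bursts))             # [6, 0, 3]   -> average 3
print(round_robin(bursts, 4))  # [6, 4, 7]   -> average ~5.67
```

The numbers make the trade-offs concrete: FCFS suffers badly when the long job arrives first, SJF is optimal on average, and RR sits in between while keeping every job responsive.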
***************************************************************************************
1. **Process Synchronization**:
Process synchronization is a fundamental concept in computer science and operating systems. It refers to
the coordination of multiple processes or threads to ensure they access shared resources or execute critical
sections of code in an orderly and controlled manner. The goal of process synchronization is to prevent
issues like race conditions, data corruption, and deadlock, which can occur when multiple processes or
threads access shared resources concurrently.
2. **Race Condition**:
A race condition is a situation in which the behavior of a system or program depends on the relative timing
of events, particularly when multiple processes or threads access shared resources simultaneously without
proper synchronization. This can lead to unpredictable and undesirable outcomes, as the order of execution
can vary, resulting in data inconsistencies or errors.
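A sketch of the classic "lost update" race: the unlucky interleaving is written out by hand here so the failure is reproducible, whereas with real threads the outcome would depend on timing:

```python
# Two workers both perform read -> increment -> write on a shared counter.
shared = {"counter": 0}

# Step 1: both workers read the value before either has written.
a_local = shared["counter"]   # worker A reads 0
b_local = shared["counter"]   # worker B also reads 0

# Step 2: each increments its own private copy.
a_local += 1
b_local += 1

# Step 3: both write back; B's write silently overwrites A's.
shared["counter"] = a_local
shared["counter"] = b_local

print(shared["counter"])      # 1, not the expected 2: one update was lost
```

Under a different interleaving (A's read-increment-write completing before B's read) the result would be 2, which is exactly the unpredictability the definition above describes.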
9. **Lock**:
A lock is a synchronization primitive that prevents multiple processes or threads from simultaneously
accessing a shared resource. Locks provide exclusive access, ensuring that only one process holds the lock
and can access the resource at any given time.
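Using Python's `threading.Lock` as one concrete lock implementation, the increment below stays correct under concurrency; removing the `with lock:` line would reintroduce the lost-update risk:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # The lock makes read -> increment -> write atomic with respect
        # to the other threads, so no update can be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every one of the 4 x 10000 increments survives
```

The `with lock:` form is equivalent to calling `lock.acquire()` before the critical section and `lock.release()` after it, but releases the lock even if the body raises an exception.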
***************************************************************************************
1. **Deadlock:**
Deadlock is a situation in computer science and operating systems where two or more processes or threads
are unable to proceed because each is waiting for the other(s) to release resources or take some action.
Essentially, it's a standstill where no progress can be made, and the system becomes unresponsive.
2. **Necessary Conditions of Deadlock:**
A deadlock can arise only when all four of the following necessary conditions hold simultaneously:
- **Mutual Exclusion:** At least one resource must be held in a non-shareable mode. Only one process
can use the resource at a time.
- **Hold and Wait:** Processes must hold resources while waiting for additional ones to become available.
- **No Preemption:** Resources cannot be forcibly taken away from a process; they must be released
voluntarily.
- **Circular Wait:** A circular chain of two or more processes exists, where each process is waiting for a
resource held by the next process in the chain.
3. **Deadlock Prevention:**
Deadlock prevention techniques aim to eliminate one or more of the necessary conditions for deadlock to
occur. Some common strategies include:
- **Mutual Exclusion:** Make resources shareable wherever possible (for example, read-only files), so that
exclusive access is not required in the first place.
- **Hold and Wait:** Require processes to request all necessary resources at once, or release resources if
additional ones cannot be obtained.
- **No Preemption:** Allow resources to be preempted if necessary by forcibly taking them from a
process.
- **Circular Wait:** Impose a total ordering of resource types and require processes to request resources in
order.
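The last strategy, breaking circular wait with a total ordering, can be sketched in Python (the ordering key and helper names are invented for the example; here locks are ordered arbitrarily by `id`):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []  # records which threads completed

def acquire_in_order(*locks):
    # Impose one global total order on locks so every thread requests
    # them in the same sequence -- a circular wait becomes impossible.
    for lock in sorted(locks, key=id):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def task(name, first, second):
    # Regardless of the order the caller names the locks,
    # acquisition follows the global order.
    acquire_in_order(first, second)
    try:
        done.append(name)   # ...use both resources here...
    finally:
        release_all(first, second)

# The two threads name the locks in opposite orders, which would risk
# deadlock under naive acquisition; the ordering discipline prevents it.
t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2']: both finished, no deadlock
```

With naive acquisition (each thread taking its `first` lock, then its `second`), the schedule "t1 takes A, t2 takes B" would leave each thread waiting forever for the other's lock.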
4. **Deadlock Avoidance:**
Deadlock avoidance involves dynamic monitoring of the resource allocation state to prevent processes
from entering into deadlock-prone situations. It relies on algorithms to ensure that resource allocations are
safe and will not lead to deadlock. Banker's Algorithm is one example of a deadlock avoidance technique.
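A minimal sketch of the safety check at the heart of Banker's Algorithm, run on the classic five-process, three-resource textbook instance (variable names are chosen for the example):

```python
def is_safe(available, allocation, need):
    # Try to find an order in which every process can finish using the
    # resources currently available plus those released by processes
    # that finish before it. If such an order exists, the state is safe.
    work = list(available)
    finished = [False] * len(allocation)
    safe_sequence = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion, then release its allocation.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                safe_sequence.append(i)
                progress = True
    return all(finished), safe_sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
# need = max demand - current allocation, per process and resource type
need = [[m - a for m, a in zip(mx, al)]
        for mx, al in zip(max_demand, allocation)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2] -- a safe completion order exists
```

The avoidance step follows from this check: before granting a request, the OS tentatively applies it and runs `is_safe`; the request is granted only if the resulting state is still safe.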