Process

1. Process Control Block (PCB)
2. Process state transition diagram
3. Process Scheduling
4. Inter Process Communication (IPC) using shared memory and message passing
5. Preemptive and Non-preemptive scheduling
6. Scheduling algorithms: FCFS, SJF, RR
Process Synchronization
1. What is process synchronization?
2. Race condition
3. Effects of race conditions (data corruption, crashes, deadlock)
4. Types of race conditions
5. Prevention of race conditions (locks, semaphores, atomic operations, avoiding global variables, message passing, etc.)
6. Example of a race condition (e.g., two employees in a company competing for access to the same resource)
7. Prevention of race conditions (version control systems: a check-out/check-in mechanism lets people work on separate copies and merge them later)
8. Accessing shared memory with a semaphore
9. Lock
10. Reading and writing files
11. The critical section problem
Deadlock
1. What is deadlock?
2. Necessary conditions for deadlock
3. Deadlock prevention
4. Deadlock avoidance
5. Deadlock detection and recovery: limitations of the algorithm

***************************************************************************************

1. **Process Control Block (PCB)**:

- A Process Control Block (PCB) is a data structure used by an operating system to store information about a process. It contains essential details such as the process's state, program counter, saved registers, scheduling information, and more. PCBs are crucial for the management and control of processes in a computer system.
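
The sketch below shows what a PCB might look like as a C struct. The field names are illustrative only; a real kernel's PCB (for example, Linux's `task_struct`) holds far more state.

```c
#include <stdint.h>

/* A minimal, illustrative PCB; real kernels store many more fields. */
typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* process identifier */
    proc_state_t  state;           /* current scheduling state */
    uint64_t      program_counter; /* address of the next instruction */
    uint64_t      registers[16];   /* saved CPU registers (the context) */
    int           priority;        /* scheduling information */
    void         *page_table;      /* memory-management information */
    struct pcb   *next;            /* link in a ready or wait queue */
} pcb_t;
```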

2. **Process State Transition Diagram**:

- A process state transition diagram is a visual representation of the states a process can pass through during its lifecycle in an operating system. Common states include "New," "Ready," "Running," "Blocked," and "Terminated." The diagram shows how processes transition from one state to another in response to events or system calls.
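
The same transitions can be expressed as a small state machine. The event names below ("admit", "dispatch", and so on) are illustrative, not taken from any particular OS.

```c
#include <stdio.h>
#include <string.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } state_t;

static const char *name[] = { "New", "Ready", "Running", "Blocked", "Terminated" };

/* Returns the next state for (state, event), or -1 for an illegal move. */
static int next_state(state_t s, const char *event) {
    if (s == NEW     && !strcmp(event, "admit"))    return READY;
    if (s == READY   && !strcmp(event, "dispatch")) return RUNNING;
    if (s == RUNNING && !strcmp(event, "timeout"))  return READY;     /* preempted */
    if (s == RUNNING && !strcmp(event, "wait_io"))  return BLOCKED;   /* blocking call */
    if (s == BLOCKED && !strcmp(event, "io_done"))  return READY;
    if (s == RUNNING && !strcmp(event, "exit"))     return TERMINATED;
    return -1;
}

int main(void) {
    state_t s = NEW;
    const char *events[] = { "admit", "dispatch", "wait_io", "io_done", "dispatch", "exit" };
    for (size_t i = 0; i < sizeof events / sizeof *events; i++) {
        int n = next_state(s, events[i]);
        printf("%-10s --%s--> %s\n", name[s], events[i], n < 0 ? "ILLEGAL" : name[n]);
        if (n >= 0) s = (state_t)n;
    }
    return 0;
}
```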

3. **Process Scheduling**:
- Process Scheduling is the mechanism by which an operating system manages and allocates CPU time to
different processes. It aims to ensure fair and efficient utilization of CPU resources by determining which
process should execute next and for how long. Various scheduling algorithms are used to achieve this.
4. **Inter Process Communication (IPC) using shared memory and message passing**:
- IPC is a set of mechanisms that allow processes to communicate and exchange data with each other in a
multi-process or multi-threaded environment. Two common IPC methods are:
- **Shared Memory**: In this method, processes can access a common area of memory to read and write
data, enabling fast communication but requiring synchronization.
- **Message Passing**: Processes exchange messages through the operating system, which provides a
reliable way to communicate but can be slower than shared memory.
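
As a concrete illustration of message passing, the hedged sketch below sends one message from a parent process to its child through a POSIX pipe. (A shared-memory version would instead map a common region, e.g., with `shm_open` and `mmap`, and add its own synchronization.)

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: the receiver */
        char buf[64];
        close(fd[1]);                     /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    /* parent: the sender */
    close(fd[0]);                         /* close unused read end */
    const char *msg = "hello via message passing";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                           /* reap the child */
    return 0;
}
```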

5. **Preemptive and Non-preemptive Scheduling**:

- Preemptive scheduling allows the operating system to interrupt a running process and allocate the CPU to another, for instance when a higher-priority process becomes ready or a time slice expires. This improves fairness and responsiveness.
- Non-preemptive scheduling lets a process run until it voluntarily releases the CPU (by blocking or terminating). It is simpler and has lower context-switch overhead, but a long-running process can monopolize the CPU.

6. **Scheduling Algorithms: FCFS, SJF, RR (First-Come-First-Serve, Shortest Job First, Round Robin)**:
- **FCFS (First-Come-First-Serve)**: Processes execute in the order they arrive in the ready queue. It is simple, but short processes stuck behind a long one suffer poor waiting times (the convoy effect); the sketch after this list shows it numerically.
- **SJF (Shortest Job First)**: SJF runs the process with the shortest burst time first. It minimizes average waiting time but requires burst times to be known (or estimated) in advance.
- **RR (Round Robin)**: RR is a preemptive algorithm in which each process is given a fixed time slice (quantum) to execute. A process that doesn't finish within its quantum moves to the back of the queue. This ensures fairness and responsiveness.
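
The sketch below computes FCFS waiting and turnaround times for the classic textbook workload (bursts of 24, 3, and 3 time units, all arriving at time 0). Scheduling the long burst first drives the average wait up to 17; running the short jobs first, as SJF would, drops it to 3.

```c
#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };           /* classic textbook example */
    int n = sizeof burst / sizeof *burst;
    int finish = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        int wait = finish;                /* FCFS: wait = sum of earlier bursts */
        finish += burst[i];
        total_wait += wait;
        printf("P%d: wait=%2d  turnaround=%2d\n", i + 1, wait, finish);
    }
    printf("average wait = %.2f\n", (double)total_wait / n);
    return 0;
}
```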

***************************************************************************************

1. **Process Synchronization**:
Process synchronization is a fundamental concept in computer science and operating systems. It refers to
the coordination of multiple processes or threads to ensure they access shared resources or execute critical
sections of code in an orderly and controlled manner. The goal of process synchronization is to prevent
issues like race conditions, data corruption, and deadlock, which can occur when multiple processes or
threads access shared resources concurrently.

2. **Race Condition**:
A race condition is a situation in which the behavior of a system or program depends on the relative timing
of events, particularly when multiple processes or threads access shared resources simultaneously without
proper synchronization. This can lead to unpredictable and undesirable outcomes, as the order of execution
can vary, resulting in data inconsistencies or errors.
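
The hedged sketch below makes the problem concrete: two threads each increment a shared counter one million times with no synchronization. Because `counter++` is a separate read, modify, and write, increments get lost and the final value is usually well below 2,000,000. (Compile with `-pthread`.)

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                        /* unsynchronized: the race */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```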

3. **Effects of Race Conditions**:

Race conditions can have several adverse effects, including:
- **Data Corruption**: Concurrent access to shared data can lead to data being modified unexpectedly,
causing corruption.
- **Crashes**: Race conditions can trigger program crashes or unexpected behavior due to inconsistent
data.
- **Deadlock**: In some cases, processes can become deadlocked, where they're waiting indefinitely for
resources that are locked by other processes, leading to a system standstill.

4. **Types of Race Conditions**:

Common types of race conditions include:
- **Read-Write Race**: Multiple processes simultaneously read and write to shared data.
- **Write-Write Race**: Multiple processes attempt to write to shared data concurrently.
- **Write-Read Race**: One process writes while another reads the same shared data simultaneously.

5. **Prevention of Race Conditions**:
Race conditions can be prevented using synchronization techniques such as locks, semaphores, atomic operations, avoiding shared global variables, and message passing. These mechanisms ensure that only one process or thread manipulates a shared resource at a time; a minimal sketch using one of them follows.
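
One prevention technique from the list above is atomic operations. Replacing the plain increment from the earlier race demo with C11's `atomic_fetch_add` makes each increment indivisible, so no updates are lost. (Compile with `-pthread`; assumes a C11 compiler.)

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        atomic_fetch_add(&counter, 1);    /* indivisible read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 2000000)\n", atomic_load(&counter));
    return 0;
}
```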

6. **Example of Race Condition**:

Imagine two employees in a company trying to update the same customer's account balance concurrently.
If not properly synchronized, both employees might read the current balance, make changes, and save it back
without considering each other's updates, potentially leading to incorrect and inconsistent account balances.

7. **Prevention of Race Condition (Version Control System)**:

Version control systems like Git employ mechanisms such as branching and merging to allow developers to
work on different copies of the codebase concurrently. Developers can then merge their changes
systematically, preventing race conditions in code collaboration.

8. **Accessing Shared Memory with a Semaphore**:

A semaphore can guard a shared memory region so that only one process or thread accesses it at a time, ensuring synchronization and preventing race conditions; the sketch below shows the idea.
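
A hedged sketch, assuming Linux or BSD: an unnamed POSIX semaphore is placed inside the shared region itself (via an anonymous `mmap`) and serializes updates between a parent and its forked child. Error handling is trimmed for brevity; compile with `-pthread`.

```c
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared { sem_t sem; int value; };

int main(void) {
    /* Anonymous shared mapping, visible to parent and child after fork. */
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }

    sem_init(&s->sem, 1 /* shared across processes */, 1);
    s->value = 0;

    if (fork() == 0) {                    /* child */
        sem_wait(&s->sem);                /* enter the protected region */
        s->value += 10;
        sem_post(&s->sem);                /* leave it */
        return 0;
    }
    sem_wait(&s->sem);                    /* parent does the same */
    s->value += 1;
    sem_post(&s->sem);
    wait(NULL);

    printf("value = %d\n", s->value);     /* always 11 */
    sem_destroy(&s->sem);
    return 0;
}
```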

9. **Lock**:
A lock is a synchronization primitive that prevents multiple processes or threads from simultaneously
accessing a shared resource. Locks provide exclusive access, ensuring that only one process holds the lock
and can access the resource at any given time.
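
The sketch below uses a POSIX mutex as the lock, fixing the "two employees" balance problem from item 6: each deposit's read-modify-write happens under the lock, so updates can no longer interleave. (Compile with `-pthread`.)

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int balance = 100;

static void *deposit(void *arg) {
    int amount = *(int *)arg;
    pthread_mutex_lock(&lock);            /* acquire: others must wait */
    balance += amount;                    /* read-modify-write, now safe */
    pthread_mutex_unlock(&lock);          /* release */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 50, b = 25;
    pthread_create(&t1, NULL, deposit, &a);
    pthread_create(&t2, NULL, deposit, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d\n", balance);    /* always 175 */
    return 0;
}
```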

10. **Reading and Writing Files**:

Race conditions can also occur when multiple processes or threads attempt to read or write to the same file
concurrently. To prevent this, file locks or mutexes can be used to coordinate access to the file.
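
A hedged sketch, assuming Linux or BSD: `flock` takes an exclusive advisory lock on a log file so that cooperating processes append whole records without interleaving. (Advisory means every writer must also call `flock` for the protection to hold.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    int fd = open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) { perror("open"); return 1; }

    const char *rec = "one whole record\n";
    flock(fd, LOCK_EX);                   /* blocks until the lock is ours */
    write(fd, rec, strlen(rec));          /* safe: no interleaved writes */
    flock(fd, LOCK_UN);                   /* let other processes proceed */

    close(fd);
    return 0;
}
```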

11. **Critical Section Problem**:

The critical section problem is a classic synchronization problem. It refers to the need to ensure that only
one process or thread can execute a specific section of code (the critical section) at a time. Synchronization
mechanisms like locks and semaphores are used to solve the critical section problem and prevent race
conditions.
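
Beyond locks and semaphores, the classic two-process software solution is Peterson's algorithm, sketched below. Plain variables would not be safe on modern CPUs, so this version uses C11 sequentially consistent atomics for the required memory ordering. (Compile with `-pthread`; assumes a C11 compiler.)

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];               /* flag[i]: thread i wants in */
static atomic_int  turn;                  /* whose turn it is to yield */
static int shared = 0;                    /* the protected resource */

static void *run(void *arg) {
    int me = *(int *)arg, other = 1 - me;
    for (int i = 0; i < 100000; i++) {
        atomic_store(&flag[me], true);    /* announce intent */
        atomic_store(&turn, other);       /* politely yield first */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                             /* busy-wait until it's safe */
        shared++;                         /* critical section */
        atomic_store(&flag[me], false);   /* exit the section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = { 0, 1 };
    pthread_create(&t[0], NULL, run, &id[0]);
    pthread_create(&t[1], NULL, run, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}
```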

***************************************************************************************

1. **Deadlock:**
Deadlock is a situation in computer science and operating systems where two or more processes or threads
are unable to proceed because each is waiting for the other(s) to release resources or take some action.
Essentially, it's a standstill where no progress can be made, and the system becomes unresponsive.
2. **Necessary Conditions of Deadlock:**
Deadlocks typically occur when four necessary conditions are met simultaneously. These conditions are:
- **Mutual Exclusion:** At least one resource must be held in a non-shareable mode. Only one process
can use the resource at a time.
- **Hold and Wait:** Processes must hold resources while waiting for additional ones to become available.
- **No Preemption:** Resources cannot be forcibly taken away from a process; they must be released
voluntarily.
- **Circular Wait:** A circular chain of two or more processes exists, where each process is waiting for a resource held by the next process in the chain. (The sketch after this list shows all four conditions arising at once.)
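
A hedged demonstration: two threads take two mutexes in opposite orders. Each mutex is exclusive (mutual exclusion), each thread holds one lock while waiting for the other (hold and wait), pthread mutexes cannot be stolen (no preemption), and the two waits form a cycle (circular wait). Run it a few times; the hang is timing-dependent. (Compile with `-pthread`.)

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *one(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);               /* holds A ... */
    usleep(1000);                         /* widen the race window */
    pthread_mutex_lock(&B);               /* ... and waits for B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *two(void *arg) {
    (void)arg;
    pthread_mutex_lock(&B);               /* holds B ... */
    usleep(1000);
    pthread_mutex_lock(&A);               /* ... and waits for A: a cycle */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, one, NULL);
    pthread_create(&t2, NULL, two, NULL);
    pthread_join(t1, NULL);               /* likely never returns */
    pthread_join(t2, NULL);
    puts("no deadlock this run");
    return 0;
}
```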

3. **Deadlock Prevention:**
Deadlock prevention techniques aim to eliminate one or more of the necessary conditions for deadlock to
occur. Some common strategies include:
- **Mutual Exclusion:** Use resource-sharing if possible or design resources in a way that multiple
processes can access them concurrently.
- **Hold and Wait:** Require processes to request all necessary resources at once, or release resources if
additional ones cannot be obtained.
- **No Preemption:** Allow resources to be preempted if necessary by forcibly taking them from a
process.
- **Circular Wait:** Impose a total ordering of resource types and require processes to request resources in that order; the sketch below applies this idea to two mutexes.
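
A minimal sketch of breaking circular wait by lock ordering: every caller acquires the two mutexes in a single globally agreed order (here, by address), so the opposite-order cycle from the previous demo cannot form. (Compile with `-pthread`.)

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

/* Acquire two mutexes in a globally agreed (address) order. */
static void lock_pair(pthread_mutex_t *x, pthread_mutex_t *y) {
    if ((uintptr_t)x > (uintptr_t)y) { pthread_mutex_t *t = x; x = y; y = t; }
    pthread_mutex_lock(x);
    pthread_mutex_lock(y);
}

static void *worker(void *arg) {
    (void)arg;
    lock_pair(&A, &B);                    /* same order for every caller */
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    puts("both finished: no circular wait");
    return 0;
}
```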

4. **Deadlock Avoidance:**
Deadlock avoidance involves dynamic monitoring of the resource allocation state to prevent processes
from entering into deadlock-prone situations. It relies on algorithms to ensure that resource allocations are
safe and will not lead to deadlock. Banker's Algorithm is one example of a deadlock avoidance technique.
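
The heart of Banker's Algorithm is a safety check: a state is safe if some order exists in which every process can obtain its maximum need and finish. The sketch below runs that check on a small made-up state (the matrices are illustrative, not from any real system).

```c
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes */
#define R 2   /* resource types */

int main(void) {
    int avail[R]    = { 3, 4 };
    int max[P][R]   = { {7, 5}, {3, 2}, {4, 2} };
    int alloc[P][R] = { {0, 1}, {2, 0}, {3, 0} };
    int need[P][R];
    bool done[P] = { false };

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    /* Repeatedly find a process whose remaining need fits in what's
     * available; "finish" it and reclaim its allocation. */
    int finished = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (done[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];
                done[i] = true;
                finished++;
                progress = true;
                printf("P%d can finish\n", i);
            }
        }
    }
    puts(finished == P ? "state is SAFE" : "state is UNSAFE");
    return 0;
}
```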

5. **Deadlock Detection and Recovery: Limitations of the Algorithm:**

Deadlock detection and recovery involve periodically checking the system for deadlock and taking actions
to recover from it when detected. Some limitations of this approach are:
- **Resource Wastage:** Deadlock detection may cause a delay in identifying and resolving deadlocks,
leading to resource wastage.
- **Complexity:** Implementing a deadlock detection algorithm can be complex, especially in large
systems with many resources and processes.
- **Overhead:** Continuous monitoring for deadlock can consume system resources, affecting system
performance.
- **Recovery Complexity:** Recovering from a deadlock can be challenging, and it may involve
terminating processes, releasing resources, or even restarting the system, which can lead to data loss or
disruption of services.
