
CPU SCHEDULING

What to know about CPU scheduling


• Preemptive and non-preemptive scheduling
• CPU burst
• Process states
• Context switch
• Dispatcher and CPU/process scheduler
• Scheduling criteria
• Scheduling algorithms
DEFINITIONS
• CPU Burst:
• CPU burst refers to the period during which a process or thread is actively executing instructions on the
CPU. It is the time interval between the moment a process starts using the CPU and the moment it releases
the CPU, either voluntarily (e.g., completing its task or blocking on I/O) or involuntarily (e.g., being preempted by the scheduler).
• CPU Scheduler:
• CPU scheduler is a component of the operating system responsible for selecting which process or thread to
execute next on the CPU. It determines the order in which processes are allocated CPU time based on
scheduling policies and criteria.
• Dispatcher:
• Dispatcher is a component of the operating system responsible for performing the actual context switch
between processes or threads on the CPU. It is invoked by the CPU scheduler when a new process is
selected for execution or when a context switch is required.
• Context Switch:
• Context switch refers to the process of saving the state of a currently running process or thread (context)
and loading the state of a different process or thread for execution on the CPU. It occurs when the CPU
scheduler selects a new process to run or when a process voluntarily relinquishes the CPU.
DEFINITIONS (CONTINUED)
• Preemptive Scheduling:
• In preemptive scheduling, the operating system can interrupt a currently running process or thread and
allocate the CPU to another process or thread.
• Non-preemptive Scheduling (also known as Cooperative Scheduling):
• In non-preemptive scheduling, a running process or thread voluntarily gives up the CPU, typically by blocking
on I/O operations, completing its execution, or yielding explicitly.
FCFS
• First-Come, First-Served (FCFS):
• Definition: FCFS is a non-preemptive scheduling algorithm where processes are executed in the
order they arrive in the ready queue. The process that arrives first gets executed first.
• Advantages:
• Simple and easy to implement.
• Fairness: Ensures that processes are executed in the order they arrive, promoting fairness
among processes.
• Disadvantages:
• Poor turnaround time: May result in longer average turnaround time, especially if short
processes are queued behind long ones (convoy effect).
• Inefficiency: The convoy effect can leave the CPU and I/O devices underutilized while short, I/O-bound processes wait behind a long CPU-bound process.
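• A minimal FCFS sketch in Python (illustrative, not part of the original slides; the process list and the helper name fcfs are assumptions). It assumes arrival and burst times are known and serves processes strictly in arrival order, which makes the convoy effect easy to see.

```python
# Illustrative FCFS simulation; process tuples are (name, arrival, burst).
def fcfs(processes):
    processes = sorted(processes, key=lambda p: p[1])  # serve in arrival order
    clock, stats = 0, []
    for name, arrival, burst in processes:
        start = max(clock, arrival)          # CPU may sit idle until arrival
        finish = start + burst
        stats.append((name, start - arrival, finish - arrival))  # waiting, turnaround
        clock = finish
    return stats

# A long job arriving first delays the short ones (convoy effect).
for name, waiting, turnaround in fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]):
    print(f"{name}: waiting={waiting}, turnaround={turnaround}")
```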
SJF
• Shortest Job Next (SJN) or Shortest Job First (SJF):
• Definition: SJN is a non-preemptive scheduling algorithm where the process with the shortest
burst time (execution time) is selected for execution next. Also known as Shortest Job First (SJF); its preemptive variant is Shortest Remaining Time First (SRTF).
• Advantages:
• Minimizes average waiting time and turnaround time by executing shorter processes first.
• Improves system throughput by quickly completing short jobs.
• Disadvantages:
• Difficulty in prediction: Requires knowledge of the execution time of each process, which
may not always be available.
• May lead to starvation: Long processes may suffer from starvation if short processes
continuously arrive.
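• A minimal non-preemptive SJF sketch in Python (illustrative; it assumes burst times are known in advance, which real systems can only estimate):

```python
# Illustrative non-preemptive SJF; process tuples are (name, arrival, burst).
def sjf(processes):
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    clock, stats = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                                 # CPU idle: jump to next arrival
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])          # shortest burst first
        pending.remove(job)
        name, arrival, burst = job
        clock += burst
        stats.append((name, clock - arrival - burst, clock - arrival))
    return stats

for name, waiting, turnaround in sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]):
    print(f"{name}: waiting={waiting}, turnaround={turnaround}")
```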
PRIORITY SCHEDULING
• Priority Scheduling:
• Definition: Priority scheduling is a preemptive or non-preemptive scheduling algorithm where each
process is assigned a priority. The scheduler selects the process with the highest priority for execution.
In preemptive priority scheduling, the running process can be preempted by a higher-priority process.
• Advantages:
• Supports priority-based execution, allowing higher-priority processes to receive preferential
treatment.
• Enables customization: Allows administrators to assign priorities based on process importance or
criticality.
• Disadvantages:
• Possibility of starvation: Lower-priority processes may suffer from starvation if higher-priority
processes continuously arrive.
• Priority inversion: Low-priority processes may hold resources needed by high-priority processes,
causing priority inversion.
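• A minimal non-preemptive priority-scheduling sketch in Python (illustrative; it follows the common convention that a lower number means higher priority, which is an assumption rather than a universal rule):

```python
# Illustrative non-preemptive priority scheduling.
# Process tuples are (name, arrival, burst, priority); lower number = higher priority.
import heapq

def priority_schedule(processes):
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    ready, clock, order, i = [], 0, [], 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= clock:
            name, arrival, burst, prio = pending[i]   # admit arrived processes
            heapq.heappush(ready, (prio, arrival, name, burst))
            i += 1
        if not ready:                                 # idle until the next arrival
            clock = pending[i][1]
            continue
        prio, arrival, name, burst = heapq.heappop(ready)
        clock += burst
        order.append((name, prio, clock))
    return order

for name, prio, finish in priority_schedule(
        [("P1", 0, 5, 3), ("P2", 1, 3, 1), ("P3", 2, 4, 2)]):
    print(f"{name} (priority {prio}) finishes at t={finish}")
```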
ROUND ROBIN
• Round Robin (RR):
• Definition: Round Robin is a preemptive scheduling algorithm where each process is assigned a
fixed time slice (quantum). The scheduler executes processes in a circular manner, allocating one
time slice to each process in turn.
• Advantages:
• Fairness: Provides fairness by allocating a fixed time slice (quantum) to each process,
ensuring that no process monopolizes the CPU for too long.
• Responsive: Suitable for interactive systems as it provides quick response times.
• Disadvantages:
• Higher overhead: Context switching overhead may reduce overall system performance,
especially with small time slices.
• Poor performance with varying burst times: Average waiting time is often longer than under
SJF, especially when burst times vary widely.
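• A minimal Round Robin sketch in Python (illustrative; all processes are assumed to arrive at t=0 so the ready queue can be a plain FIFO):

```python
# Illustrative Round Robin; process tuples are (name, burst).
from collections import deque

def round_robin(processes, quantum):
    queue = deque(processes)
    clock, completions = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)                 # run one quantum or less
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))     # back of the queue
        else:
            completions.append((name, clock))
    return completions

for name, done in round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4):
    print(f"{name} completes at t={done}")
```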
MULTILEVEL QUEUE
• Multilevel Queue:
• Definition: Multilevel queue scheduling divides processes into multiple queues based on priority
or other criteria. Each queue has its own scheduling algorithm and priority level.
• Advantages:
• Organizes processes into separate queues based on priority or other criteria, providing better
management and control.
• Supports different scheduling algorithms for different queues, allowing customization based
on process characteristics.
• Disadvantages:
• Complexity: More complex to manage multiple queues and scheduling policies, requiring
careful design and implementation.
• Starvation: Processes in lower-priority queues may starve if the higher-priority queues are
never empty; fixed queue assignment also makes the scheme inflexible.
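• A minimal multilevel-queue sketch in Python (illustrative; the two queue names and the strict fixed-priority scan between queues are assumptions, and a real scheduler could run a different algorithm inside each queue):

```python
# Illustrative multilevel queue: the "system" queue is always drained
# before the "batch" queue (strict fixed priority between queues).
from collections import deque

queues = {
    "system": deque(["S1", "S2"]),     # higher-priority queue
    "batch":  deque(["B1", "B2"]),     # lower-priority queue
}

def pick_next():
    for level in ("system", "batch"):  # fixed priority order
        if queues[level]:
            return level, queues[level].popleft()
    return None

while (choice := pick_next()) is not None:
    level, name = choice
    print(f"running {name} from the {level} queue")
```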
MULTILEVEL QUEUE FEEDBACK
• Multilevel Feedback Queue:
• Definition: Multilevel feedback queue scheduling is an extension of multilevel queue scheduling
where processes can move between queues based on their behavior. Processes that consume their full
time slice are typically demoted to lower-priority queues, while interactive, I/O-bound processes remain in or are promoted to higher-priority queues.
• Advantages:
• Provides flexibility: Allows processes to move between different priority queues based on their
behavior and resource requirements.
• Adaptive: Adapts to changing workload characteristics and process behavior, optimizing
performance over time.
• Disadvantages:
• Complexity: More complex to implement compared to simple multilevel queue algorithms,
requiring sophisticated feedback mechanisms and policies.
• Overhead: Increased overhead due to dynamic queue management and process migration
between queues.
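• A minimal multilevel-feedback sketch in Python (illustrative; the three levels and their quanta are assumptions): a job that exhausts its time slice is demoted one level, and quanta grow at lower levels.

```python
# Illustrative multilevel feedback queue; process tuples are (name, burst).
from collections import deque

def mlfq(processes, quanta=(2, 4, 8)):
    levels = [deque() for _ in quanta]
    for p in processes:
        levels[0].append(p)                      # everyone starts at the top
    clock = 0
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty level
        name, remaining = levels[lvl].popleft()
        run = min(quanta[lvl], remaining)
        clock += run
        if remaining > run:                      # used the whole slice: demote
            levels[min(lvl + 1, len(levels) - 1)].append((name, remaining - run))
        else:
            print(f"{name} finishes at t={clock} (level {lvl})")

mlfq([("P1", 3), ("P2", 10)])
```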
THREADS
• Definition: A thread is the smallest unit of execution within a process. Unlike processes, which are
independent entities with their own memory space, threads share the same memory space and
resources within a process.
• Concurrency: Threads allow multiple tasks to be executed concurrently within the same process.
Each thread has its own program counter, stack, and thread-specific register values, but they all
share the same code segment, data segment, and other resources of the parent process.
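• A minimal Python sketch of threads sharing a process's memory (illustrative): each thread has its own stack, but all of them update the same counter variable, so access is guarded with a lock.

```python
# Illustrative: threads share the process's data, so updates need a lock.
import threading

counter = 0                      # shared data, visible to every thread
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 40000: all four threads updated one variable
```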
TYPES OF THREADS
• User-Level Threads (ULTs): Managed entirely by the application and
not visible to the operating system. ULTs provide more flexibility and
control over thread management but may suffer from limitations such
as inability to take advantage of multi-core processors effectively.
• Kernel-Level Threads (KLTs): Managed by the operating system
kernel. Each thread is represented as a separate kernel data structure.
KLTs offer better performance and support for multi-core processors,
as the kernel can schedule threads across multiple processors.
RELATIONSHIPS FOR THREADS
• Many-to-One Model:
• Definition: In the many-to-one model, multiple user-level threads are mapped to a single kernel-
level thread. Thread management and scheduling are handled entirely by the user-level thread
library without kernel support.
• Advantages:
• Lightweight: User-level threads are lightweight and incur minimal overhead, as they do not require kernel
involvement for thread management.
• Easy to implement: Implementing user-level threading libraries for many-to-one models is straightforward and
does not require changes to the kernel.
• Disadvantages:
• Lack of parallelism: Since all user-level threads are mapped to a single kernel-level thread, only one thread can
execute at a time, limiting parallelism and scalability.
• Blocking system calls: If a user-level thread performs a blocking system call (e.g., I/O), it blocks the entire
process, preventing other user-level threads from making progress.
• Non-preemptive: The operating system scheduler cannot preempt individual user-level threads (it sees only
one kernel-level thread), leading to potential responsiveness issues.
ONE TO ONE
• One-to-One Model:
• Definition: In the one-to-one model, each user-level thread is mapped to a distinct kernel-level thread.
Thread management and scheduling are handled by the kernel.
• Advantages:
• Parallelism: Multiple threads can execute concurrently on multiple CPU cores, maximizing CPU utilization and
parallelism.
• Preemptive: Kernel-level threads can be preempted by the operating system scheduler, allowing for better
responsiveness and fairness.
• Blocking system calls: Blocking system calls do not affect other threads, as each thread has its own kernel-level
thread.
• Disadvantages:
• Overhead: Creating and managing kernel-level threads incurs additional overhead compared to user-level threads,
leading to increased resource consumption.
• Scalability: The one-to-one model may not scale well for applications with a large number of threads, as it requires
significant kernel resources.
• Complexity: Implementing and managing a large number of kernel-level threads can be complex and may require
careful tuning.
MANY TO MANY
• Many-to-Many Model:
• Definition: In the many-to-many model, multiple user-level threads are multiplexed onto a smaller or equal
number of kernel-level threads. Both user-level and kernel-level threads are managed by the system.
• Advantages:
• Flexibility: Provides a balance between lightweight user-level threads and efficient kernel-level threads, allowing for
better resource utilization and scalability.
• Improved parallelism: Allows multiple user-level threads to execute concurrently on multiple CPU cores, maximizing
parallelism and performance.
• Blocking system calls: Blocking system calls do not block the entire process, as other user-level threads can continue
executing on different kernel-level threads.
• Disadvantages:
• Complexity: Implementing and managing the many-to-many model is more complex than other models, as it requires
coordination between user-level and kernel-level thread management.
• Overhead: Multiplexing user-level threads onto a smaller number of kernel-level threads incurs additional overhead
compared to the many-to-one model.
• Resource contention: If the number of user-level threads exceeds the number of available kernel-level threads, resource
contention may occur, leading to decreased performance.
