
1906003052015
Operating Systems

Asst. Prof. Dr. Önder EYECİOĞLU
Computer Engineering

WEEK TOPICS

Week 1 : Introduction to operating systems, operating system strategies
Week 2 : System calls
Week 3 : Processes, process management
Week 4 : Threads
Week 5 : Job scheduling algorithms
Week 6 : Inter-process communication and synchronization
Week 7 : Semaphores, monitors and applications
Week 8 : Midterm exam
Week 9 : Critical section problems
Week 10 : Deadlock problems
Week 11 : Memory management
Week 12 : Paging, segmentation
Week 13 : Virtual memory
Week 14 : File system, access and protection mechanisms, disk scheduling and management
Week 15 : Final exam

Course day and time: Wednesday, 13:00-16:00
• Applications use the Unix (Linux) operating system.
• The attendance requirement is 70%.
• Applications will be carried out in the C programming language; programming knowledge is expected from students.

Introduction

Sources:
– Modern Operating Systems, 3rd Edition, Andrew S. Tanenbaum, Prentice Hall, 2008.
– Computer Operating Systems (BİS), Ali Saatçi, 2nd Edition, Bıçaklar Kitabevi.

Task Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process according to a given strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

On a system with a single CPU core, only one process can run at a time; the others must wait until the CPU is free and can be rescheduled. The purpose of multiprogramming is to have some process running at all times in order to maximize CPU utilization.

Typically a process executes until it must wait for some I/O request to complete. In a simple computer system the CPU then sits idle; multiprogramming tries to use this time productively. Several processes are kept in memory at the same time. When one process has to wait, the operating system takes the CPU away from that process and gives it to another process. This pattern continues: whenever a process has to wait, another process can take over use of the CPU. On a multicore system, this concept of keeping the CPU busy is extended to all processing cores in the system.


Task Scheduling
Scheduling of this kind is a fundamental operating system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources; therefore, CPU scheduling is central to operating system design.


Task Scheduling
The operating system maintains all PCBs in process scheduling queues. It maintains a separate queue for each process state, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process changes, its PCB is unlinked from its current queue and moved to the queue of the new state.

• Job queue - This queue holds all processes in the system.
• Ready queue - This queue holds all processes that reside in main memory, ready and waiting to execute. A new process is always placed in this queue.
• I/O (device) queues - Processes that are blocked because an I/O device is unavailable form these queues.

A minimal sketch of a PCB and a ready queue follows below.
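To make these structures concrete, here is a minimal sketch in C (the course's application language) of a PCB and a linked-list ready queue. The field names and states are illustrative assumptions and do not come from any particular kernel.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative process states, mirroring the queues above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } state_t;

/* A minimal process control block (PCB). A real PCB also holds
 * saved registers, memory maps, open files, accounting data, etc. */
typedef struct pcb {
    int pid;
    state_t state;
    struct pcb *next;   /* link inside whichever queue the PCB currently sits in */
} pcb_t;

typedef struct { pcb_t *head, *tail; } queue_t;

/* Append a PCB to the tail of a queue (e.g. when it becomes READY). */
void enqueue(queue_t *q, pcb_t *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Detach and return the PCB at the head of a queue (e.g. when dispatching). */
pcb_t *dequeue(queue_t *q) {
    pcb_t *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void) {
    queue_t ready = { NULL, NULL };
    for (int i = 1; i <= 3; i++) {          /* three new processes become ready */
        pcb_t *p = malloc(sizeof *p);
        p->pid = i;
        p->state = READY;
        enqueue(&ready, p);
    }
    pcb_t *p;
    while ((p = dequeue(&ready)) != NULL) { /* dispatcher takes them in FIFO order */
        printf("dispatching pid %d\n", p->pid);
        free(p);
    }
    return 0;
}

Moving a PCB between queues then amounts to dequeuing it from one list and enqueuing it on another, which is exactly the state change described above.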


Task Scheduling
An important issue with scheduling is when to make scheduling decisions.

• First, when a new process is created, it must be decided whether to run the parent process or the child process. Since both processes are in the ready state, this is a normal scheduling decision and can go either way; that is, the scheduler can legitimately choose to run either the parent or the child next.

• Second, when a process exits, a scheduling decision must be made. That process can no longer run (since it no longer exists), so another process must be chosen from the set of ready processes. If no process is ready, a system-supplied idle process is normally run.


Task Scheduling
• Third, when a process blocks on I/O, on a semaphore, or for some other reason, another process must be selected to run. Sometimes the reason for blocking may play a role in the choice. For example, if A is an important process and it is waiting for B to exit its critical region, letting B run next will allow it to exit its critical region and thus let A continue. The trouble, however, is that the scheduler generally does not have the information needed to take this dependency into account.

• Fourth, when an I/O interrupt occurs, a scheduling decision may be made. If the interrupt came from an I/O device that has now completed its work, some process that was blocked waiting for that I/O may now be ready to run. It is up to the scheduler to decide whether to run the newly ready process, the process that was running at the time of the interrupt, or some third process.


Task Scheduling
If a hardware clock provides periodic interrupts at some frequency, a scheduling decision can be made at each clock interrupt or at every k-th clock interrupt. Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.

A nonpreemptive scheduling algorithm picks a process to run and then lets it run until it blocks (either on I/O or waiting for another process) or voluntarily releases the CPU. Even if it runs for hours, it will not be forcibly suspended. In effect, no scheduling decisions are made during clock interrupts: after clock-interrupt processing has finished, the process that was running before the interrupt is resumed, unless a higher-priority process was waiting for a now-satisfied timeout.

By contrast, a preemptive scheduling algorithm picks a process and lets it run for at most some fixed amount of time. If it is still running at the end of the time interval, it is suspended and the scheduler picks another process to run (if one is available). Doing preemptive scheduling requires having a clock interrupt occur at the end of the time interval to give control of the CPU back to the scheduler.
If no clock is available, nonpreemptive scheduling is the only option.


Task Scheduling
Preemptive and Non-Preemptive Scheduling

CPU scheduling decisions may take place under the following four conditions:

1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or a call to wait() for the termination of a child process).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O).
4. When a process terminates.

For situations 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for situations 2 and 3.
When scheduling takes place only under conditions 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

Nearly all modern operating systems, including Windows, macOS, Linux, and UNIX, use preemptive scheduling
algorithms.


Task Scheduling
CPU SCHEDULING TYPES

The purpose of processor scheduling is to assign processes to be executed by the processor or processors over time, in a way that meets system objectives such as response time, throughput, and processor efficiency. In many systems, this scheduling activity is divided into three separate functions: long-, medium-, and short-term scheduling. The names suggest the relative time scales on which these functions are performed.


Task Scheduling
CPU SCHEDULING TYPES

Long Term Scheduler


Also called the job scheduler. A long-term scheduler determines which programs are admitted into the system for processing. It selects processes from the job queue and loads them into memory for execution; the process is loaded into memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of I/O-bound and processor-bound jobs.

In some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.


Task Scheduling
CPU SCHEDULING TYPES

Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and so reduces the degree of multiprogramming. The medium-term scheduler is responsible for handling the swapped-out processes.

A running process may become suspended when it makes an I/O request. A suspended process cannot make any progress toward completion. In this case, to remove the process from memory and make room for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.


Task Scheduling
CPU SCHEDULING TYPES

Short Term Scheduler


Also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It governs the transition of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers.

The main objective of short-term scheduling is to allocate processor time in such a way as to optimize one or more aspects of system behavior. In general, a set of criteria is established against which various scheduling policies may be evaluated.


Categories of Scheduling Algorithms

Different environments need different scheduling algorithms. This situation arises because different application areas (and different kinds of operating systems) have different goals.

1. Batch systems
2. Interactive systems
3. Real-time systems


Task Scheduling
Scheduling Criteria

Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.

Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in deciding which algorithm is judged best.


Task Scheduling
Scheduling Criteria

CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from about 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system). (CPU utilization can be obtained by using the top command on Linux, macOS, and UNIX systems.)

Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process over several seconds; for short processes, it may be tens of processes per second.


Task Scheduling
Scheduling Criteria

Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of its completion is the turnaround time. Turnaround time is the sum of the periods spent waiting in the ready queue, executing on the CPU, and doing I/O.

Waiting time. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
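As a small worked illustration of how these two criteria relate (the numbers are hypothetical, not from the slides): consider a process submitted at time 0 that completes at time 30 ms after using 24 ms of CPU and doing no I/O. In LaTeX notation,

\[
  t_{\mathrm{turnaround}} = t_{\mathrm{completion}} - t_{\mathrm{submission}} = 30 - 0 = 30\ \mathrm{ms},
\]
\[
  t_{\mathrm{waiting}} = t_{\mathrm{turnaround}} - t_{\mathrm{CPU}} - t_{\mathrm{I/O}} = 30 - 24 - 0 = 6\ \mathrm{ms}.
\]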


Task Scheduling
Scheduling Criteria

Response time. In an interactive system, turnaround time may not be the best criterion. Often a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.


Task Scheduling
Scheduling Algorithms
1. FCFS (First Come First Served) Algorithm:
According to this algorithm, the task that requests the CPU first uses the processor first. It can be implemented with a FIFO queue. When a task enters the ready queue, its process control block (PCB) is appended to the tail of the queue. When the CPU becomes free, the task at the head of the queue is given the CPU and removed from the queue. With this algorithm, the waiting time of tasks tends to be high.

Example: Let's assume that tasks P1, P2, P3 are placed in the queue in this order:

Task   Burst Time (ms)
P1     24
P2     3
P3     3


Task Scheduling
1. FCFS (First Come First Served) Algorithm:
Example: Let's assume that tasks P1, P2, P3 are placed in the queue in this order:

Task   Burst Time (ms)
P1     24
P2     3
P3     3

1. Let's assume the tasks arrive in the order P1, P2, P3. The resulting schedule is:

   | P1 (0-24) | P2 (24-27) | P3 (27-30) |

   Average waiting time: (0 + 24 + 27) / 3 = 17 ms.

2. If the tasks arrive in the order P2, P3, P1, the schedule is:

   | P2 (0-3) | P3 (3-6) | P1 (6-30) |

   Average waiting time: (0 + 3 + 6) / 3 = 3 ms.
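A minimal C sketch of this computation follows; the task names and burst times are those of the example above. Under FCFS, each task's waiting time is simply the sum of the burst times of the tasks ahead of it.

#include <stdio.h>

/* FCFS: tasks run in arrival order, so each task waits for the
 * total burst time of every task that arrived before it. */
int main(void) {
    const char *name[] = { "P1", "P2", "P3" };
    int burst[]        = { 24, 3, 3 };      /* burst times in ms (example above) */
    int n = 3;
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("%s: waiting time = %2d ms\n", name[i], wait);
        total_wait += wait;
        wait += burst[i];                   /* every later task also waits for this one */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);  /* 17.00 ms */
    return 0;
}

Reordering the burst array to {3, 3, 24} (the P2, P3, P1 case) yields an average of 3 ms, matching the second schedule.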


Task Scheduling
2. SJF (Shortest Job First) Algorithm: In this algorithm, when the CPU becomes idle, the task with the shortest burst time among the waiting tasks is given the processor. If the next burst times of two tasks are equal, FCFS is applied between them. In this algorithm, each task is associated with the length of its next CPU burst, and these lengths are used to pick the shortest job.

SJF types:

1. Non-preemptive SJF: once the CPU is allocated to a task, the task cannot be preempted until its CPU burst is completed.
2. Preemptive SJF: if a new task arrives whose CPU burst time is shorter than the remaining time of the currently running task, the running task is preempted. This method is called SRTF (Shortest Remaining Time First).

SJF is optimal in the sense that it gives the minimum average waiting time for a given set of tasks.


Task Scheduling
2. SJF (Shortest Job First) Algorithm:
Example: Let's assume that tasks P1, P2, P3, P4 are submitted in the given order. Accordingly, let's find the average waiting time under the non-preemptive SJF method and compare it with FCFS:

• SJF : t_avg = (t_P1 + t_P2 + t_P3 + t_P4) / 4 = (3 + 9 + 16 + 0) / 4 = 7 ms

• FCFS : t_avg = (t_P1 + t_P2 + t_P3 + t_P4) / 4 = (0 + 6 + 13 + 21) / 4 = 10 ms
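The slide's burst-time table did not survive the conversion, so the sketch below assumes burst times of 6, 7, 8 and 3 ms for P1-P4 with all tasks arriving at time 0; these assumed values reproduce the waiting times listed above. It is an illustration of non-preemptive SJF, not the slide's original code.

#include <stdio.h>

/* Non-preemptive SJF with all tasks arriving at time 0:
 * run the tasks in order of increasing burst time. */
int main(void) {
    const char *name[] = { "P1", "P2", "P3", "P4" };
    int burst[]        = { 6, 7, 8, 3 };    /* assumed burst times in ms */
    int order[]        = { 0, 1, 2, 3 };
    int n = 4;

    /* Sort task indices by burst time (selection-style sort for clarity). */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (burst[order[j]] < burst[order[i]]) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }

    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        int t = order[i];
        printf("%s: waiting time = %2d ms\n", name[t], clock);
        total_wait += clock;
        clock += burst[t];
    }
    printf("SJF average waiting time = %.2f ms\n", (double)total_wait / n);  /* 7.00 ms */
    return 0;
}

Running the same tasks in arrival order instead (FCFS) gives waiting times 0, 6, 13 and 21 ms, i.e. the 10 ms average shown above.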


Task Scheduling
3. Multilevel Queue Scheduling Algorithm:
According to this algorithm, tasks are divided into certain classes, and each class of tasks forms its own queue; in other words, the ready tasks are organized into a multilevel queue. Tasks are placed in a particular queue according to the type of task, its priority, its memory requirements, or other characteristics. The scheduling algorithm for each queue may be different, and the overall algorithm may also allow tasks to be moved from one queue to another.

Under this algorithm, the tasks in the highest-priority queue are served first. If that queue is empty, tasks from the lower-priority queues can be run. A minimal sketch follows below.
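As a rough illustration (not from the slides), the C sketch below keeps one FIFO per priority class and always dispatches from the highest-priority non-empty queue. The class names, queue sizes, and the absence of movement between queues are simplifying assumptions.

#include <stdio.h>

#define LEVELS 3   /* e.g. 0 = system, 1 = interactive, 2 = batch (illustrative) */
#define MAXQ   8

/* Each level is a small FIFO; the dispatcher always serves the
 * highest-priority non-empty queue. */
typedef struct { int pid[MAXQ]; int head, count; } fifo_t;
static fifo_t level[LEVELS];

void submit(int lvl, int pid) {            /* place a task in its class queue */
    fifo_t *q = &level[lvl];
    q->pid[(q->head + q->count++) % MAXQ] = pid;
}

int dispatch(void) {                       /* next pid to run, or -1 if all queues empty */
    for (int l = 0; l < LEVELS; l++) {
        fifo_t *q = &level[l];
        if (q->count > 0) {
            int pid = q->pid[q->head];
            q->head = (q->head + 1) % MAXQ;
            q->count--;
            return pid;
        }
    }
    return -1;
}

int main(void) {
    submit(2, 10);                         /* batch task       */
    submit(1, 20);                         /* interactive task */
    submit(0, 30);                         /* system task      */
    for (int pid; (pid = dispatch()) != -1; )
        printf("running pid %d\n", pid);   /* order: 30, 20, 10 */
    return 0;
}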


Task Scheduling
4. Priority Scheduling Algorithm:
According to this algorithm, a priority value is assigned to each task, and the tasks use the processor in order of priority. Tasks with the same priority are run with the FCFS algorithm. A minimal sketch follows below.
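A minimal sketch of non-preemptive priority scheduling in C follows. The priority values are illustrative assumptions; a smaller number is taken to mean higher priority, and ties are broken in arrival order (FCFS), as described above.

#include <stdio.h>

/* Pick the ready task with the highest priority (smallest number);
 * scanning in arrival order makes ties resolve as FCFS. */
typedef struct { int pid; int priority; int done; } task_t;

int pick_next(task_t *t, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (!t[i].done && (best < 0 || t[i].priority < t[best].priority))
            best = i;
    return best;
}

int main(void) {
    task_t t[] = { {1, 3, 0}, {2, 1, 0}, {3, 3, 0}, {4, 2, 0} };  /* illustrative values */
    int n = 4, i;
    while ((i = pick_next(t, n)) != -1) {
        printf("running P%d (priority %d)\n", t[i].pid, t[i].priority);
        t[i].done = 1;                     /* resulting order: P2, P4, P1, P3 */
    }
    return 0;
}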


Task Scheduling
5. Round Robin (RR) Algorithm: Each task is given a small time quantum of CPU time. When this quantum expires, the task is preempted and added to the end of the ready queue.
• Example: Let's assume that tasks P1, P2, P3, P4 are presented in the given order. If the time quantum is 20 ms, the schedule proceeds as in the sketch below.