OPERATING SYSTEMS
LESSON - 1
Week 8 : Midterm
Week 15 : Final
Introduction
Sources:
- Modern Operating Systems, 3rd Edition, Andrew S. Tanenbaum, Prentice Hall, 2008.
- Computer Operating Systems (BIS), Ali Saatçi, 2nd Edition, Bıçaklar Kitabevi.
LESSON - 4
THREAD MANAGEMENT
THREADS
• A traditional (heavyweight) process has a single thread of control. The process model
rests on two independent concepts: resource grouping and execution.
• A thread (also called a lightweight process, LWP) is the basic unit of CPU utilization.
It consists of:
o a thread ID,
o a program counter,
o a register set,
o and a stack.
• The fact that all threads in a process have exactly the same address space means
that they share the same global variables.
• Threads share the code section, the data section, and other OS resources of their
process, such as open files and signals.
• If a process contains multiple threads of control in the same address space, they can
be seen as separate processes running quasi-parallel.
• Multithreading makes it appear as if multiple processes are running: the processor
switches rapidly back and forth between the threads, creating the illusion that the
threads execute in parallel.
• In reality, if there are 3 threads in a process, CPU time is shared among the 3
threads, so each thread runs at roughly one third of the speed it would have alone.
• Although a thread runs within a process, thread and process are different concepts
and can be treated separately.
• Threads share an address space, open files, and other resources.
• Processes share physical memory, disks, printers and other resources.
• Because each thread can access every memory address in the process's address
space, there is no protection between threads:
o it is impossible to provide, and
o it is unnecessary: threads are partners, not competitors.
As in a traditional process (i.e. a process with only one thread), a thread can be in any
of several states. Transitions between thread states are the same as transitions
between process states.
BENEFITS
1. Responsiveness
2. Resource sharing
3. Economy (low overhead)
4. Ability to use multiprocessor architectures
MULTITHREADING MODEL
THREAD LIBRARIES
A thread library provides the programmer with an API for creating and managing
threads. There are two basic ways to implement a thread library.
1. The first approach is to provide a library entirely in user space, with no kernel
support. All code and data structures for the library reside in user space, so invoking a
function in the library results in a local function call in user space rather than a
system call.
2. The second approach is to implement a kernel-level library supported directly by the
operating system. In this case the code and data structures for the library exist in
kernel space, and invoking a function in the API typically results in a system call to
the kernel.
PTHREAD LIBRARY (POSIX)
• Pthreads refers to the POSIX standard (IEEE 1003.1c) that defines an API for
threading and synchronization.
• This is a specification for thread behavior, not an implementation. Operating system
designers can implement the specification in any way they wish.
• Many systems implement Pthreads features, including Solaris, Linux, Mac OS X and
Tru64 UNIX. Shareware implementations are also in the public domain for various
Windows operating systems.
LESSON - 4
1. Introduction
The need for communication between processes may arise for various reasons.
Sometimes one process needs a result produced by another process; sometimes
processes working jointly on a problem need to wait for each other. Communication
between processes can take place in different ways.
2. Communication Mechanisms between Processes
The Unix operating system provides three basic structures for inter-process interaction:
1. Message transfer
2. Shared memory
3. Semaphore
In the Unix operating system, the kernel uses a unique key for all resources. For these
three inter-process communication structures, a unique key must be generated for the
resources used. The ftok() function is used for this.
ftok() - creates an IPC key from a file name.
(https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/apis/p0zftok.htm)
Identifier-based inter-process communication methods require you to provide a key to
the msgget(), semget(), shmget() functions to obtain inter-process communication
identifiers. The ftok() function is a mechanism to generate this key.
Return values:
key value   ftok() succeeded; the generated key is returned.
(key_t)-1   ftok() failed; the errno variable is set to indicate the error.
ftok1.c
Message Queues
You can use the following functions to work with the message queuing system:
msgget() - used to access (or create) the message queue
msgsnd() - used to send messages
msgrcv() - used to receive messages
msgctl() - used to manage the message queue
#include <sys/msg.h>
int msgget(key_t key, int msgflg);
key: unique key identifying the message queue

msgflg                        Meaning
0                             Returns the ID of an existing message queue
IPC_CREAT | 0640              Creates the message queue if it does not exist and returns its ID
IPC_CREAT | IPC_EXCL | 0640   Creates the message queue if it does not exist and returns its ID; returns an error if it already exists
#include <sys/msg.h>
int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);
Here msgp is the address of a buffer of type struct msgbuf containing the message:
struct msgbuf {
    long mtype;
    char mtext[1];
};
The message can be of any structure. mtext[1] is only used here to mark the beginning
of the data.
msgsz defines the size of the message pointed to by the msgp pointer.
If the msgflg flag is 0, the calling process blocks when the message queue is full (for
a msgsnd call) or empty (for a msgrcv call). If the flag is IPC_NOWAIT, the call does
not block: when the queue is full (msgsnd) or empty (msgrcv), it returns immediately
with an error code. The error code placed in errno is EAGAIN for msgsnd and ENOMSG
for msgrcv.
msg_queue.c
Shared Memory
Shared memory allows a process to share part of its memory space with another process (Figure-1).
The shared memory space corresponds to different regions of the memory address spaces of
Process A and Process B. The system calls related to the shared memory system are listed below;
we use these calls to allocate, attach, and detach shared memory.
Let's take a look at the parameters and usage of these functions.
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget(key_t key, size_t size, int shmflg); key: unique key identifying
shared memory
size: defines the size of the shared memory segment. This value is ignored for an existing shared
memory area.
shmflg                        Meaning
0                             Returns the ID of an existing shared memory segment
IPC_CREAT | 0640              Creates the shared memory segment if it does not exist and returns its ID
IPC_CREAT | IPC_EXCL | 0640   Creates the shared memory segment if it does not exist and returns its ID; returns an error if it already exists

#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);
shmflg        Meaning
0             The segment is attached for both reading and writing
SHM_RDONLY    The segment is attached read-only
Task Scheduling
Scheduling Algorithms
1. FCFS (First Come, First Served) Algorithm:
According to this algorithm, the task that requests the CPU first uses the processor first. It can
be implemented with a FIFO queue: when a task enters the ready queue, its process control
block (PCB) is added to the end of the queue; when the CPU becomes free, the task at the head
of the queue is dispatched to the CPU and removed from the queue. With this algorithm, the
waiting time of tasks is high.
Example: Assume that tasks P1, P2, P3 are placed in the queue in that order:
Scheduling in Batch Systems
Task   Burst Time (sec)
P1     24
P2     3
P3     3
1. Assume that the tasks arrive in the sequence P1, P2, P3. The schedule is then as follows:
2. SJF (Shortest Job First) Algorithm:
In this algorithm, when the CPU becomes idle, the task with the smallest running time among the
remaining tasks is dispatched to the processor. If two tasks have the same running time, the
FCFS algorithm is applied between them. Each task is evaluated by the length of its next CPU
burst, and this value is used to find the task with the shortest time.
SJF variants:
1. Non-preemptive SJF: once the CPU is allocated to a task, the task cannot be interrupted
until its CPU burst is finished.
2. Preemptive SJF: if a new task whose CPU burst time is less than the remaining burst time
of the currently running task is submitted to the system, the running task is interrupted.
This method is called SRTF (Shortest Remaining Time First).
SJF is optimal: it gives the smallest average waiting time for a given set of tasks.
2. SJF (Shortest Job First) Algorithm - Example: Assume that tasks P1, P2, P3, P4 are
presented in the sequence shown. Let us find the average waiting time under the
non-preemptive SJF method:
3. Multilevel Queue Scheduling Algorithm:
According to this algorithm, tasks are divided into classes, and each class of tasks has its own
queue; in other words, the ready tasks form a multi-level queue. Tasks are placed in particular
queues based on the task type, priority, memory requirements, or other characteristics. The
scheduling algorithm may be different for each queue, and an algorithm for moving tasks from
one queue to another is also defined.
Under this algorithm, tasks in the highest-priority queue are processed first; tasks in the lower
queues can run only when the CPU is free of higher-priority tasks.
4. Priority Scheduling Algorithm:
According to this algorithm, each task is assigned a priority value, and tasks use the processor
in order of priority. Tasks with the same priority are executed with the FCFS algorithm.
5. Round Robin (RR) Algorithm:
Each task receives a small amount of CPU time (a time quantum). When this time expires, the
task is interrupted and added to the end of the ready queue.
• Example: Assume that tasks P1, P2, P3, P4 are presented in the sequence shown.
Accordingly, if the time quantum is 20 msec: