
BLG305

OPERATING SYSTEMS
LESSON - 1

Assist. Prof. Dr. Önder EYECİOĞLU
Computer Engineering
WEEK TOPICS

Week 1 : Introduction to operating systems, operating system strategies
Week 2 : System calls
Week 3 : Tasks, task management
Week 4 : Threads
Week 5 : Job scheduling algorithms
Week 6 : Inter-task communication and synchronization
Week 7 : Semaphores, monitors and applications
Week 8 : Midterm
Week 9 : Critical section problems
Week 10 : Deadlock problems
Week 11 : Memory management
Week 12 : Paging, segmentation
Week 13 : Virtual memory
Week 14 : File system, access and protection mechanisms, disk scheduling and management
Week 15 : Final

Course information:
• Class day and time: Monday, 09:15-11:25
• Applications use the Unix (Linux) operating system
• Attendance requirement: 70%
• Applications will be performed in the C programming language; programming knowledge is expected from the students
• Grading: Midterm 30%, Final 60%, Homework 10%
Introduction

Sources:
- Modern Operating Systems, 3rd Edition, Andrew S. Tanenbaum, Prentice Hall, 2008.
- Bilgisayar İşletim Sistemleri (Computer Operating Systems), Ali Saatçi, 2nd Edition, Bıçaklar Kitabevi.
LESSON - 4

THREAD MANAGEMENT
THREADS
• A traditional (or heavyweight) process has a single thread of control. The process model is based on two independent concepts: resource grouping and execution.
• A thread (also called a lightweight process, LWP) is a basic unit of CPU utilization. It consists of:
o a thread ID,
o a program counter,
o a register set,
o and a stack.
• Because all threads in a process have exactly the same address space, they share the same global variables.


• Threads share the code section, data section and other OS resources of their process, such as open files and signals.
• If a process has multiple threads of control in the same address space, it can be seen as if separate processes were running quasi-parallel.
• Multithreading works as if multiple processes were running: the processor switches quickly back and forth between threads, creating the appearance that the threads run in parallel.
• In reality, if a process has 3 threads, CPU time is shared among the 3 threads depending on the speed of the CPU.


• Although a thread runs within a process, threads and processes are different concepts and can be treated separately:
• Threads share an address space, open files and other resources.
• Processes share physical memory, disks, printers and other resources.
• Because each thread can access every memory address in the process's address space, there is no protection between threads:
• it is impossible,
• and it is unnecessary: threads are partners, not competitors.

As in a traditional process (i.e. a process with only one thread), a thread can be in any of several states. Transitions between thread states are the same as transitions between process states.

BENEFITS

• The benefits of multithreaded programming can be divided into four parts. These are:

1. Responsiveness
2. Resource sharing
3. Economy (lower overhead)
4. Ability to use multiprocessor architectures

MULTITHREADING MODEL

Thread management can generally take place in two different ways:

User space: The operating system is unaware of the existence of threads. Each task manages the switching between its own threads within its allocated time slot; the operating system is not notified.
Example: POSIX Pthreads, Mach C-Threads

Kernel space: The operating system manages the threads under the tasks as well as the tasks themselves. The operating system manages the switching of each thread.
Example: Windows NT

There are 3 models that determine the relationship between threads in user space and threads in kernel space:

1. Many to One model:
+ Thread management is done by the thread library in user space, so it is efficient;
- However, if a thread makes a blocking system call, the whole process is blocked;
- Because only one thread can access the kernel at a time, multiple threads cannot run in parallel on multiprocessor systems.
2. One to One model:
3. Many to Many model:

2. One to One model:

• It provides more concurrency than the many-to-one model by allowing one thread to run while another makes a blocking system call;
• It also allows multiple threads to run in parallel on multiple processors.
• The only disadvantage of this model is that creating a user thread requires creating a corresponding kernel thread.
• Most implementations of this model limit the number of threads supported by the system, because the overhead of kernel threads can burden the performance of an application.
• Linux, along with the Windows operating system family, follows the one-to-one model.

3. Many to Many model:

• Many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
• The number of kernel threads is specific to a particular application or a particular machine.
• While the many-to-one model allows the developer to create any number of user threads, true concurrency is not gained because the kernel can schedule only one thread at a time.
• The one-to-one model provides more concurrency, but the developer must be careful not to create too many threads in an application.
• The many-to-many model has neither of these shortcomings:
 Developers can create as many user threads as needed, and the corresponding kernel threads can run in parallel on a multiprocessor.
 Also, when a thread performs a blocking system call, the kernel can schedule another thread to execute.

THREAD LIBRARIES

A thread library provides the programmer with an API for creating and managing
threads. There are two basic ways to implement a thread library.

1. The first approach is to provide a library entirely in user space, with no kernel support. All code and data structures for the library reside in user space. This means that calling a function in the library results in a local function call in user space, not a system call.

2. The second approach is to implement a kernel-level library supported directly by the OS. In this case, the code and data structures for the library reside in kernel space. Calling a function in the API for the library usually results in a system call to the kernel.


Three main thread libraries are in use today:


1. POSIX Pthreads. Pthreads, the threading extension of the POSIX standard, can be
provided as a user- or kernel-level library.
2. The Win32 thread library is a kernel-level library available on Windows systems.
3. Java. The Java threading API allows you to create and manage threads directly in
Java programs. However, because in most cases the JVM runs on top of a host
operating system, the Java threading API is usually implemented using a threading
library that resides on the host system.

PTHREAD LIBRARY (POSIX)

• Pthreads refers to the POSIX standard (IEEE 1003.1c) that defines an API for
threading and synchronization.
• This is a specification for thread behavior, not an implementation. Operating system
designers can implement the specification in any way they wish.
• Many systems implement Pthreads features, including Solaris, Linux, Mac OS X and
Tru64 UNIX. Shareware implementations are also in the public domain for various
Windows operating systems.

PTHREAD LIBRARY (POSIX)

LESSON - 4
1. Introduction
The need for communication between processes may arise for various reasons. Sometimes one process needs the result produced by another; sometimes processes must wait for each other to solve a problem they are working on jointly. Communication between processes can take place in different ways:

1. The communicating processes can be on the same machine or on different machines connected by a computer network.

2. Communication can be connectionless (data transfer) or connection-oriented (message transfer).

3. Rendezvous method: how the processes will initiate communication:

a. An object created in the file system can be used

b. An Internet address can be used

Communication mechanism | Characteristic | Machine distance | Rendezvous
exit() | Returns an integer | Same machine | From child process to parent process
signal() | Signal number | Same machine | Signal number
mmap | Virtual memory region | Same machine | Virtual address
Pipe | Queue (first-in, first-out) | Same machine | File descriptor
Message queue | Message | Same machine | IPC ID
Shared memory | Like mmap | Same machine | IPC ID
Semaphore | Synchronization | Same machine | IPC ID
Socket | Message | Same or different machine | File descriptor

2. Communication Mechanisms between Processes

The Unix operating system provides three basic structures for inter-process interaction:
1. Message transfer
2. Shared memory
3. Semaphore
In the Unix operating system, the kernel uses a unique key for all resources. For these
three inter-process communication structures, a unique key must be generated for the
resources used. The ftok() function is used for this.
ftok() - Creates an IPC key from a file name.
(https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/apis/p0zftok.htm)
Identifier-based inter-process communication methods require you to provide a key to
the msgget(), semget(), shmget() functions to obtain inter-process communication
identifiers. The ftok() function is a mechanism to generate this key.

Return values:
(key_t) value : ftok() succeeded.
(key_t)-1 : ftok() failed; errno is set to indicate the error.

ftok1.c

Message Queues
You can use the following functions to work with the message queuing system:
 msgget()
Used to access the message queue
 msgsnd()
Used to send messages
 msgrcv()
Used to receive messages
 msgctl()
Used to manage the message queue
#include <sys/msg.h>
int msgget(key_t key, int msgflg);

key: unique key identifying the message queue

msgflg values:
0 : returns the ID of an existing message queue
IPC_CREAT | 0640 : creates the message queue if it does not exist and returns its ID
IPC_CREAT | IPC_EXCL | 0640 : creates the message queue if it does not exist and returns its ID; returns an error if it already exists

#include <sys/msg.h>
int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

Here msgp is the address of a memory buffer of type msgbuf containing the message:

struct msgbuf {
    long mtype;
    char mtext[1];
};

The message can have any structure; mtext[1] is used here only to mark the beginning of the data. msgsz defines the size of the message pointed to by msgp.
If msgflg is 0, the calling process blocks when the message queue is full (for msgsnd) or empty (for msgrcv). If the flag is IPC_NOWAIT, the call does not block: if it is made when the queue is full (msgsnd) or empty (msgrcv), it returns immediately with an error code. The errno value is EAGAIN for msgsnd and ENOMSG for msgrcv.

msg_queue.c

Shared Memory
Shared memory is when a process shares part of its memory space with another process (Figure-1). The
shared memory space corresponds to different regions of the memory address spaces of Process A and
Process B. The system calls related to the shared memory system are listed below. We use these system
calls to allocate, link and return shared memory space.

Let's take a look at the parameters and usage of these functions.
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget(key_t key, size_t size, int shmflg);

key: unique key identifying the shared memory
size: defines the size of the shared memory segment. This value is ignored for an existing shared memory area.

shmflg values:
0 : returns the ID of an existing shared memory segment
IPC_CREAT | 0640 : creates the shared memory segment if it does not exist and returns its ID
IPC_CREAT | IPC_EXCL | 0640 : creates the shared memory segment if it does not exist and returns its ID; returns an error if it already exists

#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(void *shmaddr);

shmflg values:
0 : the shared memory area is available for both reading and writing
SHM_RDONLY : the shared memory area can be used read-only

Task Scheduling
Scheduling Algorithms
1. FCFS (First Come First Served) Algorithm:
According to this algorithm, the task that requests the CPU first uses the processor first. It can be implemented with a FIFO queue. When a task enters the ready queue, its task control block (PCB) is added to the end of the queue. When the CPU is free, the task at the head of the queue is given to the CPU for execution and removed from the queue. In this algorithm, the waiting time of tasks is high.

Example: Assume that tasks P1, P2, P3 are placed in the queue in that order:

Task : Burst time (msec)
P1 : 24
P2 : 3
P3 : 3

Scheduling in Batch Systems
1. FCFS (First Come First Served) Algorithm:
Example: Assume that tasks P1, P2, P3 are placed in the queue in that order:

Task : Burst time (msec)
P1 : 24
P2 : 3
P3 : 3

1. Assume the tasks arrive in the sequence P1, P2, P3. Scheduling accordingly:

Average waiting time : (0+24+27) / 3 = 17 msec.

2. Scheduling if the tasks arrive in the order P2, P3, P1:

Average waiting time : (0+3+6) / 3 = 3 msec.

2. SJF (Shortest Job First) Algorithm:
In this algorithm, when the CPU is idle, the task with the smallest running time among the remaining tasks is given to the processor for execution. If two tasks have the same remaining time, the FCFS algorithm is applied. In this algorithm, each task is evaluated by its next CPU burst time; this is used to find the task with the shortest time.
SJF Types:
1. Non-preemptive SJF : once the CPU is allocated to a task, the task cannot be interrupted until its CPU burst is over.
2. Preemptive SJF : if a new task arrives whose CPU burst time is less than the remaining time of the currently running task, the running task is preempted. This method is called SRTF (Shortest Remaining Time First).
SJF is optimal: it gives the smallest average waiting time for a given set of tasks.

2. SJF (Shortest Job First) Algorithm:
Example: Assume that tasks P1, P2, P3, P4 arrive with CPU burst times of 6, 8, 7 and 3 msec, respectively. Let's find the average waiting time under the non-preemptive SJF method:

- SJF : t_avg = (tP1 + tP2 + tP3 + tP4) / 4 = (3+16+9+0) / 4 = 7 msec

- FCFS : t_avg = (tP1 + tP2 + tP3 + tP4) / 4 = (0+6+14+21) / 4 = 10.25 msec

3. Multilevel Queue Scheduling Algorithm:
According to this algorithm, tasks are divided into certain classes and each class of tasks forms its own queue. In other words, the ready queue is split into multiple queues. Tasks are placed in particular queues based on task type, priority, memory requirements or other characteristics. The scheduling algorithm may be different for each queue. In addition, an algorithm for moving tasks from one queue to another is defined.

According to this algorithm, tasks in the highest-priority queue are processed first. If that queue is empty, the tasks in the queues below it can be executed.

4. Priority Scheduling Algorithm:
According to this algorithm, each task is assigned a priority value and tasks use the processor in order of priority. Tasks with the same priority are executed with the FCFS algorithm.

5. Round Robin (RR) Algorithm:
Each task receives a small unit of CPU time (a time quantum). When this time is over, the task is preempted and added to the end of the ready queue.
• Example: Assume that tasks P1, P2, P3, P4 are presented in the given order. Accordingly, if the time quantum is 20 msec:
