Prof. G. Apparao
Professor
Department of CSE
GITAM Institute of Technology (GIT)
Visakhapatnam – 530045
Email: agidutur@gitam.edu
19ECS204: OPERATING SYSTEMS - Syllabus
Process Management: Process concepts, process scheduling, Operations on processes, Interprocess
communication
CPU Scheduling: Multithreaded programming, Multi-core programming, Multi-threading models,
Scheduling criteria, scheduling algorithms, multiple-processor scheduling, algorithm evaluation
Learning Outcomes:
• After completion of this unit, you will be able to explain process concepts, process scheduling, operations on processes, and interprocess communication
• Execution of a program can be started via GUI mouse clicks, command-line entry of its name, etc.
• This information may include items such as the values of the base and limit registers and the
page tables or segment tables, depending on the memory system used by the operating
system
• Accounting information – CPU used, clock time elapsed since start, time limits, account
numbers, job or process numbers, and so on.
• I/O status information – I/O devices allocated to process, list of open files etc.
Note: The PCB serves as a repository for any information that may vary from process to process.
Process Scheduling
• The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running
• To meet this objective, the process scheduler selects one process from the set of available
processes in memory for execution on the CPU.
• For a single-processor system, there will never be more than one running process.
• If there are more processes, the rest will have to wait until the CPU is free and can be
rescheduled.
Department of CSE Operating Systems 11
PROCESS SCHEDULING
1. Scheduling queues of processes:
The OS maintains a separate queue for each process state, and the PCB of each process is stored in
the queue corresponding to its current state. Whenever a process moves to a new state, its PCB is
moved to the queue for that state. These queues are commonly implemented as linked lists.
2. Ready queue: the processes that are residing in main memory and are ready and waiting to execute
are kept on a list called the ready queue. This queue is generally stored as a linked list. The short-term
scheduler picks a process from the ready queue and dispatches it to the CPU for execution.
a) A ready queue header contains pointers to the first and final PCBs in the list.
b) Each PCB includes a pointer field that points to the next PCB in the ready queue.
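The PCB and ready-queue organization described above can be sketched in C; all field names here are illustrative, not taken from any particular kernel:

```c
#include <stddef.h>

#define MAX_OPEN_FILES 16

/* Illustrative PCB: per-process state plus a link to the next
 * PCB in whatever queue the process currently sits in. */
struct pcb {
    int pid;                        /* process identifier */
    int state;                      /* e.g. READY, RUNNING, WAITING */
    unsigned long program_counter;  /* saved program counter */
    unsigned long base, limit;      /* memory-management registers */
    double cpu_time_used;           /* accounting information */
    int open_files[MAX_OPEN_FILES]; /* I/O status information */
    struct pcb *next;               /* pointer to the next PCB in the queue */
};

/* Ready-queue header: pointers to the first and final PCBs. */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Link a PCB at the tail of the ready queue. */
void enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Dispatch: unlink and return the PCB at the head of the queue. */
struct pcb *dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}
```

Dispatching the process at the head and enqueueing at the tail gives first-come first-served order within the queue.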
• When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
• Context-switch time is pure overhead; the system does no useful work while switching
• The more complex the OS and the PCB 🡺 the longer the context switch
• Some hardware provides multiple sets of registers per CPU 🡺 multiple contexts
loaded at once
Context Switch
• When an interrupt occurs, the OS suspends the CPU's current task and runs a kernel routine.
• The system needs to save the current context (state) of the process running on the CPU so that it can
restore that context when the process is later resumed.
• Switching the CPU to another process requires saving the context of the current process in its PCB and
restoring the saved context of another process. This task is called a context switch.
• When a context switch occurs, the kernel saves the context of old process in its PCB and loads the saved
context of the new process scheduled to run.
• Context-switch times are highly dependent on hardware support, such as memory speed and the number of registers to be copied.
• When a process is under execution, it migrates among the various scheduling queues
throughout its lifetime.
• The OS must select processes from these queues for scheduling in some order.
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and
allocates CPU
• Sometimes the only scheduler in a system
• Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be fast)
• Long-term scheduler (or job scheduler) – selects which processes should be brought into the
ready queue
• Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may be slow)
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
• CPU-bound process – spends more time doing computations; few very long CPU bursts
• Long-term scheduler strives for good process mix
Operations on Processes
• process creation
• process termination
Process Creation
• A parent process creates child processes, which in turn create other processes,
forming a tree of processes
• Generally, a process is identified and managed via a process identifier (pid)
• Resource sharing options
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution options
• Parent and children execute concurrently
• Parent waits until children terminate
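On UNIX-like systems these creation and execution options map onto the fork() and wait() system calls; a minimal sketch, assuming a POSIX environment (the helper run_child() and the exit status 42 are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that runs concurrently with the parent, then have the
 * parent wait until the child terminates; returns the child's exit status. */
int run_child(void) {
    pid_t pid = fork();                /* create a child process */
    if (pid < 0)
        return -1;                     /* fork failed */
    if (pid == 0) {                    /* child executes here */
        printf("child pid = %d\n", (int)getpid());
        exit(42);                      /* child terminates with status 42 */
    }
    int status = 0;                    /* parent executes here */
    waitpid(pid, &status, 0);          /* parent waits for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Replacing the child's body with an exec() call would load a different program into the child's address space, which is how a shell starts commands.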
A Tree of Processes in Linux
Producer-Consumer Problem
• Paradigm for cooperating processes, producer process produces information
that is consumed by a consumer process
• An unbounded buffer places no practical limit on the size of the buffer; a bounded buffer assumes a fixed buffer size, as in the code that follows
• Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0; //next free position
int out = 0; //first full position
item next_produced;

while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
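A matching consumer removes items from the same circular buffer. The sketch below wraps both sides as functions so the snippet is self-contained; the int field of item is a placeholder:

```c
#define BUFFER_SIZE 10

typedef struct { int value; } item;

static item buffer[BUFFER_SIZE];
static int in = 0;   /* next free position  */
static int out = 0;  /* first full position */

/* Producer: busy-wait while the buffer is full, then insert one item. */
void produce(item next_produced) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: busy-wait while the buffer is empty, then remove one item. */
item consume(void) {
    while (in == out)
        ; /* do nothing: buffer is empty */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next_consumed;
}
```

Because in == out means empty, this scheme can hold at most BUFFER_SIZE − 1 items at a time.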
• The communication is under the control of the user processes, not the operating system.
• The major issue is to provide a mechanism that allows the user processes to synchronize
their actions when they access shared memory.
• Message system – processes communicate with each other without resorting to shared
variables
• Messages are directed and received from mailboxes (also referred to as ports)
• Operations
• create a new mailbox (port)
• send and receive messages through the mailbox
• destroy a mailbox
• Mailbox sharing: if several processes share a mailbox, which process receives a message sent to it?
• Solutions
• Allow a link to be associated with at most two processes
• Allow only one process at a time to execute a receive operation
• Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
• Blocking send -- the sender is blocked until the message is received
• Blocking receive -- the receiver is blocked until a message is available
• Non-blocking is considered asynchronous
• Non-blocking send -- the sender sends the message and continue
• Non-blocking receive -- the receiver receives:
● A valid message, or
● Null message
● Different combinations possible
● If both send and receive are blocking, we have a rendezvous
message next_produced;

while (true) {
    /* produce an item in next_produced */
    send(next_produced);
}

message next_consumed;

while (true) {
    receive(next_consumed);
    /* consume the item in next_consumed */
}
A thread shares with other threads belonging to the same process its code section, data section, and other
operating-system resources, such as open files and signals.
• Peer threads share the code segment, data segment, and open files; when one thread alters a
shared data item, the other threads see the change.
1. A process is heavyweight and resource-intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not.
3. In multiple-processing environments, each process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files and child processes.
4. If one process is blocked, no other process can execute until the first is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without threads use more resources; multithreaded processes use fewer resources.
6. Each process operates independently of the others; one thread can read, write, or change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways:
• User Level Threads − User managed threads. The thread library contains code for creating and destroying
threads, for passing message and data between threads, for scheduling thread execution and for saving
and restoring thread contexts.
• Kernel Level Threads − Here, Thread management is done by the Kernel. There is no thread management
code in the application area. Kernel threads are supported directly by the operating system.
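On POSIX systems, kernel-supported threads are typically created through the Pthreads library; a minimal sketch (the helper run_sum() and the 1..n summation task are illustrative):

```c
#include <pthread.h>
#include <stdlib.h>

/* Thread function: sum the integers 1..n and return the result
 * through a heap-allocated long. */
static void *sum_to_n(void *arg) {
    long n = *(long *)arg;
    long *result = malloc(sizeof *result);
    *result = 0;
    for (long i = 1; i <= n; i++)
        *result += i;
    return result;
}

/* Create one peer thread, wait for it to terminate, collect its result. */
long run_sum(long n) {
    pthread_t tid;
    void *ret = NULL;
    pthread_create(&tid, NULL, sum_to_n, &n);  /* spawn the peer thread */
    pthread_join(tid, &ret);                   /* wait for it to finish  */
    long sum = *(long *)ret;
    free(ret);
    return sum;
}
```

Because the thread shares the process's address space, n is passed by pointer with no copying; run_sum(100) yields 5050.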
Multithreaded Programming: A single application may be required to perform several similar tasks. For example, a
web server accepts client requests for web pages, images etc. A busy web server may have several clients
concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service
only one client at a time, and a client might have to wait a very long time for its request to be serviced.
One solution is to have the server run as a single process that accepts requests; when a request arrives,
the server creates a separate process to service it. Process creation, however, is time consuming and resource intensive.
If the web-server process is multithreaded, the server will create a separate thread that listens for client requests.
When a request is made, rather than creating another process, the server creates a new thread to service the
request and resumes listening for additional requests.
Benefits of Multithreaded Programming:
1. Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of
it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
2. Resource sharing. Processes can share resources only through techniques such as shared memory and message
passing. Threads share the memory and the resources of the process to which they belong by default.
3. Economy. Allocating memory and resources for process creation is costly. Because threads share the resources of
the process to which they belong, it is more economical to create and context-switch threads.
4. Scalability. The benefits of multithreading can be even greater in a multiprocessor architecture, where threads
may be running in parallel on different processing cores.
Multicore Programming: multiple computing cores on a single processing chip, where each core appears as a separate CPU
to the operating system. Such systems are referred to as multicore systems, and writing software for them is known as
multicore (multithreaded) programming.
Identifying tasks, balance, data splitting, and data dependency are the major challenges that need to be addressed;
the potential performance gain from adding cores is characterized by AMDAHL'S Law.
Amdahl’s Law is a formula that identifies potential performance gains from adding additional computing cores to an
application that has both serial (nonparallel) and parallel components. If S is the portion of the application
that must be performed serially on a system with N processing cores, the formula appears as follows:
speedup ≤ 1/(S + ((1−S)/N))
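As a worked example (the numbers are chosen for illustration): with a serial fraction S = 0.25 on N = 4 cores, the bound is 1/(0.25 + 0.75/4) ≈ 2.29, well short of a 4× speedup. The formula as a small helper:

```c
/* Upper bound on speedup from Amdahl's Law:
 * s = serial fraction of the application, n = number of cores. */
double amdahl_speedup(double s, int n) {
    return 1.0 / (s + (1.0 - s) / n);
}
```

As n grows without bound, the expression approaches 1/S, so the 25%-serial application above can never exceed a 4× speedup no matter how many cores are added.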
Multithreading Models: describe the relationship between user-level and kernel-level threads; the common models are
many-to-one, one-to-one, and many-to-many. Some systems combine the user-level and kernel-level thread facilities.
Eg: Solaris OS
In a combined system, multiple threads within the same application can run in parallel on multiple processors and a
blocking system call need not block the entire process.