Process Concept
• An operating system executes a variety of programs:
– Batch system – jobs
– Time-shared systems – user programs or tasks
• Textbook uses the terms job and process almost interchangeably
• Process – a program in execution; process execution must progress in
sequential fashion
• Multiple parts
– The program code, also called text section
– Current activity including program counter, processor registers
– Stack containing temporary data
• Function parameters, return addresses, local variables
– Data section containing global variables
– Heap containing memory dynamically allocated during run time
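As a loose illustration only (Python manages memory itself, so its objects do not map one-to-one onto these sections), the parts above can be pointed at from a running Python program:

```python
# Analogy only: globals play the role of the data section, a function's
# frame plays the role of a stack frame, objects allocated at run time
# live on the heap, and compiled bytecode stands in for the text section.
GLOBAL_COUNTER = 0            # global variable -> data section

def work(param):              # param and locals -> stack frame
    local_list = [0] * 4      # storage allocated at run time -> heap
    return len(local_list) + param

# the compiled bytecode of `work` corresponds to the text section
assert work.__code__.co_varnames == ("param", "local_list")
assert work(3) == 7
```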
Process State
Process Scheduling
Process scheduling is a major element in process management, since the
efficiency with which processes are assigned to the processor affects the
overall performance of the system. It is essentially a matter of managing queues,
with the aim of minimizing delay while making the most effective use of the
processor's time.
• Maximize CPU use, quickly switch processes onto CPU for time sharing
• Process scheduler selects among available processes for next execution on
CPU
• Maintains scheduling queues of processes
– Job queue – set of all processes in the system
– Ready queue – set of all processes residing in main memory, ready and
waiting to execute
– Device queues – set of processes waiting for an I/O device
– Processes migrate among the various queues
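A minimal sketch (hypothetical process names, not any real OS API) of a process migrating between the ready queue and a device queue:

```python
from collections import deque

ready_queue = deque(["P1", "P2", "P3"])   # in memory, ready to execute
device_queue = deque()                    # waiting on an I/O device

# Dispatch the process at the head of the ready queue.
running = ready_queue.popleft()           # "P1" gets the CPU

# The running process requests I/O: it migrates to the device queue.
device_queue.append(running)

# When the I/O completes, it migrates back to the ready queue.
device_queue.popleft()
ready_queue.append(running)

assert list(ready_queue) == ["P2", "P3", "P1"]
```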
Schedulers
• Short-term scheduler (or CPU scheduler) – selects which process
should be executed next and allocates CPU
– Sometimes the only scheduler in a system
– Short-term scheduler is invoked frequently (milliseconds) (must be
fast)
• Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue
– Long-term scheduler is invoked infrequently (seconds, minutes)
(may be slow)
– The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
– CPU-bound process – spends more time doing computations; few
very long CPU bursts
• Long-term scheduler strives for good process mix
Process scheduling
• The long-term scheduler determines which programs are admitted to the system
for processing, and as such controls the degree of multiprogramming. Before
accepting a new program, the long-term scheduler must first decide whether the
processor is able to cope effectively with another process.
Threads
• A thread is an independent path of execution within a process.
• A process may spawn several threads, which although they execute
independently of each other, are managed by the parent process and share the
same memory space. Most modern operating systems support threads, which if
implemented become the basic unit for scheduling and execution.
• If the operating system does not support threads, they must be managed by the
application itself. Threads will be discussed in more detail elsewhere.
CPU Scheduling
CPU scheduling is the process of allowing one process to use the CPU while the
execution of another process is on hold (in the waiting state), typically because it is
waiting for a resource such as I/O, thereby making full use of the CPU. The aim of
CPU scheduling is to make the system efficient, fast and fair.
Types of Scheduling:
CPU Scheduling – Scheduling Criteria
Scheduling Criteria
There are many different criteria to consider when choosing the "best"
scheduling algorithm:
CPU utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU
should be working most of the time (ideally 100% of the time). In a real
system, CPU utilization should range from 40% (lightly loaded) to 90% (heavily loaded).
Throughput
The total number of processes completed per unit time, i.e. the total
amount of work done in a unit of time. This may range from 10 per second to
1 per hour, depending on the specific processes.
Turnaround time
The amount of time taken to execute a particular process, i.e. the
interval from the time of submission of the process to the time of its
completion (wall-clock time).
Waiting time
The sum of the periods a process spends waiting in the ready queue to
gain control of the CPU.
Load average
The average number of processes residing in the ready queue, waiting
for their turn on the CPU.
Response time
The amount of time from when a request is submitted until the first
response is produced. Note that this is the time until the first response, not
the completion of process execution (the final response).
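To make the criteria concrete, here is a small sketch that computes waiting and turnaround time for a workload run back to back in submission order (the process names and burst times are hypothetical):

```python
# (name, arrival_time, cpu_burst) for three hypothetical processes,
# run one after another in submission order.
processes = [("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]

clock = 0
turnaround, waiting = {}, {}
for name, arrival, burst in processes:
    waiting[name] = clock - arrival       # time spent in the ready queue
    clock += burst                        # process runs to completion
    turnaround[name] = clock - arrival    # submission -> completion

assert waiting == {"P1": 0, "P2": 24, "P3": 27}
assert turnaround == {"P1": 24, "P2": 27, "P3": 30}
```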
Scheduling Algorithms
We'll discuss several major scheduling algorithms here:
Shortest-Job-First (SJF) Scheduling
• Best approach to minimize waiting time.
• Assumes the actual time each process will take is already
known to the processor.
• Impossible to implement exactly, since CPU burst lengths cannot be
known in advance; in practice they must be estimated.
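Non-preemptive SJF can be sketched by sorting the workload by burst length before running it (again assuming burst times are known in advance, which a real system cannot do):

```python
processes = [("P1", 24), ("P2", 3), ("P3", 3)]  # (name, known burst time)

clock, waiting = 0, {}
for name, burst in sorted(processes, key=lambda p: p[1]):  # shortest first
    waiting[name] = clock
    clock += burst

# Short jobs run first, so the long job absorbs the waiting
# instead of imposing it on everyone behind it.
assert waiting == {"P2": 0, "P3": 3, "P1": 6}
assert sum(waiting.values()) / 3 == 3.0
```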
Preemptive Shortest-Job-First (SJF) Scheduling
Priority Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest
priority is executed first, and so on.
• Processes with the same priority are executed on a first-come,
first-served basis.
• Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
Note that under this algorithm jobs cannot switch from queue to queue: once they
are assigned a queue, that is their queue until they finish.
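A sketch of non-preemptive priority scheduling with first-come, first-served tie-breaking (here a lower number means higher priority; some systems use the opposite convention):

```python
# (name, priority, burst); arrival order is the list order.
jobs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 1, 5)]

# Sort by priority; Python's sort is stable, so processes with equal
# priority keep their arrival (first-come, first-served) order.
order = [name for name, prio, burst in sorted(jobs, key=lambda j: j[1])]

assert order == ["P2", "P4", "P1", "P3"]
```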
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including
sharing data
• Reasons for cooperating processes:
– Information sharing
– Computation speedup
– Modularity
– Convenience
Interprocess communication
Interprocess communication (IPC) is a set of programming interfaces that allow a
programmer to coordinate activities among different program processes that can run
concurrently in an operating system. This allows a program to handle many user
requests at the same time.
Communications Models
Interprocess Communication – Shared Memory
• An area of memory shared among the processes that wish to communicate
• The communication is under the control of the user processes, not the operating
system.
• The major issue is to provide a mechanism that will allow the user processes to
synchronize their actions when they access shared memory.
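A minimal sketch using Python's multiprocessing.shared_memory module (available since Python 3.8): two sides attach to the same named region, standing in for two cooperating processes (here both sides run in one process for brevity):

```python
from multiprocessing import shared_memory

# "Producer" side: create a named shared region and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# "Consumer" side: attach to the same region by name and read it.
# In a real program this attach would happen in a different process.
peer = shared_memory.SharedMemory(name=shm.name)
data = bytes(peer.buf[:5])

peer.close()
shm.close()
shm.unlink()    # free the region once everyone is done with it

assert data == b"hello"
```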
Interprocess Communication – Message Passing
• Mechanism for processes to communicate and to synchronize their actions
Producer-Consumer Problem
• Paradigm for cooperating processes, producer process produces information that
is consumed by a consumer process
• The buffer may be provided by OS through the use of IPC or explicitly with the
use of shared memory.
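The bounded-buffer version can be sketched with Python's thread-safe queue.Queue, which blocks the producer when the buffer is full and the consumer when it is empty:

```python
import queue
import threading

buffer = queue.Queue(maxsize=2)   # bounded buffer of capacity 2
consumed = []

def producer():
    for item in range(5):
        buffer.put(item)               # blocks while the buffer is full

def consumer():
    for _ in range(5):
        consumed.append(buffer.get())  # blocks while the buffer is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

assert consumed == [0, 1, 2, 3, 4]
```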
Direct Communication
• Processes must name each other explicitly: send (P, message) – send a message
to process P
• receive(Q, message) – receive a message from process Q
Indirect Communication
• Messages are directed and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
• Processes can communicate only if they share a mailbox
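Indirect communication can be sketched as a table of mailboxes, each with a unique id; a process sends to a mailbox rather than to a named peer (the helper names below are hypothetical, not a real OS API):

```python
import queue

mailboxes = {}   # mailbox id -> queue of pending messages

def mailbox_send(mbox_id, message):
    mailboxes.setdefault(mbox_id, queue.Queue()).put(message)

def mailbox_receive(mbox_id):
    return mailboxes[mbox_id].get()

# Two processes can communicate only because they share mailbox "A";
# neither needs to know the other's identity.
mailbox_send("A", "ping")
assert mailbox_receive("A") == "ping"
```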
Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
– Blocking send -- the sender is blocked until the message is received
– Blocking receive -- the receiver is blocked until a message is available
• Non-blocking is considered asynchronous
– Non-blocking send -- the sender sends the message and continues
– Non-blocking receive -- the receiver receives either a valid message or
a null message
• Different combinations are possible
– If both send and receive are blocking, we have a rendezvous
Buffering
• Queue of messages attached to the link
• Implemented in one of three ways:
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
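The three capacities map directly onto queue.Queue's maxsize parameter; a sketch of the bounded case, where a non-blocking sender sees the link as full:

```python
import queue

link = queue.Queue(maxsize=2)   # bounded capacity: at most 2 queued messages
link.put("m1")
link.put("m2")

try:
    link.put_nowait("m3")       # non-blocking send on a full link
    overflowed = False
except queue.Full:
    overflowed = True           # a blocking sender would wait here instead

unbounded = queue.Queue()       # maxsize=0 means unbounded capacity
for i in range(1000):
    unbounded.put(i)            # the sender never waits

assert overflowed
assert unbounded.qsize() == 1000
```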
Communications in Client-Server Systems
• Sockets
• Remote Procedure Calls
• Pipes
• Remote Method Invocation (Java)
Sockets
• A socket is defined as an endpoint for communication
• Concatenation of IP address and port – a number
included at start of message packet to differentiate
network services on a host
• The socket 161.25.19.8:1625 refers to port 1625 on
host 161.25.19.8
• Communication takes place between a pair of sockets
• All ports below 1024 are well known, used for
standard services
• Special IP address 127.0.0.1 (loopback) to refer to
system on which process is running
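A sketch of a pair of communicating sockets on the loopback address: a tiny echo server bound to 127.0.0.1 on an ephemeral port, and a client that connects to that IP:port endpoint:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()    # the endpoint is this IP:port pair

def echo_once():
    conn, addr = server.accept()
    conn.sendall(conn.recv(1024))    # echo the message back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))         # communication is between this socket
client.sendall(b"hello")             # and the server's accepted socket
reply = client.recv(1024)
client.close()
t.join()
server.close()

assert reply == b"hello"
```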
Sockets in Java
• Three types of sockets:
– Connection-oriented (TCP)
– Connectionless (UDP)
– MulticastSocket class – data can be sent to multiple recipients
Pipes
• A pipe acts as a conduit allowing two processes to
communicate
• Issues:
– Is communication unidirectional or bidirectional?
– In the case of two-way communication, is it half- or full-duplex?
– Must there exist a relationship (i.e., parent-child) between the
communicating processes?
– Can the pipes be used over a network?
• Ordinary pipes – cannot be accessed from outside the process that
created them. Typically, a parent process creates a pipe and uses it to
communicate with a child process that it created.
• Named pipes – can be accessed without a parent-child relationship.
Ordinary Pipes
Ordinary Pipes allow communication in standard producer-
consumer style
Producer writes to one end (the write-end of the pipe)
Consumer reads from the other end (the read-end of the pipe)
Ordinary pipes are therefore unidirectional
Require parent-child relationship between communicating
processes
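An ordinary pipe can be sketched with os.pipe(); the write end plays producer, the read end plays consumer, and data flows one way only (in real use the parent would share the descriptors with a forked child, but here both ends live in one process for brevity):

```python
import os

read_end, write_end = os.pipe()   # unidirectional: write_end -> read_end

os.write(write_end, b"produced data")   # producer writes to the write end
os.close(write_end)

data = os.read(read_end, 1024)          # consumer reads from the read end
os.close(read_end)

assert data == b"produced data"
```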
Named Pipes
• Named Pipes are more powerful than ordinary
pipes
• Communication is bidirectional
• No parent-child relationship is necessary between
the communicating processes
• Several processes can use the named pipe for
communication
• Provided on both UNIX and Windows systems
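On UNIX a named pipe is created with os.mkfifo and then opened by path, so no parent-child relationship is needed (POSIX only; Windows exposes named pipes through a different API). A sketch, using a thread to stand in for a second process:

```python
import os
import tempfile
import threading

fifo_path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(fifo_path)          # any process that knows the path can open it

def writer():
    with open(fifo_path, "wb") as w:   # blocks until a reader opens the pipe
        w.write(b"via named pipe")

t = threading.Thread(target=writer)
t.start()
with open(fifo_path, "rb") as r:       # blocks until a writer opens the pipe
    data = r.read()
t.join()
os.remove(fifo_path)

assert data == b"via named pipe"
```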