
Unix/Linux 2017

B.C.A.-4(UNIX.)
Bachelor of Computer Application
(Semester – IV)
Saurashtra University

No.  Topic                 Details
2    Process Management    Introduction of OS Process,
                           Process State Transition Diagram,
                           Process Scheduling


Chapter No. 2
Introduction of OS Process

1. Introduction Of Operating System Process

In everyday life we perform routine work, and each piece of that work is known as a
task. A user cannot deal directly with the computer hardware; a control program, the
operating system, supports the user in performing any specific operation. To carry out an
operation, the operating system performs some task, and that task is known as a process.

Process management is fundamental to any modern operating system.


The operating system must allocate resources to processes, enable processes to share
and exchange information, protect the resources of each process from other
processes, and enable synchronization among processes. To meet this requirement,
the operating system must maintain a data structure for each process that describes
the state and resource ownership of the process and that enables the operating
system to exert process control.
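The per-process data structure described above is commonly called a process control block (PCB). As an illustrative sketch only — the field names here are hypothetical and not taken from any particular operating system:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # Hypothetical fields -- real kernels track many more attributes.
    pid: int                          # unique process identifier
    state: str = "New"                # current scheduling state
    program_counter: int = 0          # address of the next instruction
    priority: int = 0                 # scheduling priority
    open_files: list = field(default_factory=list)  # resource ownership

pcb = ProcessControlBlock(pid=42)
print(pcb.state)   # New -- a just-created process awaits admission
```

The operating system keeps one such record per process and consults it whenever it exerts process control.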

2. Process State Transition Diagram

The principal function of a processor is to execute machine instructions
residing in main memory. These instructions are provided in the form of a program.
A process, or task, is created for the program to execute and involves a sequence of
instructions within that program. We can characterize the behavior of an individual process by listing

Prepared BY: Gaurav K Sardhara. [UNIX] Page 1



the sequence of instructions that execute for the process. Such a listing is referred to as a
trace of the process.
[Figure: Memory layout of three processes. Main memory holds a small dispatcher
program and Process A (starting at address 5000), Process B (at 8000), and Process C
(at 12000); the program counter holds 8000.]

The above figure shows the memory layout of three processes. These three processes
are fully loaded in main memory. In addition, there is a small dispatcher program that
switches the processor from one process to another. The addresses given in the figure are
the starting addresses of the processes, and the program counter indicates that Process B
is about to execute, while A and C are waiting for I/O operations.

A Two-State Process Model:

The operating system’s principal responsibility is controlling the execution of
processes; this includes determining the interleaving pattern for execution and allocating
resources to processes. The first step in designing a program to control processes is to
describe the behavior that we would like the processes to exhibit.

We can construct the simplest possible model by observing that, at any time, a
process is either being executed by a processor or not. Thus, a process may be in one of
two states: Running or Not Running.

[Figure: Two-state transition diagram. Dispatch moves a process from Not Running
to Running; Pause moves it back to Not Running.]


When the operating system creates a new process, it enters that process into the
system in the Not Running state. The process exists, is known to the operating system,
and is waiting for an opportunity to execute. From time to time, the currently running
process will be interrupted and the dispatcher portion of the operating system will select a
new process to run. The former process moves from the Running state to the Not
Running state, and one of the other processes moves to the Running state.
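The two-state model can be sketched as a tiny state machine. This is an illustrative example only, not code from any real dispatcher:

```python
# Minimal two-state model: Dispatch and Pause are the only events.
TWO_STATE = {
    ("Not Running", "Dispatch"): "Running",
    ("Running", "Pause"): "Not Running",
}

def step(state, event):
    """Apply one event; illegal transitions raise an error."""
    try:
        return TWO_STATE[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} from {state!r}")

s = "Not Running"         # a newly admitted process waits here
s = step(s, "Dispatch")   # the dispatcher selects it
s = step(s, "Pause")      # it is interrupted and re-queued
print(s)                  # Not Running
```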

A Five-State Model

If all processes were always ready to execute, this queuing discipline would be
effective: the queue is first-in-first-out, and the processor operates in round-robin
fashion on the available processes. However, even in the simple example we have
described, some processes in the Not Running state are ready to execute, while others
are blocked, waiting for an I/O operation to complete. Thus, using a single queue, the
dispatcher could not simply select the process at the oldest end of the queue. Rather, the
dispatcher would have to scan the list looking for the process that is not blocked and that
has been in the queue the longest.

A more natural way to handle this situation is to split the Not Running state into
two states: Ready and Blocked. This is shown in the figure below.

[Figure: Five-State Process Model. Admit moves New to Ready; Dispatch moves
Ready to Running; Time-Out moves Running back to Ready; waiting for an event moves
Running to Blocked; the event's occurrence moves Blocked to Ready; Release moves
Running to Exit.]

 Running: The process that is currently being executed. At most one process per
processor can be in this state at a time.
 Ready: A process that is prepared to execute when given the opportunity.
 Blocked: A process that cannot execute until some event occurs, such as the
completion of an I/O operation.
 New: A process that has just been created but has not yet been admitted to the pool of
executable processes by the operating system. Typically, a new process has not yet been
loaded into main memory.
 Exit: A process that has been released from the pool of executable processes by the
operating system, either because it halted or because it aborted for some reason.
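The five states and their transitions can be captured as a transition table. In this sketch the event names follow the figure above; "Event Wait" and "Event Occurs" are assumed labels for the Running-to-Blocked and Blocked-to-Ready arcs:

```python
# Transition table for the five-state model.
FIVE_STATE = {
    "New":     {"Admit": "Ready"},
    "Ready":   {"Dispatch": "Running"},
    "Running": {"Time-Out": "Ready",
                "Event Wait": "Blocked",
                "Release": "Exit"},
    "Blocked": {"Event Occurs": "Ready"},
    "Exit":    {},
}

def transition(state, event):
    """Return the next state, or raise if the arc is not in the diagram."""
    allowed = FIVE_STATE[state]
    if event not in allowed:
        raise ValueError(f"{event!r} is not allowed from {state!r}")
    return allowed[event]

# One legal lifetime: New -> Ready -> Running -> Blocked -> Ready -> Running -> Exit
s = "New"
for ev in ["Admit", "Dispatch", "Event Wait", "Event Occurs", "Dispatch", "Release"]:
    s = transition(s, ev)
print(s)   # Exit
```

Encoding the diagram this way makes illegal moves (for example, dispatching a Blocked process) detectable errors rather than silent bugs.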


The three principal states just described (Ready, Running, Blocked) provide a
systematic way of modeling the behavior of processes and guide the implementation of
the operating system. Many operating systems are constructed using just these three
states.

However, many operating systems find it useful to add further states. One such
state is the suspended state. The diagrams below illustrate these new states.

[Figure: Process state transition diagram with a single Suspend state. In addition to
the five-state transitions, Suspend moves Blocked to Suspend; Activate moves Suspend
to Ready; Event Occurs moves Blocked to Ready.]

[Figure: Process state transition diagram with two suspend states, Ready/Suspend and
Blocked/Suspend, alongside New, Ready, Running, Blocked, and Exit.]


 Ready: The process is in main memory and available for execution.
 Blocked: The process is in main memory and awaiting an event.
 Running: The process is in main memory and currently being executed.
 Blocked/Suspend: The process is in secondary memory and awaiting an event.
 Ready/Suspend: The process is in secondary memory but is available for execution as
soon as it is loaded into main memory.

 Blocked -> Blocked/Suspend:

If there are no ready processes, then at least one blocked process is swapped out
to make room for another process that is not blocked. This transition can be made even if
there are ready processes available, if the operating system determines that the currently
running process or a ready process that it would like to dispatch requires more main
memory to maintain adequate performance.

 Blocked/Suspend -> Ready/Suspend:

A process in the Blocked/Suspend state is moved to the Ready/Suspend state
when the event for which it has been waiting occurs. Note that this requires that the state
information concerning suspended processes be accessible to the operating system.

 Ready/Suspend->Ready:

When there are no ready processes in main memory, the operating system will
need to bring one in to continue execution. In addition, it might be the case that a process
in the Ready/Suspend state has higher priority than any of the processes in the Ready
state. In that case, the operating system designer may dictate that it is more important to
get at the higher-priority process than to minimize swapping.

 Ready -> Ready/Suspend:

Normally, the operating system would prefer to suspend a blocked process rather
than a ready one, because the ready process can now be executed, whereas the blocked
process is taking up main memory space and cannot be executed. However, it may be
necessary to suspend a ready process if that is the only way to free up a sufficiently large
block of main memory. Also, the operating system may choose the suspend a lower-
priority ready process rather than a higher-priority blocked process if it believes that the
blocked process will be ready soon.

 New -> Ready/Suspend and New -> Ready:

When a new process is created, it can either be added to the Ready queue or the
Ready/Suspend queue. In either case, the operating system must build some tables to
manage the process and allocate an address space to the process. It might be preferable
for the operating system to perform these housekeeping duties early, so that it
can maintain a large pool of processes that are not blocked. With this strategy, there
would often be insufficient room in main memory for a new process; hence the use of the

(New -> Ready/Suspend) transition. On the other hand, we could argue that a just-in-time
philosophy of creating processes as late as possible reduces operating system overhead
and allows the operating system to perform the process-creation duties at a time when
the system is clogged with blocked processes anyway.

 Blocked/Suspend->Blocked:

Inclusion of this transition may seem to be poor design. After all, if a process is
not ready to execute and is not already in main memory, what is the point of bringing it
in? But consider the following scenario: A process terminates, freeing up some main
memory. There is a process in the Blocked/Suspend queue, and the operating system
has reason to believe that the blocking event for that process will occur soon. Under
these circumstances, it would seem reasonable to bring a blocked process into main
memory in preference to a ready process.

 Running -> Ready/Suspend:

Normally, a running process is moved to the Ready state when its time allocation
expires. If, however, the operating system is preempting the process because a higher-
priority process on the Blocked/Suspend queue has just become unblocked, the operating
system could move the running process directly to the (Ready/Suspend) queue and free
some main memory.

 Various -> Exit:

Typically, a process terminates while it is running, either because it has completed
or because of some fatal fault condition. However, in some operating systems, a process
may be terminated by the process that created it or when the parent process is itself
terminated. If this is allowed, then a process in any state can be moved to the Exit state.

Summary
A process is a program in execution.
Process states are:-
1. New
2. Ready
3. Running
4. Block/Waiting
5. Terminated/Stop


Process Scheduling

The aim of processor scheduling is to assign processes to be executed by the
processor or processors over time, in a way that meets system objectives, such as
response time, throughput, and processor efficiency.

Scheduling affects the performance of the system because it determines which
processes will wait and which will progress. Fundamentally, scheduling is a matter of
managing queues to minimize queuing delay and to optimize performance in a queuing
environment.

FCFS (First-Come-First-Served)

The simplest scheduling policy is first-come-first-served (FCFS), also known as
first-in-first-out (FIFO) or a strict queuing scheme. As each process becomes ready, it
joins the ready queue. When the currently running process ceases to execute, the process
that has been in the ready queue the longest is selected for running.

First-Come-First-Served performs much better for long processes than short ones.
Consider the following example,

Process  Arrival  Service  Start  Finish  Turnaround  Tr/Ts
         Time     Time     Time   Time    Time
W        0        1        0      1       1           1.00
X        1        100      1      101     100         1.00
Y        2        1        101    102     100         100.00
Z        3        100      102    202     199         1.99
Mean                                      100         26.00

The normalized turnaround time for process Y is way out of line compared to the
other processes: the total time that it is in the system is 100 times the required
processing time. This will happen whenever a short process arrives just after a long
process. On the other hand, even in this extreme example, long processes do not fare
poorly. Process Z has a turnaround time that is almost double that of Y, but its
normalized turnaround time is under 2.0.
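The table above can be reproduced with a short FCFS calculation. This is a sketch using the example's process data; the function name is chosen for illustration:

```python
# Reproduce the FCFS example: processes run in arrival order; each
# starts when its predecessor finishes (or when it arrives, if later).
processes = [("W", 0, 1), ("X", 1, 100), ("Y", 2, 1), ("Z", 3, 100)]

def fcfs(procs):
    clock, rows = 0, []
    for name, arrival, service in procs:      # already sorted by arrival
        start = max(clock, arrival)
        finish = start + service
        turnaround = finish - arrival         # total time in system
        rows.append((name, start, finish, turnaround, turnaround / service))
        clock = finish
    return rows

for name, start, finish, ta, ratio in fcfs(processes):
    print(f"{name}: start={start} finish={finish} Tr={ta} Tr/Ts={ratio:.2f}")
# Y's Tr/Ts is 100.00 -- the short job pays for arriving just behind X.
```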

Another difficulty with FCFS is that it tends to favor processor-bound processes
over I/O-bound processes. Consider that there is a collection of processes, one of which
mostly uses the processor (processor bound) and a number of which favor I/O (I/O
bound). When a processor-bound process is running, all of the I/O-bound processes must
wait. Some of these may be in an I/O queue (Blocked state) but may move back to the
ready queue while the processor-bound process is executing; at that point most of the
I/O devices sit idle even though there is potentially work for them to do. When
the currently running process leaves the Running state, the ready I/O-bound processes
quickly move through the Running state and become blocked on I/O events. If the


processor-bound process is also blocked, the processor becomes idle. Thus, FCFS may
result in inefficient use of both the processor and the I/O devices.

FCFS is not an attractive alternative on its own for a single-processor system.
However, it is often combined with a priority scheme to provide an effective scheduler.
Thus, the scheduler may maintain a number of queues, one for each priority level, and
dispatch within each queue on a first-come-first-served basis.

Priority-Based Non-Preemptive and Preemptive Scheduling

The selection function determines which process, among ready processes, is
selected next for execution. The function may be based on priority, resource
requirements, or the execution characteristics of the process. In the latter case, three
quantities are significant:

w = time spent in system so far, waiting and executing
e = time spent in execution so far
s = total service time required by the process, including e; generally, this quantity
must be estimated or supplied by the user.

For example, the selection function max[w] indicates a first-come-first-served
discipline.
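A minimal sketch of max[w] as a selection function follows; the helper name `select_max_w` is hypothetical:

```python
# Selection function max[w]: pick the ready process with the largest
# time in system so far (w = now - arrival). This reduces to FCFS,
# since the earliest arrival always has the largest w.
def select_max_w(ready, now):
    """ready: list of (name, arrival_time) pairs."""
    return max(ready, key=lambda p: now - p[1])

ready = [("A", 5), ("B", 2), ("C", 8)]
print(select_max_w(ready, now=10))   # ('B', 2): earliest arrival, largest w
```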

The decision mode specifies the instants in time at which the selection function is
exercised. There are two general categories: (1) non-preemptive and (2) preemptive.

Non-Preemptive

In this case, once a process is in the Running state, it continues to execute until (a)
it terminates or (b) blocks itself to wait for I/O or to request some operating system
service.

Preemptive

The currently running process may be interrupted and moved to the Ready state
by the operating system. The decision to preempt may be performed when a new process
arrives, when an interrupt occurs that places a blocked process in the Ready state, or
periodically based on a clock interrupt.

Preemptive policies incur greater overhead than non-preemptive ones but may
provide better service to the total population of processes, because they prevent any one
process from monopolizing the processor for very long. In addition, the cost of
preemption may be kept relatively low by using efficient process-switching mechanisms
and by providing a large main memory to keep a high percentage of programs in main
memory.


Round Robin

Round robin is particularly effective in a general-purpose time-sharing system or
transaction processing system. One drawback to round robin is its relative treatment of
processor-bound and I/O-bound processes. Generally, an I/O-bound process has a shorter
processor burst (amount of time spent executing between I/O operations) than a
processor-bound process. If there is a mix of processor-bound and I/O-bound
processes, then the following will happen: an I/O-bound process uses the processor for a
short period and then is blocked for I/O; it waits for the I/O operation to complete and
then joins the ready queue. On the other hand, a processor-bound process generally uses
a complete time quantum while executing and immediately returns to the ready queue.
Thus, processor-bound processes tend to receive an unfair portion of processor time,
which results in poor performance for I/O devices and an increase in the variance of
response time.
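A simplified round-robin sketch illustrates how a processor-bound process keeps cycling through the queue while a short-burst process finishes quickly. This ignores actual I/O blocking, and all processes are assumed ready at time 0:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: {name: service_time}; returns the order of completion."""
    queue = deque(procs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                      # done within quantum
        else:
            queue.append((name, remaining - quantum))  # back of the queue
    return finished

# The short-burst (I/O-bound-like) process finishes long before the
# processor-bound one, which re-enters the queue repeatedly.
print(round_robin({"io_bound": 2, "cpu_bound": 10}, quantum=4))
```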

Summary
Process scheduling techniques are:
1. Preemptive
Round Robin
Event Driven/Priority Based
2. Non-Preemptive
FCFS (First Come First Serve)
SJF (Shortest Job First)


 Difference between Multitasking/Time-Sharing, Multiprocessing, and Multithreading

(1) Multitasking/Time-Sharing Operating System.

Multitasking is a logical extension of the multiprogramming OS. Here the CPU is
multiplexed by time among several jobs that are kept in main memory and on disk.

Pseudo-parallelism is achieved by providing rapid CPU scheduling among
jobs/processes. Thus, users can have good terminal response. It also provides direct
interaction between the user and the system.

It also allows sharing of the computer among users simultaneously. It gives an
illusion/impression to each user as if the entire computer's resources are available to
him solely.

This system is more complex than multiprogramming, and requires the following
features:

- Several jobs must be kept in memory simultaneously, which requires memory
management and protection.
- As memory is limited, concepts like swapping and virtual memory are used.
- CPU scheduling should be sophisticated, to provide fairness to all processes.
- Disk management and the file system are more complex now.
- Job synchronization, communication among jobs, and problems like deadlock
must be handled.

(2) Multiprocessing Operating System

In a multiprocessing operating system, more than one CPU is used; the CPUs may be
interconnected or independent, but they execute together.

The instructions of a program are executed by more than one CPU, each with its own
memory.

[Figure: A multiprocessing configuration. Two CPUs, each with an I/O processor and
I/O units.]


Inputs are shared among the CPUs for processing. If one CPU breaks down, control
is transferred to another CPU.

The main things to be controlled are scheduling and job priority. If scheduling is
not perfect, it can produce incorrect results.

Advantages:

1. It improves the performance of the computer system by allowing parallel processing.
2. It utilizes the CPUs so that idle time is minimized.
3. If one CPU breaks down, the process is completed by another CPU, so there is
minimal downtime.
Disadvantages:

1. The OS must be programmed to schedule, balance, and coordinate input/output and
processing across multiple CPUs.
2. A large main memory is required.
3. It is very expensive.
4. It is difficult to maintain.
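As a rough illustration of parallel processing across CPUs, Python's multiprocessing module spawns separate OS processes, each with its own memory. This is a sketch of the idea, not a description of how any particular multiprocessing OS schedules work:

```python
from multiprocessing import Pool

def square(n):
    # Runs in a worker: a separate OS process with its own memory.
    return n * n

if __name__ == "__main__":
    # The pool distributes independent work items across two workers,
    # which the OS may schedule on different CPUs in parallel.
    with Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3, 4]))   # [1, 4, 9, 16]
```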

(3) Multithreading Operating System:

A multithreading OS supports the concept of multiple threads within a single process
environment.

Generally, an application is implemented as a separate process with several
threads of control. Sometimes a single application needs to perform more than one task
simultaneously. For example, a word processor needs to accept input, provide spell
checking, and auto-save. But how are all these possible at the same time?

There are different threads in single process which perform these different tasks
simultaneously. One thread may accept inputs, while other may provide spell checking,
and third one may do auto saving.

An operating system which supports such functionality is called a multithreading
operating system.

Such systems provide advantages like good response, resource sharing, better
economy, and utilization of multiprocessor architectures.
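The word-processor example can be sketched with two threads sharing one process's memory. This is illustrative only; the "spell checker" here is a trivial stand-in:

```python
import queue
import threading

# One process, two threads sharing the same memory: the main thread
# plays the "accept input" role and feeds a shared queue, while a
# second thread consumes words as a stand-in spell checker.
doc = queue.Queue()
checked = []

def spell_check():
    while True:
        word = doc.get()
        if word is None:               # sentinel: no more input
            break
        checked.append(word.lower())   # stand-in for real spell checking

checker = threading.Thread(target=spell_check)
checker.start()
for w in ["Hello", "World"]:
    doc.put(w)
doc.put(None)
checker.join()
print(checked)   # ['hello', 'world']
```

Because both threads live in the same process, they share `doc` and `checked` directly; separate processes would need explicit inter-process communication instead.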

Summary
Multiprocessing O.S:-


In a multiprocessing operating system, more than one CPU is used; the CPUs
may be interconnected or independent, but they execute together.

Multithreading O.S:-
A multithreading OS supports the concept of multiple threads within a single
process environment.

