B.C.A.-4(UNIX.)
Bachelor of Computer Application
(Semester – IV)
Saurashtra University
Chapter No. 2
Introduction to Operating System Processes
In everyday life we carry out routine work; each piece of that work is known as a task.
A user cannot deal with the computer hardware directly; the operating system is the
control program that lets the user perform any specific operation. To carry out an
operation, the operating system must perform some unit of work, and that unit of work is
known as a process.
The behavior of an individual process can be characterized by listing the sequence of
instructions that execute for that process. Such a listing is referred to as a
trace of the process.
[Figure: Snapshot of main memory. The dispatcher program begins at address 100,
Process A at address 5000, and Process B at address 8000; the program counter holds
8000, indicating that Process B is currently executing.]
We can construct the simplest possible model by observing that, at any time, a
process is either being executed by a processor or not. Thus, a process may be in one of
two states: Running or Not Running.
[Figure: Two-state process model. The Dispatch transition moves a process from
Not Running to Running; the Pause transition moves it back to Not Running.]
When the operating system creates a new process, it enters that process into the
system in the Not Running state. The process exists, is known to the operating system,
and is waiting for an opportunity to execute. From time to time, the currently running
process will be interrupted and the dispatcher portion of the operating system will select a
new process to run. The former process moves from the Running state to the Not
Running state, and one of the other processes moves to the Running state.
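The two-state scheme above can be sketched as a small simulation. The class and process names below are illustrative, not part of any real operating system: a FIFO queue holds the Not Running processes, and the dispatcher pauses the running process and selects the one that has waited longest.

```python
from collections import deque

class TwoStateDispatcher:
    def __init__(self, processes):
        self.not_running = deque(processes)  # Not Running state, FIFO order
        self.running = None                  # at most one Running process

    def dispatch(self):
        """Move the running process (if any) back to Not Running,
        then run the process that has waited longest."""
        if self.running is not None:
            self.not_running.append(self.running)  # the Pause transition
        self.running = self.not_running.popleft()  # the Dispatch transition
        return self.running

d = TwoStateDispatcher(["A", "B", "C"])
print([d.dispatch() for _ in range(5)])  # ['A', 'B', 'C', 'A', 'B']
```

With a single FIFO queue, repeated dispatching naturally cycles through the processes in round-robin order, which is exactly the behavior the next section examines.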
A Five-State Model
If all processes were always ready to execute, then this queuing discipline would be
effective: the queue is first-in-first-out, and the processor operates in round-robin
fashion on the available processes. However, even in the simple example that we have
described, this single queue is inadequate: some processes in the Not Running state are
ready to execute, while others are blocked, waiting for an I/O operation to complete.
Thus, with a single queue, the dispatcher could not simply select the process at the
oldest end of the queue. Rather, the dispatcher would have to scan the list looking for
the process that is not blocked and that has been in the queue the longest.
A more natural way to handle this situation is to split the Not Running state into
two states: Ready and Blocked. This is shown in the figure below.
[Figure: Five-state process model: New, Ready, Running, Blocked, and Exit.]
Running: The process that is currently being executed. At most one process can be in
this state at a time on a single-processor system.
Ready: A process that is prepared to execute when given the opportunity.
Blocked: A process that cannot execute until some event occurs, such as the
completion of an I/O operation.
New: A process that has just been created but has not yet been admitted to the pool of
executable processes by the operating system. Typically, a new process has not yet been
loaded into main memory.
Exit: A process that has been released from the pool of executable processes by the
operating system, either because it halted or because it aborted for some reason.
The three principal states just described (Ready, Running, Blocked) provide a
systematic way of modeling the behavior of processes and guide the implementation of
the operating system. Many operating systems are constructed using just these three
states.
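The five states and their legal transitions can be captured in a small table. This is a sketch, not any particular operating system's implementation; the process name is made up, and an attempt to make a transition the model forbids raises an error.

```python
# Legal transitions of the five-state model described above.
TRANSITIONS = {
    ("New", "Ready"),        # admitted by the operating system
    ("Ready", "Running"),    # dispatched
    ("Running", "Ready"),    # timeout / preempted
    ("Running", "Blocked"),  # waits for an event (e.g. I/O request)
    ("Blocked", "Ready"),    # event occurs (e.g. I/O completes)
    ("Running", "Exit"),     # terminates
}

class Process:
    def __init__(self, name):
        self.name = name
        self.state = "New"   # every process starts in the New state

    def move(self, new_state):
        if (self.state, new_state) not in TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process("p1")
for s in ["Ready", "Running", "Blocked", "Ready", "Running", "Exit"]:
    p.move(s)
print(p.state)  # Exit
```

Note that there is no Blocked -> Running edge: a blocked process must first return to Ready and be dispatched again, which is the point of separating the two states.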
However, many operating systems find it useful to add further states. One such
state is the suspended state, which covers processes that have been swapped out of main
memory. The diagram below illustrates these new states.
[Figure: Process model with suspend states: New, Ready, Running, Blocked, and Exit,
plus Ready/Suspend and Blocked/Suspend. Suspend and Activate transitions move
processes between main memory and secondary storage, and an Event Occurs transition
moves a process from Blocked/Suspend to Ready/Suspend.]
Blocked->Blocked/Suspend:
If there are no ready processes, then at least one blocked process is swapped out
to make room for another process that is not blocked. This transition can be made even if
there are ready processes available, if the operating system determines that the currently
running process, or a ready process that it would like to dispatch, requires more main
memory to maintain adequate performance.
Ready/Suspend->Ready:
When there are no ready processes in main memory, the operating system will
need to bring one in to continue execution. In addition, it might be the case that a process
in the Ready/Suspend state has higher priority than any of the processes in the Ready
state. In that case, the operating system designer may dictate that it is more important to
get at the higher-priority process than to minimize swapping.
Ready->Ready/Suspend:
Normally, the operating system would prefer to suspend a blocked process rather
than a ready one, because the ready process can be executed now, whereas the blocked
process is taking up main memory space and cannot be executed. However, it may be
necessary to suspend a ready process if that is the only way to free up a sufficiently large
block of main memory. Also, the operating system may choose to suspend a lower-
priority ready process rather than a higher-priority blocked process if it believes that the
blocked process will be ready soon.
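The preference just described can be sketched as a victim-selection function. This is a deliberate simplification with made-up fields; a real swapper weighs many more factors (memory footprint, how long the blocking event is expected to last, and so on).

```python
def choose_suspend_victim(processes):
    """Pick a resident process to swap out: prefer the lowest-priority
    blocked process, and fall back to the lowest-priority ready process
    only if no blocked process exists. Lower number = higher priority.
    Returns None if nothing can be suspended. (Illustrative only.)"""
    blocked = [p for p in processes if p["state"] == "Blocked"]
    ready = [p for p in processes if p["state"] == "Ready"]
    candidates = blocked or ready   # blocked processes are preferred victims
    if not candidates:
        return None
    return max(candidates, key=lambda p: p["priority"])

procs = [
    {"name": "a", "state": "Ready",   "priority": 1},
    {"name": "b", "state": "Blocked", "priority": 3},
    {"name": "c", "state": "Blocked", "priority": 2},
]
print(choose_suspend_victim(procs)["name"])  # b: lowest-priority blocked process
```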
New->Ready and New->Ready/Suspend:
When a new process is created, it can be added either to the Ready queue or to the
Ready/Suspend queue. In either case, the operating system must build some tables to
manage the process and allocate an address space to it. It might be preferable for the
operating system to perform these housekeeping duties at process-creation time, so that it
can maintain a large pool of processes that are not blocked. With this strategy, there
would often be insufficient room in main memory for a new process; hence the use of the
(New -> Ready/Suspend) transition. On the other hand, we could argue that a just-in-time
philosophy of creating processes as late as possible reduces operating system overhead,
and allows the operating system to perform the process-creation duties at a time when
the system is clogged with blocked processes anyway.
Prepared by: Gaurav K Sardhara [UNIX] (Unix/Linux, 2017)
Blocked/Suspend->Blocked:
Inclusion of this transition may seem to be poor design. After all, if a process is
not ready to execute and is not already in main memory, what is the point of bringing it
in? But consider the following scenario: A process terminates, freeing up some main
memory. There is a process in the Blocked/Suspend queue, and the operating system
has reason to believe that the blocking event for that process will occur soon. Under
these circumstances, it would seem reasonable to bring a blocked process into main
memory in preference to a ready process.
Running->Ready/Suspend:
Normally, a running process is moved to the Ready state when its time allocation
expires. If, however, the operating system is preempting the process because a higher-
priority process on the Blocked/Suspend queue has just become unblocked, the operating
system could move the running process directly to the Ready/Suspend queue and free
some main memory.
Summary
A process is a program in execution.
Process states are:-
1. New
2. Ready
3. Running
4. Block/Waiting
5. Terminated/Stop
Process Scheduling
FCFS (First-Come-First-Served)
First-Come-First-Served performs much better for long processes than for short
ones. Consider an example in which a short process Y arrives just after a long process
X, followed by another long process Z.
The normalized turnaround time for process Y is way out of line compared to the
other processes: the total time that it spends in the system is 100 times the required
processing time. This will happen whenever a short process arrives just after a long
process. On the other hand, even in this extreme example, long processes do not fare
poorly. Process Z has a turnaround time that is almost double that of Y, but its
normalized turnaround time is under 2.0.
Another difficulty is that FCFS tends to favor processor-bound processes over I/O-
bound ones: while a processor-bound process runs, the I/O-bound processes and their
devices must wait, and when the processor-bound process is itself blocked, the processor
becomes idle. Thus, FCFS may result in inefficient use of both the processor and the
I/O devices.
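The effect described above can be reproduced with a small FCFS calculation. The arrival and service times below are made up for illustration, chosen so that a 1-unit process Y arrives just behind a 100-unit process X, with another 100-unit process Z behind it.

```python
def fcfs(jobs):
    """jobs: list of (name, arrival, service) tuples, already in arrival
    order. Runs each to completion in order (no preemption) and returns
    {name: (turnaround, normalized_turnaround)} where normalized
    turnaround = turnaround / service time."""
    clock, out = 0, {}
    for name, arrival, service in jobs:
        start = max(clock, arrival)        # wait for the processor to free up
        finish = start + service
        turnaround = finish - arrival      # total time in the system
        out[name] = (turnaround, turnaround / service)
        clock = finish
    return out

# Long job X, a 1-unit job Y right behind it, then another long job Z.
result = fcfs([("X", 0, 100), ("Y", 1, 1), ("Z", 2, 100)])
for name, (t, tn) in result.items():
    print(name, t, round(tn, 2))
# X 100 1.0
# Y 100 100.0   <- in the system 100x its required processing time
# Z 199 1.99    <- turnaround almost double Y's, but normalized under 2.0
```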
The decision mode specifies the instants in time at which the selection function is
exercised. There are two general categories: (1) nonpreemptive and (2) preemptive.
Non Preemptive
In this case, once a process is in the Running state, it continues to execute until (a)
it terminates or (b) blocks itself to wait for I/O or to request some operating system
service.
Preemptive
The currently running process may be interrupted and moved to the Ready state
by the operating system. The decision to preempt may be performed when a new process
arrives, when an interrupt occurs that places a blocked process in the Ready state, or
periodically based on a clock interrupt.
Preemptive policies incur greater overhead than nonpreemptive ones, but may
provide better service to the total population of processes, because they prevent any one
process from monopolizing the processor for very long. In addition, the cost of
preemption may be kept relatively low by using an efficient process-switching mechanism
and by providing a large main memory to keep a high percentage of programs in main
memory.
Round Robin
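Round robin gives each process a fixed time quantum; when the quantum expires, the process is preempted by a clock interrupt and moved to the back of the ready queue. A minimal sketch, assuming all jobs arrive at time 0 and never block for I/O:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: {name: service_time}, all arriving at time 0.
    Returns {name: completion_time}. Simplified: no later arrivals,
    no I/O blocking."""
    queue = deque(jobs.items())
    clock, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)  # run for one quantum at most
        clock += slice_
        remaining -= slice_
        if remaining:
            queue.append((name, remaining))  # preempted: back of the queue
        else:
            done[name] = clock               # finished within this slice
    return done

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
# {'C': 6, 'A': 7, 'B': 10}
```

Notice that the short job C finishes first even though it arrived behind A and B, which is precisely the weakness of FCFS that round robin is designed to avoid.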
Summary
Process scheduling techniques are:
1. Preemptive
Round Robin
Event Driven/Priority Based
2. Non-Preemptive
FCFS (First Come First Serve)
SJF (Shortest Job First)
Multiprocessing Operating System
In a multiprocessing operating system, more than one CPU is used; the CPUs are
interconnected, or can be independent, but they execute together.
The instructions of a program are executed by more than one CPU, each with
access to memory.
[Figure: Multiprocessing organization: two CPUs, each connected to an I/O processor
and its I/O units.]
These inputs are also taken to the CPUs for processing. If one CPU breaks down,
control is transferred to another CPU.
Disadvantages:
1. The O.S. must be programmed to schedule, balance, and coordinate input, output,
and processing across multiple CPUs.
2. A large main memory is required.
3. It is very expensive.
4. It is difficult to maintain.
There are different threads in a single process which perform different tasks
simultaneously. One thread may accept input, while another may provide spell checking,
and a third may do auto-saving.
Such systems provide advantages such as good responsiveness, resource sharing,
better economy, and utilization of multiprocessor architectures.
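The word-processor scenario above can be sketched with three threads sharing one document. The thread bodies are toy stand-ins (a one-word spell fix, and a list snapshot standing in for a disk write), and the threads are joined in sequence here so the outcome is deterministic; a real editor would let them run concurrently and coordinate more carefully.

```python
import threading

document = []               # shared state of the single process
lock = threading.Lock()     # protects the document from concurrent access

def accept_input():
    for word in ["helo", "world"]:
        with lock:
            document.append(word)

def spell_check():
    with lock:
        for i, word in enumerate(document):
            if word == "helo":          # toy dictionary with one known fix
                document[i] = "hello"

def auto_save():
    with lock:
        saved = list(document)          # snapshot, as if written to disk
    print("saved:", saved)

t1 = threading.Thread(target=accept_input)
t1.start(); t1.join()                   # input first, so the demo is deterministic
t2 = threading.Thread(target=spell_check)
t2.start(); t2.join()
t3 = threading.Thread(target=auto_save)
t3.start(); t3.join()                   # prints: saved: ['hello', 'world']
```

All three threads read and write the same `document` list, which illustrates the resource-sharing advantage: unlike separate processes, threads of one process share its address space by default.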
Summary
Multiprocessing O.S:-
Multithreading O.S:-
A multithreading OS supports the concept of multiple threads within a single
process environment.