COMPUTER COMPONENTS:
Functioning and interaction.
Hardware and software concepts.
Need for an operating system.
INTRODUCTION TO DOS:
Types of processing.
Multiprocessing and multiprogramming.
DISK DRIVE:
CONCEPT OF PROCESSES:
Interrupts.
I/O COMMUNICATION TECHNIQUES.
MEMORY MANAGEMENT:
DATA:
Raw facts and figures are called data.
INFORMATION:
Processed form of data is called information.
A computer system has two major components:
1. Hardware.
2. Software.
Hardware:
The physical equipment of the computer. Hardware devices may be electronic, magnetic or mechanical. Anything having physical existence is called hardware.
Input:
Keyboard, mouse, light pen, scanner, microphone, digitizer, etc.
Processing:
CPU (central processing unit) processor, floppy drives, hard disk, magnetic tape, CD drive, RAM, ROM, EPROM, power supply, etc.
RAM: Random access memory (temporary storage); memory that can be accessed randomly. All the variables a program defines are stored in RAM, e.g. a variable a holding the value 10.
ROM:
Read-only memory; its contents can be read but not modified. Also called boot ROM.
In computers, ASCII codes are allotted to all characters. The ASCII code for 'A' is 65:
64 32 16 8 4 2 1
1 0 0 0 0 0 1
8 bits = 1 byte.
1024 bytes = 1 KB
1024 KB = 1 MB
1024 MB = 1 GB
1024 GB = 1 TB (terabyte).
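The place-value table and unit multiples above can be checked mechanically. A minimal C sketch (the function names are illustrative, not part of any standard API):

```c
#include <assert.h>

/* Value of a 7-bit pattern, most significant bit first. */
static int bits_to_value(const int bits[7]) {
    int value = 0;
    for (int i = 0; i < 7; i++)
        value = 2 * value + bits[i];
    return value;
}

/* Storage multiples: each unit is 1024 times the previous one.
 * bytes_in(1) is a KB, bytes_in(2) a MB, bytes_in(3) a GB. */
static long bytes_in(int kib_powers) {
    long n = 1;
    for (int i = 0; i < kib_powers; i++)
        n *= 1024L;
    return n;
}
```

Feeding the bit pattern 1000001 into bits_to_value() yields 64 + 1 = 65, the ASCII code of 'A'.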
SYSTEM PROGRAMS:
Programs which control the internal operations of the computer system, e.g. operating systems. Such programs deal with the operation of the computer itself (languages and the operating system).
APPLICATION PROGRAMS:
Programs which are used for specific purposes, e.g. word processors, spreadsheets, etc. They need system software to use hardware resources. Such programs deal with the problems of the user, for example Word, Lotus, etc.
With languages we have to do all the work ourselves; with packages, predefined functions are already there.
Languages are divided into three levels: high-, middle- and low-level languages.
OPERATING SYSTEM:
An operating system controls the execution of other programs. It consists of instructions for the processor to manage other system resources. Two important views of the operating system are:
A layer of software which provides the interface between hardware and user.
Software which makes hardware usable. Examples: OS/2, CP/M, DOS, UNIX, XENIX, Windows 95 and Windows NT.
Administrative personnel:
The personnel of an administration who use computers.
Introduction to MS-DOS: before MS-DOS, the prevailing operating system was CP/M-80, which ran on 8080 processors. To compete with CP/M-80, 86-DOS was written at Seattle Computer Products, and Microsoft later acquired it. It was not a polished operating system; it contained many errors and bugs.
Inside the BIOS are the device drivers for the keyboard, printer, clock and block/auxiliary devices (all storage devices).
DOS KERNEL:
Like other programs, the OS is software which is executed by the processor. However, instructions in the OS are executed in a special operating mode, called kernel or supervisor mode. In this mode certain privileged instructions can be executed which cannot be executed in normal mode.
(Character devices are those through which characters are input, like the keyboard or scanner.)
The DOS kernel is also loaded into RAM and is considered part of MSDOS.SYS. (Anything defined in memory must have an address, which is accessible through interrupts.)
INTERRUPTS:
An interrupt is a signal which causes the hardware to transfer program control to
some specific location in the main memory, thus breaking the normal flow of the
program.
TYPES OF INTERRUPTS
Interrupts can be of the following four types:
1. Program: generated as a result of an instruction's execution, such as overflow, division by zero, etc.
2. Timer: generated by a timer, which enables the OS to perform certain functions on a regular basis.
3. I/O: generated by an I/O controller to get the attention of the CPU.
4. Hardware failure: generated by a failure such as a power failure.
Device drivers are special programs through which we can access devices.
The job of the DOS shell is to provide an interface between the user and the operating system.
Files with executable extensions are called executable files: these are the commands that can be run from the DOS prompt.
If a command exists as a file in a directory (or is reachable through the shell's search path), it is an external command.
COMMAND.COM consists of three modules:
1. Resident module.
2. Transient module.
3. Initialization module.
[Memory map: ROM at the top of the address space; in RAM, from top to bottom: transient module, resident module, operating system.]
INITIALIZATION MODULE:
When the computer is turned on, the initialization module runs in the place of the resident module. It searches for AUTOEXEC.BAT and displays the appropriate messages; when its work is done, it goes out of memory.
RESIDENT MODULE:
It stays in memory permanently.
TRANSIENT MODULE:
All files run by the shell are executed through the transient module. If we load a big program, the transient module may be overwritten; afterwards, when we exit the program, the operating system automatically reloads this module from the hard disk.
Directory: it is like a closet; the subdirectories are the drawers of this closet, and the files lie inside the closet and the drawers.
C:\ is called the root directory.
In DOS a directory name is restricted to 8 characters, but in Windows 95 & NT it can be up to 255 characters.
Z:\waq\far1> cd ..\far1
Z:\waq\far1>
When the system is booted, the interrupt vector table is loaded into the lowest portion of memory. It contains the addresses of the interrupt service procedures.
BOOTING PROCESS
Whenever a system is started or reset, the OS which has to control it performs some functions automatically before allowing the user to interact with the machine. These functions enable the computer to work properly. This whole process of preparing the computer for use is called the 'booting process'.
ROM BOOTSTRAP ROUTINE:
The ROM bootstrap routine performs the following tasks:
1. It creates the IVT (Interrupt Vector Table) in the lower most area of the
memory.
2. If the system is booted from a hard disk, it loads the partition table of that
disk and reads it to find the active partition. There is no partition table on
floppy disks; therefore, in the case of floppy drives, it jumps directly to the
boot sector.
3. From the boot sector, it loads the disk bootstrap routine and transfers
control to it.
Disk Bootstrap Routine is a small code which resides in the boot sector of the
bootable system disk. During the booting process, when it gets the control of the
CPU from the Rom Bootstrap Routine, it performs the following tasks:
1. It checks the first two directory entries of the disk. If these entries are not
IO.SYS & MSDOS.SYS (in the case of MS-DOS), it displays the error
message "Non-System Disk or Disk Error" and waits for a system disk in
drive A.
2. If these files are found, disk bootstrap routine loads them into the memory
and transfers control to IO.SYS.
IO.SYS has two portions - BIOS and SYSINIT.SYS. BIOS contains the code of the resident device drivers and is loaded above the IVT. SYSINIT.SYS is loaded above BIOS, and control is actually transferred to this portion by the disk bootstrap routine. During its execution, it:
1. Looks for the file CONFIG.SYS, loads it into memory and executes it. During the execution of CONFIG.SYS, installable device drivers are loaded and initialized. If CONFIG.SYS does not exist, some default values are used.
MSDOS.SYS is also called the DOS kernel. When it gets control, it:
1. Sets up its internal tables.
2. Makes interrupt vector entries.
3. Initializes all the internal device drivers.
The partition table is loaded from the first physical sector of the hard disk.
PARTITION TABLE:
The information kept at side 0, track 0, sector 1 of the hard disk. It keeps a record of all the partitions made on the disk, such as each partition's size, starting and ending location, and active flag (used in the booting process).
The partition table identifies the active partition, whose boot sector is then used. The ROM bootstrap routine reads the disk bootstrap routine from the first logical sector of the hard disk. (The disk bootstrap routine then checks for a copy of MS-DOS; if it is missing, it displays the message "NON-SYSTEM DISK".)
[Memory map: ROM at the top of the address space; in RAM, from top to bottom: DOS kernel, SYSINIT.SYS, BIOS, IVT.]
SYSINIT.SYS:
It is not permanently memory-resident; everything is done through the initialization code. It first checks the IVT, the FAT and the interrupt vectors; at the end, control comes to SYSINIT.SYS and initialization is carried out.
CONFIG.SYS:
It controls the disk buffers (information about files: bits, date),
the file control blocks (FCBs),
and the installable drivers.
SYSINIT then loads the shell (COMMAND.COM): the MS-DOS EXEC function loads the shell, SYSINIT.SYS is discarded, AUTOEXEC.BAT is run, and the C prompt is displayed.
[Memory map: ROM at the top of the address space; in RAM, from top to bottom: transient module of COMMAND.COM, resident module of COMMAND.COM, installable drivers, FCBs, disk buffers, DOS kernel, BIOS, interrupt vector table.]
[Memory map: ROM at the top of the address space; in RAM, from top to bottom: SYSINIT.SYS, installable drivers, FCBs, disk buffers, DOS kernel, BIOS, IVT.]
PROCESS MANAGEMENT
PROCESS STATES
In a single-processor system, only one instruction from one program can be
executed at any one instant, although the processor may be able to execute multiple
programs over a period of time, a facility known as multiprogramming. The
operating system manages these multiple programs by keeping all or part of each
of these processes in memory and switching control between them to give an
impression of simultaneous execution. The process that is currently using the
processor is said to be in the 'running' state. The rest of the processes will be in
some state other than the running state. The possible states for a process are as
follows:
Running: The process that is currently being executed. The number of running
processes will at most equal the number of processors in the system.
Ready: Processes that reside in main memory and are prepared to execute
when given the opportunity.
Blocked: A process that is in main memory but cannot execute when given the
control of the processor until some event (such as completion of an
I/O operation) occurs.
New: A process that has just been created but not yet admitted to the pool
of executable processes by the OS.
Exit: A process that has been released from the pool of executable
processes by the OS, either normally or abnormally.
In operating systems that support process swapping, two additional states exist: Ready/Suspended and Blocked/Suspended.
Process: a program in execution, together with the program counter, the registers and all other information required for it to execute. Typical general-purpose registers are AX, BX, CX and DX.
The program counter holds the address of the next instruction of the program to be executed.
The idea of processing originated from:
1. Monoprogramming (DOS) (PCs)
2. Multiprogramming (UNIX, Windows)
Blocked state: waiting for some input or output function; the process waits, for example, for required input to arrive.
Multiprogramming and multitasking are one and the same.
The main state transitions are:
1. Dispatch (ready to running).
2. Timer runout (running to ready), block (running to blocked) and wakeup (blocked to ready).
Array of structures (example): a Student structure with the fields NAME, ROLL NUMBER, SUBJECT, GRADE and TOTAL; an array of such structures holds one record per student.
LOCK VARIABLE:
The concept is to use a flag variable as a lock. When a process wants to enter its
critical section, it checks the lock variable. If it is TRUE (i.e. locked), some other
process is using the shared memory area, so it has to wait. If it is FALSE (i.e. not
locked), it sets the lock to TRUE and enters its critical section. If any other process
checks the lock at this moment, it will find it TRUE and will wait outside its own
critical section. When a process reaches the end of its critical section, it sets the
lock to FALSE and enters its non-critical section. The idea is quite attractive but
does not guarantee perfect mutual exclusion. Let's see some particular cases.
What if a process sets the lock to TRUE and never places FALSE in it again? It is
just like disabling interrupts, in which the user is trusted with the security of the
system. Suppose a process has the following code to check and set the lock
variable.
Suppose process A executes line number 10 (the check). It found the lock variable
FALSE, but just as it was going to process the next line it was suspended. Process B
got the CPU and executed the same code to check the lock. As process B found the
flag FALSE, it set the lock to TRUE and entered its critical section. When process B
was suspended, process A resumed execution from line number 20, placed TRUE in
the lock (which was already TRUE), and so it also entered its critical section and a
race condition arose.
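The failed interleaving just described can be replayed deterministically in code: the check ("line 10") and the set ("line 20") are separate steps, and a preemption between them lets both processes in. A minimal C sketch, with illustrative names:

```c
#include <assert.h>
#include <stdbool.h>

static bool lock_var;      /* the shared lock variable */

/* Step 1: the check at "line 10" -- true if the lock looked free. */
static bool check_lock(void) { return lock_var == false; }

/* Step 2: the set at "line 20" -- acquire without re-checking. */
static void set_lock(void) { lock_var = true; }

/* Replay the bad interleaving: A checks, is preempted, B checks and
 * acquires, then A resumes and acquires too. Returns how many
 * processes ended up inside the critical section at once. */
static int replay_race(void) {
    int in_critical = 0;
    lock_var = false;
    bool a_saw_free = check_lock();                    /* A: line 10, sees FALSE */
    /* --- A is suspended here; B runs the same code to completion --- */
    if (check_lock()) { set_lock(); in_critical++; }   /* B enters */
    /* --- A resumes at line 20, trusting its stale check --- */
    if (a_saw_free)   { set_lock(); in_critical++; }   /* A enters too */
    return in_critical;
}
```

The replay ends with two processes inside the critical section at once, which is exactly the race condition described above.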
DEKKER’S ALGORITHM :
a) First solution (taking turns / strict alternation):
Process one:
while (true)
{
    while (process_no == 2) ;   /* busy wait */
    critical_sec_one();
    process_no = 2;
}
(Process two is symmetric: it busy-waits while process_no == 1, runs critical_sec_two(), then sets process_no back to 1.)
Explanation:
Mutual exclusion is guaranteed, but at a high price. First, process one enters the
critical section, as the process-number variable is initialized to one; until process
one has executed its critical section, process two must busy-wait. Process two may
proceed only after process one enters and leaves its critical section (because the
global variable is then assigned the value 2). The same happens in reverse if
process one wants to re-enter its critical section while it is process two's turn.
Thus the processes must enter and leave their critical sections in strict alternation.
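Strict alternation can be replayed in a single thread, with each round standing in for one pass through a process's loop (the names process_no, round_one and round_two are illustrative):

```c
#include <assert.h>

static int process_no = 1;    /* whose turn it is: 1 or 2 */

/* One round for process 1: busy-wait until it is our turn, run the
 * critical section (here: record our id in a log), then pass the turn. */
static void round_one(int log[], int *n) {
    while (process_no == 2) { /* busy wait */ }
    log[(*n)++] = 1;          /* critical_sec_one() */
    process_no = 2;
}
static void round_two(int log[], int *n) {
    while (process_no == 1) { /* busy wait */ }
    log[(*n)++] = 2;          /* critical_sec_two() */
    process_no = 1;
}

/* Single-threaded replay: entries into the critical section strictly
 * alternate 1, 2, 1, 2, ...  Returns 1 if the alternation held. */
static int strict_alternation_ok(void) {
    int log[6], n = 0;
    for (int i = 0; i < 3; i++) { round_one(log, &n); round_two(log, &n); }
    for (int i = 0; i < 6; i++)
        if (log[i] != (i % 2 ? 2 : 1)) return 0;
    return 1;
}
```

The replay also makes the drawback visible: if process two never takes its round, process one spins forever in its busy-wait loop, even though process two is nowhere near its critical section.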
SECOND SOLUTION:
In the first solution there was only one global variable, which gave rise to certain problems (e.g. slowing the processes down). In this solution there are two global variables instead of one (process1_in and process2_in). process1_in is true if process one is in its critical section, and process2_in is true if process two is in its critical section. Once a process is in its critical section, it does not allow the other process to enter its own critical section until it leaves, i.e. the other process is held in a busy-wait loop.
Process one:
while (true)
{
    while (process2_in == true) ;   /* busy wait */
    process1_in = true;
    critical_sec_one();
    process1_in = false;
}
(Process two is symmetric, testing process1_in and setting process2_in.)
EXPLANATION:
Initially process one wants to enter its critical section. Before entering, it sets process1_in to true. Now process two remains locked in a busy wait as long as process1_in is true. Eventually process one leaves its critical section and sets process1_in to false.
At that moment process two sets process2_in to true and enters its critical section.
Again, the solution is not perfect. Consider the following example.
Initially, both process1_in and process2_in are false. Process one checks process2_in and finds it false. At this moment, the CPU suspends process one and switches over to process two. Process two checks process1_in, finds it false, and enters its critical section. When process one resumes execution, it proceeds to the statement after the while loop, sets process1_in to true, and enters its critical section. Both processes are in their critical sections simultaneously, so the second solution does not even guarantee mutual exclusion.
THIRD SOLUTION:
In the second solution, the basic problem was that between the moment a process determines in the while-loop test that it can go ahead and the moment it sets its flag to say that it is in its critical section, there could be a process switch, allowing the other process to test and set the flags and slip into its critical section. Therefore, once one process attempts the while test, it must be assured that the other process cannot proceed past its own while test. The third solution attempts to resolve this by having each process set its own flag before performing the while test.
Process one:
while (true)
{
    process1_in = true;
    while (process2_in == true) ;   /* busy wait */
    critical_sec_one();
    process1_in = false;
}
(Process two is symmetric.)
One problem is solved but another is introduced. Consider a situation where process one gets time from the CPU, sets its process1_in flag to true, and is then suspended by the operating system, which switches control to process two. Process two also sets its flag to true and performs the while test. It simply busy-waits, as process1_in was set to true by process one (though process one never entered its critical section). When control comes back to process one, it too gets into its while loop and runs forever. Process one will always wait for the process2_in flag to become false, and process two will always wait for the process1_in flag to become false. They are caught in a deadlock.
FOURTH SOLUTION:
FAVOURED PROCESS:
The favoured process is the process whose turn it is to enter the critical section when both are attempting to enter.
The problem with the third solution was that each process could get locked up in its respective while loop. We need a way to break out of these loops. Solution four accomplishes this by forcing each looping process to set its flag to false repeatedly, for brief periods. This allows the other process to proceed past its while test with its own flag still true.
Process one ()
Mutual exclusion is guaranteed and deadlock cannot occur, but another devastating problem could develop, named indefinite postponement: a process may have to wait indefinitely long to get into its critical section, as no assumption can be made about the relative speeds of the processes. In just a few lines of code, Dekker's algorithm handles two-process mutual exclusion elegantly without requiring special hardware instructions.
Process one indicates its desire to enter its critical section by setting its flag on. It then proceeds to the while test, where it checks whether process two also wants to enter. If process two's flag is off, process one skips the body of the while loop and enters its critical section.
Process one:
Suppose, when process one performs the while test, it discovers that process two's flag is set. This forces process one into the body of its while loop. Here it looks at the variable favoured_process, which is used to resolve conflicts and avoid indefinite postponement (delay). If process one is the favoured process, it skips the body of the if() and repeatedly executes the while test, waiting for process two to turn off its flag, which, eventually, it must do.
If process one determines that process two is the favoured process, it is forced into the body of the if(), where it sets its own flag off and then busy-waits in the following while loop as long as process two remains the favoured process. By turning off its flag, process one allows process two to enter its own critical section.
Eventually process two will leave its critical section, set the favoured process back to process one, and turn off its flag. Process one may now pass its inner while and set its own flag on. Process one then executes the outer while test. If process two's flag (which was recently set off) is still off, process one enters its critical section. If, however, process two has quickly tried to re-enter its critical section, then process two's flag will be on, and process one is once again forced into the body of the outer while. This time, however, process one is in the good books of the system, as it is the favoured process (when process two left its critical section it set the favoured process back to process one). So process one skips the body of the if and repeatedly executes the outer while until process two humbly sets its flag off, allowing process one to enter its critical section.
This guarantees mutual exclusion without any fear of indefinite postponement.
PETERSON’S SOLUTION:
This method combines the idea of taking turns with the idea of lock variables.
Assume there is a global variable named "turn" and a global array named "interested" with one entry for each process; initially all the entries in the array are set to false.
#define FALSE 0
#define TRUE  1
int turn;                       /* whose turn is it? */
int interested[2];              /* initially all FALSE */
enter_critical_section(int process)      /* process is 0 or 1 */
{
    interested[process] = TRUE;
    turn = process;
    while (turn == process && interested[1 - process] == TRUE) ;  /* busy wait */
}
leave_critical_section(int process)
{
    interested[process] = FALSE;
}
To enter its critical section, each process calls enter_critical_section() with its own process number (0 or 1), which causes it to wait, if need be, until it is safe to enter.
When a process is done with its critical section, it calls leave_critical_section(), again with its own process number.
Suppose process 0 calls enter_critical_section(). As process 1 is not using the shared memory area, its entry in the array interested will be false. Process 0 will return immediately, as the second condition in the while loop is false even with turn set to 0. If process 1 now calls enter_critical_section(), it will hang in the loop until interested[0] becomes false, which happens when process 0 finishes its critical section and calls leave_critical_section().
Now assume both processes call enter_critical_section() almost simultaneously. Both will store their process number in turn; whichever stores later will overwrite the previous value (suppose process 1 does), so turn = 1.
When both processes reach the while loop, process 0 checks the first condition (turn == 0); as it is false, it immediately exits the loop and enters its critical section, whereas process 1 waits. This guarantees mutual exclusion.
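Assuming a POSIX system with C11 atomics, Peterson's algorithm can be exercised with two real threads incrementing a shared counter. Sequentially-consistent atomics stand in for the plain loads and stores above, which modern compilers and hardware would otherwise reorder; all names here are illustrative:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define ROUNDS 100000
static atomic_int interested[2];   /* initially 0 (FALSE) */
static atomic_int turn;
static int shared_count;           /* protected by Peterson's algorithm */

static void enter_region(int p) {
    int other = 1 - p;
    atomic_store(&interested[p], 1);
    atomic_store(&turn, p);
    while (atomic_load(&turn) == p && atomic_load(&interested[other]))
        ;                          /* busy wait */
}
static void leave_region(int p) { atomic_store(&interested[p], 0); }

static void *worker(void *arg) {
    int p = *(int *)arg;
    for (int i = 0; i < ROUNDS; i++) {
        enter_region(p);
        shared_count++;            /* critical section */
        leave_region(p);
    }
    return 0;
}

/* Run both processes; with mutual exclusion the final count is 2*ROUNDS. */
static int run_peterson(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    shared_count = 0;
    pthread_create(&t0, 0, worker, &id0);
    pthread_create(&t1, 0, worker, &id1);
    pthread_join(t0, 0);
    pthread_join(t1, 0);
    return shared_count;
}
```

Without the enter_region()/leave_region() calls, lost updates would make the final count fall short of 2*ROUNDS on most runs.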
THE TSL INSTRUCTION:
Dekker's and Peterson's solutions are software solutions to the mutual exclusion problem. The TSL (test-and-set-lock) instruction provides a hardware solution. The idea is to have a single instruction that reads a variable, stores its value in a safe place, and sets the variable to a certain value, completing all of this without interruption. The instruction TSL(a, b) reads the value of flag b, copies it into a, and sets b to true, all within the span of a single uninterruptible instruction. The instruction may be applied in the following manner:
process_one()
{
    while (true)
    {
        one_cannot_enter = true;
        while (one_cannot_enter)
            tsl(one_cannot_enter, active);
        critical_section_one();
        active = false;
        non_critical_section_one();
    }
}
process_two()
{
    while (true)
    {
        two_cannot_enter = true;
        while (two_cannot_enter)
            tsl(two_cannot_enter, active);
        critical_section_two();
        active = false;
        non_critical_section_two();
    }
}
The active flag is true if any process is in its critical section and false otherwise. Process one bases its decision to enter its critical section on its local flag one_cannot_enter. It sets one_cannot_enter to true and repeatedly performs the TSL instruction to test the global variable active. If process two is not in its critical section, active will be false. The TSL instruction will store this value in the one_cannot_enter flag and set active to true. The while test will terminate, as its condition becomes false, and process one will enter its critical section. Because the active flag has been set to true, process two cannot enter its critical section and busy-waits in its while loop.
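A software stand-in for TSL on modern systems is C11's atomic_exchange, which likewise reads the old value and writes the new one in a single indivisible step. A minimal sketch (the names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_bool active;   /* true while some process is in its critical section */

/* TSL(a, b) analogue: atomically read the flag, set it to true, and
 * return the old value -- one indivisible read-and-set step. */
static _Bool tsl(atomic_bool *flag) {
    return atomic_exchange(flag, 1);
}

static void enter_region(void) {
    while (tsl(&active))     /* old value true => someone else holds it */
        ;                    /* busy wait */
}
static void leave_region(void) {
    atomic_store(&active, 0);
}
```

The first tsl() on a free flag returns false (so the caller enters); every tsl() after that returns true until leave_region() clears the flag.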
The solution to the above-mentioned wastage of CPU time is to devise a method in which, when a process is not allowed to enter its critical section, it blocks itself instead of wasting CPU time. The process may request to go to SLEEP, instead of busy-waiting, until another process gives a WAKEUP call. As an example of how these primitives work, consider the following problem.
The trouble arises when the producer wants to put a new item in the buffer but it is already full. The solution is for the producer to go to sleep, to be awakened when the consumer has removed one or more items from the buffer. Similarly, if the consumer wants to take an item out and the buffer is empty, it goes to sleep until the producer puts one or more items into the buffer.
To keep track of the number of items in the buffer, a variable named count is maintained. The producer first checks count; if the buffer is full it goes to sleep, otherwise it adds an item and increments count. The consumer proceeds in the same manner: it checks count, and if it is zero it goes to sleep; if not, it takes an item out and decrements count. Each process also checks whether the other one is sleeping; if so, it gives a wakeup call at the appropriate point, judged by the value of count.
PRODUCER:
while (true)
{
    get_new_item();
    if (count == MAX) SLEEP;
    put_item_in_buffer();
    incr(count);
    if (count == 1) WAKEUP(consumer);
}

CONSUMER:
while (true)
{
    if (count == 0) SLEEP;
    remove_item_from_buffer();
    decr(count);
    if (count == MAX - 1) WAKEUP(producer);
    use_item();
}
Initially the buffer was empty. The consumer checked count, found it 0, and was just about to go to sleep when it was suspended. The producer started running, checked count, found it 0, placed an item in the buffer and gave a wakeup call to the consumer. As the consumer was not yet asleep, this wakeup call was wasted. When the consumer runs again, it resumes from the point where it had found count to be zero, and therefore goes to SLEEP. The producer will never learn of this mishap, as it only gives a wakeup call when count equals one. Ultimately the producer fills up the buffer and also goes to SLEEP. Both sleep forever, with no one to wake them up: the system is caught in a deadlock.
The solution to such a situation is simple. If that wakeup call (which was sent to the consumer ahead of time) is not wasted but stored somewhere, it can be used later, when the consumer tries to go to sleep. The consumer's sleep call is cancelled by the early wakeup call, so the consumer remains awake and, in turn, can give a wakeup call to the producer. Thus the whole situation can easily be avoided.
Usually a flag is reserved for such additional wakeup calls. When a wakeup call is given for a process that is not yet asleep, the flag is set. Later, when the process tries to sleep, it checks the flag; if it is set, the process clears it and stays awake (just as if it had gone to sleep and immediately been woken up).
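The wakeup-waiting flag can be sketched as follows; sleep_call() and wakeup() are illustrative stand-ins for the SLEEP and WAKEUP primitives:

```c
#include <assert.h>
#include <stdbool.h>

static bool wakeup_waiting;   /* set when WAKEUP arrives before SLEEP */
static bool asleep;

/* WAKEUP: if the target is not asleep yet, bank the call in the flag
 * instead of losing it. */
static void wakeup(void) {
    if (asleep)
        asleep = false;
    else
        wakeup_waiting = true;
}

/* SLEEP: a banked wakeup cancels the sleep -- clear the flag and stay
 * awake, exactly as if the process had slept and been woken at once. */
static void sleep_call(void) {
    if (wakeup_waiting)
        wakeup_waiting = false;
    else
        asleep = true;
}
```

Replaying the lost-wakeup scenario with these primitives, the early wakeup() is banked, the consumer's subsequent sleep_call() is cancelled, and it stays awake.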
DEADLOCKS:
A process in a multiprogramming system is said to be in a state of deadlock (or deadlocked) if it is
waiting for a particular event that will never occur. A system is said to be deadlocked if one or more
processes are deadlocked.
Perhaps the simplest way to create a deadlocked process is this:
revenge()
{
    while (t == 1)
        ;                /* t is never changed, so the loop never ends */
}
Here the process is waiting for an event (t becoming unequal to 1) which will never happen. The process running the procedure revenge() will execute the while loop forever and never terminate, resulting in a deadlock.
In multiprogrammed operating systems, resource sharing is one of the primary goals. When resources are shared among a population of processes, each of which maintains exclusive control over the particular resources allotted to it, it is possible for deadlocks to develop. A simple example is illustrated in figure nn.nn.
The figure shows two processes and two resources. An arrow from a process to a resource indicates that the process has requested that resource but has not yet acquired it. The diagram displays a deadlocked system: process A holds resource 1 and needs resource 2 to continue, while process B holds resource 2 and needs resource 1 to continue. Each process is waiting for the other to release a resource, which it will not do until the other frees its own resource. This circular wait is a characteristic of deadlocked systems.
Four conditions are necessary for a deadlock to occur:
1. Processes claim exclusive control of the resources they require (mutual exclusion condition).
2. Processes hold resources already allocated to them while waiting for additional resources (wait-for condition).
3. Resources cannot be removed from the processes holding them until the resources are used to completion (no-preemption condition).
4. A circular chain of processes exists in which each process holds one or more resources that are requested by the next process in the chain (circular wait condition).
Preemptible resources are resources which can be taken away from a process by another process. Resources like the CPU or main memory are preemptible resources.
The four major approaches to dealing with deadlocks are:
1. Deadlock prevention.
2. Deadlock avoidance.
3. Deadlock detection.
4. Deadlock recovery.
DEADLOCK PREVENTION:
This is the most frequent approach used by operating system designers for dealing with deadlocks. In this method, the major concern is to remove any possibility of deadlocks occurring.
In 1968, Havender concluded that if any of the four necessary conditions is denied, it is impossible for a deadlock to occur. The following strategies for denying the various necessary conditions were suggested:
1. Each process must request all its required resources at once and cannot proceed until all have been granted.
2. If a process holding certain resources is denied a further request for a resource, that process must release its original resources.
3. An ordering is imposed on resource types: if a process has been allocated resources of a given type, it may subsequently request only resources of types later in the resource ordering list.
The first strategy requires that all the resources a process will need must be requested at once. The system must grant them on an "all or none" basis. If the complete set of resources required by the process is available, the system allocates them to the requesting process, which is then allowed to proceed. If the complete set is not available, the process must wait; while it waits, it may not hold any resources. Thus the wait-for condition is denied and deadlock cannot occur.
Havender's third strategy denies the possibility of circular wait. Because all the resources are uniquely numbered, and because processes must request resources in ascending order, it is impossible for a circular wait to develop. If process A is holding resource number 1 and process B is holding resource number 2 then, although process A can request resource number 2, process B can never request resource number 1, as requests must always be for higher-numbered resources. This prevents a deadlock from happening.
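Havender's ordering rule can be captured in a small admission check: a request is allowed only if the requested resource's number is higher than every resource the process already holds. A sketch with illustrative names:

```c
#include <assert.h>

/* Resource-ordering check: requests must be in strictly ascending
 * order of resource number, so no circular wait can ever close. */
static int request_allowed(const int held[], int n_held, int requested) {
    for (int i = 0; i < n_held; i++)
        if (held[i] >= requested)
            return 0;   /* out-of-order request: denied */
    return 1;           /* granted */
}
```

In the A/B example above, A (holding 1) may ask for 2, but B (holding 2) is denied a request for 1, so the cycle can never form.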
DEADLOCK AVOIDANCE:
(Deadlock detection, by contrast, is the process of actually determining that a deadlock has occurred and of identifying the processes and resources involved.) In the avoidance strategy, the system is said to be in a safe state if it can allow all current users to complete their jobs within finite time by carefully manipulating the allocation of resources (i.e. by watching the current behaviour of each process: the frequency of its resource-allocation requests, the frequency of its resource releases, etc.). Processes still claim exclusive use of the resources they require, are allowed to hold currently allocated resources while requesting and waiting for additional resources, and resources may not be preempted from a process. Users ease the load on the system by requesting one resource at a time. The system may grant or deny each request. If the request is denied, the process holds its allocated resources and waits a finite time until that resource is granted. The system grants only those requests that result in a safe state; a request that would result in an unsafe state is denied until it can eventually be satisfied safely. The system is thus always maintained in a safe state.
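The safe-state test at the heart of deadlock avoidance can be sketched in the style of Dijkstra's banker's algorithm, here for a single resource type (the function name and layout are illustrative):

```c
#include <assert.h>

/* Safe-state test for one resource type (up to 16 processes): the state
 * is safe if the processes can finish in some order, each one's
 * remaining need being met from the free units plus whatever earlier
 * finishers return when they complete and release their allocation. */
static int state_is_safe(int nproc, int free_units,
                         const int alloc[], const int need[]) {
    int done[16] = {0}, finished = 0, progress = 1;
    while (progress) {
        progress = 0;
        for (int p = 0; p < nproc; p++)
            if (!done[p] && need[p] <= free_units) {
                free_units += alloc[p];  /* p completes and releases */
                done[p] = 1;
                finished++;
                progress = 1;
            }
    }
    return finished == nproc;   /* 1 = safe, 0 = unsafe */
}
```

Before granting a request, the system would tentatively apply it and call a check like this; the request is granted only if the resulting state is still safe.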
DEADLOCK DETECTION:
If a resource-allocation graph can be reduced by all its processes (i.e. every process's requests can be granted in some order, each process then running to completion and releasing its held resources), then there is no deadlock. If the graph cannot be reduced by all its processes, the "irreducible" processes constitute a deadlock in the graph. When the system detects such a graph, it can call deadlock recovery routines to remove the deadlocked processes and release the resources allocated to them.
DEADLOCK RECOVERY
Once a system has become deadlocked, the deadlock must be removed by removing one or more of the necessary conditions. Usually several processes will lose some or all of the work they have done so far, but this is a small price to pay compared with leaving the deadlock in place.
In current systems, recovery is usually performed by forcibly removing a process from the system and reclaiming its resources. The removed process is ordinarily lost, but the remaining processes may now be able to continue. Sometimes it is necessary to remove several processes until sufficient resources have been reclaimed to allow the remaining processes to finish. Another, more desirable approach to deadlock recovery is an effective suspend/resume mechanism, which allows the system to put a temporary hold on processes and later resume them without loss of their work. For example, it may become necessary to shut down a system temporarily and start it up again from exactly the same point without loss of productive work. It requires conscious effort on the part of the system developers to incorporate such suspend/resume features.
PROCESS SCHEDULING
When there are several runnable processes, the operating system must decide which one to run first and for how long. The part of the operating system responsible for this decision is known as the scheduler, and the algorithm it uses is known as the scheduling algorithm. Before looking at different scheduling algorithms, we should think about what the scheduler is trying to achieve. The scheduling algorithm must provide some basic facilities, which are as follows:
1. Fairness: Make sure that every process gets a fair share of the CPU and no process suffers indefinite postponement.
2. Efficiency: Keep the CPU busy 100% of the time.
3. Response time: Minimize response time for every user.
4. Turnaround: Minimize the time batch users must wait for output.
5. Maximum throughput: Service the largest possible number of processes per unit time.
[Figure: processes A to F in the ready list]
PRIORITY SCHEDULING
Users on a system can have different levels of importance. The solution is a priority-scheduling algorithm. In this strategy, the scheduler maintains a list in which processes are queued according to their priority. A process with priority 0 will be serviced before a process with priority 1. Similarly, a process with priority 1 will be serviced before a process with priority 2, and so on. In short, each process is assigned a priority, and processes with the highest priority are serviced first.
Now suppose the scheduler decides on a process switch after checking priorities, and a process X having the highest priority is runnable; X will then be given the CPU. When its quantum expires, the scheduler checks the priorities again; X still has the highest one, so X is serviced again. Process X may continue to run forever, raising the fear of indefinite postponement of the processes with lower priorities.
To prevent such problems, the scheduler may decrease the priority of process X with every clock tick (or clock interrupt); when its priority becomes less than that of the next process on the ready-to-run list, a process switch occurs. Priorities are assigned by the system when the user logs in, but a process can reduce its own priority voluntarily. On UNIX systems there is a nice command which allows a user to reduce a process's priority in order to be nice to other processes. Nobody ever uses it.
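The priority-decay idea above can be sketched in a few lines. This is a minimal illustration, assuming a lower number means higher priority; the process names, initial priorities, and tick count are invented:

```python
import heapq

def schedule(processes, ticks):
    """processes: list of (priority, name); lower number = higher priority.
    Each tick, run the highest-priority process, then lower its priority
    (increment its number) so other processes eventually get the CPU."""
    heap = list(processes)
    heapq.heapify(heap)
    order = []
    for _ in range(ticks):
        prio, name = heapq.heappop(heap)   # pick the highest-priority process
        order.append(name)
        heapq.heappush(heap, (prio + 1, name))  # decay its priority after running
    return order

# X starts with the highest priority but cannot monopolize the CPU forever:
print(schedule([(0, "X"), (2, "Y")], 5))  # ['X', 'X', 'X', 'Y', 'X']
```

Without the `prio + 1` decay, X would be chosen on every tick and Y would be postponed indefinitely.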
MULTIPLE QUEUES
One of the earliest priority schedulers was used in CTSS, designed for the IBM 7094. CTSS had the problem that process switching was very slow, as the IBM 7094 could hold only one process in memory at a time. Each process swap meant writing the current process out to the disk and loading the next one into memory. The CTSS designers quickly realized that it was more efficient to give CPU-bound processes a large quantum once in a while rather than giving them small quanta frequently. So a solution was devised in which process switching was greatly reduced.
Priority classes were used. Processes in the highest class were run for one quantum, processes in the next class were run for two quanta, the next for four quanta, and so on. Whenever a process used up all the quanta allocated to it, it was moved down one class.
For example, consider a process that needed to compute for 100 quanta. It would initially be given one quantum, then swapped out; the next time it would be given two quanta before being swapped out to the disk. On succeeding runs it would be given 4, 8, 16, 32 and 64 quanta. Although it would use only 37 of the last 64, only 7 swaps would be needed instead of 100 with a pure round robin algorithm. As the process sank deeper and deeper into the priority queues, it would be run less and less frequently (but when it ran, it got more and more quanta). This saves CPU time for short interactive processes.
Every process always started in the highest class, no matter how many quanta it would require, when it was initiated.
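The arithmetic behind the 7-swaps claim can be checked directly; this is a sketch of the quantum-doubling rule only, not of CTSS itself:

```python
def runs_needed(total_quanta):
    """Count the runs (swaps) a process needs when its allowance
    starts at 1 quantum and doubles after every run."""
    runs, quantum, used = 0, 1, 0
    while used < total_quanta:
        used += quantum     # the process runs for its current allowance
        runs += 1           # one swap in, one run
        quantum *= 2        # moved down one class: allowance doubles
    return runs

# 1 + 2 + 4 + 8 + 16 + 32 + 64 = 127 >= 100, reached after 7 runs:
print(runs_needed(100))  # 7
```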
SHORTEST JOB FIRST (SJF)
Suppose there were four jobs to run, with A requiring 8 seconds of CPU time, B requiring 2, C requiring 3, and D requiring 1 second. They will be executed in the following pattern, as displayed in figure nnn.nnn.
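Assuming all four jobs arrive together, the SJF order and its gain over first-come-first-served can be worked out as follows (the FCFS comparison is illustrative, not from the text):

```python
jobs = {"A": 8, "B": 2, "C": 3, "D": 1}  # CPU seconds required

def avg_turnaround(order):
    """Average turnaround time when all jobs arrive at time 0."""
    elapsed, total = 0, 0
    for name in order:
        elapsed += jobs[name]   # this job finishes at 'elapsed'
        total += elapsed        # turnaround = finish time - arrival (0)
    return total / len(order)

sjf  = sorted(jobs, key=jobs.get)   # shortest first: ['D', 'B', 'C', 'A']
fcfs = list(jobs)                   # submission order: ['A', 'B', 'C', 'D']
print(sjf, avg_turnaround(sjf))     # ['D', 'B', 'C', 'A'] 6.0
print(avg_turnaround(fcfs))         # 11.25
```

Running the short jobs first cuts the average turnaround nearly in half for this job mix.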
The obvious problem with SJF is that it requires precise knowledge of how long a job or process will run, and this information is not usually available. The best SJF can do is to rely on user estimates of run times. In production environments where the same jobs run regularly, it may be possible to provide reasonable estimates. But in development environments, users rarely know how long their programs will execute. Another absurdity in this algorithm is the possibility of indefinite postponement, as a process requiring a longer time period may never be served if processes with short run-time requirements keep on appearing.
If the service time required by a process is 5 seconds and it has had to wait 10 seconds to get served, then its priority will be (10 + 5) / 5 = 3; and if a process requires 2 seconds and has had to wait 3 seconds to be served, then its priority will be (3 + 2) / 2 = 2.5. As we can see, the process requiring the longer time period does not have to wait long to be served.
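This priority rule is the response-ratio calculation, priority = (waiting time + service time) / service time, reproduced here for the two cases in the text:

```python
def response_ratio(wait, service):
    """Priority under the response-ratio rule: grows with waiting time,
    so long jobs cannot be postponed indefinitely."""
    return (wait + service) / service

print(response_ratio(10, 5))  # (10 + 5) / 5 = 3.0
print(response_ratio(3, 2))   # (3 + 2) / 2 = 2.5
```

The 5-second job, having waited longer, now has the higher priority (3.0 vs 2.5) and is served first.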
OPERATING SYSTEMS
HARDWARE Physical equipment of the computer. Hardware devices may be electronic, magnetic or mechanical, etc.
HARDWARE RESOURCES
1. CPU
2. I/O Devices
3. Bus Architecture
4. Main Memory
CPU The unit responsible for processing as well as controlling other resources.
I/O Devices: Equipment used for taking input for processing and producing the desired output.
BUS Channel or path (cables and circuits) used for transferring data and electrical signals between different components of the computer.
Processor Registers
High speed memory used in CPU for temporary storage of small
amounts of data, addresses and other necessary information during processing.
Some of these registers are accessible to the user while some are not.
Data Registers are those registers which are used to store data before or after
processing.
Condition-code Registers (Flag Registers) hold condition codes (bits) which are set by the hardware as the result of different operations. These registers are partially visible to the user.
The registers which are not accessible to the programmer are the Control
and Status Registers. Some of these may be accessible by machine instructions
executed in the operating system mode. The commonly used registers are:
MAR (Memory address register)Specifies the address in memory for the next
read or write.
MBR (Memory buffer register) Contains data to be written into the memory or
data read from the memory.
I/OAR (I/O address register) Specifies the address of a particular I/O device.
I/OBR (I/O buffer register) Used to exchange data between an I/O module
and the processor.
PC (program counter) Contains the address of an instruction to be
fetched.
IR (instruction register) Contains the instruction most recently fetched.
SYSTEM SOFTWARE
Programs which control the internal operations of the computer system, e.g. operating systems.
APPLICATION SOFTWARE
Programs which are used for specific purposes, e.g. word processors,
spread-sheets etc. They need system software to use hardware resources.
OPERATING SYSTEM
Operating system controls the execution of other programs. It consists of
instructions for the processor to manage other system resources. Two important
functions of the operating system are:
KERNEL MODE
Like other programs, the OS is also software executed by the processor. However, instructions in the OS are executed in a special operating mode, called kernel or supervisor mode. In this mode, certain privileged instructions can be executed which cannot be executed in the normal mode.
Program Execution: Functions like loading programs and data into the
memory and preparing other resources like I/O devices for use are also performed
by the OS.
Access to I/O Devices: Each I/O device has its own set of instructions for
operation. The details of these instructions are also provided by the OS.
Controlled Access to Files: Details of the file format as well as the I/O
device on which it is stored are also handled by the OS.
System Access: In case of shared systems, the OS also has to control access to
system as a whole as well as to specific system resources.
Error Detection & Response: Different types of hardware and software errors can occur while a system is working. The OS tries to resolve such errors or reduce their impact as much as possible.
INSTRUCTION
Group of characters that defines an operation to be performed.
CPU BOUND JOB The job which uses a high proportion of CPU time as
compared to I/O operations.
I/O BOUND JOB The job which uses a high proportion of time in I/O transfers
as compared to CPU processing.
LOADING Reading data from the secondary storage device into the main
memory of the computer.
LOADER Program which tells the processor how to read programs into main
memory of the computer for execution.
MULTIPROGRAMMING
Running two or more programs concurrently in the same computer. Each
program is allotted its own place in memory and its own peripherals, but all share
the CPU.
MULTIPROCESSING
Simultaneous execution of more than one process by more than one CPU under a common control.
BATCH PROCESSING
The type of processing in which more than one job is submitted to the computer as a group for processing. It is usually used for data which requires periodic processing, e.g. payroll, billing etc. In batch processing, there is a lack of interaction between the user and the job during execution.
REAL-TIME SYSTEMS
On-line processing systems which can receive and process data quickly
enough to produce output to control the outcome of an ongoing activity e.g. airline
reservation system, controlling a nuclear power plant.
DISTRIBUTED SYSTEMS
Distributed systems are used to distribute computation among several processors (called 'sites'). There are two types of distributed systems:
1. Tightly Coupled Systems
2. Loosely Coupled Systems
TIGHTLY COUPLED SYSTEMS
In tightly coupled distributed systems the processors share the memory and
the clock. Communication between the processors takes place through shared
memory.
Load Sharing: If a particular site is currently overloaded with jobs then some
of these jobs may be moved to other lightly loaded sites.
Reliability: If one site fails in a distributed system, the remaining sites may be
able to continue their operation.
BUFFERING
Overlapping the I/O of a job with its own computation. Each I/O device has its own buffer so that I/O calls cause only a transfer to or from the buffer.
BUFFER An area in memory which is used to store data temporarily before or after processing.
DOS uses the FORMAT command to format a disk. In this process, each side of the disk is divided into a specific number of 'tracks' and each track is further subdivided into a specific number of 'sectors'. A sector is usually capable of storing 512 bytes of information. The number of tracks per side and sectors per track depends upon the disk type. The storage capacity of a disk can be calculated using the formula:
Total capacity = no. of sides x tracks per side x sectors per track x sector size.
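As an illustration, applying the formula to a standard 3.5-inch 1.44 MB floppy (2 sides, 80 tracks per side, 18 sectors per track, 512-byte sectors):

```python
def disk_capacity(sides, tracks_per_side, sectors_per_track, sector_size=512):
    """Total capacity = sides x tracks per side x sectors per track x sector size."""
    return sides * tracks_per_side * sectors_per_track * sector_size

bytes_total = disk_capacity(2, 80, 18)
print(bytes_total, "bytes =", bytes_total / 1024, "KB")  # 1474560 bytes = 1440.0 KB
```

1,474,560 bytes is 1440 KB, which is why this format is marketed as "1.44 MB".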
During the process of formatting, DOS reserves the first few sectors of the disk for its own use.
Boot Sector DOS uses the first sector for placing the ‘boot record’. The boot
record contains information about the number of bytes per sector, sector per
cluster, sectors per track, total number of sectors and some other specifications.
Reserved Space For FAT (File Allocation Table) DOS uses the FAT to access the actual sectors on the disk. Following the boot record, DOS uses a few sectors for keeping at least two copies of the FAT. The number of sectors used for FATs varies with the size of the disk.
Reserved Space For Root Directory Following the FATs, DOS reserves some
sectors for root directory entries. Once FORMAT reserves space for root directory,
DOS cannot increase its size.
BOOTING PROCESS
Whenever a system is started or reset, the OS which has to control it performs some functions automatically before allowing the user to interact with the machine. These functions enable the computer to work properly. This whole process of preparing the computer for use is called the 'Booting Process'.
1. The ROM bootstrap routine is executed, which checks the hardware and locates the boot disk.
2. If the system is booted from a hard disk, it loads the 'partition table' of that disk and reads it to find the active partition. There is no partition table on floppy disks; therefore, in the case of floppy drives, it jumps directly to the boot sector.
3. From the boot sector, it loads the disk bootstrap routine and transfers
control to it.
Disk Bootstrap Routine is a small code which resides in the boot sector of the
bootable system disk. During the booting process, when it gets the control of the
CPU from the Rom Bootstrap Routine, it performs the following tasks:
1. It checks the first two directory entries of the disk. If these entries are not IO.SYS & MSDOS.SYS (in the case of MS-DOS), it displays the error message "Non-System Disk Or Disk Error" and waits for a system disk in Drive A.
2. If these files are found, disk bootstrap routine loads them into the memory
and transfers control to IO.SYS.
IO.SYS has two portions - BIOS and SYSINIT.SYS. BIOS contains the
code of resident device drivers and is loaded above IVT. SYSINIT.SYS is loaded
above BIOS and control is actually transferred to this portion by the disk bootstrap
routine. During its execution, it performs the following functions:
MSDOS.SYS is also called the DOS Kernel. When it gets control, it:
1. Sets up its internal tables.
2. Makes interrupt vector entries.
3. Initializes all the internal device drivers.
1. Looks for the file CONFIG.SYS in the root directory of the boot disk, loads it into memory and executes it. During the execution of CONFIG.SYS, installable device drivers are loaded and initialized. If CONFIG.SYS does not exist, some default values are used.
Partition Table
The information kept at side 0, track 0, sector 1 of the hard disk. It keeps
record of all the partitions made on the disk such as partition size, starting and
ending location and active flag (for booting process).
Device Drivers
Routines which are used to manipulate the hardware devices connected to
the computer. There are two types of device drivers-resident & installable.
Resident Device Drivers: Device drivers which are automatically loaded from
IO.SYS and permanently stored in RAM for later execution.
Installable Device Drivers: Device drivers which are loaded on explicit command from the user. These commands are usually placed in the CONFIG.SYS file.
PROCESS MANAGEMENT
PROCESS STATES
In a single-processor system, only one instruction from one program can be executed at any one instant, although the processor may be able to execute multiple programs over a period of time, a facility known as multiprogramming. The operating system manages these multiple programs by keeping all or part of each of these processes in memory and switching control between them to give an impression of simultaneous execution. The process that is currently using the processor is said to be in the 'running' state. The rest of the processes will be in some state other than the running state. The possible states for a process are as follows:
Running: The process that is currently being executed. The number of running processes will be at most equal to the number of processors in the system.
Ready: Processes that reside in main memory and are prepared to execute
when given the opportunity.
Blocked: A process that is in main memory but cannot execute until some event (such as the completion of an I/O operation) occurs.
New: A process that has just been created but not yet admitted to the pool
of executable processes by the OS.
Exit: A process that has been released from the pool of executable
processes by the OS, either normally or abnormally.
In operating systems that support process swapping, two additional states exist:
Ready/Suspend (a swapped-out process that is ready to run) and Blocked/Suspend (a swapped-out process that is also waiting for an event).
Reasons for Process Creation
The common reasons for process creation are as follows:
1. New Batch Job Whenever the OS encounters a new job in the batch
submitted to it, it creates a new process for it.
2. Interactive Log on A new user logs on to the system.
3. Created by OS to Provide a Service: The OS may create a process itself to provide a service to the user, e.g. for printing.
4. Spawned by Parent Process: An already existing process may itself create another process, in which case the new process is called the child of that process.
Reasons of Process Termination
The common reasons for process termination are as follows:
PROCESS IMAGE
Whenever a new process is created and entered into the ready queue, the OS
creates the following elements for its execution:
• Process Identification
• Process State Information
• Process Control Information
Process Identification
Information relating to process identification includes the identifier of the process itself, the identifier of its parent process, and the identifier of the user who owns it. In UNIX, for example, these identifiers are named pid, ppid & uid respectively.
INSTRUCTION EXECUTION
The processing required for a single instruction is called ‘instruction cycle’.
Its two basic steps are:
Fetch Cycle The processor reads an instruction from the main memory.
Execute Cycle The processor executes this instruction.
Program execution stops only when:
The instruction is in the form of a binary code. The first few bits of this code contain the 'op-code', which tells the processor what operation is to be performed. The remaining bits give the address of the location on which the operation is to be performed. The results of these operations can be temporarily stored in a register called the 'Accumulator'.
After executing one instruction, the processor reads the next address from the PC and loads the instruction at that location into the IR for execution. In this way, the program is executed one instruction at a time, in sequence. This sequence can only be changed by an instruction which tells the processor to load a new address into the PC.
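The fetch-execute cycle described above can be sketched for a toy one-accumulator machine; the op-codes and the three-instruction program are invented for illustration:

```python
LOAD, ADD, STORE, HALT = 1, 2, 3, 0  # hypothetical op-codes

def run(memory):
    pc, acc = 0, 0
    while True:
        ir = memory[pc]            # fetch cycle: read the instruction at PC
        pc += 1                    # PC now holds the address of the next instruction
        opcode, addr = ir          # decode: op-code plus operand address
        if opcode == LOAD:         # execute cycle: act on the op-code
            acc = memory[addr]
        elif opcode == ADD:
            acc += memory[addr]    # result kept temporarily in the accumulator
        elif opcode == STORE:
            memory[addr] = acc
        elif opcode == HALT:
            return memory

# Program: load mem[4], add mem[5], store the result in mem[6], halt.
mem = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), 10, 32, 0]
print(run(mem)[6])  # 42
```

A jump instruction would simply assign a new value to `pc`, which is the only way the sequence changes.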
The operations that are performed usually fall in the following four
categories:
INTERRUPTS
An interrupt is a signal which causes the hardware to transfer program
control to some specific location in the main memory, thus breaking the normal
flow of the program.
TYPES OF INTERRUPTS
Interrupts can be of the following four types:
1. Program Generated as a result of an instruction execution, such as
overflow, division by zero etc.
2. Timer Generated by a timer, which enables the OS to perform certain functions on a regular basis.
3. I/O Generated by an I/O controller, to get the attention of the CPU.
4. Hardware Failure Generated by a failure such as power failure.
After every ‘execute’ cycle, the processor checks for the occurrence of an
interrupt. In case of an interrupt, the processor suspends the execution of the
current program and executes an ‘interrupt handling routine’. This routine is
usually part of the OS. The interrupt handling routine determines the nature of the
interrupt and performs the required action. When the execution of this routine is
completed, the processor can resume execution of the interrupted program at the
point of interruption.
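The "determine the nature of the interrupt and perform the required action" step can be sketched as a dispatch table over the four interrupt types listed above; the handler names and return strings are hypothetical:

```python
def on_program():  return "fault reported to process"
def on_timer():    return "scheduler invoked"
def on_io():       return "I/O completion serviced"
def on_hw_fail():  return "system halted safely"

# One handler per interrupt type from the text.
HANDLERS = {
    "program": on_program,
    "timer": on_timer,
    "io": on_io,
    "hardware_failure": on_hw_fail,
}

def handle_interrupt(kind):
    # determine the nature of the interrupt and perform the required action
    return HANDLERS[kind]()

print(handle_interrupt("timer"))  # scheduler invoked
```

In a real OS this table is the interrupt vector: an array of handler addresses indexed by interrupt number.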
2. It checks the type of request and goes to the routine which will service that
request.
3. This routine prepares the I/O device for use and asks it to perform the
desired operation.
4. While I/O device is busy transferring data to or from the main memory, the
control may be given to another user program which is ready to execute.
5. The I/O device finishes with the current I/O operation and issues an
interrupt signal to the processor.
6. The processor checks for the interrupt signal after each ‘execute cycle’.
When it finds the signal, it sends an acknowledgment signal to the I/O
device.
9. The I/O device again starts exchange of data from the memory and the
processor starts executing the interrupted program by restoring the saved
information from the main memory.
This operation (point 5-9) may be repeated until the whole I/O transfer is
complete.
MULTIPLE INTERRUPTS
It is possible for an interrupt to occur while another is being processed. This situation can be handled in two ways:
1. While one interrupt is being processed, the OS can disable other interrupts from occurring. If an interrupt occurs during this time period, it is kept pending until interrupts are enabled again. This is a simple approach, but it does not consider priority or time-critical needs.
2. In the second approach, each interrupt is given a priority level such that an interrupt of higher priority can cause a lower-priority interrupt handler to be itself interrupted. After processing the higher-priority interrupt, control is given back to the lower-priority interrupt that was interrupted. In this way, the efficiency of the system is increased.
PROGRAMMED I/O
The most significant feature of programmed I/O is the lack of interrupts. In
this technique, an I/O operation proceeds as follows:
INTERRUPT-DRIVEN I/O
As opposed to programmed I/O, the interrupt-driven I/O uses interrupts
during I/O operations.
In programmed I/O, the processor has to wait needlessly for the I/O device
to send or receive more data which slows down the performance of the system.
Interrupt driven I/O is used to utilize this wasted time.
In interrupt-driven I/O, the processor issues the I/O command to the appropriate device controller and, instead of waiting for the I/O operation to complete, starts doing some other work. It does not check the device controller periodically for completion of the operation. When the device finishes its task, it signals an interrupt to the processor. The processor checks for the occurrence of an interrupt during each instruction cycle and, on finding one, responds to the I/O device. According to the requirement, it may ask the device to transfer more data or tell it that the whole I/O transfer is complete.
DMA (DIRECT MEMORY ACCESS)
Even with interrupt-driven I/O, the processor is still involved in every word transferred. To counter this situation, the DMA technique is used. In this technique, the device controller transfers an entire block of data at a time without involving the CPU.
Thus the interrupt is generated per block of transfer instead of per word. In
order to use DMA technique, the DMA controller is given the following
information:
• Type of operation
• Address of I/O device
• Starting memory location to read from or write to
• Number of words to be read or written
With this information, the device controller is enabled to transfer the entire
block of data, one word at a time, directly to or from the memory. The processor is
involved only at beginning and end of the transfer.
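The four pieces of information listed above can be sketched as a request record handed to a simulated DMA controller; the field names and the simulation are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class DMARequest:
    operation: str        # type of operation: "read" or "write"
    device_address: int   # address of the I/O device
    memory_address: int   # starting memory location to read from or write to
    word_count: int       # number of words to be read or written

def dma_transfer(req, memory, device_words):
    """Simulate the controller moving an entire block word by word;
    the processor would be interrupted only once, at the end."""
    if req.operation == "read":   # device -> memory
        for i in range(req.word_count):
            memory[req.memory_address + i] = device_words[i]
    return "interrupt: transfer complete"

memory = [0] * 8
req = DMARequest("read", device_address=3, memory_address=2, word_count=4)
print(dma_transfer(req, memory, [9, 8, 7, 6]), memory)
```

One interrupt per block replaces one interrupt per word, which is exactly the saving the text describes.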
MICROSOFT WINDOWS
INTRODUCTION TO OPERATING SYSTEM & WINDOWS
DOS stands for Disk Operating System. Normally PCs use this operating system because it is rather easy to work with.
DOS commands have a syntax with which we work, and it is necessary to understand the syntax and usage of the commands.
The two modes in which Windows 3.1 operates are described below:
Windows automatically detects the type of machine you have and uses the requisite operating mode, but you can make it operate in some other mode if you wish. 386 Enhanced mode offers the ability to multitask DOS applications and to use virtual memory, while Standard mode makes slightly more efficient use of memory. If you have an 80386-based system with little memory and less hard disk space, and you are not going to run DOS applications, you may want Windows to run in Standard mode.
You can make Windows run in any mode you choose using command-line parameters; a command-line parameter is a series of characters placed after a command that causes the command to execute in a specific manner. To run Windows in the mode of your choice, do the following:
From the DOS prompt, type WIN/S to run in Standard mode or WIN/3 to run in 386 Enhanced mode, then press Enter.
WINDOWS APPLICATIONS
NON-WINDOWS APPLICATIONS
Non-Windows applications are designed to run with MS-DOS but not specifically with Windows. Windows provides the facility to run these applications from within Windows. The method of running these applications from the Windows environment is discussed later in this chapter.