
COURSE OUTLINE:

COMPUTER COMPONENTS:
Functioning and interaction.
Hardware and software concepts
Need for an operating system

INTRODUCTION TO DOS:

Internal and external commands.


Command line parameters
Directory structure in DOS
File names and extensions.

FUNDAMENTAL FILE OPERATIONS:

Types of processing.
Multiprocessing and multiprogramming.

DISK DRIVE:

Formatting the disk.


Booting process in DOS.
Advanced file concept in DOS.

PRIMARY FUNCTIONS OF DOS AND UNIX

CONCEPT OF PROCESSES:

Interrupts.

I/O COMMUNICATION TECHNIQUES.

Memory usage reallocation.


Types of user interface.

BATCH FILES IN DOS:

Autoexec.bat and Config.sys.


Dos environment.
Redirection operation.

ADVANCED FILE MANIPULATION IN DOS:

Process states and scheduling


Process scheduling techniques.
Process control structure
Process management in UNIX

MEMORY MANAGEMENT:

MEMORY ALLOCATION METHODS


Paging and segmentation
Concepts of virtual memory
Virtual paging and segmentation
Memory management in DOS and UNIX.
Memory protection
COMPUTER
The word computer originates from the word compute, which means to calculate.
A computer is an electronic machine. There are three steps in a computerized system:

1. Input 2. Processing 3. Output.

DATA:
Raw facts and figures are called data.

INFORMATION:
The processed form of data is called information.

A computer is divided into two parts:

1. Hardware.
2. Software.

Hardware:
The physical equipment of the computer. Hardware devices may be electronic,
magnetic, mechanical, etc.
Anything having physical existence is called hardware.

Hardware is divided into three stages:


1. Input 2. Processing 3. Output.

Input:
Keyboard, mouse, light pen, scanner, microphone, digitizer, etc.

Processing:
CPU (central processing unit)/processor, floppy drives, hard disk, magnetic tape, CD drive, RAM, ROM,
EPROM, power supply, etc.

RAM: Random access memory (a temporary storage device); all defined variables are stored in RAM.
It is memory that can be accessed randomly. For example, a variable a = 10 is held in RAM.

ROM:
Read-only memory; it can be read but not written.

EPROM: Erasable programmable read-only memory; also called boot ROM. (It is used mainly in networks.)

In computers, ASCII codes are allotted to all characters. The ASCII code for A is 65:

64 32 16 8 4 2 1
 1  0  0 0 0 0 1

so 65 will be written (1000001)₂.

Bit is the smallest unit in the computer.

8 bits = 1 byte.
1024 bytes = 1 KB
1024 KB = 1 MB
1024 MB = 1 GB
1024 GB = 1 TB (terabyte).
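The place-value table and unit conversions above can be checked directly; a small illustrative sketch in Python:

```python
# Checking the ASCII/binary arithmetic from the table above.
ascii_A = ord('A')                 # the ASCII code for A
binary_A = format(ascii_A, 'b')    # its binary representation

# Unit conversions: each step is a factor of 1024 = 2**10.
bytes_per_kb = 2 ** 10
bytes_per_mb = bytes_per_kb * 1024

print(ascii_A, binary_A)           # 65 1000001
```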

Software: anything having logical existence is called software.

Software has two categories:

1. System programs 2. Application programs

SYSTEM PROGRAM:
Programs which control the internal operations of the computer system, e.g.
operating systems.
Such programs deal with computer operations (languages and the operating system).

APPLICATION PROGRAMS:
Programs which are used for specific purposes, e.g. word processors,
spread-sheets etc. They need system software to use hardware resources.
Such programs deal with the problems of the user, for example Word, Lotus, etc.
With languages we have to do all the work ourselves; with packages, predefined functions
are provided.

Languages are divided into three levels: high, middle, and low level.

1) High-level languages: BASIC, Pascal, COBOL, etc.

2) Middle-level languages: the C family.

3) Low-level languages: assembly language.

Packages: 1. Customized packages. 2. Generalized packages.

CUSTOMIZED: for a specific organization.

GENERALIZED: for all users, e.g. Word, Lotus, Excel, WordStar, etc.

OPERATING SYSTEM:
Operating system controls the execution of other programs. It consists of
instructions for the processor to manage other system resources. Two important
functions of the operating system are:

1. Provide interface between the user and the hardware resources.


2. Hide details of hardware from the application programmer.
An operating system consists of the following:
1. Control Programs: Scheduler, I/O control system etc.
2. Service Programs: Compilers, Utility programs etc.
Common examples of operating systems are: DOS, UNIX, OS/2, VMS, etc.

A layer of software which provides the interface between the hardware and the user.

Software which makes the hardware usable, for example OS/2, CP/M, DOS, UNIX, XENIX, Windows 95
and Windows NT.

LAN: a DOS-based environment.

UNIX: for personal computers.

XENIX: for networks.

The operating system is also known as the resource manager.


Resources are: 1. Storage devices 2. Input/output devices 3. Processors.

Users: 1. Application programmers 2. System programmers 3. Computer operators 4. Computer users.

Operators & users: those who use the packages.

Administrative personnel:
The administrative staff who use computers.

History of the operating system:

The zeroth generation (up to the early forties).

The first generation (mid forties to mid fifties):

The era of vacuum tubes and plugboards.

The second generation (mid fifties to mid sixties):

The era of transistors and batch systems.

The third generation (mid sixties to late seventies):

The era of ICs (integrated circuits).

The fourth generation:

The era of microcomputers.

File server, processing server, output server.

Introduction to MS-DOS: before MS-DOS, the common operating system was CP/M-80, running on 8080
processors. To compete with CP/M-80, 86-DOS was written at Seattle Computer Products. It was not a
polished operating system; it had many errors and bugs.

Microsoft bought 86-DOS and developed it into MS-DOS in the early 1980s.

DOS can be divided into three layers:

1. BIOS 2. DOS kernel 3. Shell

BIOS: basic input/output system, also known as the ROM BIOS.

Inside the BIOS are the device drivers for the keyboard, printer, clock, and block/auxiliary devices (all storage devices).

The BIOS is loaded into RAM and is considered part of IO.SYS.


Without the files mentioned below, the system cannot be booted:
1. IO.SYS 2. MSDOS.SYS 3. COMMAND.COM

DOS KERNEL:
Like other programs, the OS is also software which is executed by the processor.
However, instructions in the OS are executed in a special operating mode, called
kernel or supervisor mode. In this mode certain privileged instructions can be
executed which cannot be executed in the normal mode.

It is named the kernel because it performs most of the operating system's core functions.

It is a collection of hardware-independent services:

1. File manager 2. Memory manager 3. Character device management (devices through which
characters are input, like the keyboard or scanner) 4. Access to the real-time clock (the
processor's clock or timer).

MIPS: millions of instructions per second.

The DOS kernel is also loaded into RAM and is considered part of MSDOS.SYS. (Anything defined
in memory must have an address, which is accessible through interrupts.)

INTERRUPTS:
An interrupt is a signal which causes the hardware to transfer program control to
some specific location in the main memory, thus breaking the normal flow of the
program.

TYPES OF INTERRUPTS
Interrupts can be of the following four types:
1. Program Generated as a result of an instruction execution, such as
overflow, division by zero etc.
2. Timer Generated by a timer which enables the OS to perform certain
funtions on regular basis.
3. I/O Generated by an I/O controller, to get the attention of the CPU.
4. Hardware Failure Generated by a failure such as power failure.
Such special programs through which we can access devices.

DOS shell: it has two more names, the command processor and the command interpreter.

The work of the DOS shell is to provide the interface between the user and the operating system.

The shell is used to execute the commands of the operating system.

Commands are executable files.

The MS-DOS shell is called COMMAND.COM.

Some are internal commands and some are external commands

Internal commands: commands that reside inside COMMAND.COM, the shell
(MD, CD, TYPE, CLS).

All files with the following extensions are called executable files: .EXE, .COM, .BAT.

Commands that can be run from the DOS prompt are executable files.

External commands (can be recognized in three ways):

If a command is not inside COMMAND.COM (for example XCOPY), it is an external command.

If a command appears as a file in a directory, it is an external command; it is found through
the shell's search path.

C:\WIN> — here WIN is the home directory.

SHELL=COMMAND.COM covers the internal commands; PATH locates the external commands.

Internal structure of command.com basically it can be divided into three modules

1. Resident module.
2. Transient module
3. Initialization module.

Memory layout (top to bottom):

ROM
TRANSIENT MODULE   (top of RAM)
RESIDENT MODULE
OPERATING SYSTEM   (bottom of RAM)

INITIALIZATION MODULE:
When the computer is turned on, the initialization module runs (in the area later used by the
transient module), searches for AUTOEXEC.BAT and processes it (displaying a message if it is
not found), and then removes itself from memory.

RESIDENT MODULE:
It stays in memory. The following are handled by the resident module:

Ctrl+Break
Ctrl+C
Abort
Retry
Fail

Error messages are generated by this module.

TRANSIENT MODULE:
All the files run by the shell are executed through the transient module. If we load a big
program, the transient module may be overwritten; when we come out of the program, the
operating system automatically reloads this module from the hard disk.

Directory: it is like a closet; the subdirectories are the drawers of this closet, and files are
inside the closet and the drawers.
C:\ is called the root directory.

In DOS, a directory or file name is restricted to 8 characters, but in Windows 95 & NT it can be
up to 255 characters.

Z:\WAQ\FAR1> CD ..\FAR1
Z:\WAQ\FAR1>

We can make our own executable files:

COPY CON filename.bat   (then Enter)
DIR
CLS
DIR
Ctrl+Z                  (then Enter, to save)

To make changes: EDIT filename.bat.
If we do not want the commands to be echoed, we give the command @ECHO OFF. We can also display
our own messages:
ECHO Hello

DIR /? — to get help for a command.


IVT (interrupt vector table):

When the system is booted, the interrupt vector table is loaded into the lowest portion of memory.
It contains the addresses of the interrupt service procedures.

BOOTING PROCESS
Whenever a system is started or reset, the OS which has to control it
performs some functions automatically before allowing the user to interact with
the machine. These functions enable the computer to work properly. This whole
process of preparing the computer for use is called the ‘Booting Process’.

BOOTING PROCESS OF DOS

When the system is started or reset, execution starts at address
0FFFF0h. This is because of the design of the 80x86 family of microprocessors. This
address lies in ROM and contains a JUMP instruction to another part of ROM.
From this location, a small portion of code called the ‘System Test Code’ is executed.
This code checks the microprocessor itself as well as the devices connected to it.
This test is called POST (Power On Self Test). After performing POST, the ROM
bootstrap routine is executed.

The ROM Bootstrap Routine is a program in ROM which performs the following tasks:

1. It creates the IVT (Interrupt Vector Table) in the lower most area of the
memory.

2. If the system is booted from a hard disk, it loads the partition table of that
disk and reads it to find the active partition. There is no partition table on
floppy disks; therefore, in the case of floppy drives, it jumps directly to the
boot sector.
3. From the boot sector, it loads the disk bootstrap routine and transfers
control to it.

Disk Bootstrap Routine is a small code which resides in the boot sector of the
bootable system disk. During the booting process, when it gets the control of the
CPU from the Rom Bootstrap Routine, it performs the following tasks:

1. It checks the first two directory entries of the disk. If these entries are not
IO.SYS & MSDOS.SYS (in the case of MS-DOS), it displays the error
message “Non System Disk Or Disk Error” and waits for a system disk in
drive A.
2. If these files are found, disk bootstrap routine loads them into the memory
and transfers control to IO.SYS.

IO.SYS has two portions - BIOS and SYSINIT.SYS. BIOS contains the
code of resident device drivers and is loaded above IVT. SYSINIT.SYS is loaded
above BIOS and control is actually transferred to this portion by the disk bootstrap
routine. During its execution, it performs the following functions:

1. It determines the amount of available memory.


2. It loads itself in the higher memory and moves MSDOS.SYS to its place
(above BIOS).
3. It gives control to MSDOS.SYS.

MSDOS.SYS is also called the DOS Kernel. When it gets control, it:
1. Sets up its internal tables.
2. Makes interrupt vector entries.
3. Initializes all the internal device drivers.
4. Returns control back to SYSINIT.SYS.


When SYSINIT.SYS gets control for the second time, it:

1. Looks for the file CONFIG.SYS in the root directory, loads it into
memory and executes it. During the execution of CONFIG.SYS,
installable device drivers are loaded and initialized. If CONFIG.SYS does
not exist, some default values are used.

2. Loads the shell which is mentioned in CONFIG.SYS (or COMMAND.COM by default)
and transfers control to it.

COMMAND.COM, on getting control of the CPU, does the following:

1. It looks for the file AUTOEXEC.BAT and processes it.
2. Displays the prompt to indicate that it is ready to take commands from the
user.

(As noted above, it is the ROM bootstrap routine that loads the partition table from the first
physical sector of the hard disk.)

IVT (Interrupt Vector Table)

A table which is created in the lower portion of memory by DOS. It
keeps the addresses of all the interrupt handling routines. It takes 1 KB of
memory and can store 256 entries.
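The 1 KB figure follows from the size of each entry: on the 8086 in real mode, every vector is a 4-byte segment:offset address. A quick arithmetic check (entry size assumed from the real-mode convention):

```python
IVT_ENTRIES = 256          # one vector per interrupt number
BYTES_PER_VECTOR = 4       # 2-byte segment + 2-byte offset (real mode)
ivt_size = IVT_ENTRIES * BYTES_PER_VECTOR   # total table size in bytes
```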

PARTITION TABLE:
The information kept at side 0, track 0, sector 1 of the hard disk. It keeps
a record of all the partitions made on the disk, such as partition size, starting and
ending location, and active flag (for the booting process).

It gives the details (entries) of the partitions of the hard disk.

Recap of the sequence:
1. The active partition is identified from the partition table.
2. The ROM bootstrap routine reads the disk bootstrap routine from the first logical sector
(boot sector) of the hard disk.
3. The disk bootstrap routine checks for the MS-DOS system files; if they are missing, it
displays the message “NON SYSTEM DISK”.
4. Otherwise, IO.SYS is loaded into memory first.


IO.SYS contains two modules (BIOS and SYSINIT.SYS). Memory layout at this stage (top to bottom):

ROM
SYSINIT.SYS   (top of RAM)
DOS KERNEL
BIOS
IVT           (bottom of RAM)

The BIOS does two things:

1. Initializes the hardware.
2. Provides a permanent place for the DOS kernel.

SYSINIT.SYS:
It is not permanently memory-resident; everything is done through this initialization module.
First it checks the IVT, the FAT, and the interrupt vectors; at the end control comes to
SYSINIT.SYS and the system files are initialized.

CONFIG.SYS:
Disk buffers are controlled here (cached file information).
File control blocks (FCBs).
Installable drivers.

SYSINIT then loads the shell (COMMAND.COM) through the MS-DOS EXEC function, discards
SYSINIT.SYS, runs AUTOEXEC.BAT, and the C:\> prompt is displayed.

Memory map after booting (top to bottom):

ROM
TRANSIENT MODULE OF COMMAND.COM   (top of RAM)
RESIDENT MODULE OF COMMAND.COM
INSTALLABLE DRIVERS
FCBs
DISK BUFFERS
DOS KERNEL
BIOS
INTERRUPT VECTOR TABLE

Memory map during SYSINIT (top to bottom):

ROM
SYSINIT.SYS   (top of RAM)
INSTALLABLE DRIVERS
FCBs
DISK BUFFERS
DOS KERNEL
BIOS
IVT
PROCESS MANAGEMENT
PROCESS STATES

In a single-processor system, only one instruction from one program can be
executed at any one instant, although the processor may be able to execute multiple
programs over a period of time, a facility known as multiprogramming. The
operating system manages these multiple programs by keeping all or part of each
of these processes in memory and switching control between them to give an
impression of simultaneous execution. The process that is currently using the
processor is said to be in the ‘running’ state. The rest of the processes will be in
some state other than the running state. The possible states for a process are as
follows:

Running: The process that is currently being executed. The number of running
processes will be equal to the number of processors in the system.
Ready: Processes that reside in main memory and are prepared to execute
when given the opportunity.
Blocked: A process that is in main memory but cannot execute, even if given
control of the processor, until some event (such as completion of an
I/O operation) occurs.
New: A process that has just been created but not yet admitted to the pool
of executable processes by the OS.
Exit: A process that has been released from the pool of executable
processes by the OS, either normally or abnormally.
In operating systems that support process swapping, two additional states exist:

Ready Suspended: The process is in secondary memory but is available for


execution when loaded into the main memory.
Blocked Suspended: The process is in secondary memory and is also waiting for an
event.

Process: a program in execution, together with its program counter, registers, and all the other
information required to execute. The registers (AX, BX, CX, and DX) reside inside the process
context. The program counter keeps track of which instruction of the program is executing.
The idea of processing originated from:
1. Monoprogramming (DOS, PCs): only one task can be performed at a time.
2. Multiprogramming (UNIX, Windows).

The ready-to-run process list is maintained by the operating system.

Each process runs for a time unit (quantum).

Multitasking cannot be provided without time-sharing.

A program in execution is in the running state.

Wait: the ready state.

Blocked state: waiting for some input or output operation; the process waits, for example, for a
file to be read.

Multiprogramming and multitasking are one and the same.

A LAN is not multitasking, but it is a multi-user environment.

Process state transitions:

1. Dispatch: ready → running.
2. Timer runout: running → ready.
3. Block: running → blocked.
4. Wakeup: blocked → ready.

(The operating system provides a scheduler for distributing processor time.)
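The four transitions above can be written as a small lookup table; this is an illustrative sketch only (names invented here), not how a real scheduler is implemented:

```python
# Map (current state, event) -> next state, per the transition list above.
TRANSITIONS = {
    ("ready",   "dispatch"):     "running",
    ("running", "timer_runout"): "ready",
    ("running", "block"):        "blocked",
    ("blocked", "wakeup"):       "ready",
}

def next_state(state, event):
    """Return the next process state for a given scheduling event."""
    return TRANSITIONS[(state, event)]
```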

Process control blocks (PCBs)

(For keeping track of processes by the operating system.)

Array structure: if an integer takes 2 bytes, then an array A[10] takes 20 bytes.

Array of structures:

Student is a structure, and name etc. are its fields:

Student:
  NAME
  ROLL NUMBER
  SUBJECT
  GRADE
  TOTAL

Fields in a PCB are:

Process state
Program counter (a pointer)
Stack pointer
Memory allocation
Status of open files
Accounting and scheduling information
Other information
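The PCB fields listed above can be sketched as a record type. The field names here are illustrative, not taken from any real DOS or UNIX structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Per-process record kept by the operating system (illustrative)."""
    pid: int
    state: str = "new"              # running / ready / blocked / new / exit
    program_counter: int = 0        # address of the next instruction
    stack_pointer: int = 0
    memory_allocation: tuple = (0, 0)               # (base, limit) - illustrative
    open_files: list = field(default_factory=list)  # status of open files
    accounting: dict = field(default_factory=dict)  # scheduling/accounting info

p = PCB(pid=1)
p.state = "ready"                   # admitted to the ready list
```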
MUTUAL EXCLUSION:
The simplest solution is to have each process disable all the
interrupts just after entering its critical section and enable them just after leaving it. With
interrupts disabled, a clock interrupt cannot occur. The CPU switches from one process to
another when a clock interrupt occurs, thus disabling interrupts will result in the
suspension of the scheduler, and the process can use and modify the shared area without
fear of interruption by another process.
This is not a very attractive idea, for many reasons. First
of all, it is unwise to give user processes the power of disabling interrupts. What if
someone disables all the interrupts and never enables them? The system will be caught in a
deadlock with no way out. That particular process will have complete charge of the
system, as even the operating system cannot interrupt it.
Second, disabling the clock interrupt is highly risky, as with the clock interrupt the CPU
performs housekeeping functions like updating the system time. Disabling the clock
interrupt will, most probably, result in a system crash.
The operating system kernel uses this method when it is updating
important variables, lists, and tables, so that no other process can interfere; this is a
specific case in which the method is acceptable.
The conclusion is that disabling interrupts is sometimes useful but is not an appropriate
general-purpose mutual exclusion mechanism for user processes.

LOCK VARIABLE:
The concept is to use a flag variable as a lock. When a
process wants to enter its critical section, it checks the lock variable. If it is TRUE
(i.e. locked), some other process is using the shared memory area, so it has to
wait; if it is FALSE (i.e. not locked), it sets the lock to TRUE and enters its critical
section. If any other process checks the lock at this specific
moment, it will find it TRUE and will wait outside its critical section. When a process
reaches the end of its critical section, it sets the lock to FALSE and enters its
non-critical section. The idea is quite attractive but does not guarantee perfect mutual
exclusion. Let’s see some particular cases.

What if a process sets the lock to TRUE and never places FALSE in it again? It is just like
disabling interrupts, in which the user is trusted with the security of the system. Suppose a
process uses the following code to check and set the lock variable:

10 if (lock == false)
20 {
30     lock = true;
40     enter_critical_section();
50 }
60 else
70     wait();

Suppose process A executes line 10. It found the lock variable FALSE but, before it could
process the next line, it was suspended. Process B got the CPU and executed the
same code to check the lock. As process B found the flag FALSE, it set the lock to TRUE and
entered its critical section. When process B was suspended, process A resumed
execution from line 20, placed TRUE in the lock (which is already TRUE), entered its
critical section as well, and a race condition arose.

DEKKER’S ALGORITHM:
a) First solution (taking turns / strict alternation):

1. In this solution only one global variable is used.

2. Both process_one() and process_two() depend on this one and only global variable.
3. By default the global variable is assigned the value 1; therefore process one will
execute first.
4. When a process comes out of its critical section after completing its work, it changes
the value of the global variable so that the other process can enter its critical section.

process_one()
{
    while (true)
    {
        while (process_no == 2) ;   /* busy wait */
        critical_sec_one();
        process_no = 2;
        non_critical_sec_one();
    }
}

process_two()
{
    while (true)
    {
        while (process_no == 1) ;   /* busy wait */
        critical_sec_two();
        process_no = 1;
        non_critical_sec_two();
    }
}

Explanation:
Mutual exclusion is guaranteed, but at a high price. Process one enters the critical section
first, as the process-number variable is initialized to 1; until process one executes its
critical section, process two must busy wait. Process two may enter only after process one
enters and leaves its critical section (because the global variable is then assigned the
value 2). The same happens in reverse when process one wants to re-enter its critical section
but process two has not yet taken its turn. Thus the processes must enter and leave their
critical sections in strict alternation.
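The strict-alternation scheme can be run directly with two threads; this is an illustrative Python sketch (names invented here), and the busy wait is exactly what makes the solution expensive:

```python
import threading

turn = 1          # whose turn it is; process one goes first
log = []          # records the order of critical-section entries
N = 5

def process(me, other):
    global turn
    for _ in range(N):
        while turn != me:      # busy wait until it is our turn
            pass
        log.append(me)         # critical section
        turn = other           # hand the turn over on exit

t1 = threading.Thread(target=process, args=(1, 2))
t2 = threading.Thread(target=process, args=(2, 1))
t1.start(); t2.start()
t1.join(); t2.join()
# Entries strictly alternate: [1, 2, 1, 2, ...]
```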

SECOND SOLUTION:
In the first solution there was only one global variable, which gave rise to certain
problems (e.g. slowing the speed). In this solution, instead of one there are two global
variables (process_1_in and process_2_in). process_1_in is true if process one is in its
critical section, and process_2_in is true if process two is using its critical section.
This means that once a process is in its critical section, it will not allow the other
process to enter its critical section until it goes out of it, i.e. it holds the other
process in a busy-wait lock.

process_one()
{
    while (true)
    {
        while (process_2_in == true) ;   /* busy wait */
        process_1_in = true;
        critical_sec_one();
        process_1_in = false;
        non_critical_sec_one();
    }
}

process_two()
{
    while (true)
    {
        while (process_1_in == true) ;   /* busy wait */
        process_2_in = true;
        critical_sec_two();
        process_2_in = false;
        non_critical_sec_two();
    }
}

EXPLANATION:
Initially process one is to enter its critical section. Before entering, it sets the
process_1_in variable to true. Now process two remains locked in a busy wait as long as
process_1_in is true. Eventually process one leaves its critical section and sets
process_1_in to false. At that moment process two sets process_2_in to true and enters its
critical section.
Again, the solution is not perfect. Consider the following example:
Initially, both process_1_in and process_2_in are false. Process one checks process_2_in and
finds it false. At this moment, the CPU suspends process one and switches over to process two.
Process two checks process_1_in, finds it false, and enters its critical section. When process
one resumes execution, it will proceed to the statement after the while loop, set process_1_in
to true, and enter its critical section. Both processes are in their critical sections
simultaneously, so the second solution does not even guarantee mutual exclusion.

THIRD SOLUTION:
In the second solution, the basic problem was that between the time a process determines in the
while-loop test that it can go ahead and the time the process sets the flag to say that it is in
its critical section, there could be a process switch allowing the other process to test and set
the flags and slip into its critical section. Therefore, once one process attempts the while
test, it must be assured that the other process cannot proceed past its own while test. The
third solution attempts to resolve this by having each process set its own flag before
performing the while test.

process_one()
{
    while (true)
    {
        process_1_in = true;
        while (process_2_in == true) ;   /* busy wait */
        critical_sec_one();
        process_1_in = false;
        non_critical_sec_one();
    }
}

process_two()
{
    while (true)
    {
        process_2_in = true;
        while (process_1_in == true) ;   /* busy wait */
        critical_sec_two();
        process_2_in = false;
        non_critical_sec_two();
    }
}

One problem is solved, but another is introduced in this solution. Consider a situation where
process one gets time from the CPU, sets the process_1_in flag to true, and is then suspended by
the operating system, which switches control to process two. Process two will also set its flag
to true and perform the while test. It will simply busy wait, as process_1_in was set to true by
process one (but process one never entered its critical section). When control comes back to
process one, it will also get into the while loop and spin forever. Process one will always wait
for the process_2_in flag to become false, and process two will always wait for the process_1_in
flag to become false. They will be caught in a deadlock.

FOURTH SOLUTION:

FAVOURED PROCESS:
It is the process that is given priority to enter the critical section when both processes are
attempting to enter.
The problem with the third solution was that each process could get locked up in its respective
while loop. We need a way to break out of these loops. Solution four accomplishes this by
forcing each looping process to set its flag false repeatedly for brief periods. This allows
the other process to proceed past its while test with its own flag still true.

Mutual exclusion is guaranteed and deadlocks cannot occur, but another devastating problem could
develop, named indefinite postponement: a process may have to wait indefinitely long to get into
its critical section, as no assumption can be made about the relative speeds of the processes.
In just a few lines of code, Dekker’s algorithm handles two-process mutual exclusion elegantly
without requiring special hardware instructions.

Process one indicates its desire to enter its critical section by setting its flag on. It then
proceeds to the while test, where it checks whether process two also wants to enter. If process
two’s flag is off, process one skips the body of the while loop and enters its critical section.

Process one:

Suppose, when process one performs the while test, it discovers that process two’s flag is set.
This forces process one into the body of its while loop. Here it looks at the variable
favored_process, which is used to resolve the conflict of indefinite postponement. If process
one is the favored process, it skips the body of the if() and rapidly executes the while test,
waiting for process two to turn off its flag, which, eventually, it must do.

If process one determines that process two is the favored process, then process one is forced
into the body of the if(), where it sets its own flag off and then busy waits in the following
while loop as long as process two remains the favored process. By turning off its flag, process
one allows process two to enter its own critical section.
Eventually process two will leave its critical section, set the favored process back to process
one, and turn off its flag. Process one may now pass its inner while loop and set its own flag
on. Process one then executes the outer while test. If process two’s flag (which was recently
set off) is still off, then process one enters its critical section. If, however, process two
has quickly tried to re-enter its critical section, then process two’s flag will be on and
process one is once again forced into the body of the outer while. This time, however, process
one is the favored process (when process two left its critical section, it set the favored
process to process one). So process one skips the body of the if and repeatedly executes the
outer while test until process two sets its flag off, allowing process one to enter its critical
section.
This guarantees mutual exclusion without any fear of indefinite postponement.
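The full algorithm described above (per-process flags plus a favoured-process variable) can be sketched as a runnable Python version; variable names are illustrative. Under CPython's interpreter lock the reads and writes happen in program order, which is what the algorithm assumes:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
favored = 0             # which process wins a tie
count = 0               # shared counter protected by the algorithm
N = 1000

def worker(me):
    global count, favored
    other = 1 - me
    for _ in range(N):
        flag[me] = True
        while flag[other]:
            if favored == other:        # the other side has priority:
                flag[me] = False        # back off briefly ...
                while favored == other:
                    pass                # ... and busy wait
                flag[me] = True         # then raise the flag again
        count += 1                      # critical section
        favored = other                 # favour the other process on exit
        flag[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```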

PETERSON’S SOLUTION:
This method combines the idea of taking turns with the idea of lock variables.
Assume there is a global variable named “turn” and a global array named “interested” with one
entry for each process; initially all the entries in the array are set to false.

#define FALSE 0
#define TRUE 1

enter_critical_section(process)
{
    int other;                    /* number of the other process */

    other = 1 - process;          /* opposite of its own process */

    interested[process] = TRUE;
    turn = process;
    while (turn == process && interested[other] == TRUE) ;  /* busy wait */
}

leave_critical_section(process)
{
    interested[process] = FALSE;
}

To enter its critical section, each process calls enter_critical_section() with its own process
number (0 or 1), which will cause it to wait until it is safe to enter. When the process is done
with its critical section, it calls leave_critical_section(), again with its process number.

Suppose process 0 calls enter_critical_section(). As process 1 is not using the shared memory
area, its entry in the interested array is false. Process 0 will return immediately, as the
second condition in the while loop is false, with turn set to 0. If process 1 now calls
enter_critical_section(), it will hang in the loop until interested[0] becomes false, which
happens when process 0 finishes its critical section and calls leave_critical_section().
Now assume both processes call enter_critical_section() almost simultaneously. Both will store
their process number in turn; whichever stores later will overwrite the previous value (suppose
process 1 does, so turn = 1). When both processes come to the while loop, process 0 checks the
first condition and, as it is false, immediately leaves the loop and enters its critical
section, whereas process 1 waits. This guarantees mutual exclusion.
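The walkthrough above can be exercised with two real threads; here is a Python sketch of the same enter/leave routines (the shared counter and names are illustrative):

```python
import threading

interested = [False, False]
turn = 0
count = 0
N = 1000

def enter_critical_section(process):
    global turn
    other = 1 - process              # opposite of its own process number
    interested[process] = True
    turn = process
    while turn == process and interested[other]:
        pass                         # busy wait

def leave_critical_section(process):
    interested[process] = False

def worker(process):
    global count
    for _ in range(N):
        enter_critical_section(process)
        count += 1                   # critical section
        leave_critical_section(process)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```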

THE TSL (TEST AND SET LOCK) INSTRUCTION:

Dekker’s and Peterson’s solutions are software solutions to the mutual exclusion problem. The
TSL instruction provides a hardware solution. The logic is to have a single instruction that
reads a variable, stores its value in a safe place, and sets the variable to a certain value.
Once initiated, this instruction completes all these functions without interruption. The
instruction TSL(a, b) will read the value of flag b, copy it into a, and set b to true, all
within the span of a single uninterruptible instruction. The instruction may be applied in the
following manner:

process_one()
{
    while (TRUE)
    {
        one_cannot_enter = TRUE;
        while (one_cannot_enter)
            tsl(one_cannot_enter, active);
        critical_section_one();
        active = FALSE;
        non_critical_section_one();
    }
}

process_two()
{
    while (TRUE)
    {
        two_cannot_enter = TRUE;
        while (two_cannot_enter)
            tsl(two_cannot_enter, active);
        critical_section_two();
        active = FALSE;
        non_critical_section_two();
    }
}

The active flag is true if either process is in its critical section and false otherwise. Process one bases its
decision to enter its critical section on its local flag one_cannot_enter. It sets one_cannot_enter to true and
repeatedly performs the TSL instruction to test the global variable active. If process two is not in its critical
section, active will be false. The TSL instruction will store this value in the one_cannot_enter flag and set
active to true. The while test terminates as its condition becomes false, and process one enters its critical
section. Because the active flag has been set to true, process two cannot enter its critical section and
busy-waits in its while loop.
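The essential property of TSL is that the read and the write happen as one uninterruptible step. A minimal Python sketch of this spinlock pattern follows; since Python has no hardware TSL, a small guard lock stands in for the hardware's atomicity guarantee (that substitution, and the names tsl/active/counter, are assumptions for illustration):

```python
import threading

_tsl_guard = threading.Lock()   # stands in for hardware atomicity
active = False                  # true while some process is in its critical section

def tsl():
    """Atomically read `active`, set it to True, return the old value."""
    global active
    with _tsl_guard:
        old = active
        active = True
        return old

counter = 0

def worker():
    global counter, active
    for _ in range(10000):
        while tsl():            # spin while the flag was already set
            pass
        counter += 1            # critical section
        active = False          # leave the critical section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                  # 20000: no update was lost
```

Because only the thread that saw active == False may enter, the two increments never interleave and the final count is exact; without the atomic test-and-set, updates could be lost.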

SLEEP AND WAKEUP:


Dekker’s algorithm, Peterson’s solution and the TSL instruction are all correct and provide
mutual exclusion, but they waste precious CPU time in busy waiting: process two wastes its share of the
CPU time while process one is in its critical section, and likewise process one busy-waits while process two
is in its critical section. In short, when a process wants to enter its critical section under these solutions,
it checks to see if entry is allowed; if it is not, the process sits in a tight loop until entry becomes
possible.

The solution to this wastage of CPU time is to devise a method in which a process that is not
allowed to enter its critical section blocks itself instead of wasting CPU time. The process may request to
go to SLEEP instead of busy-waiting, until another process gives it a WAKEUP call. As an example of how
these primitives work, consider the following problem.

THE PRODUCER-CONSUMER OR BOUNDED BUFFER PROBLEM:

Suppose two processes share a common memory area known as a buffer. Process 0, the
producer, puts information into the buffer and process 1, the consumer, takes it out.

The trouble arises when the producer wants to put a new item in the buffer but the buffer is
already full. The solution is for the producer to go to sleep, to be awakened when the consumer has
removed one or more items from the buffer. Similarly, if the consumer wants to take an item out and
the buffer is empty, it will go to sleep until the producer puts one or more items into the buffer.

To keep track of the number of items in the buffer, a variable named COUNT is maintained. The producer
will first check count; if the buffer is full it will go to sleep, otherwise it will add an item and increment count.
The consumer proceeds in the same manner: it checks count, and if it is zero it goes to sleep; if not, it
takes an item out and decrements count. Each process also checks, by looking at the value of count, whether
the other one may be sleeping, and if so gives it a wakeup call.

PRODUCER:

while (TRUE)
{
    get_new_item();
    if (count == MAX) SLEEP;
    put_item_in_buffer();
    incr(count);
    if (count == 1) WAKEUP(consumer);
}

CONSUMER:

while (TRUE)
{
    if (count == 0) SLEEP;
    remove_item_from_buffer();
    decr(count);
    if (count == MAX - 1) WAKEUP(producer);
    use_item();
}

RACE CONDITIONS IN SLEEPS AND WAKEUPS:


The idea sounds simple enough, but it leads to the same kind of race conditions we have
seen earlier, because access to the global variable COUNT is unconstrained. Consider the following
case.

Initially, the buffer is empty. The consumer checks count, finds it 0, and is just about
to go to sleep when it is suspended by the scheduler. The producer starts running, checks count, finds
it 0, places an item in the buffer and gives a wakeup call to the consumer. As the consumer is not yet
asleep, this wakeup call is wasted. When the consumer runs again, it resumes from where it found count to
be zero, and therefore goes to SLEEP. The producer will never learn of this mishap, as it only gives a
wakeup call when count equals one. Eventually the producer fills up the buffer and also goes to SLEEP.
Both will sleep forever with no one to wake them up: the system is caught in a deadlock.

The remedy for this situation is simple. If the wakeup call that was sent to the consumer
too early is not wasted but stored somewhere, it can be used later when the consumer tries to
sleep. The consumer’s sleep call is then cancelled by the early wakeup call, the consumer stays
awake, and it can in turn give a wakeup call to the producer. Thus the whole situation can be avoided.

Usually, a flag is reserved for such additional wakeup calls. When a wakeup call is given to a
process that is not yet asleep, the flag is set. Later, when the process tries to sleep, it checks the
flag; if it is set, it clears the flag and stays awake (just as if it had gone to sleep and been woken immediately).
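The lost-wakeup race disappears when the count test and the sleep are made atomic. The following Python sketch shows one standard way to get that atomicity, using a condition variable (threading.Condition) rather than the wakeup-waiting flag; MAX follows the text, the rest of the names are assumptions:

```python
import threading
from collections import deque

MAX = 5
buffer = deque()
cond = threading.Condition()    # one lock protecting buffer + the sleep

def producer(items):
    for item in items:
        with cond:
            while len(buffer) == MAX:   # re-check after every wakeup
                cond.wait()             # atomically releases the lock and sleeps
            buffer.append(item)
            cond.notify_all()           # wake a possibly sleeping consumer

consumed = []

def consumer(n):
    for _ in range(n):
        with cond:
            while len(buffer) == 0:
                cond.wait()
            consumed.append(buffer.popleft())
            cond.notify_all()           # wake a possibly sleeping producer

p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed)                         # all 20 items arrive, in order
```

Because the test of the buffer and the decision to sleep happen while holding the same lock, the "checked count, then got suspended" window from the text cannot occur.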

DEADLOCKS:
A process in a multiprogramming system is said to be in a state of deadlock (or deadlocked) if it is
waiting for a particular event that will never occur. A system is said to be deadlocked if one or more
processes are deadlocked.
Perhaps the simplest way to create a deadlocked process is this:

revenge()
{
    while (t == 1)
        ;
}

Here, the process is waiting for an event (t becoming unequal to 1) which will never happen.
The process running the procedure revenge() will execute the while loop forever and never terminate,
resulting in a deadlock.

In multiprogrammed operating systems, resource sharing is one of the primary goals.
When resources are shared among a population of processes, each of which maintains exclusive control over
its allotted resources, it is possible for deadlocks to develop. A simple example is illustrated in figure
nn.nn.

The figure shows two processes and two resources. An arrow from a process to a resource
indicates that the process has requested that resource but has not yet been granted it. The diagram
displays a deadlocked system: process A holds resource 1 and needs resource 2 to continue, while process B
holds resource 2 and needs resource 1 to continue. Each process is waiting for the other to release a
resource, which neither will do until the other frees its own. This circular wait is the characteristic of
deadlocked systems.

A problem related to deadlock is indefinite postponement. Suppose resources in a system
are allotted on a priority basis. A process with a lower priority may have to wait indefinitely long
to get hold of a required resource if processes with higher priorities keep arriving. In some systems,
indefinite postponement is prevented by allowing a process’s priority to increase as it waits for a
resource. This is known as aging: eventually, the process’s priority will exceed all other priorities and
it will be served by the system.

NECESSARY CONDITIONS FOR DEADLOCKS.


The following four necessary conditions must hold for a deadlock to exist.

1. Processes claim exclusive control of the resources they require (mutual exclusion condition).
2. Processes hold resources already allocated to them while waiting for additional resources
(wait-for condition).
3. Resources cannot be removed from the processes holding them until the resources are used to
completion (no-preemption condition).
4. A circular chain of processes exists in which each process holds one or more resources that are
requested by the next process in the chain (circular wait condition).

Preemptible resources are resources which can be taken away from the process holding them.
Resources like the CPU or the main memory are preemptible resources.

DEADLOCK RESEARCH FIELDS:


Deadlock has been a favorite area of research in computer science and operating systems.
Primarily, there are four areas of interest in deadlock research. These are:

1. Deadlock prevention.
2. Deadlock avoidance.
3. Deadlock detection.
4. Deadlock recovery.

We will discuss them briefly in the following section.

DEADLOCK PREVENTION:
This is the approach most frequently used by operating system designers for dealing
with deadlocks. In this method, the major concern is to remove any possibility of deadlocks occurring.

In 1968, Havender concluded that if any of the four necessary conditions is denied, it is
impossible for a deadlock to occur. The following strategies for denying the various necessary conditions
were suggested.

1. Each process must request all its required resources at once and cannot proceed until all have
been granted.
2. If a process holding certain resources is denied a further request for another resource, it
must release its original resources.
3. An ordering is imposed on resource types, i.e. once a process has been allocated resources of a given
type, it may subsequently request only resources of types later in the resource ordering list.

The first strategy requires that all the resources a process will need must be requested at once. The
system must grant them on an “all or none” basis. If the complete set of resources required by the process
is available, the system allocates them to the requesting process, which is then allowed to proceed. If the
complete set is not available, the process must wait, and while it waits it may not hold any resources. Thus
the wait-for condition is denied and deadlock cannot occur.

The second strategy proposed by Havender denies the no-preemption condition. A system
will not become deadlocked when a process requiring additional resources is serviced successfully, i.e.
when no other process is using the resources it requested. But consider what happens if the request cannot
be serviced: one process now holds resources a second process may require in order to proceed, while the
second process may hold the resources the first process requires. This strategy therefore suggests that when
a process is denied access to an additional resource, it must release the resources it holds and, if
necessary, request them again together with the additional resources.

Havender’s third strategy denies the possibility of circular wait. Because all resources
are uniquely numbered and processes must request resources in ascending order, it is impossible
for a circular wait to develop. If process A is holding resource number 1 and process B is holding resource
number 2 then, although process A can request resource number 2, process B can never request resource
number 1, as requests must be for higher-numbered resources. This prevents a deadlock from happening.

DEADLOCK AVOIDANCE:
In this strategy, the system is said to be in a safe state if it can allow all current users to
complete their jobs within finite time by carefully manipulating the allocation of resources (i.e. by
watching the current behavior of each process, its frequency of resource allocation requests, its frequency
of resource releases, etc.). Processes do claim exclusive use of the resources they require, processes are
allowed to hold currently allocated resources while requesting and waiting for additional resources, and
resources may not be preempted from a process. Users ease the load on the system by requesting one resource
at a time. The system may grant or deny each request. If a request is denied, the process holds its
allocated resources and waits a finite time until the resource is granted. The system grants only those
requests that result in a safe state; a request that would result in an unsafe state is denied until it can
eventually be satisfied safely. The system is thus always maintained in a safe state.

DEADLOCK DETECTION:
Deadlock detection is the process of actually determining that a deadlock has occurred and of
identifying the processes and resources involved in it. A common technique is to examine the resource
allocation graph: if the graph can be reduced by all its processes (each process being able, in some order,
to acquire its requested resources, run to completion and release its held resources), then there is no
deadlock. If the graph cannot be reduced by all its processes, then the “irreducible” processes constitute a
deadlock in the graph. When the system detects such a graph it can call deadlock recovery routines to remove
the deadlocked processes and release the resources allocated to them.
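Graph reduction can be sketched in a few lines. The following Python function works on a wait-for graph (process-to-process edges, with resource nodes already collapsed out); repeatedly removing any process that waits for nobody mirrors the reduction described above, and the surviving processes are the irreducible, deadlocked ones. Process names here are invented for illustration:

```python
def find_deadlocked(edges):
    """edges[p] = set of processes p is waiting for.
    Remove every process that waits for nobody (it can finish and
    release its resources); whatever survives is deadlocked."""
    waiting = {p: set(q) for p, q in edges.items()}
    changed = True
    while changed:
        changed = False
        runnable = [p for p, q in waiting.items() if not q]
        for p in runnable:
            del waiting[p]              # p runs to completion...
            for q in waiting.values():
                q.discard(p)            # ...and releases its resources
            changed = True
    return set(waiting)                 # the irreducible processes

# The circular wait from the figure: A waits for B, B waits for A.
# C waits for A; it is not in the cycle but can never proceed either,
# so reduction reports it as stuck too.
print(find_deadlocked({'A': {'B'}, 'B': {'A'}, 'C': {'A'}}))
# No cycle: the graph reduces completely, so no deadlock.
print(find_deadlocked({'A': {'B'}, 'B': set(), 'C': set()}))
```

A system could run such a check periodically, or whenever a resource request cannot be granted, and pass the resulting set to its recovery routines.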

DEADLOCK RECOVERY
Once a system has become deadlocked, the deadlock must be removed by removing one or
more of the necessary conditions. Usually, several processes will lose some or all of the work
they have done so far. This is a small price to pay compared with leaving the deadlock in
place.

In current systems, recovery is usually performed by forcibly removing a process from
the system and reclaiming its resources. The removed process is ordinarily lost, but the
remaining processes may now be able to continue. Sometimes it is necessary to remove
several processes until sufficient resources have been reclaimed to allow the remaining
processes to finish. Another, more desirable, approach to deadlock recovery is an
effective suspend/resume mechanism which allows the system to put a temporary hold on
processes and then resume the held processes without loss of their work. For example, it
may become necessary to shut down a system temporarily and start it up again from exactly
the same point without loss of productive work. It requires conscious effort on the part of
the system developers to incorporate such suspend/resume features.

PROCESS SCHEDULING
When there are several runnable processes, the operating system
must decide which one to run first and for how long. The part
of the operating system responsible for this decision making is
known as the scheduler, and the algorithm it uses is known as the scheduling
algorithm. Before looking at different scheduling algorithms we should
think about what the scheduler is trying to achieve. The scheduling
algorithm must provide some basic facilities, which are as follows:

1. Fairness: Make sure that every process gets a fair share of the CPU and no
process suffers indefinite postponement.
2. Efficiency: Keep the CPU busy 100% of the time.
3. Response time: Minimize response time for every user.
4. Turnaround: Minimize the time batch users must wait for output.
5. Maximum throughput: Service the largest possible number of processes per unit
time.

Some of these goals contradict others, making scheduling a complex problem.

To provide basic assistance to the scheduler, all computers
have a built-in electronic timer or clock which interrupts
the system periodically (the clock interrupt). At each clock
interrupt (or after every 10 or 20 interrupts, as decided by the scheduler),
the operating system runs and decides whether the currently running process
should be allowed to continue running or should be suspended to give time
to another process.
The strategy of temporarily suspending a runnable process is called
preemptive scheduling (i.e. the CPU can temporarily be taken from
a process by the system), whereas a policy of letting a process run as
long as it wants (nonpreemptive scheduling) means that a
process calculating the factorial of 100 billion can deny service to all
other processes for days or weeks.
In the following sections, we will look at the popular and tested
scheduling algorithms.
FIFO (FIRST IN FIRST OUT) SCHEDULING.
Perhaps the simplest scheduling algorithm is FIFO. Processes are
dispatched according to their arrival time in the ready-to-run queue.
Once a process has the CPU, it runs to completion (nonpreemptive
scheduling). The FIFO concept is fair enough for short jobs but
not suitable for jobs requiring hours of CPU time (such as printing a
million-record file). The technique is rarely used on its own but is often
embedded within other schemes: for example, many scheduling
algorithms dispatch processes by priority, but processes with the
same priority are dispatched FIFO.
For example, consider a queue holding six processes, each with a
different processing time. Suppose process A is currently being served
by the CPU and its processing time is 50 quanta, while process
E, whose processing time is only 5 quanta, is waiting for its turn
to come. E has to wait in the queue until all the processes ahead of it
are processed by the CPU. The drawback is that a very short
process may wait a very long time for other processes to be processed.
A B C D E F
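The drawback above is easy to quantify: under FIFO, a process's waiting time is the sum of the bursts of everything ahead of it. A small Python sketch (the burst lengths other than A's 50 and E's 5 are invented for illustration):

```python
def fifo_waiting_times(queue):
    """queue: list of (name, burst) in arrival order.
    Returns each process's waiting time under FIFO."""
    waits, elapsed = {}, 0
    for name, burst in queue:
        waits[name] = elapsed       # waits for all earlier jobs to finish
        elapsed += burst
    return waits

queue = [('A', 50), ('B', 12), ('C', 7), ('D', 30), ('E', 5), ('F', 9)]
w = fifo_waiting_times(queue)
print(w['E'])   # E waits 99 quanta in order to run for only 5
```

The 5-quantum job E pays a 99-quantum wait purely because of its position in the queue, which is exactly the unfairness the shortest-job-first and round robin schemes below try to address.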

ROUND ROBIN SCHEDULING:


One of the simplest, fairest and most widely used algorithms is
round robin. Each runnable process is assigned a time interval
called its quantum. If the process is still running at the end
of its quantum, it is suspended and the CPU is given to another
process. If the process terminates or blocks, it generates an
interrupt which causes the CPU to switch immediately to the
next process. All the scheduler has to do is maintain a list
of runnable processes; when the quantum of one process runs out,
that process is put at the end of the list. The only interesting issue in round
robin is the length of the quantum. Switching from one process to
another requires a certain amount of time for administration,
e.g. saving and loading registers, updating various tables and lists,
etc. Suppose the process switch operation takes 5 msec, and also
suppose the quantum size is set to 20 msec. Then
after doing 20 msec of productive processing the CPU has to waste
5 msec on process switching; therefore 20% of the CPU
time is wasted in the switching process.
To improve the CPU’s efficiency the quantum size can be increased to,
say, 500 msec. Now the wastage is less than one percent. But
assume there are 10 processes in the list and the last one has
requested a clear-screen command.
When each process uses 500 msec, the last one will be served after
about 4.5 seconds, which is a terribly long time in computing.
The conclusion is that if the quantum is too short, it causes too many
process switches and lowers the CPU’s efficiency, but setting it
too high causes poor response to short jobs. A quantum of
around 100 msec is a reasonable compromise.
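The quantum trade-off above can be put in numbers. A short Python sketch, using the text's 5 msec switch cost:

```python
def switch_overhead(quantum_ms, switch_ms=5):
    """Fraction of each quantum-plus-switch cycle lost to switching."""
    return switch_ms / (quantum_ms + switch_ms)

print(round(switch_overhead(20), 3))   # 20 msec quantum: 20% wasted
print(round(switch_overhead(500), 4))  # 500 msec quantum: under 1%

# But with 10 processes and a 500 msec quantum, the last process
# waits behind the nine ahead of it:
print(9 * (500 + 5) / 1000)            # roughly 4.5 seconds
```

The two prints capture both horns of the dilemma: a small quantum wastes one cycle in five on administration, while a large one makes interactive response sluggish.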

PRIORITY SCHEDULING
Users on a system can have different levels of importance, and
the solution is a priority scheduling algorithm.
In this strategy, the scheduler maintains a list in which processes
are queued according to their priority. A process with priority 0
will be serviced before a process with priority 1; similarly a process
with priority 1 will be serviced before a process with priority 2, and
so on. In short, each process is assigned a priority, and processes
with the highest priority are serviced first.
Now suppose the scheduler decides on a process switch after
checking priorities, and a process X having the highest priority
is runnable; then X is given the CPU. When its quantum
expires, the scheduler checks the priorities again; X still has the
highest one, so X is serviced again. Process X may continue
to run forever, raising the fear of indefinite postponement
of processes with lower priorities.
To prevent such problems, the scheduler may decrease the priority
of process X with every clock tick (or clock interrupt); when
its priority becomes less than that of the next process on the ready-to-run
list, a process switch occurs. Priorities are assigned by the system
when the user logs in, but a process can reduce its own priority
voluntarily. On UNIX systems, there is a nice() command which
allows a user to reduce a process’s priority in order to be nice to
other processes. Nobody ever uses it.

MULTIPLE QUEUES
One of the earliest priority schedulers was used in CTSS, designed for the IBM 7094.
CTSS had the problem that process switching was very slow, as the IBM 7094 could hold
only one process in memory at a time. Each process swap meant writing the current
process out to the disk and loading the next one into memory. The CTSS designers
quickly realized that it was more efficient to give CPU-bound processes a large quantum
once in a while rather than giving them small quanta frequently, so a solution was
devised in which process switching was greatly reduced.
Priority classes were used. Processes in the highest class were run for one quantum,
processes in the next class for two quanta, the next for four quanta, and so
on. Whenever a process used up all the quanta allocated to it, it was moved down
one class.

For example, consider a process that needed to compute for 100 quanta. It
would initially be given one quantum, then swapped out; next time it would run for two
quanta before being swapped out to the disk. On succeeding runs it would be given
4, 8, 16, 32 and 64 quanta. Although it would use only 37 of the last 64, this way
only 7 swaps would be needed instead of 100 with a pure round robin algorithm. As the process
sank deeper and deeper into the priority queues, it would be run less and less frequently
(but when it ran it would get more and more quanta). This saves CPU time for short
interactive processes.
Every process always started in the highest class, no matter how many quanta it
would require, when it was initiated.

SHORTEST JOB FIRST (SJF) SCHEDULING:


Shortest job first is a nonpreemptive scheduling algorithm in which the waiting
job with the smallest estimated run time is run next. SJF favors short jobs over
longer ones. It selects jobs in a manner that ensures the next job will complete and leave
the system as soon as possible. This tends to reduce the number of waiting jobs, and as a result
SJF can reduce the average waiting time of jobs as they pass through the system.

Suppose there were four jobs to run, with A requiring 8 seconds of CPU time, B
requiring 2, C requiring 3, and D requiring 1 second. They will be executed in the
pattern displayed in figure nnn.nnn.
The obvious problem with SJF is that it requires precise knowledge of how long a
job or process will run, and this information is not usually available. The best SJF can do
is to rely on user estimates of run times. In production environments, where the same jobs
run regularly, it may be possible to provide reasonable estimates. But in development
environments users rarely know how long their programs will execute. Another weakness
of this algorithm is the possibility of indefinite postponement: a process requiring a
long time period may never be served if processes with short run-time requirements
keep on appearing.
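The benefit of SJF on the four jobs above can be computed directly. A short Python sketch comparing arrival order (A, B, C, D) with shortest-first order (D, B, C, A):

```python
def average_wait(bursts):
    """Average waiting time when jobs run back-to-back in the given order."""
    elapsed, total_wait = 0, 0
    for burst in bursts:
        total_wait += elapsed       # this job waited for everything before it
        elapsed += burst
    return total_wait / len(bursts)

jobs = {'A': 8, 'B': 2, 'C': 3, 'D': 1}
arrival_avg = average_wait(list(jobs.values()))    # run A, B, C, D
sjf_avg = average_wait(sorted(jobs.values()))      # run D, B, C, A
print(arrival_avg, sjf_avg)   # 7.75 vs 2.5
```

Running the shortest jobs first cuts the average wait from 7.75 to 2.5 seconds, illustrating why SJF minimizes mean waiting time when run times are known.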

HIGHEST RESPONSE NEXT (HRN) SCHEDULING:


Brinch Hansen developed the Highest Response-ratio Next (HRN) strategy, which corrects some of the
weaknesses of SJF, particularly its excessive bias against longer jobs and excessive favoritism toward short
jobs. HRN is a nonpreemptive algorithm in which the priority of a job is calculated not only from the
service time it requires but also from the amount of time the job has been waiting for service. Once a job
gets the CPU it runs to completion. Priorities in HRN are calculated according to the following
formula:

priority = (time waiting + service time) / service time

If the service time required by a process is 5 seconds and it has waited 10 seconds to get
served, its priority will be (10 + 5)/5 = 3; and if a process requires 2 seconds and has waited 3
seconds, its priority will be (3 + 2)/2 = 2.5. As we can see, the process requiring the longer
time period does not have to wait indefinitely to be served: its priority keeps rising as it waits.
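The formula above is one line of code, and the text's two examples make a convenient check:

```python
def hrn_priority(time_waiting, service_time):
    """HRN response ratio: grows with waiting time, so long jobs
    eventually outrank fresh short jobs."""
    return (time_waiting + service_time) / service_time

print(hrn_priority(10, 5))   # the 5-second job after a 10-second wait
print(hrn_priority(3, 2))    # the 2-second job after a 3-second wait
```

A newly arrived job always starts at ratio 1.0 (zero wait), so every waiting job's ratio exceeds a newcomer's, which is precisely how HRN rules out indefinite postponement.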

POLICY DRIVEN SCHEDULING:


This method promises every user that if there are n users logged in, each will get about
1/n of the CPU time. To fulfill this promise, the system must keep track of how much CPU time each user has
had since login and also how long each user has been logged in. It then computes the amount of CPU time each
user is entitled to. When the time actually used by a process and the time the process should
have received are both known, the ratio between the two can be calculated, and scheduling is decided on the
basis of this ratio. For example, a ratio of 0.5 means that the process has received only half of what it should
have had, and a ratio of 2.0 means the process has had double what it was entitled to.
The algorithm is then to run the process with the lowest ratio until its ratio has moved above that of its
closest competitor.
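The fair-share ratio described above is simple to compute. A Python sketch (the figures are invented; the entitlement here assumes an equal 1/n share of the time since login):

```python
def fairness_ratio(cpu_used, logged_in_time, n_users):
    """Actual CPU time consumed divided by the user's entitlement
    (an equal 1/n share of the time since login)."""
    entitled = logged_in_time / n_users
    return cpu_used / entitled

# Four users logged in for 400 time units; fair share is 100 each.
print(fairness_ratio(50, 400, 4))    # got half its fair share
print(fairness_ratio(200, 400, 4))   # got double its fair share
```

The scheduler would then pick the process with the lowest ratio (here the first one), since it is the furthest behind its promised share.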

TWO LEVEL SCHEDULING:


So far we have assumed that all runnable processes are kept in main memory. If main memory
is insufficient, some of the runnable processes must be kept on the disk. This situation has major
implications for the scheduler, since the time to bring a process in from the disk and switch to it
is orders of magnitude greater than the time required to switch to a process already in
memory. A practical way to deal with such a situation is to have a two-level scheduler.
The first level, known as the LOW LEVEL SCHEDULER, performs the job
of a normal scheduler, i.e. switching among the processes in memory. The HIGH LEVEL
SCHEDULER, the second level, is invoked periodically to remove processes that have been in
memory long enough and to load processes that have been on the disk long enough.

Figure nn.nn explains the working of such a scheduler.


Processes A, B, C & D are currently in memory, and the low level scheduler takes care of the time given to
each of them; processes W, X, Y & Z are currently on the disk. When all the processes in main memory have been
served, the high level scheduler is activated: it removes A, B, C & D from main memory, stores them on the
disk, loads W, X, Y & Z into memory and transfers control back to the low level scheduler. Thus the time that
would otherwise be required at every process switch to swap in a process from the disk is minimized.


OPERATING SYSTEMS
HARDWARE Physical equipment of the computer. Hardware devices may
be electronic, magnetic or mechanical.

HARDWARE RESOURCES
1. CPU
2. I/O Devices
3. Bus Architecture
4. Main Memory

CPU The unit responsible for processing as well as controlling other resources.

I/O Devices: Equipment used for taking input for processing and for
producing the desired output.

BUS Channel or path (Cable & Circuits) used for transferring data and electrical
signals between different components of the computer.

Main Memory Addressable storage area directly controlled by the CPU. It is


used to store programs while they are being executed and data while it is being
processed.

COMPONENTS OF THE CPU


1. Control Unit
2. ALU
3. Processor Registers
Control Unit Portion of the CPU which directs the operation of the entire
computing system.

Arithmetic Logic Unit Basic components of the CPU which performs


arithmetic and logical operations.

Processor Registers
High speed memory used in CPU for temporary storage of small
amounts of data, addresses and other necessary information during processing.
Some of these registers are accessible to the user while some are not.

User Visible Registers are those which may be referenced by means of


machine language instructions. Types of registers that fall into this category are
data registers, address registers and condition-code registers.

Data Registers are those registers which are used to store data before or after
processing.

Address Registers contain main-memory addresses of data and instructions.

Condition-code Registers (Flag Registers) hold conditions codes (bits) which are
set by the hardware as the result of different operations. These registers are
partially visible to the user.

The registers which are not accessible to the programmer are the Control
and Status Registers. Some of these may be accessible by machine instructions
executed in the operating system mode. The commonly used registers are:

MAR (Memory address register)Specifies the address in memory for the next
read or write.
MBR (Memory buffer register) Contains data to be written into the memory or
data read from the memory.
I/OAR (I/O address register) Specifies the address of a particular I/O device.
I/OBR (I/O buffer register) Used to exchange data between an I/O module
and the processor.
PC (program counter) Contains the address of an instruction to be
fetched.
IR (instruction register) Contains the instruction most recently fetched.

SOFTWARE Set of instructions to manipulate the hardware


resources.
TYPES OF SOFTWARE
1. System Software
2. Application Software

SYSTEM SOFTWARE
Programs which control and support the operation of the computer itself,
e.g. operating systems, compilers, utility programs.
APPLICATION SOFTWARE
Programs which are used for specific purposes, e.g. word processors,
spread-sheets etc. They need system software to use hardware resources.

OPERATING SYSTEM
Operating system controls the execution of other programs. It consists of
instructions for the processor to manage other system resources. Two important
functions of the operating system are:

1. Provide interface between the user and the hardware resources.


2. Hide details of hardware from the application programmer.
Operating systems consist of the following:
1. Control Programs: Scheduler, I/O control system etc.
2. Service Programs: Compilers, Utility programs etc.
Common examples of operating systems are: DOS, UNIX, OS/2, VMS, etc.

KERNEL MODE
Like other programs, the OS is also software executed by the
processor. However, instructions in the OS are executed in a special operating
mode, called kernel or supervisor mode. In this mode certain privileged
instructions can be executed which cannot be executed in the normal mode.

SERVICES PROVIDED BY THE OPERATING SYSTEM


Program Creation: Utilities like editors and debuggers which are used for
creating programs.

Program Execution: Functions like loading programs and data into the
memory and preparing other resources like I/O devices for use are also performed
by the OS.
Access to I/O Devices: Each I/O device has its own set of instructions for
operation. The details of these instructions are also provided by the OS.

Controlled Access to Files: Details of the file format as well as the I/O
device on which it is stored are also handled by the OS.

System Access: In case of shared systems, the OS also has to control access to
system as a whole as well as to specific system resources.

Error Detection & Response: Different types of hardware and software errors
can occur while a system is working. The OS tries to resolve such errors or reduce their
impact as much as possible.

INSTRUCTION
Group of characters that defines an operation to be performed.

JOB Collection of specified tasks constituting a unit of work for a computer. A


job may consist of one program or a related group of programs which are used as a
unit.

CPU BOUND JOB The job which uses a high proportion of CPU time as
compared to I/O operations.

I/O BOUND JOB The job which uses a high proportion of time in I/O transfers
as compared to CPU processing.

LOADING Reading data from the secondary storage device into the main
memory of the computer.

LOADER Program which reads programs into the main
memory of the computer for execution.

PROCESS A program in execution.

MULTIPROGRAMMING
Running two or more programs concurrently in the same computer. Each
program is allotted its own place in memory and its own peripherals, but all share
the CPU.
MULTIPROCESSING
Simultaneous execution of more than one process by more than
one CPU under a common control.

BATCH PROCESSING
The type of processing in which more than one job is submitted to the
computer as a group for processing. It is usually used for data which requires
periodic processing, e.g. payroll, billing etc. In batch processing there is a lack of
interaction between the user and the job during execution.

BATCH MULTIPROGRAMMING SYSTEMS


In batch multiprogramming systems the jobs for multiprogramming are
selected from within a batch submitted for processing. Selection of a job may be
based on priority.

TIME SHARING SYSTEM


Those systems which allow multiple users to share the computer
simultaneously by providing each user a specific time in sequence.

REAL-TIME SYSTEMS
On-line processing systems which can receive and process data quickly
enough to produce output to control the outcome of an ongoing activity e.g. airline
reservation system, controlling a nuclear power plant.

DISTRIBUTED SYSTEMS
Distributed systems are used to distribute computation among several
processors (called ‘sites’). There are two types of distributed systems:
1. Tightly Coupled Systems
2. Loosely Coupled Systems
TIGHTLY COUPLED SYSTEMS
In tightly coupled distributed systems the processors share the memory and
the clock. Communication between the processors takes place through shared
memory.

LOOSELY COUPLED SYSTEMS
In loosely coupled distributed systems the processors do not share memory
or clock. Processors communicate with each other through communication links.

Advantages of Distributed Systems


Resource Sharing: If a number of different sites are connected with one another,
then a user at one site may be able to use the resources available at another.

Load Sharing: If a particular site is currently overloaded with jobs then some
of these jobs may be moved to other lightly loaded sites.

Data Sharing: When a number of sites are connected to one another by a
communication network, the processes at different sites can exchange information
with each other.

Computation Speed Up: If a particular computation can be partitioned into a
number of subcomputations that can run concurrently, then that computation may
be distributed among different processors for concurrent processing.

Reliability: If one site fails in a distributed system, the remaining sites may be
able to continue their operation.

BUFFERING
Overlapping the I/O of a job with its own computation. Each I/O device
has its own buffer so that I/O calls cause only a transfer to or from the buffer.
BUFFER An area in memory which is used to store data temporarily before or
after processing.

SPOOLING (Simultaneous Peripheral Operations On-Line)
The technique used for routing data via disk files. The jobs
communicate with disk systems, which are much faster than other devices such as
card readers, magnetic tape or printers. Spooling is an example of multiprogramming
because the I/O of one job can be overlapped with the computation of another.

PREPARING A DISK FOR USE BY THE OPERATING SYSTEM

Formatting Process of DOS


Before a disk can be used for storing data, it must be prepared for use by the
operating system. This process of preparing the disk for use is called
“Formatting”.

DOS uses FORMAT command to format the disk. In this process, each side
of the disk is divided into a specific number of ‘tracks’ and each track is further
subdivided into a specific number of ‘sectors’. A sector is usually capable of
storing 512 bytes of information. The number of tracks per side and sectors per
track depends upon the disk type. The storage capacity of a disk can be calculated
using the formula:
Total capacity = no. of sides x tracks per side x sectors per track x sector size.
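As an illustration of this formula, the sketch below computes the capacity of a standard 3.5-inch high-density floppy disk; the geometry values (2 sides, 80 tracks per side, 18 sectors per track, 512-byte sectors) are standard figures, not taken from the notes above.

```python
# Applying the capacity formula from the text to a standard 3.5" HD floppy.

def disk_capacity(sides, tracks_per_side, sectors_per_track, sector_size=512):
    """Total capacity = sides x tracks per side x sectors per track x sector size."""
    return sides * tracks_per_side * sectors_per_track * sector_size

capacity = disk_capacity(sides=2, tracks_per_side=80, sectors_per_track=18)
print(capacity)          # 1474560 bytes
print(capacity / 1024)   # 1440.0 KB, i.e. the familiar "1.44 MB" disk
```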

CYLINDER The vertical column of tracks on a magnetic disk. The corresponding
tracks on each surface of the disk collectively form a cylinder.

CLUSTER A group of consecutive sectors used by DOS to store data. It is the
smallest unit that DOS uses for storing a file. The size of clusters depends upon
the size of the disk.

During the process of formatting, DOS reserves the first few sectors of the
disk for its own use.

Boot Sector DOS uses the first sector for placing the ‘boot record’. The boot
record contains information about the number of bytes per sector, sector per
cluster, sectors per track, total number of sectors and some other specifications.

Reserved Space For FAT (File Allocation Table) DOS uses the FAT to access
the actual sectors on the disk. Following the boot record, DOS uses a few sectors
for keeping at least two copies of the FAT. The number of sectors used
for FATs varies with the size of the disk.

Reserved Space For Root Directory Following the FATs, DOS reserves some
sectors for root directory entries. Once FORMAT reserves space for root directory,
DOS cannot increase its size.

Preparing a Bootable System Disk for DOS


For DOS to start, a disk must contain two system files (IO.SYS &
MSDOS.SYS in case of MS-DOS) and the file COMMAND.COM. The above
mentioned two files should be the first two entries in the root directory. DOS also
places a small program (bootstrap program) in the boot sector which begins the
DOS startup process.

BOOTING PROCESS
Whenever a system is started or reset, the OS which has to control
it performs some functions automatically before allowing the user to interact with
the machine. These functions enable the computer to work properly. This whole
process of preparing the computer for use is called the ‘Booting Process’.


BOOTING PROCESS OF DOS


When the system is started or reset, execution starts at address
0FFFF0h. This is because of the design of the 80X86 family of microprocessors. This
address lies in ROM and contains a JUMP instruction to another part of ROM.
From this location, a small portion of code called ‘System Test Code’ is executed.
This code checks the microprocessor itself as well as the devices connected to it.
This test is called POST (Power On Self Test). After performing POST, the ROM
bootstrap routine is executed.

ROM Bootstrap Routine is a program in ROM which performs the following
tasks:
1. It creates the IVT (Interrupt Vector Table) in the lower most area of the
memory.

2. If the system is booted from a hard disk, it loads the ‘partition table’ of that
disk and reads it to find the active partition. There is no partition table in
floppy disks. Therefore, in case of floppy drives, it directly jumps to the
boot sector.

3. From the boot sector, it loads the disk bootstrap routine and transfers
control to it.

Disk Bootstrap Routine is a small code which resides in the boot sector of the
bootable system disk. During the booting process, when it gets the control of the
CPU from the Rom Bootstrap Routine, it performs the following tasks:

1. It checks the first two directory entries of the disk. If these entries are not
IO.SYS & MSDOS.SYS (in case of MS-DOS), it displays the error
message “Non System Disk Or Disk Error” and waits for the system disk in
Drive A.
2. If these files are found, disk bootstrap routine loads them into the memory
and transfers control to IO.SYS.

IO.SYS has two portions - BIOS and SYSINIT.SYS. BIOS contains the
code of resident device drivers and is loaded above IVT. SYSINIT.SYS is loaded
above BIOS and control is actually transferred to this portion by the disk bootstrap
routine. During its execution, it performs the following functions:

1. It determines the amount of available memory.
2. It loads itself in the higher memory and moves MSDOS.SYS to its place
(above BIOS).
3. It gives control to MSDOS.SYS.

MSDOS.SYS is also called the DOS Kernel. When it gets the control, it:
1. Sets up its internal tables.
2. Makes interrupt vector entries.
3. Initializes all the internal device drivers.

4. It returns control back to SYSINIT.SYS.


When SYSINIT.SYS gets control for the second time, it:

1. Looks for the file CONFIG.SYS in the root directory, loads it into
the memory and executes it. During the execution of CONFIG.SYS,
installable device drivers are loaded and initialized. If CONFIG.SYS does
not exist, some default values are used.

2. Loads the shell which is mentioned in CONFIG.SYS (or
COMMAND.COM by default) and transfers control to it.

COMMAND.COM, on getting the control of the CPU, does the following:


1. It looks for the file AUTOEXEC.BAT and processes it.
2. Displays the prompt to indicate that it is ready to take commands from the
user.

IVT (Interrupt Vector Table)


A table which is created in the lower portion of the memory by DOS. It
keeps the addresses of all the interrupt handling routines. It takes 1 KB of
memory and can store 256 entries.
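The 1 KB figure follows directly from the table layout: 256 entries of 4 bytes each, since every entry is a real-mode far pointer (16-bit offset plus 16-bit segment). A small sketch:

```python
# Why the IVT is 1 KB: each of the 256 entries is a 4-byte far pointer
# (16-bit offset + 16-bit segment) to an interrupt handling routine.
ENTRIES = 256
BYTES_PER_ENTRY = 4

ivt_size = ENTRIES * BYTES_PER_ENTRY
print(ivt_size)   # 1024 bytes = 1 KB

# The vector for interrupt number n sits at linear address n * 4;
# e.g. the DOS services interrupt 21h:
def vector_address(n):
    return n * BYTES_PER_ENTRY

print(hex(vector_address(0x21)))   # 0x84
```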

Partition Table
The information kept at side 0, track 0, sector 1 of the hard disk. It keeps
record of all the partitions made on the disk such as partition size, starting and
ending location and active flag (for booting process).

Device Drivers
Routines which are used to manipulate the hardware devices connected to
the computer. There are two types of device drivers-resident & installable.
Resident Device Drivers: Device drivers which are automatically loaded from
IO.SYS and permanently stored in RAM for later execution.
Installable Device Drivers: Device drivers which are loaded on explicit command
from the user. These commands are usually placed in the CONFIG.SYS file.

PROCESS MANAGEMENT
PROCESS STATES

In a single processor system, only one instruction from one program can be
executed at any one instant, although the processor may be able to execute multiple
programs over a period of time, a facility known as multiprogramming. The
operating system manages these multiple programs by keeping all or part of each
of these processes in memory and switching control between them to give an
impression of simultaneous execution. The process that is currently using the
processor is said to be in the ‘running’ state. The rest of the processes would be in
some state other than the running state. The possible states for a process are as
follows:

Running: The process that is currently being executed. The maximum number
of running processes equals the number of processors in the system.
Ready: Processes that reside in main memory and are prepared to execute
when given the opportunity.
Blocked: A process that is in main memory but cannot execute when given the
control of the processor until some event (such as completion of an
I/O operation) occurs.
New: A process that has just been created but not yet admitted to the pool
of executable processes by the OS.
Exit: A process that has been released from the pool of executable
processes by the OS, either normally or abnormally.

In operating systems that support process swapping, two additional states exist:

Ready Suspended: The process is in secondary memory but is available for
execution when loaded into the main memory.
Blocked Suspended: The process is in secondary memory and is also waiting for an
event.
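The legal moves between the basic five states can be sketched as a small transition table; the transition causes in the comments are illustrative labels, not taken from the text.

```python
# A sketch of the legal moves between the process states listed above.
TRANSITIONS = {
    "new":     {"ready"},                     # admitted by the OS
    "ready":   {"running"},                   # dispatched to the processor
    "running": {"ready", "blocked", "exit"},  # timeout / wait for event / finish
    "blocked": {"ready"},                     # awaited event occurs
    "exit":    set(),                         # released from the pool
}

def move(state, target):
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = "new"
for nxt in ["ready", "running", "blocked", "ready", "running", "exit"]:
    state = move(state, nxt)
print(state)   # exit
```

Note that a new process cannot jump straight to running: it must first be admitted to the ready queue, which is exactly the distinction the ‘New’ state captures.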

PROCESS CREATION & TERMINATION

Reasons of Process Creation


A process may be created due to one of the following four reasons:

1. New Batch Job Whenever the OS encounters a new job in the batch
submitted to it, it creates a new process for it.
2. Interactive Log on A new user logs on to the system.
3. Created by OS to Provide a Service The OS may create a process itself to
provide a service to the user, e.g. a process for printing.
4. Spawned by Parent An already existing process may itself create
another process, in which case the new process is called the child of that process.
Reasons of Process Termination
The common reasons for process termination are as follows:

1. Normal Completion An instruction in the program calls the OS to
terminate it.
2. Time Limit Exceeded The process may be terminated if it exceeds the
time limit provided to it.
3. Memory Unavailable The process requires new memory during its
execution and it is more than the system can provide.
4. Bound Violation The process tries to access some memory location
outside its limits.
5. Arithmetic Error An invalid computation such as divide by zero.
6. I/O Failure Error in I/O, either physical (e.g., disk damaged) or logical
(e.g., file not found). An invalid operation (e.g., reading from a printer) can
also be a reason.
7. Privileged Instruction The process attempts to use an instruction
reserved for the OS.
8. Parent Termination The OS may terminate the child process if the
parent is terminated.
9. Parent Request The parent process may terminate its child.
10. Interactive Log out An interactive user may log out of the system.
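The “spawned by parent” and “normal completion” cases can be observed from any user program. The sketch below uses Python's subprocess module (standing in here for the OS's process-creation service); the one-line child programs are invented for illustration.

```python
# Observing process creation and two termination outcomes from a parent.
import subprocess
import sys

# Parent spawns a child that completes normally (exit status 0).
child = subprocess.run(
    [sys.executable, "-c", "print('child done')"],
    capture_output=True, text=True,
)
print(child.returncode)       # 0 -> normal completion

# A child may also terminate with a nonzero status, which the parent
# can inspect and react to (e.g. treat as an error).
failed = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
print(failed.returncode)      # 3
```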

PROCESS CONTROL STRUCTURES

PROCESS IMAGE
Whenever a new process is created and entered into the ready queue, the OS
creates the following elements for its execution:

User Program It consists of the user program (code) to be executed.
User Data It is the modifiable part of the space allocated to the process and is
directly accessible to the user.
System Stack Each process has at least one system stack that is used during
interrupt handling, system calls and procedure calls.
Process Control Block Information needed by the OS to control the process.
These elements collectively form the ‘Process Image’.
PROCESS CONTROL BLOCK
An important element of the process image is the process control block (PCB). It
contains all the attributes that are required by the OS to manage the process. The
contents of the PCB may be categorized as follows:

• Process Identification
• Process State Information
• Process Control Information

Process Identification
Information relating to process identification includes:

• Identifier of the process itself.
• Identifier of the parent of the process.
• User identifier for whom the process has been created.

In UNIX, e.g., these identifiers are named as pid, ppid & uid respectively.
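On a UNIX-like system these three identifiers can be read directly from a running process; for example, in Python:

```python
# Reading the three UNIX identifiers named above. os.getuid() is a
# POSIX call, hence the hasattr guard for non-UNIX platforms.
import os

print("pid: ", os.getpid())       # identifier of the process itself
print("ppid:", os.getppid())      # identifier of its parent
if hasattr(os, "getuid"):
    print("uid: ", os.getuid())   # identifier of the owning user
```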

Process State Information


This information is basically related to the contents of the processor registers.
These include both the user-visible registers and the non-visible registers (often
called control & status registers and stack pointers).

Process Control Information


Apart from the identification and registers information, the PCB contains other
information required by the OS. This information may be categorized as follows:
Scheduling & State Information This includes information required by the OS to
perform scheduling. It contains information such as the process state, priority, the
event for which the process is blocked (if any), and other information such as the
amount of time the process has been waiting.

Data Structuring A process may be related to other processes in a queue or some
other data structure, e.g., all ready processes may be placed in a queue to wait for
their turn.
Interprocess Communication Information relating to communication between
two independent processes.
Process Privileges Privileges may be related to the amount of memory and types
of instructions available to the process as well as to the use of system services.
Memory Management Information required to assign and manage virtual
memory for the process and map it on the actual available memory.
Resource Ownership Resources controlled by the process such as opened
files.
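A minimal sketch of a PCB grouping the three categories of information described above; the field names are illustrative, since a real kernel keeps these in its own internal structures.

```python
# A toy Process Control Block: identification, state information,
# and control information, as outlined in the text above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Process identification
    pid: int
    ppid: int
    uid: int
    # Process state information (register snapshot saved on a switch)
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    # Process control information
    state: str = "new"
    priority: int = 0
    open_files: list = field(default_factory=list)

pcb = PCB(pid=42, ppid=1, uid=1000)
pcb.state = "ready"
pcb.open_files.append("/tmp/report.txt")
print(pcb.pid, pcb.state, pcb.open_files)
```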

INSTRUCTION EXECUTION
The processing required for a single instruction is called ‘instruction cycle’.
Its two basic steps are:

Fetch Cycle The processor reads an instruction from the main memory.
Execute Cycle The processor executes this instruction.
Program execution stops only when:

1. Machine is turned off.


2. An unrecoverable error occurs.
3. An instruction is executed which asks the processor to stop execution.

At the beginning of each instruction cycle, the processor fetches an
instruction from the memory into one of its registers (usually called the Instruction
Register (IR)). Another register called the Program Counter (PC) keeps the address
of the memory location from where the next instruction is to be fetched. The
processor always increments this register after each instruction fetch, unless told
otherwise.

The instruction is in the form of a binary code. The first few bits of this
code contain the ‘op-code’ which tells the processor what operation is to be
performed. The remaining bits give the address of the location where the operation
is to be performed. The results of these operations can be temporarily stored in a
register called the ‘Accumulator’.

After executing one instruction, the processor reads the next address from the
PC and loads the instruction from that location into the IR for execution. In this
way, the program is executed, one instruction at a time, in a sequence. This
sequence can only be changed by an instruction which tells the processor to load a
new address into the PC.

The operations that are performed usually fall in the following four
categories:

1. Transferring data from the processor to memory and vice versa.
2. Transferring data to or from a peripheral device using an I/O module.
3. Performing some arithmetic or logical operations.
4. Altering the sequence of execution.
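The fetch-execute cycle described above can be sketched as a toy interpreter; the four-word "instruction set" (LOAD/ADD/JMP/HALT) is invented for illustration.

```python
# A toy fetch-execute loop: PC, IR and Accumulator as in the text.
memory = [
    ("LOAD", 5),    # acc = 5
    ("ADD", 3),     # acc = acc + 3
    ("JMP", 4),     # load a new address into the PC
    ("ADD", 100),   # skipped because of the jump
    ("HALT", 0),
]

pc, acc = 0, 0
while True:
    ir = memory[pc]      # fetch cycle: memory -> Instruction Register
    pc += 1              # PC is incremented after every fetch...
    op, operand = ir     # decode the op-code and its operand
    if op == "LOAD":
        acc = operand
    elif op == "ADD":
        acc += operand
    elif op == "JMP":
        pc = operand     # ...unless the instruction changes it
    elif op == "HALT":
        break            # execution stops on an explicit halt

print(acc)   # 8
```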

INTERRUPTS
An interrupt is a signal which causes the hardware to transfer program
control to some specific location in the main memory, thus breaking the normal
flow of the program.

TYPES OF INTERRUPTS
Interrupts can be of the following four types:
1. Program Generated as a result of an instruction execution, such as
overflow, division by zero etc.
2. Timer Generated by a timer which enables the OS to perform certain
functions on a regular basis.
3. I/O Generated by an I/O controller, to get the attention of the CPU.
4. Hardware Failure Generated by a failure such as power failure.

INTERRUPTS AND INSTRUCTION CYCLE


An instruction cycle comprises a ‘fetch cycle’ and an ‘execute cycle’. In this
setup of instruction execution, a program which gets control of the CPU loses
control only when it is normally or abnormally terminated. With the introduction
of an ‘interrupt cycle’, the OS becomes able to take control from a process during
its execution and give it to some other process. The control can then be given back
to the interrupted process if desired.

After every ‘execute’ cycle, the processor checks for the occurrence of an
interrupt. In case of an interrupt, the processor suspends the execution of the
current program and executes an ‘interrupt handling routine’. This routine is
usually part of the OS. The interrupt handling routine determines the nature of the
interrupt and performs the required action. When the execution of this routine is
completed, the processor can resume execution of the interrupted program at the
point of interruption.

Example Of Interrupt Execution


Suppose a user program is in execution. During its execution, it reaches an
instruction which asks for an I/O operation. On reaching this instruction, the
following steps are performed in sequence:
1. The processor suspends the execution of the current program and saves
some information about it in the memory.

2. It checks the type of request and goes to the routine which will service that
request.

3. This routine prepares the I/O device for use and asks it to perform the
desired operation.

4. While I/O device is busy transferring data to or from the main memory, the
control may be given to another user program which is ready to execute.

5. The I/O device finishes with the current I/O operation and issues an
interrupt signal to the processor.

6. The processor checks for the interrupt signal after each ‘execute cycle’.
When it finds the signal, it sends an acknowledgment signal to the I/O
device.

7. The processor saves some necessary information about the currently
running process in the memory and determines the interrupt handling
routine which is to be executed.

8. When the interrupt handling routine is executed, it asks the I/O device to
transfer the next block of data to or from the memory.

9. The I/O device again starts exchange of data from the memory and the
processor starts executing the interrupted program by restoring the saved
information from the main memory.

This operation (point 5-9) may be repeated until the whole I/O transfer is
complete.

MULTIPLE INTERRUPTS
It is possible for an interrupt to occur while another is being processed. This
situation can be handled in two ways:

1. While one interrupt is being processed, the OS can disable other interrupts
from occurring. If an interrupt occurs during this time period, it is kept
pending until the interrupts are enabled. It is a simple approach but it does
not consider priority or time-critical needs.
2. In the second approach, each interrupt is given a priority level such that an
interrupt of higher priority can cause a lower priority interrupt to be itself
interrupted. After processing the higher priority interrupt, control is given
back to the lower priority interrupt which was interrupted. In this way, the
efficiency of the system is increased.

I/O COMMUNICATION TECHNIQUES


The following techniques may be used for I/O operations:
1. Programmed I/O
2. Interrupt-Driven I/O
3. Direct Memory Access (DMA)

PROGRAMMED I/O
The most significant feature of programmed I/O is the lack of interrupts. In
this technique, an I/O operation proceeds as follows:

During the execution of a program, the processor reaches an instruction


which requires some I/O operation. The processor executes the instruction by
issuing a command to the appropriate device controller. The device controller
performs the given task but does not signal the processor to ask for further
commands by generating any interrupt. As the device controller does not interrupt
the processor, it is the responsibility of the processor to check the status of the
device controller periodically until it finds that the operation is complete.
Thus the processor is needlessly bound while it waits for the completion of the I/O
operation. This condition of the processor is called ‘busy waiting’.
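The busy-waiting behaviour can be sketched with a simulated device whose status flag the processor must poll; the device class and tick count are invented for illustration.

```python
# Simulating "busy waiting": the processor repeatedly polls a device
# status flag instead of doing useful work.
class FakeDevice:
    def __init__(self, ticks):
        self.ticks = ticks           # polls remaining until completion

    def start_io(self):
        pass                         # the issued I/O command (a no-op here)

    def busy(self):
        self.ticks -= 1
        return self.ticks > 0        # still transferring?

device = FakeDevice(ticks=1000)
device.start_io()

wasted_polls = 0
while device.busy():                 # the CPU is tied up in this loop
    wasted_polls += 1

print(wasted_polls)   # 999 status checks that did no useful work
```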

INTERRUPT-DRIVEN I/O
As opposed to programmed I/O, the interrupt-driven I/O uses interrupts
during I/O operations.

In programmed I/O, the processor has to wait needlessly for the I/O device
to send or receive more data which slows down the performance of the system.
Interrupt driven I/O is used to utilize this wasted time.

In interrupt driven I/O, the processor issues the I/O command to the
appropriate device controller and, instead of waiting for the I/O operation to
complete, starts doing some other work. It does not check the device controller
periodically for completion of the operation. When the device finishes with its
task, it signals an interrupt to the processor. The processor checks for the
occurrence of any interrupt during each instruction cycle and, finding an interrupt,
responds to the I/O device. According to the requirement, it may ask the device to
transfer more data or tell it that the whole I/O transfer is complete.
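A rough illustration of the difference from programmed I/O: here a thread stands in for the device controller and a threading.Event for the interrupt signal, so the "processor" (the main thread) keeps computing until the signal arrives. The names and work counter are invented for illustration.

```python
# Sketch of interrupt-driven I/O: the main thread does other work and
# only reacts when the "device" raises its interrupt signal.
import threading

interrupt = threading.Event()

def device_controller():
    # ...the device performs the transfer on its own, then interrupts...
    interrupt.set()

t = threading.Thread(target=device_controller)
t.start()

other_work = 0
while not interrupt.is_set():   # one check per "instruction cycle"
    other_work += 1             # useful work instead of busy waiting

t.join()
print("I/O complete; did", other_work, "units of other work meanwhile")
```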

DIRECT MEMORY ACCESS (DMA)


Interrupt-driven I/O is more efficient as compared to programmed I/O, but it
still uses a lot of processor time, because every word of data that goes from memory
to an I/O device, or vice versa, must pass through the processor. It may work well
with slow devices like a keyboard, but with high-speed devices like a hard disk, it
becomes too inefficient.

To counter this situation, the DMA technique is used. In this technique, the
device controller transfers an entire block of data at a time without involving the
CPU.

Thus the interrupt is generated per block of transfer instead of per word. In
order to use DMA technique, the DMA controller is given the following
information:
• Type of operation
• Address of I/O device
• Starting memory location to read from or write to
• Number of words to be read or written

With this information, the device controller is able to transfer the entire
block of data, one word at a time, directly to or from the memory. The processor is
involved only at the beginning and end of the transfer.
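A sketch of this setup: the "DMA controller" below is an ordinary function given the four pieces of information listed above, and it moves a whole block with a single interrupt at the end; the function name and data are invented for illustration.

```python
# Modelling a DMA block transfer as one slice copy, with a single
# interrupt per block rather than one per word.
def dma_transfer(memory, op, device_buf, start, count):
    if op == "read":                             # device -> memory
        memory[start:start + count] = device_buf[:count]
    elif op == "write":                          # memory -> device
        device_buf[:count] = memory[start:start + count]
    return 1                                     # one interrupt per block

memory = bytearray(16)
device = bytearray(b"DISKBLOCKDATA\x00\x00\x00")

interrupts = dma_transfer(memory, "read", device, start=4, count=8)
print(bytes(memory))   # the 8-byte block landed at offset 4
print(interrupts)      # 1 interrupt for the whole block, not 8
```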

MICROSOFT WINDOWS
INTRODUCTION TO OPERATING SYSTEM & WINDOWS

Operating System is an interface between the user and the computer.


The Operating System provides us the facility to work with the computer; it
basically makes the PC usable. The most commonly used operating system for
PCs is DOS.

DOS stands for Disk Operating System. Normally PCs use this operating
system because it is rather easy to work with.
DOS commands have a syntax with which we work, and it is necessary to
understand the syntax and usage of the commands.

“Windows” is a program that provides a graphical user interface (GUI) for
personal computers. When using Windows, the computer user interacts with the
computer visually; rather than typing commands, the user moves a small arrow
called a pointer around the screen and selects commands. Programs appear on the
screen in rectangular areas called windows. Each window can be sized and moved
around the screen. Multiple programs can appear on the screen at the same time
and they can execute simultaneously.

With version 3.1, Windows employs an advanced technology called
object linking and embedding (OLE). Using OLE you can paste, or embed,
information from one Windows program into another. For example, you can paste
graphics generated by Windows Paintbrush into a Windows Write document. Then,
from within the Write document, you can double-click on those graphics to invoke
Paintbrush and edit them. OLE allows Windows programs to work together very
easily.

Windows is a boon for users of MS-DOS personal computers. They no
longer need to understand the complex syntax of MS-DOS commands. Windows
provides a standard, usable, powerful interface for all programs under MS-DOS.

WINDOWS OPERATING MODES

Windows operates differently on different systems. It has two operating
modes, one for computers based on the 80286 microprocessor (AT-class
computers), and one for computers based on the 80386 microprocessor. When
Windows runs, it detects what kind of computer you have and executes in a mode
appropriate to your type of machine. (Windows 3.0 has an operating mode called
Real mode that supports 8088 or 8086 microprocessors. Real mode has been
eliminated from Windows 3.1.)

The two modes in which Windows 3.1 operates are described below:

In Standard Mode, Windows requires at least 1 MB of memory and an 80286,
80386 or 80486 microprocessor. Windows can access up to 16 MB of
physical memory in Standard mode, and can switch between multiple DOS
applications. Those applications have to operate using the full screen; they cannot
appear in windows. In Standard mode, multiple DOS applications do not multitask,
but Windows applications do.
Enhanced Mode requires an 80386 or 80486 microprocessor and at least 2
MB of memory. In 386 enhanced mode, Windows takes advantage of the advanced
features of the 80386 and 80486 microprocessors by using a portion of the hard
disk as if it were real memory. Hard disk space used as memory is called
virtual memory.

Although Windows automatically detects the type of machine you have and
uses the appropriate operating mode, you may sometimes want Windows to operate
in another mode. For example, 386 enhanced mode offers the ability to multitask
DOS applications and to use virtual memory, while Standard mode actually makes
slightly more efficient use of memory. If you have an 80386-based system with
little memory and less hard disk space, and you are not going to run DOS
applications, you may want Windows to run in Standard mode.

You can make Windows run in any mode you choose using a command line
parameter: a series of characters placed after a command that causes the
command to execute in a specific manner. To run Windows in the mode of your
choice, do the following:

From the DOS prompt, type WIN /S to run in Standard mode or WIN /3 to
run in 386 Enhanced mode, then press Enter.

WINDOWS APPLICATIONS

Windows applications, which are designed to take advantage of Windows
features, require Windows version 3.0 or later in order to run. Windows
applications generally are highly graphical; menus, commands and dialog boxes
are visually the same as those in Windows itself. Several Windows applications are
included with Windows.

NON-WINDOWS APPLICATIONS

Non-Windows applications are designed to run with MS-DOS, but not
specifically with Windows. Windows provides the facility to run these applications
from within Windows. The method to run these applications from the Windows
environment is discussed later in this chapter.

RUNNING WINDOWS & NON - WINDOWS APPLICATIONS

In Windows we mostly create icons, especially for Windows applications. We
click the icons to run applications. If an icon does not exist, we can create one as
well. How to create icons is discussed in “Creating ICONS & GROUPS”.
