
Unit-1 Introduction to O/S

An operating system (OS) is a collection of system programs that together control the operation of a
computer system. An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs; it performs all the basic tasks
like process management, memory management, file management, handling input and output, and
controlling peripheral devices.
Some popular operating systems include Linux, Windows, UNIX, OS/400, etc.
Goals:
To make a computer user friendly.
Proper utilization of computer resources.
To hide the details of the hardware resources from the users.
To provide users a convenient interface to use the computer system.
Provides an environment within which other programs can do useful work
Objectives of OS
The two main objectives of operating system use are:
OS as an Extended Machine
OS as a Resource Manager
OS as an Extended Machine
A computer system is composed of hardware and application software interacting with each other to
perform computation. For any application to perform its task, it must deal with the hardware for
memory management, process control, storage, and so on. But for a programmer, it would be tedious to
do hardware-level machine-language programming for each application development. The operating
system provides a high-level abstraction to deal with such issues: a set of programs that handles the
complex hardware-level programming and gives the programmer an easy interface to develop
applications without hardware considerations.
For example, consider the case of disks. A disk contains a collection of files. Each file can be opened,
read, written, and closed. Hardware details, such as whether recording should use modified FM or the
current state of the motor, are masked and handled effectively by the OS.
OS as a Resource Manager
A computer system has various resources such as memory, processor, disk, printer, etc.
It is necessary to keep track of resource allocation and usage in order to resolve conflicts when two or
more processes request the same resource. So memory and other resources must be managed among
the processes and users. The OS keeps track of the current usage status of each resource and who is
using it, and grants resource requests when the resource is available.
E.g., if three programs request the printer at the same time, the OS schedules the tasks based on
priority; this can be managed by buffering all output destined for the printer on disk.

Types of Operating System


Batch OS:
The users of a batch operating system do not interact with the computer directly. Each user prepares his
job on an off-line device like punched cards and submits it to the computer operator. To speed up
processing, jobs with similar needs are batched together and run as a group. A job gets assigned to the
CPU only when the execution of the previous job completes.
E.g. payroll systems, bank statements, etc.
Advantages:
1. Multiple users can share the batch system.
2. Idle time for the batch system is less.
3. Easy to manage large amounts of work.
Disadvantages:
1. It is sometimes costly.
2. Hard to debug.
3. Operators must be familiar with the batch system.

Interactive System:
The user can interact directly with the OS to supply commands and data as application programs
execute.
The user receives the output immediately.
There is direct two-way communication between the user and the computer.
A user interface is provided, which may be a command line or a GUI.
Example: Mac and Windows, etc.
Time-Sharing System (Multitasking)
The processor's time is shared among multiple users simultaneously.
The main objective is to accomplish tasks by switching between them.
It reduces CPU idle time.
Example: Multics, Unix, etc.
Advantages
1. This type of operating system avoids duplication of software.
2. It reduces CPU idle time.
Disadvantages
1. Time sharing has a problem of reliability.
2. Questions about the security and integrity of user programs and data can be raised.
Real-Time OS
Real-time OSs are usually built for dedicated systems to accomplish a specific set of tasks within
deadlines.
Hard Real Time:
In a hard RTOS, the deadline is handled very strictly, which means that a given task must start
executing at the specified scheduled time and must be completed within the assigned time duration.
Example: medical critical-care systems, aircraft systems, etc.
Soft Real Time:
In this type of RTOS, a deadline is assigned to a specific job, but a delay of a small amount of time is
acceptable. So deadlines are handled softly by this type of RTOS.
Example: online transaction systems and livestock price quotation systems.

Functions of Operating System


Memory Management 
The operating system manages the Primary Memory or Main Memory. Main memory is made up of a
large array of bytes or words where each byte or word is assigned a certain address. Main memory is
fast storage and it can be accessed directly by the CPU. For a program to be executed, it should be first
loaded in the main memory. An Operating System performs the following activities for memory
management: 
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which
memory addresses have already been allocated, and which have not yet been used. In
multiprogramming, the OS decides the order in which processes are granted access to memory, and for
how long. It allocates memory to a process when the process requests it and deallocates the memory
when the process has terminated or is performing an I/O operation.
Processor Management 
In a multiprogramming environment, the OS decides the order in which processes have access to the
processor, and how much processing time each process has. This function of OS is called process
scheduling. An Operating System performs the following activities for processor management. 
Keeps track of the status of processes. The program which performs this task is known as a traffic
controller. Allocates the CPU (that is, the processor) to a process. Deallocates the processor when a
process no longer requires it.

Device Management – 
An OS manages device communication via the devices' respective drivers. It performs the following
activities for device management: keeps track of all devices connected to the system; designates a
program responsible for every device, known as the Input/Output controller; decides which process
gets access to a certain device and for how long; allocates devices in an effective and efficient way; and
deallocates devices when they are no longer required.
 

File Management – 
A file system is organized into directories for efficient or easy navigation and usage. These directories
may contain other directories and other files. An Operating System carries out the following file
management activities. It keeps track of where information is stored, user access settings and status of
every file and more… These facilities are collectively known as the file system. 
Control over system performance – 
Monitors overall system health to help improve performance. It records the response time between
service requests and system response to give a complete view of system health. This can help
improve performance by providing important information needed to troubleshoot problems.

Unit-2 O/S Structure


Operating System Structure
The structure of an Operating System determines how it has been designed and how it functions. The
structure of the OS depends mainly on how the various common components of the operating system
are interconnected and melded into the kernel. There are numerous ways of designing the structure of
an operating system. The most common structures of operating systems are:
Layered system
Kernel system
Client-server model
Virtual machine

Layered System
The operating system is divided into a number of layers (levels), each built on top of lower layers. The
bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. With modularity,
layers are selected such that each uses the functions (operations) and services of only lower-level layers.
A layered design was first used in the THE operating system. Its six layers are as follows:
Layer 0 was responsible for the multiprogramming aspect of the operating system. It decided which
process was allocated to the CPU. It dealt with interrupts and performed the context switch when a
process change was required.
Layer 1 was concerned with allocating memory to processes.
Layer 2 dealt with inter-process communication and communication between the operating system and
the console.
Layer 3 managed all I/O between the devices attached to the computer. This included buffering
information from the various devices.
Layer 4 was where the programs used by the user ran; these programs did not have to worry about I/O
management, operator/process communication, memory management, or processor allocation.
Layer 5 was the outermost layer, where the system operator process was located. This operator process
controlled the system as a whole.
Kernel System
The kernel is the core component of an operating system; without it, the OS cannot work.
The kernel is the nervous system of the OS.
It is the central component of the operating system that manages the operation of the computer and its
hardware; it basically manages memory and CPU time.
The kernel acts as a bridge between applications and the data processing performed at the hardware
level, using inter-process communication and system calls.
The kernel uses system calls to provide all its functions, like CPU scheduling, memory management, etc.
The main functions of the kernel are:
It provides mechanisms for the creation and deletion of processes.
It provides CPU scheduling, memory management, and I/O management.
It provides mechanisms for inter-process communication.
There are mainly two types of kernel.
Monolithic/Macro kernel
Micro / Exo-kernel
Monolithic/Macro kernel:
It is the traditional, older approach to kernel design.
In a monolithic kernel, every basic system service, like process and memory management, interrupt
handling, I/O communication, and the file system, runs in kernel space.
Every component of the operating system is contained in the kernel and can directly communicate with
the others by using function calls.
The kernel typically executes with unrestricted access to the computer system.
This approach provides rich and powerful hardware access.
Ex: Linux, UNIX, OpenVMS, etc.
Advantages:
The execution of a monolithic kernel is quite fast.
Smaller in source and compiled forms.
System calls are used to do operations.
Less code generally means fewer bugs, and security problems are also fewer.
Disadvantages:
Coding in kernel space is hard.
Debugging is harder.
A monolithic kernel must be written for each new architecture on which the OS is to be used.

Micro / Exo-kernel
In a microkernel, the kernel provides basic functionality such as IPC, memory management, and
scheduling that allows the execution of servers as separate programs. The kernel is broken into
separate processes known as servers. Some servers run in user space and some in kernel space.
Communication in the microkernel is done via message passing; the servers communicate through
inter-process communication.
Servers invoke "services" from each other by sending messages.
If a service fails, the rest of the OS keeps working.
Advantages:
Smaller in size.
Easy to maintain.
New services can be added easily.
Portable.
Crash resistant.
Disadvantages:
Services provided by a microkernel are expensive.
Execution is slow.
Client-server Model
One of the recent advances in computing is the idea of the client/server model.
A server provides services to any client that requests them.
This model is heavily used in distributed systems, where a central computer acts as a server to many
other computers.
In this model, the kernel handles the communication between the clients and the servers.
The operating system (OS) is split up into parts, each of which handles only one facet of the system,
such as file service, terminal service, process service, or memory service; each part becomes small and
manageable.
Advantages:
It can result in a minimal kernel.
It adapts well to use in distributed systems.
Crash resistant.
Disadvantages:
Security problems.
Overloading.
Virtual machine
A virtual machine is an illusion of a real machine.
It is created by a real machine's operating system, which makes a single real machine appear to be
several real machines.
A virtual machine setup can actually run several operating systems at once, each of them on its own
virtual machine.
A virtual machine provides an interface identical to the underlying bare hardware.
The operating system creates the illusion of multiple processes, each executing on its own processor
with its own memory.
The resources of the physical computer are shared to create the virtual machines.
Advantages:
Provides a robust level of security.
Allows multiple OSs to run on the same machine at the same time.
Easier system upgrades.
Disadvantages:
Less efficient.
Difficulty in sharing hard drives.
Shell
The shell is the outer layer of an operating system; it is the part of the operating system that interfaces
with the user.
A shell is a user interface for access to an operating system's services.
Most often the user interacts with the shell using a command-line interface.
It is the interface that accepts, interprets, and then executes commands from the user.
There are two types:
Command line interpreter (CLI)
Graphical user interface (GUI)

Unit-3 Process management


A process is a sequential program in execution. A process defines the
fundamental unit of computation for the computer. The components of a process
are:
1. Object program
2. Data
3. Resources
4. Status of the process execution.
Processes vs Programs
- A process is a dynamic entity, that is, a program in execution. A process is a sequence of instruction
executions. A process exists for a limited span of time. Two or more processes could be executing the
same program, each using its own data and resources.
- A program is a static entity made up of program statements. A program contains the instructions. A
program exists at a single place in space and continues to exist. A program does not perform any action
by itself.

Process Life Cycle

When a process executes, it changes state. The process state is defined as the current activity of the
process. Fig. 3.1 shows the general form of the process state transition diagram. There are five states,
and each process is in exactly one of them. The states are listed below.

1. New : A process that has just been created.


2. Ready : Ready processes are waiting to have the processor allocated to them by the operating system
so that they can run.
3. Running : The process that is currently being executed. A running process possesses all the resources
needed for its execution, including the processor.
4. Waiting : A process that cannot execute until some event occurs such as the completion of an I/O
operation. The running process may become suspended by invoking an I/O module.
5. Terminated : A process that has been released from the pool of executable processes by the operating
system.
Whenever a process changes state, the operating system reacts by placing the process's PCB in the list
that corresponds to its new state. Only one process can be running on any processor at any instant,
while many processes may be in the ready and waiting states.
Suspended Processes
Characteristics of a suspended process:
1. A suspended process is not immediately available for execution.
2. The process may or may not be waiting on an event.
3. To prevent its execution, a process may be suspended by the OS, its parent process, the process
itself, or an agent.
4. The process may not be removed from the suspended state until the agent orders the removal.

Swapping is used to move all of a process from main memory to disk: the OS swaps out an entire
process by putting it in the suspended state and transferring it to disk.
Reasons for process suspension
1. Swapping : OS needs to release required main memory to bring in a process that is
ready to execute.
2. Timing : Process may be suspended while waiting for the next time interval.
3. Interactive user request : Process may be suspended for debugging purpose by user.
4. Parent process request : To modify the suspended process or to coordinate the activity of various
descendants.

Process Control Block (PCB)


Each process is represented by a process control block (PCB). The PCB is the data structure used by
the operating system to group all the information it needs about a particular process.
1. Pointer : Pointer points to another process control block. Pointer is used for maintaining the
scheduling list.
2. Process State : Process state may be new, ready, running, waiting and so on.
3. Program Counter : It indicates the address of the next instruction to be executed for this process.
4. Event information : For a process in the blocked state this field contains information concerning the
event for which the process is waiting.
5. CPU registers : These include general-purpose registers, stack pointers, index registers,
accumulators, etc. The number and type of registers depend entirely on the computer architecture.
6. Memory Management Information : This information may include the value of base and limit
register. This information is useful for deallocating the memory when the process terminates.
7. Accounting Information : This information includes the amount of CPU and real time used, time
limits, job or process numbers, account numbers etc
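
As a rough illustration, the PCB fields listed above can be collected into one C structure. This is a
minimal sketch with illustrative field names, not the layout of any real OS (Linux's task_struct, for
example, holds far more):

#include <stdint.h>

/* Simplified PCB sketch; field names are illustrative only. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    struct pcb *next;              /* pointer for maintaining scheduling lists */
    enum proc_state state;         /* new, ready, running, waiting, ... */
    uint64_t program_counter;      /* address of the next instruction */
    uint64_t registers[16];        /* saved general-purpose registers, etc. */
    uint64_t mem_base, mem_limit;  /* memory-management information */
    long cpu_time_used;            /* accounting information */
    int pid;                       /* process number */
    int event_id;                  /* event awaited while in the waiting state */
};

int main(void) {
    struct pcb p = {0};            /* a freshly created process... */
    p.state = NEW;                 /* ...starts in the new state */
    p.pid = 1;
    return 0;
}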

Process Management / Process Scheduling


1. A multiprogramming operating system allows more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.
2. The scheduling mechanism is the part of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Scheduling Queues
When processes enter the system, they are put into a job queue. This queue consists of all
processes in the system. The operating system also has other queues.
A device queue is the list of processes waiting for a particular I/O device; each device has its own
device queue. The figure below shows the queuing diagram of process scheduling. In fig. 3.3, each
queue is represented by a rectangular box, the circles represent the resources that serve the queues, and
the arrows indicate the flow of processes in the system.
Queues are of two types: the ready queue and a set of device queues. A newly arrived process is put in
the ready queue, where processes wait for the CPU to be allocated to them. Once the CPU is assigned
to a process, the process executes. While a process is executing, one of several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in
the ready queue.

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −

Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the queue and loads them into memory for execution,
where they become candidates for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming
is stable, then the average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may not be available or minimal. Time-sharing operating
systems have no long term scheduler. When a process changes the state from new to ready, then there
is use of long-term scheduler.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance
with the chosen set of criteria. It carries out the transition of processes from the ready state to the
running state: the CPU scheduler selects a process from among the processes that are ready to execute
and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces
the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot
make any progress towards completion. In this condition, to remove the process from memory and
make space for other processes, the suspended process is moved to secondary storage. This is called
swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.

A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to
discuss in this chapter −
First-Come, First-Served (FCFS) Scheduling
Shortest-Job-Next (SJN) Scheduling
Priority Scheduling
Shortest Remaining Time
Round Robin(RR) Scheduling
Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running
process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)


Jobs are executed on a first come, first served basis.
It is a non-preemptive scheduling algorithm.
Easy to understand and implement.
Its implementation is based on a FIFO queue.

Poor in performance, as the average wait time is high.
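
As a minimal sketch of the idea, the following C program computes the average waiting time under
FCFS for three hypothetical burst times (24, 3, and 3, all assumed to arrive at time 0); each process
waits for the sum of all bursts before it:

#include <stdio.h>

/* FCFS: processes are served strictly in arrival order. */
int main(void) {
    int burst[] = {24, 3, 3};          /* hypothetical CPU burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;            /* process i waits for all earlier bursts */
        wait += burst[i];
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Here the waits are 0, 24, and 27, an average of 17; serving the short jobs first would cut this sharply,
which is the motivation for SJN below.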

Shortest Job Next (SJN)


This is also known as shortest job first, or SJF.
This is a non-preemptive scheduling algorithm (its preemptive variant, shortest remaining time, is
described below).
Best approach to minimize waiting time.
Easy to implement in batch systems where the required CPU time is known in advance.
Impossible to implement in interactive systems where the required CPU time is not known.
The processor should know in advance how much time the process will take.
Given: a table of processes with their arrival time and execution time.
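
Since the table itself is not reproduced here, the sketch below uses hypothetical burst times (6, 8, 7, 3)
and assumes every job arrives at time 0; it repeatedly runs the shortest unfinished job. Comparing on a
priority field instead of the burst time would turn the same loop into priority scheduling:

#include <stdio.h>

/* Non-preemptive SJF: always pick the unfinished job with the
   smallest burst time (all jobs assumed available at time 0). */
int main(void) {
    int burst[] = {6, 8, 7, 3};        /* hypothetical burst times */
    int n = sizeof burst / sizeof burst[0];
    int done[4] = {0};                 /* matches the 4 bursts above */
    int clock = 0, total_wait = 0;

    for (int run = 0; run < n; run++) {
        int next = -1;
        for (int i = 0; i < n; i++)    /* find the shortest unfinished job */
            if (!done[i] && (next < 0 || burst[i] < burst[next]))
                next = i;
        total_wait += clock;           /* the chosen job has waited until now */
        clock += burst[next];
        done[next] = 1;
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}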

Priority Based Scheduling


Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms
in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first come, first served basis.
Priority can be decided based on memory requirements, time requirements, or any other resource
requirement.

Given: a table of processes with their arrival time, execution time, and priority. Here we consider 1 to
be the lowest priority.

Shortest Remaining Time


Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion, but it can be preempted by a newly ready
job with a shorter time to completion.
Impossible to implement in interactive systems where the required CPU time is not known.
It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling


Round Robin is a preemptive process scheduling algorithm.
Each process is provided a fixed time slice to execute, called a quantum.
Once a process has executed for the given time period, it is preempted, and another process executes
for its own time period.
Context switching is used to save the states of preempted processes.
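
A minimal sketch of the quantum mechanism, assuming a hypothetical quantum of 4 and burst times
10, 5, and 8. Context-switch cost is ignored, and because the fixed set of processes is scanned
cyclically, the scan order matches the ready-queue order:

#include <stdio.h>

/* Round Robin: each process runs for at most one quantum, then the
   next ready process gets the CPU. */
int main(void) {
    int remaining[] = {10, 5, 8};      /* hypothetical remaining burst times */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 4, clock = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;            /* context-switch time ignored */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at time %d\n", i, clock);
                left--;
            }
        }
    }
    return 0;
}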

What is Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps track
of which instruction to execute next, system registers which hold its current working variables, and a
stack which contains the execution history.
A thread shares with its peer threads some information, like the code segment, data segment, and open
files. When one thread alters a code-segment memory item, all other threads see that. A thread is also
called a lightweight process. Threads provide a way to improve application performance through
parallelism. Threads represent a software approach to improving the performance of the operating
system by reducing overhead; a thread is equivalent to a classical process.
Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing network
servers and web servers. They also provide a suitable foundation for parallel execution of applications
on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a
multithreaded process.

Difference between Process and Thread.


Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
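
As a small illustration, assuming a POSIX system (compile with -pthread), the sketch below creates
two peer threads that share the process's global data but each run their own flow of control:

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function with its own stack and program counter. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);      /* wait for both peer threads */
    return 0;
}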

Types of Thread
Threads are implemented in the following two ways −
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread library
contains code for creating and destroying threads, for passing message and data between threads, for
scheduling thread execution and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads


In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application can be
programmed to be multithreaded. All of the threads within an application are supported within a single
process.
The kernel maintains context information for the process as a whole and for individual threads within
the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation,
scheduling, and management in kernel space. Kernel threads are generally slower to create and manage
than user threads.
Advantages
The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode switch to the
Kernel.

Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris
is a good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the
entire process. There are three multithreading models:

Many to Many Model


The many-to-many model multiplexes any number of user threads onto an equal or smaller number of
kernel threads.
The following diagram shows the many-to-many threading model, where 6 user-level threads are
multiplexed onto 6 kernel-level threads. In this model, developers can create as many user threads as
necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This
model provides the best accuracy on concurrency, and when a thread performs a blocking system call,
the kernel can schedule another thread for execution.

Many to One Model


The many-to-one model maps many user-level threads to one kernel-level thread. Thread management
is done in user space by the thread library. When a thread makes a blocking system call, the entire
process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to
run in parallel on multiprocessors.

If the user-level thread libraries are implemented on an operating system whose kernel does not
support them, the system uses the many-to-one relationship model.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to run when a
thread makes a blocking system call. It allows multiple threads to execute in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel
thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship model.

Differences

Inter Process Communication


Inter-process communication (IPC) refers to a set of mechanisms that the operating system must
support in order to permit multiple processes to interact with each other. This includes mechanisms
related to synchronization, coordination, and communication.

IPC mechanisms are broadly categorized as either message-based or memory-based. Message-based
IPC mechanisms include sockets, pipes, and message queues. Memory-based IPC utilizes shared
memory, which may be in the form of unstructured shared physical memory pages or memory-mapped
files that can be accessed by multiple processes.
In the shared-memory model, a region of memory shared by cooperating processes is established.
Processes can then exchange information by reading and writing data to the shared region. In the
message-passing model, communication takes place by means of messages exchanged between the
cooperating processes.

The two communications models are contrasted in the figure below:

Race Condition
A race condition is a situation that may occur inside a critical section, when the result of multiple
threads executing in the critical section differs according to the order in which the threads execute.

Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
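
A minimal demonstration using POSIX threads (an assumed API; the text names no particular one):
two threads increment a shared counter, and a mutex makes each increment atomic. Removing the
lock/unlock pair reintroduces the race, and the final count becomes unpredictable:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                      /* shared variable */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry section */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* always 200000 with the lock */
    return 0;
}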

Critical Section
The critical section is a code segment where shared variables can be accessed. Atomic action is
required in a critical section, i.e., only one process can execute in its critical section at a time. All the
other processes have to wait to execute in their critical sections.
The critical section is given as follows:
do{
Entry Section
Critical Section
Exit Section
Remainder Section
} while (TRUE);

The critical section problem needs a solution to synchronize the different processes. The solution to the
critical section problem must satisfy the following conditions −

Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any time. If any
other processes require the critical section, they must wait until it is free.
Progress
Progress means that if a process is not using the critical section, then it should not stop any other
process from accessing it. In other words, any process can enter a critical section if it is free.
Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait
endlessly to access the critical section.

Semaphore
A semaphore is a signalling mechanism and a thread that is waiting on a semaphore can be signalled by
another thread. This is different than a mutex as the mutex can be signalled only by the thread that
called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
The wait operation decrements the value of its argument S if it is positive. If S is zero (or negative),
the caller waits, busy-waiting in the loop below, until S becomes positive.
wait(S){
while (S<=0);
S--;
}

The signal operation increments the value of its argument S.


signal(S){
S++;
}
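
As a usage sketch, POSIX semaphores (an assumed API whose sem_wait and sem_post correspond to
the wait and signal operations above) can protect a critical section when the semaphore is initialized to
1, i.e., used as a binary semaphore:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;
int shared = 0;

void *task(void *arg) {
    sem_wait(&s);                      /* wait: blocks while the value is 0 */
    shared++;                          /* critical section */
    sem_post(&s);                      /* signal: wakes one waiter, if any */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* 2: the increments never interleave */
    sem_destroy(&s);
    return 0;
}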

Classical IPC Problem:


Dining-Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one
chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two
chopsticks adjacent to him. A chopstick may be picked up by either of its adjacent philosophers, but
not by both. This problem involves the allocation of limited resources to a group of processes in a
deadlock-free and starvation-free manner.
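
A sketch of one standard deadlock-free solution, assuming POSIX threads and semaphores (one binary
semaphore per chopstick): making alternate philosophers pick up their chopsticks in the opposite order
breaks the circular wait:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define K 5
sem_t chopstick[K];                    /* one binary semaphore per chopstick */

void *philosopher(void *arg) {
    int i = *(int *)arg;               /* uses chopsticks i and (i+1)%K */
    int first  = i % 2 ? i : (i + 1) % K;   /* alternate the pickup order */
    int second = i % 2 ? (i + 1) % K : i;   /* to break the circular wait */
    sem_wait(&chopstick[first]);
    sem_wait(&chopstick[second]);
    printf("philosopher %d eats\n", i);
    sem_post(&chopstick[second]);
    sem_post(&chopstick[first]);
    return NULL;
}

int main(void) {
    pthread_t t[K];
    int id[K];
    for (int i = 0; i < K; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < K; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < K; i++)
        pthread_join(t[i], NULL);
    return 0;
}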

Readers and Writers Problem:


Suppose that a database is to be shared among several concurrent processes. Some of these processes
may want only to read the database, whereas others may want to update (that is, to read and write) the
database. We distinguish between these two types of processes by referring to the former as readers
and to the latter as writers. In OS terms, this situation is called the readers-writers problem. Problem
parameters:
One set of data is shared among a number of processes.
Once a writer is ready, it performs its write. Only one writer may write at a time.
If a process is writing, no other process can read the data.
If at least one reader is reading, no other process can write.
Readers only read; they may not write.

Sleeping Barber Problem:


A barber shop has one barber, one barber chair, and N chairs for waiting. When there are no
customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in.
While the barber is cutting hair, new customers take empty seats to wait, or leave if there is no vacancy.
Unit-4 Deadlock
In a multiprogramming environment, several processes may compete for a finite number of resources.
A process requests resources; if the resources are not available at that time, the process enters a wait
state. It may happen that waiting processes will never again change state, because the resources they
have requested are held by other waiting processes. This situation is called deadlock.

If a process requests an instance of a resource type, the allocation of any instance of the type will
satisfy the request. If it will not, then the instances are not identical, and the resource type classes have
not been defined properly. A process must request a resource before using it, and must release the
resource after using it. A process may request as many resources as it requires to carry out its
designated task.
Under the normal mode of operation, a process may utilize a resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must wait until it
can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource

Deadlock Characterization
1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting process
must be delayed until the resource has been released.
2. Hold and wait : There must exist a process that is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.
3. No preemption : Resources cannot be preempted; that is, a resource can be released only voluntarily
by the process holding it, after that process has completed its task.
4. Circular wait : There must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for
a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a
resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Method For Handling Deadlock


There are four different methods for dealing with the deadlock problem:

1. Deadlock Prevention
Condition for deadlock prevention:
Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For example, a printer cannot be
simultaneously shared by several processes. Sharable resources, on the other hand, do not require
mutually exclusive access, and thus cannot be involved in a deadlock.

Hold and Wait


1. Whenever a process requests a resource, it must not hold any other resources. One protocol that can
be used requires each process to request and be allocated all its resources before it begins execution.
2. An alternative protocol allows a process to request resources only when the process has none. A
process may request some resources and use them, but before it can request any additional resources, it
must release all the resources that it is currently allocated. There are two main disadvantages to these
protocols. First, resource utilization may be low, since many of the resources may be allocated but
unused for a long period. In the example given, for instance, we can release the tape drive and disk file,
and then again request the disk file and printer, only if we can be sure that our data will remain on the
disk file. If we cannot be assured that they will, then we must request all resources at the beginning for
both protocols. Second, starvation is possible.

No Preemption
If a process that is holding some resources requests another resource that cannot be immediately
allocated to it, then all resources it is currently holding are preempted; that is, these resources are
implicitly released. The preempted resources are added to the list of resources for which the process is
waiting. The process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting.

Circular Wait
To ensure that the circular-wait condition never holds, we can impose a total ordering of all resource
types and require that each process requests resources in an increasing order of enumeration. Let R =
{R1, R2, ..., Rn} be the set of resource types. We assign to each resource type a unique integer number,
which allows us to compare two resources and to determine whether one precedes another in our
ordering. Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers.

2. Deadlock Avoidance
Deadlock prevention works by restraining how requests can be made. The restraints ensure that at least
one of the necessary conditions for deadlock cannot occur and, hence, that deadlocks cannot hold.
Possible side effects of preventing deadlocks this way, however, are low device utilization and reduced
system throughput. Deadlock avoidance instead requires additional information about how resources
will be requested, and grants a request only if doing so leaves the system in a safe state.
Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in some order
and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence.
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each
Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the
resources held by all the Pj, with j < i. In this situation, if the resources that process Pi needs are not
immediately available, then Pi can wait until all Pj have finished. When they have finished, Pi can
obtain all of its needed resources, complete its designated task, return its allocated resources, and
terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on.

Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple
instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable
to such a system, but is less efficient than the resource-allocation graph scheme. This algorithm is
commonly known as the banker's algorithm.
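
The heart of the banker's algorithm is its safety check. The sketch below implements that check in C
for a hypothetical snapshot (the classic textbook numbers: 5 processes, 3 resource types); it looks for
an order in which every process can finish using the available resources plus whatever finished
processes release:

#include <stdio.h>
#include <string.h>

#define P 5                            /* number of processes */
#define R 3                            /* number of resource types */

/* Returns 1 if a safe sequence exists for the given snapshot. */
int is_safe(int avail[R], int max[P][R], int alloc[P][R]) {
    int work[R], finish[P] = {0};
    memcpy(work, avail, sizeof work);

    for (int done = 0; done < P; ) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)     /* Need = Max - Alloc <= Work? */
                if (max[i][j] - alloc[i][j] > work[j]) { ok = 0; break; }
            if (ok) {
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j]; /* Pi finishes and releases all */
                finish[i] = 1;
                done++;
                progressed = 1;
            }
        }
        if (!progressed) return 0;          /* no process can advance: unsafe */
    }
    return 1;
}

int main(void) {
    int avail[R]    = {3, 3, 2};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    printf("state is %s\n", is_safe(avail, max, alloc) ? "safe" : "unsafe");
    return 0;
}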

3. Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a
deadlock situation may occur. In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether a deadlock has occurred.
• An algorithm to recover from the deadlock.

Resource-Allocation Graph Algorithm


Suppose that process Pi requests resource Rj. The request can be granted only if converting the request
edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the resource-
allocation graph.

4. Recovery from Deadlock


When a detection algorithm determines that a deadlock exists, several alternatives exist. One
possibility is to inform the operator that a deadlock has occurred, and to let the operator deal with the
deadlock manually. The other possibility is to let the system recover from the deadlock automatically.
There are two options for breaking a deadlock. One solution is simply to abort one or more processes
to break the circular wait. The second option is to preempt some resources from one or more of the
deadlocked processes.

Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated
processes.
• Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great
expense, since these processes may have computed for a long time, and the results of these partial
computations must be discarded and probably recomputed.
• Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable
overhead, since after each process is aborted a deadlock-detection algorithm must be invoked to
determine whether any processes are still deadlocked.

Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources from
processes and give these resources to other processes until the deadlock cycle is broken.
Three issues must be considered to recover from deadlock:
1. Selecting a victim
2. Rollback
3. Starvation

Unit-5 Memory Management


What is Memory Hierarchy?
The memory in a computer can be divided into five hierarchies based on the speed as well as use. The
processor can move from one level to another based on its requirements. The five hierarchies in the
memory are registers, cache, main memory, magnetic discs, and magnetic tapes.
Registers
Usually, a register is static RAM (SRAM) in the computer's processor that is used for holding a data
word, which is typically 64 or 128 bits. The program counter register is the most important one and is
found in all processors. Most processors also use a status word register as well as an accumulator.
Cache Memory
Cache memory is also found in the processor; however, rarely, it may be another IC (integrated
circuit) separated into levels. The cache holds chunks of data that are frequently used from main
memory.
Main Memory
The main memory is the memory unit that communicates directly with the CPU. It is the main storage
unit of the computer. This memory is fast as well as large, and it is used for storing data throughout the
operations of the computer. It is made up of RAM as well as ROM.
Magnetic Disks
The magnetic disks in the computer are circular plates fabricated from plastic or metal and coated with
magnetized material. Frequently, both faces of a disk are used, and several disks may be stacked on one
spindle, with read/write heads available for every surface.
Magnetic Tape
This tape is a normal magnetic recording medium, designed with a slender magnetizable coating on a
long, thin strip of plastic film. It is mainly used to back up huge amounts of data.
Advantages of Memory Hierarchy
The advantages of a memory hierarchy include the following:
1. Memory distribution is simple and economical
2. Removes external destruction
3. Data can be spread all over
4. Permits demand paging & pre-paging
5. Swapping will be more proficient
Logical and Physical Address in Operating System
A logical address is generated by the CPU while a program is running. The logical address is a virtual
address, as it does not exist physically; therefore, it is also known as a virtual address. This address is
used by the CPU as a reference to access the physical memory location. The term logical address space
is used for the set of all logical addresses generated from a program's perspective.
The hardware device called the Memory-Management Unit (MMU) is used for mapping a logical
address to its corresponding physical address.
A physical address identifies a physical location of required data in memory. The user never directly
deals with the physical address but can access it via its corresponding logical address. The user
program generates the logical address and thinks that the program is running in this logical address
space, but the program needs physical memory for its execution; therefore, the logical address must be
mapped to the physical address by the MMU before it is used. The term physical address space is used
for the set of all physical addresses corresponding to the logical addresses in a logical address space.
Differences Between Logical and Physical Address in Operating System
1. The basic difference between logical and physical addresses is that a logical address is generated by
the CPU from the perspective of a program, whereas a physical address is a location that exists in the
memory unit.
2. Logical address space is the set of all logical addresses generated by the CPU for a program,
whereas the set of all physical addresses mapped to the corresponding logical addresses is called
physical address space.
3. The logical address does not exist physically in the memory, whereas the physical address is a
location in the memory that can be accessed physically.
4. Identical logical and physical addresses are generated by compile-time and load-time address-
binding methods, whereas they differ from each other in the run-time address-binding method.
Memory Management with Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to
secondary storage (disk), making that memory available to other processes. At some later time, the
system swaps the process back from secondary storage to main memory.

Memory Management With Bitmaps


Memory is divided into allocation units, and each allocation unit has a corresponding bit in the bitmap.
If the bit is 0, the unit is free. A bitmap requires little memory to store, but searching a bitmap for a run
of a given length is a slow operation.
Memory Management With Linked List
A linked list of allocated and free memory segments is maintained. Each entry in the list specifies a
hole or a process, the address at which it starts, its length, and a pointer to the next entry. Processes and
holes are kept on a list sorted by address.

Mono-Programming
In this scheme, only one program is allowed to run at a time, with memory shared between that
program and the OS. When the user gives a command, the OS copies the requested program from disk
to memory and executes it. When the program completes, the OS displays a prompt character and waits
for the next command. When the next command is given, it loads the new program into memory,
overwriting the previous one.

Multi-Programming
In multiprogramming, multiple programs reside in main memory at a time. There are two memory
management techniques: contiguous and non-contiguous. In the contiguous technique, an executing
process must be loaded entirely in main memory. The contiguous technique can be divided into:
fixed (or static) partitioning and variable (or dynamic) partitioning.

Memory Fragmentation
When holes are given to other processes, they may be partitioned again, and the remaining holes get
smaller, eventually becoming too small to hold new processes. This waste of memory is called memory
fragmentation.
External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous.
First fit and best fit suffer from external fragmentation. External fragmentation can be reduced by
memory compaction.
Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size
difference is memory internal to a partition that is not being used.
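
As an illustration of how holes are consumed, here is a minimal first-fit sketch over an array of holes
with hypothetical sizes. The leftover piece of the chosen hole becomes a smaller hole, which is exactly
how external fragmentation accumulates:

#include <stdio.h>

struct hole { int start, length; };

/* First fit: scan the holes in address order and take the first
   one large enough; shrink it by the amount allocated. */
int first_fit(struct hole holes[], int n, int request) {
    for (int i = 0; i < n; i++) {
        if (holes[i].length >= request) {
            int addr = holes[i].start;
            holes[i].start  += request;
            holes[i].length -= request;
            return addr;               /* address of the allocated block */
        }
    }
    return -1;                         /* no hole is big enough */
}

int main(void) {
    struct hole holes[] = {{0, 100}, {300, 500}, {1000, 200}};
    int addr = first_fit(holes, 3, 212);
    printf("allocated at %d\n", addr); /* 300; a 288-unit hole remains at 512 */
    return 0;
}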

Non-Contiguous memory allocation


In non-contiguous memory allocation, the available free memory space is scattered here and there; all
the free memory space is not in one place. A process acquires memory space, but not all in one place:
the space is at different locations, according to the process's requirements. Non-contiguous memory
allocation techniques are of two types: a. Paging, b. Segmentation.
a. Paging
Paging is a fixed-size partitioning scheme. In paging, secondary memory and main memory are
divided into equal fixed-size partitions.
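
The translation arithmetic is simple enough to show directly. A minimal sketch with a hypothetical
4 KB page size and a four-entry page table: the logical address is split into a page number and an
offset, and the frame number from the table replaces the page number:

#include <stdio.h>

#define PAGE_SIZE 4096                 /* hypothetical 4 KB pages */

int main(void) {
    int page_table[] = {5, 9, 7, 2};   /* page -> frame (hypothetical) */
    unsigned logical = 2 * PAGE_SIZE + 123;

    unsigned page     = logical / PAGE_SIZE;  /* equivalently logical >> 12 */
    unsigned offset   = logical % PAGE_SIZE;  /* equivalently logical & 0xFFF */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);  /* 8315 -> 2, 123 -> 28795 */
    return 0;
}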

Virtual Memory
Virtual memory is a memory management capability of an operating system that uses hardware and
software to allow a computer to compensate for physical memory shortages by temporarily transferring
data from random access memory (RAM) to disk storage.
Page table.
A page table is the data structure used by the virtual memory system in a computer operating system to
store the mapping between virtual addresses and physical addresses. The page table is kept in main
memory.

Page table structure


Frame number: specifies the frame where the page is stored in main memory.
Present/absent bit: if this bit is set (present), the virtual address is mapped to the corresponding
physical address; if it is absent, a trap occurs, called a page fault.
Protection bit: tells what kinds of access are permitted: read, write, or read only.
Referenced bit: set whenever the page is referenced; used in page replacement.
Caching disabled: used for pages that map onto device registers rather than memory.
Modified bit (dirty bit): records whether the page has been changed since it was loaded; if it has been
modified, it must be written back to the disk.

Types of page table


Hierarchical page table: also known as multi-level paging.
It breaks up the logical address space into multiple page tables.
A simple technique is the two-level page table.

Hashed Page Tables


A common approach for handling address spaces larger than 32 bits.
The hash value is the virtual page number. Each entry in the hash table contains a linked list of
elements that hash to the same location.

Inverted Page Tables


A common approach for handling address spaces larger than 32 bits. There is one entry per page frame,
rather than one entry per page as in the earlier tables. (Ex: 256 MB of RAM with 4 KB pages requires
only 65,536 entries in the table.) Each entry consists of the virtual address of the page stored in that
physical memory location, with information about the process that owns that page.

Page Faults
When a page referenced by the CPU is not in main memory, a page fault occurs. When a page fault
occurs, the required page has to be fetched from secondary memory into main memory. Page faults are
handled by the operating system.

Page Replacement Algorithms


First-In-First-Out (FIFO) Algorithm
This algorithm associates with each page the time when that page was brought into memory. The
oldest page is chosen for replacement.
It can also be implemented using a queue of all pages in memory.

Example: reference string 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 with 3 frames (3 pages can be in
memory at a time per process).
Advantages: Easy to understand and program. Distributes a fair chance to all.
Problems: FIFO is likely to replace heavily (or constantly) used pages that are still needed for further
processing.
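
The following sketch runs FIFO with 3 frames on the reference string above, tracking the oldest
resident page with a rotating index:

#include <stdio.h>

/* FIFO page replacement: on a miss, evict the page that has been
   resident longest (the slot pointed to by the rotating index). */
int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frame[3] = {-1, -1, -1};
    int next = 0, faults = 0;          /* next = oldest slot to replace */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frame[next] = ref[i];      /* evict the oldest page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);  /* 15 for this string */
    return 0;
}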

Optimal Page Replacement (OPR)


Replace the page that will not be used for the longest period of time.
Each page can be labeled with the number of instructions that will be executed before that page is first
referenced.
Ex: for a system with 3 page frames and 8 pages, the optimal page replacement is as follows:
Advantages: It is the optimal page-replacement algorithm; it guarantees the lowest possible page-fault
rate.
Problems: Unrealizable; at the time of the page fault, the OS has no way of knowing when each of the
pages will be referenced next. It is not used in practical systems.

Least Recently Used (LRU) Algorithm


The recent past is a good indicator of the near future. When a page fault occurs, throw out the page
that has been unused for the longest time.

Advantages: Excellent; its efficiency is close to that of the optimal algorithm.

Problems: Difficult to implement exactly. How do we find a good heuristic? If it is implemented as a
linked list, updating the list on every reference does not make for a fast system!
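
A minimal LRU sketch on the same reference string: each resident page carries the time of its last use,
and on a miss the slot with the smallest timestamp (or an empty slot) is the victim:

#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frame[3] = {-1, -1, -1};       /* resident pages */
    int last[3]  = {0, 0, 0};          /* time of last use per slot */
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int slot = -1;
        for (int j = 0; j < 3; j++)
            if (frame[j] == ref[t]) slot = j;        /* hit */
        if (slot < 0) {                              /* miss: choose a victim */
            slot = 0;
            for (int j = 1; j < 3; j++)              /* empty slots win; else LRU */
                if (frame[j] == -1 || last[j] < last[slot]) slot = j;
            frame[slot] = ref[t];
            faults++;
        }
        last[slot] = t;                              /* update recency */
    }
    printf("page faults = %d\n", faults);            /* 12 for this string */
    return 0;
}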

Second Chance Page Replacement


A modified version of FIFO: instead of swapping out the oldest page, its reference bit is checked first,
giving every page a "second chance".
Example: reference string 2,3,2,1,5,2,4,5,3,2,5,2 with 3 frames (3 pages can be in memory at a time per
process).

Segmentation
Segmentation is a non-contiguous memory management technique in which memory is divided into
variable-size parts. Each part is known as a segment, which can be allocated to a process. The details
about each segment are stored in a table called the segment table. The segment table mainly contains
the following information about each segment:
1. Base: the base address of the segment. 2. Limit: the length of the segment.
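
The base/limit pair makes translation a bounds check plus an addition. A minimal sketch with a
hypothetical segment table: if the offset is within the segment's limit, the physical address is
base + offset; otherwise the hardware traps with an addressing error:

#include <stdio.h>

struct segment { unsigned base, limit; };

int main(void) {
    /* hypothetical segment table: base address and length per segment */
    struct segment table[] = {{1400, 1000}, {6300, 400}, {4300, 1100}};
    unsigned seg = 2, offset = 53;     /* logical address = (segment, offset) */

    if (offset < table[seg].limit)
        printf("physical = %u\n", table[seg].base + offset);  /* 4353 */
    else
        printf("trap: offset beyond segment limit\n");
    return 0;
}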

Segmentation example

Unit-6 Input/Output Device Management


Introduction
Management of I/O devices is a very important part of the operating system - so important and so
varied that entire I/O subsystems are devoted to its operation. Consider the range of devices on a
modern computer: mice, keyboards, disk drives, display adapters, USB devices, network connections,
audio I/O, printers, special devices for the handicapped, and many special-purpose peripherals.

Classification of I/O Devices


Block Devices
Block devices are accessed a block at a time, and are indicated by a "b" as the first character in a long
listing on UNIX systems. Operations supported include read( ), write( ), and seek( ).
Character Devices
Character devices are accessed one byte at a time, and are indicated by a "c" in UNIX long listings.
Supported operations include get( ) and put( ), with more advanced functionality such as reading an
entire line supported by higher-level library routines.
Communication Devices
Because their access is inherently different from local disk access, most systems provide a separate
interface for network devices. One common and popular interface is the socket interface, which acts
like a cable or pipeline connecting two networked entities. Data can be put into the socket at one end
and read out sequentially at the other end. Sockets are normally full-duplex, allowing for bi-directional
data transfer.

Communication to I/O Devices


The CPU must have a way to pass information to and from an I/O device. There are three approaches
available for communication between the CPU and a device.
Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These instructions
typically allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The
device is connected directly to certain main memory locations so that I/O device can transfer block of
data to/from memory without going through CPU.
Using I/O port

Direct Memory Access (DMA)


Slow devices like keyboards generate an interrupt to the main CPU after each byte is transferred. If a
fast device such as a disk generated an interrupt for each byte, the operating system would spend most
of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware
to reduce this overhead.

Interrupts/ working mechanism of interrupts


An interrupt is a signal to the processor emitted by hardware or software indicating an event that needs
immediate attention. Whenever an interrupt occurs, the controller completes the execution of the
current instruction and starts the execution of an Interrupt Service Routine (ISR) or Interrupt Handler.
ISR tells the processor or controller what to do when the interrupt occurs. The interrupts can be either
hardware interrupts or software interrupts.

Goals of the I/O Software


Device independence
A key concept in the design of I/O software is known as device independence. It means that I/O
devices should be accessible to programs without specifying the device in advance.
Uniform Naming: the name of a file or device should simply be a string or an integer and not depend
on the device in any way. In UNIX, all disks can be integrated into the file-system hierarchy in
arbitrary ways, so the user need not be aware of which name corresponds to which device.
Error Handling: If the controller discovers a read error, it should try to correct the error itself if it can.
If it cannot, then the device driver should handle it, perhaps by just trying to read the block again. In
many cases, error recovery can be done transparently at a low level without the upper levels even
knowing about the error.
Synchronous (blocking) and Asynchronous (interrupt-driven) transfers: Most physical I/O is
asynchronous; however, some very high-performance applications need to control all the details of the
I/O, so some operating systems make asynchronous I/O available to them.
Buffering: Often data that come off a device cannot be stored directly in their final destination, so
they must be staged through intermediate buffers.
Sharable and Dedicated devices: Some I/O devices, such as disks, can be used by many users at the
same time. No problems are caused by multiple users having open files on the same disk at the same
time. Other devices, such as printers, have to be dedicated to a single user until that user is finished.
Then another user can have the printer.

Handling I/O
Programmed I/O
Programmed I/O is the simplest I/O technique for exchanging data between the processor and external
devices: the CPU itself performs every transfer, polling the device as needed.
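A runnable sketch of the polling pattern; the "device registers" here are simulated by plain variables
and a simple helper, where a real driver would read actual hardware status and data registers:

#include <stdio.h>

/* Simulated device registers; on real hardware these would be I/O ports
   or memory-mapped addresses. Bit 0x80 of the status register = busy. */
static unsigned char status_reg = 0;
static unsigned char read_status(void) { return status_reg; }
static void write_data(unsigned char b) { putchar(b); }

/* Programmed I/O: the CPU polls the status register and moves each byte
   itself, burning cycles while the device is busy. */
static void pio_write(const unsigned char *buf, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        while (read_status() & 0x80)
            ;                    /* busy-wait until the device is ready */
        write_data(buf[i]);      /* one byte per iteration, all via CPU */
    }
}

int main(void)
{
    pio_write((const unsigned char *)"polled\n", 7);
    return 0;
}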
Interrupt driven I/O
Interrupt-driven I/O is a way of controlling input/output activity whereby a peripheral or terminal that
needs to make or receive a data transfer sends a signal. The CPU issues a command to the I/O module
and then proceeds with its normal work until it is interrupted by the I/O device on completion of its
work.
I/O using DMA
Direct Memory Access is a technique for transferring data between main memory and an external
device without passing it through the CPU. The CPU issues a command to the DMA module, sending
it the necessary information, and is notified only when the transfer is complete.

I/O Software Layers


Interrupt Handler
Interrupts should be hidden so that the rest of the system knows about them as little as possible. Every
process starting an I/O operation blocks until the I/O has completed and the interrupt occurs. When the
interrupt occurs, the handler does whatever it has to do in order to unblock the process that started the
I/O.
Device Drivers
A device driver consists of device-dependent code installed in the kernel. It accepts abstract requests
from the device-independent software and sees that each request is executed. It must decide which
controller operations are required and in what sequence.

Device Independent I/O Software


The functions of the device-independent software are to perform the I/O functions common to all
devices and to provide a uniform interface to user-level software. It maps symbolic device names onto
the proper driver. It provides a device-independent block size by combining sectors into blocks. It
performs buffering as necessary. It reports errors if the driver cannot handle them.
User space I/O software
A small portion of the I/O software consists of libraries linked together with user programs. The
spooling system keeps track of device requests made by users.
Disk Structure
A magnetic disk contains several platters. Each platter has two surfaces, with a read/write head for
each surface. Each platter is divided into circular tracks, and every platter has the same number of
tracks. Tracks near the centre are shorter than tracks farther from the centre.
Disk structure key terms
Seek Time: time taken by the read/write head to reach the desired track.
Rotation Time: time taken for one full rotation of the platter.
Rotational Latency: time taken for the desired sector to rotate under the head.
Transfer Time: time taken to transfer the data.
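For example, a 7200 RPM disk completes one rotation in 60/7200 s, or about 8.33 ms, so its average
rotational latency (half a rotation) is about 4.17 ms.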

Disk Scheduling
Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk.
Multiple I/O requests may arrive from different processes, but only one I/O request can be served at a
time by the disk controller. Thus the other I/O requests need to wait in a queue and need to be
scheduled.
1 First come- first serve (FCFS)
FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in
the order they arrive in the disk queue.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is: 98, 183, 41, 122, 14, 124, 65, 67
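Worked result: the head serves the requests in arrival order, so the total head movement is
|98-53| + |183-98| + |41-183| + |122-41| + |14-122| + |124-14| + |65-124| + |67-65|
= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2 = 632 cylinders.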
2. Shortest seek time first (SSTF)
Requests having shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are scheduled
according to their calculated seek time.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is: 98, 183, 41, 122, 14, 124, 65, 67
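Worked result: starting at 53 and always choosing the nearest pending request gives the service order
65, 67, 41, 14, 98, 122, 124, 183, so the total head movement is
12 + 2 + 26 + 27 + 84 + 24 + 2 + 59 = 236 cylinders.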
3. Elevator (SCAN)
This is also called the elevator algorithm. The disk arm moves in a particular direction till the end of
the disk, satisfying all the requests coming in its path, and then it turns back and moves in the reverse
direction, satisfying the requests coming in its path.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is: 98, 183, 41, 122, 14, 124, 65, 67
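Worked result: since the head has just moved from 45 to 53, it is travelling toward higher cylinder
numbers. It therefore services 65, 67, 98, 122, 124 and 183, continues to the last cylinder 199, then
reverses and services 41 and 14. Total head movement = (199 - 53) + (199 - 14) = 146 + 185 = 331
cylinders. (A LOOK variant that reverses at 183 instead of the disk edge would give
(183 - 53) + (183 - 14) = 299.)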
4. Circular Scan
In the C-SCAN algorithm, the arm of the disk moves in one direction servicing requests until it reaches
the last cylinder; it then jumps to the first cylinder at the opposite end without servicing any requests
on the way, and resumes moving in the same direction as before, servicing the remaining requests.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is: 98, 183, 41, 122, 14, 124, 65, 67
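Worked result: moving upward from 53, the head services 65, 67, 98, 122, 124 and 183 and continues
to cylinder 199; it then jumps back to cylinder 0 and services 14 and 41. Counting the return jump, the
total head movement is (199 - 53) + 199 + 41 = 386 cylinders (187 if, as in some treatments, the jump
is not counted).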
Unit-7 File system
File naming
Descriptive file names are an important part of organizing, sharing, and keeping track of data files.
Develop a naming convention based on elements that are important to the project.
File naming best practices:
Files should be named consistently
File names should be short but descriptive (<25 characters) (Briney, 2015)
Avoid special characters or spaces in a file name
Use capitals and underscores instead of periods or spaces or slashes
File Structure
A file can be structured as:
• Byte sequence
• Record sequence
• Tree
In a byte sequence, the file is an unstructured sequence of bytes. The OS sees only bytes; any meaning
must be supplied by user-level programs.
In a record sequence, a file is a sequence of fixed-length records, each with the same internal structure.
In a tree, a file consists of a tree of records, possibly of varying lengths, each containing a key field at
a fixed position in the record. The tree is sorted on the key field.

File Type
Many operating systems support several types of files.
Regular Files: Contain user information. These may hold text, databases, or executable programs. The
user can apply various operations to such files, such as add, modify, delete, or remove the entire file.
Directory Files: System files for maintaining the structure of the file system.
Character Special Files: Related to I/O and used to model serial I/O devices such as terminals,
printers, and networks.
Block Special Files: Used to model disks.

File Access
File access mechanism refers to the manner in which the records of a file may be accessed. The
information in a file can be accessed in several ways. The most commonly used methods are:
• Sequential access
• Direct access
Sequential Access: the records are accessed in some sequence, i.e., the information in the file is
processed in order, one record after the other. This access method is the most primitive one.
Direct Access: files whose bytes or records can be read in any order. This is based on the disk model
of a file, since disks allow random access to any block. It is used for immediate access to large
amounts of information: when a query concerning a particular subject arrives, we compute which
block contains the answer and then read that block directly to provide the desired information.

File Attributes
A file has a name and data. Moreover, it also stores meta-information such as file creation date and
time, current size, and last modified date. All this information is called the attributes of the file.
Here, are some important File attributes used in OS:
Name: the only information stored in a human-readable form.
Identifier: every file is identified by a unique tag number within the file system, known as an identifier.
Location: points to the file's location on the device.
Type: required for systems that support various types of files.
Size: the current size of the file.
Protection: assigns and controls the access rights for reading, writing, and executing the file.
Time, date and security: used for protection, security, and monitoring.

Directory Structure
A directory is a node containing information about files. Both the directory structure and the files
reside on disk. Directories can have different structures.

Path name
Absolute path name
The path from the root directory to the file. It always starts at the root directory and is unique, e.g.
/usr/bin/
Relative path name
Used in conjunction with the concept of the "current working directory". It is specified relative to the
current directory; this is more convenient than the absolute form and achieves the same effect.
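For example, if the current working directory is /usr, the relative path bin/ls and the absolute path
/usr/bin/ls name the same file.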

Unit-8 Security Management


Security refers to providing a protection system to computer system resources such as CPU, memory,
disk, software programs and most importantly data/information stored in the computer system. If a
computer program is run by an unauthorized user, then he/she may cause severe damage to the
computer or the data stored in it. So a computer system must be protected against unauthorized access,
malicious access to system memory, viruses, worms, etc. This chapter discusses the following topics.
Authentication
One Time passwords
Program Threats
System Threats
Computer Security Classifications
Authentication
Authentication refers to identifying each user of the system and associating the executing programs
with those users. It is the responsibility of the operating system to create a protection system which
ensures that a user who is running a particular program is authentic. Operating systems generally
identify/authenticate users in the following three ways −
Username / Password − The user needs to enter a registered username and password with the
operating system to log into the system.
User card/key − The user needs to punch a card into a card slot, or enter a key generated by a key
generator in the option provided by the operating system, to log into the system.
User attribute (fingerprint / eye retina pattern / signature) − The user needs to pass his/her attribute via
a designated input device used by the operating system to log into the system.
One Time passwords
One-time passwords provide additional security along with normal authentication. In a one-time
password system, a unique password is required every time a user tries to log into the system. Once a
one-time password has been used, it cannot be used again. One-time passwords are implemented in
various ways.
Random numbers − Users are provided with cards that have numbers printed along with corresponding
letters. The system asks for the numbers corresponding to a few randomly chosen letters.
Secret key − Users are provided with a hardware device that can create a secret ID mapped to the user
ID. The system asks for this secret ID, which must be generated anew before every login.
Network password − Some commercial applications send one-time passwords to the user's registered
mobile number or email, which must be entered prior to login.
Program Threats
The operating system's processes and kernel do their designated tasks as instructed. If a user program
makes these processes do malicious tasks, this is known as a program threat. A common example of
a program threat is a program installed on a computer that can store and send user credentials over the
network to some hacker. Following is a list of some well-known program threats.
Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious
user, who can later log into the computer and access system resources.
Trap Door − If a program which is designed to work as required has a security hole in its code and
performs illegal actions without the knowledge of the user, it is said to have a trap door.
Logic Bomb − A logic bomb is a program that misbehaves only when certain conditions are met;
otherwise it works as a genuine program. This makes it harder to detect.
Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly
dangerous: they can modify or delete user files and crash systems. A virus is generally a small piece of
code embedded in a program. As the user accesses the program, the virus starts getting embedded in
other files and programs and can make the system unusable for the user.
System Threats
System threats refer to the misuse of system services and network connections to put users in trouble.
System threats can be used to launch program threats on a complete network, called a program attack.
System threats create an environment in which operating system resources and user files are misused.
Following is a list of some well-known system threats.
Worm − A worm is a process which can choke system performance by using system resources to
extreme levels. A worm process generates multiple copies of itself, where each copy uses system
resources and prevents all other processes from getting the resources they require. Worm processes
can even shut down an entire network.
Port Scanning − Port scanning is a mechanism or means by which a hacker can detect system
vulnerabilities in order to attack the system.
Denial of Service − Denial-of-service attacks normally prevent users from making legitimate use of
the system. For example, a user may not be able to use the internet if a denial-of-service attack targets
the browser's content settings.

Unit-9
Distributed Operating system
A distributed system is a collection of autonomous computer systems capable of communicating with
each other: a collection of loosely coupled processors interconnected by a communication network.
Processors are also called nodes, computers, machines, hosts, etc. Generally, one host at one site (the
server) has a resource that another host at another site (the client, or user) would like to use. In this
system the processors do not share memory or a clock; each processor has its own local memory.

Reasons
Resource sharing − a user at one site may be able to use the resources available at another.
Computation speedup − a distributed system allows us to distribute a computation among the various
sites and run its parts concurrently.
Reliability − detect and recover from site failure, transfer functions, reintegrate the failed site.
Communication − message passing, i.e., processes at different sites can communicate with each other
to exchange information. A user may initiate a file transfer or communicate with another user via
electronic mail.

Advantages of Distributed Systems over Centralized Systems:


Economics: Microprocessors offer a better price/performance than mainframes
Speed: A distributed system may have more total computing power than a mainframe
Inherent distribution: Some applications involve spatially separated machines
Reliability: If one machine crashes, the system as a whole can still survive
Incremental growth: Computing power can be added in small increments

Advantages of Distributed Systems over Independent Computers:


Data sharing: Allow many users access to a common database
Device sharing: Allow many users to share expensive peripherals like color printers
Communication: Make human-to-human communication easier, for example by electronic mail
Flexibility: Spread the workload over the available machines in the most cost-effective way
Communication in Distributed system
1. Shared memory
Distributed shared memory (DSM) is a form of memory architecture where physically separated
memories can be addressed as one logical address space. Here the term shared does not mean that
there is a single centralized memory, but that the address space is shared. A distributed shared memory
system implements the shared memory model on a physically distributed memory system.

2. Message Passing
The message passing model allows multiple processes to read from and write to a message queue
without being connected to each other. Messages are stored in the queue until their recipient retrieves
them. Message passing is the basis of most inter-process communication in distributed systems.
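A minimal sketch of the model using a POSIX message queue; the queue name /demo_queue and the
sizes are illustrative, and on Linux the program is linked with -lrt. The sender and receiver need not
be connected: the message waits in the queue until it is retrieved.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(q, "hello", 6, 0);                /* producer enqueues a message */

    char buf[64];
    mq_receive(q, buf, sizeof(buf), NULL);    /* consumer dequeues it later  */
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");                 /* remove the queue when done  */
    return 0;
}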

Clock Synchronization
In a distributed system the internal clocks of the various computers may differ from one another. Even
when initially set accurately, real clocks will diverge after some time due to clock drift, caused by
clocks counting time at slightly different rates. Clock synchronization is the mechanism for
synchronizing all the computers in a distributed system: the process of setting all the cooperating
systems of the distributed network to the same logical or physical clock.
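For example, a clock that drifts by 50 parts per million gains or loses about 4.3 seconds per day
(50 × 10^-6 × 86,400 s), so two such clocks set together can be several seconds apart within a day.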
Page Replacement Algorithm
FIFO (First In First Out)

#include <stdio.h>

int main(void)
{
    int i, j, n, page[50], frameno, frame[10], move = 0, flag, count = 0;
    float rate;

    printf("Enter the number of pages\n");
    scanf("%d", &n);
    printf("Enter the page reference numbers\n");
    for (i = 0; i < n; i++)
        scanf("%d", &page[i]);
    printf("Enter the number of frames\n");
    scanf("%d", &frameno);
    for (i = 0; i < frameno; i++)
        frame[i] = -1;                      /* all frames start out empty */
    printf("Page reference string\tFrames\n");
    for (i = 0; i < n; i++) {
        printf("%d\t\t\t", page[i]);
        flag = 0;
        for (j = 0; j < frameno; j++) {
            if (page[i] == frame[j]) {      /* page already in a frame: hit */
                flag = 1;
                printf("No replacement\n");
                break;
            }
        }
        if (flag == 0) {                    /* page fault: replace the oldest */
            frame[move] = page[i];
            move = (move + 1) % frameno;    /* next victim, in FIFO order */
            count++;
            for (j = 0; j < frameno; j++)
                printf("%d\t", frame[j]);
            printf("\n");
        }
    }
    rate = (float)count / (float)n;
    printf("Number of page faults is %d\n", count);
    printf("Fault rate is %f\n", rate);
    return 0;
}

Disk Scheduling
FCFS algorithm

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i, n, m, sum = 0, h, temp;
    int a[20];

    printf("Enter the size of disk\n");
    scanf("%d", &m);
    printf("Enter number of requests\n");
    scanf("%d", &n);
    printf("Enter the requests\n");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    for (i = 0; i < n; i++) {
        if (a[i] > m)
            printf("Error, unknown position %d\n", a[i]);
    }
    printf("Enter the head position\n");
    scanf("%d", &h);
    temp = h;                           /* head starts at h */
    printf("%d", temp);
    for (i = 0; i < n; i++) {
        printf(" -> %d", a[i]);
        sum += abs(a[i] - temp);        /* head movement for this request */
        temp = a[i];                    /* head is now at this request */
    }
    printf("\n");
    printf("Total head movements = %d\n", sum);
    return 0;
}

Disk Scheduling
SCAN algorithm

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[20], n, head, i, j, seek = 0, max, diff, temp;
    int queue1[20], queue2[20], temp1 = 0, temp2 = 0;
    float avg;

    printf("Enter the max range of disk\n");
    scanf("%d", &max);
    printf("Enter the initial head position\n");
    scanf("%d", &head);
    printf("Enter the size of queue request\n");
    scanf("%d", &n);
    printf("Enter the queue of disk positions to be read\n");
    for (i = 1; i <= n; i++) {
        scanf("%d", &temp);
        if (temp >= head)               /* split requests around the head */
            queue1[temp1++] = temp;
        else
            queue2[temp2++] = temp;
    }
    for (i = 0; i < temp1 - 1; i++)     /* sort upper requests ascending */
        for (j = i + 1; j < temp1; j++)
            if (queue1[i] > queue1[j]) {
                temp = queue1[i]; queue1[i] = queue1[j]; queue1[j] = temp;
            }
    for (i = 0; i < temp2 - 1; i++)     /* sort lower requests descending */
        for (j = i + 1; j < temp2; j++)
            if (queue2[i] < queue2[j]) {
                temp = queue2[i]; queue2[i] = queue2[j]; queue2[j] = temp;
            }
    /* Service order: head, upward requests, disk end, downward requests, 0 */
    queue[0] = head;
    for (i = 1, j = 0; j < temp1; i++, j++)
        queue[i] = queue1[j];
    queue[i] = max;
    for (i = temp1 + 2, j = 0; j < temp2; i++, j++)
        queue[i] = queue2[j];
    queue[i] = 0;
    for (j = 0; j <= n + 1; j++) {
        diff = abs(queue[j + 1] - queue[j]);
        seek += diff;
        printf("Disk head moves from %d to %d with seek %d\n",
               queue[j], queue[j + 1], diff);
    }
    printf("Total seek time is %d\n", seek);
    avg = seek / (float)n;
    printf("Average seek time is %f\n", avg);
    return 0;
}
