An operating system (OS) is a collection of system programs that together control the operation of a
computer system. An operating system is a program that acts as an interface between the user and the
computer hardware, controls the execution of all kinds of programs, and performs basic tasks such as
process management, memory management, file management, handling input and output, and
controlling peripheral devices.
Some popular operating systems include Linux, Windows, UNIX, OS/400, etc.
Goals:
To make the computer user friendly.
To utilize computer resources properly.
To hide the details of the hardware resources from the users.
To provide users a convenient interface to use the computer system.
To provide an environment within which other programs can do useful work.
Objective of OS
The two main objectives of an operating system are:
OS as an Extended Machine
OS as a Resource Manager
OS as an Extended Machine
The computer system is composed of hardware and application software interacting with each other to
perform computation. For any application to perform its task, it must deal with the hardware for
memory management, process control, storage, and so on. For a programmer, it would be tedious to do
hardware-level machine-language programming for each application.
The operating system provides a high-level abstraction to deal with such issues: a set of programs that
handles the complex hardware-level programming and gives the programmer an easy interface to
develop applications without hardware considerations.
For example, consider the case of disks. A disk contains a collection of files. Each file can be opened,
read, written, and closed. The hardware details, such as whether recording should use modified FM or
what the current state of the motor is, are masked by the OS and handled effectively.
OS as a Resource Manager
A computer system has various resources such as memory, processor, disk, printer, etc.
It is necessary to keep track of resource allocation and usage in order to resolve conflicts between
two or more processes requesting the same resource. So managing memory and other resources among
processes or users is necessary. The OS keeps track of the current usage status of each resource, who is
using it, whether a resource request can be granted, and so on.
E.g., if three programs request the printer service, the OS schedules the tasks based on priority. This
can be managed by buffering all the output destined for the printer on disk.
Interactive System:
The user can interact directly with the OS to supply commands and data as an application program
executes.
The user receives the output immediately.
There is direct two-way communication between the user and the computer.
A user interface is provided, which may be a command line or a GUI.
Example: Mac and Windows, etc.
Time-Sharing System (Multitasking)
The processor’s time is shared among multiple users simultaneously.
The main objective is to do tasks by switching.
It reduces CPU idle time.
Example : Multics, Unix etc.
Advantages
1. This type of operating system avoids duplication of software.
2. It reduces CPU idle time.
Disadvantages
1. Time sharing has a problem of reliability.
2. Questions of security and integrity of user programs and data can be raised.
Real time OS
Real-Time OS are usually built for dedicated systems to accomplish a specific set of tasks within
deadlines.
Hard Real Time :
In a hard RTOS, the deadline is handled very strictly: a given task must start executing at the specified
scheduled time and must be completed within the assigned time duration.
Example: Medical critical care system, Aircraft systems, etc.
Soft Real Time:
In this type of RTOS, there is a deadline assigned for a specific job, but a delay of a small amount of
time is acceptable. So deadlines are handled softly by this type of RTOS.
Example: Online transaction systems and live stock-price quotation systems.
Device Management –
An OS manages device communication via their respective drivers. It performs the following activities
for device management: keeps track of all devices connected to the system; designates a program
responsible for every device, known as the Input/Output controller; decides which process gets access
to a certain device and for how long; allocates devices in an effective and efficient way; and deallocates
devices when they are no longer required.
File Management –
A file system is organized into directories for efficient and easy navigation and usage. These directories
may contain other directories and files. An operating system carries out the following file
management activities: it keeps track of where information is stored, user access settings, the status of
every file, and more. These facilities are collectively known as the file system.
Control over system performance –
Monitors overall system health to help improve performance. Records the response time between
service requests and system responses to have a complete view of system health. This can help
improve performance by providing important information needed to troubleshoot problems.
Layered System
The operating system is divided into a number of layers (levels), each built on top of lower layers. The
bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. With modularity,
layers are selected such that each uses the functions (operations) and services of only lower-level layers.
A layered design was first used in THE operating system. Its six layers are as follows:
Layer 0 was responsible for the multiprogramming aspect of the operating system. It decided which
process was allocated to the CPU, dealt with interrupts, and performed the context switch when a
process change was required.
Layer 1 was concerned with allocating memory to processes.
Layer 2 dealt with inter-process communication and communication between the operating system and
the console.
Layer 3 managed all I/O between the devices attached to the computer. This included buffering
information from the various devices.
Layer 4: the programs used by the user run in this layer, and they do not have to worry about I/O
management, operator/process communication, memory management, or processor allocation.
Layer 5: the system operator process is located in the outermost layer. This operator process controls
the overall operation of the system.
Kernel System
The kernel is the core component of the operating system, without which the OS cannot work.
The kernel is the nervous system of the OS: the central component that manages the operation of the
computer and its hardware. It basically manages memory and CPU time.
The kernel acts as a bridge between applications and the data processing performed at the hardware
level, using inter-process communication and system calls.
The kernel uses system calls to perform its functions, such as CPU scheduling, memory management, etc.
The main functions of the kernel are:
It provides mechanisms for creation and deletion of processes.
It provides CPU scheduling, memory management, and I/O management.
It provides mechanisms for inter-process communication.
There are mainly two types of kernel.
Monolithic/Macro kernel
Micro / Exo-kernel
Monolithic/Macro kernel:
It is the traditional, older approach to kernel design.
In a monolithic kernel, every basic system service, such as process and memory management, interrupt
handling, I/O communication, and the file system, runs in kernel space.
Every component of the operating system is contained in the kernel and can directly communicate with
any other component by using function calls.
The kernel typically executes with unrestricted access to the computer system.
This approach provides rich and powerful hardware access.
Ex: Linux, UNIX, OpenVMS, etc.
Advantages:
Execution of a monolithic kernel is quite fast.
Smaller in source and compiled forms.
System calls are used to do operations.
Less code generally means fewer bugs, and security problems are also fewer.
Disadvantages:
Coding in kernel space is hard.
Debugging is harder.
A monolithic kernel must be rewritten for each new architecture on which the OS is to be used.
Micro / Exo-kernel
In a microkernel, the kernel provides only basic functionality such as IPC, memory management, and
scheduling that allows the execution of servers as separate programs. The rest of the kernel is broken
into separate processes known as servers. Some servers run in user space and some in kernel space.
Communication in a microkernel is done via message passing: the servers communicate through
inter-process communication and invoke "services" from each other by sending messages.
If a service fails, the rest of the OS keeps working fine.
Advantages:
Smaller in size.
Easy to maintain.
New services can be added easily.
Portable.
Crash resistant.
Disadvantages:
Services provided by a microkernel are expensive.
Execution is slow.
Client server Model
One of the recent advances in computing is the idea of a client/server model.
A server provides services to any client that requests them.
This model is heavily used in distributed systems, where a central computer acts as a server to many
other computers.
In this model, the kernel handles the communication between the clients and servers.
The operating system (OS) is split into parts, each of which handles only one facet of the system,
such as file service, terminal service, process service, or memory service; each part becomes small and
manageable.
Advantages:
It can result in a minimal kernel.
It adapts well to use in distributed systems.
Crash resistant.
Disadvantages:
Security problems.
Overloading.
Virtual machine
A virtual machine is an illusion of a real machine.
It is created by a real machine's operating system, which makes a single real machine appear to be
several real machines.
In this way several operating systems can run at once, each of them on its own virtual machine.
A virtual machine provides an interface identical to the underlying bare hardware.
The operating system creates the illusion of multiple processes, each executing on its own processor
with its own memory.
The resources of the physical computer are shared to create the virtual machines.
Advantages:
Provides a robust level of security.
Allows multiple OSes to run on the same machine at the same time.
Easier system upgrades.
Disadvantages:
Less efficient.
Difficulty in sharing hard drives.
Shell
The shell is the outer layer of an operating system.
It is the part of the operating system that interfaces with the user.
A shell is a user interface for access to an operating system's services.
Most often, the user interacts with the shell using a command-line interface.
It is the interface which accepts, interprets, and then executes commands from the user.
There are two types:
Command line interpreter (CLI)
Graphical user interface (GUI)
When a process executes, it changes state. Process state is defined as the current activity of the process.
Fig. 3.1 shows the general form of the process state transition diagram, which contains five states.
Each process is in exactly one of these states. The states are listed below.
Swapping is used to move all of a process from main memory to disk: the OS suspends the process by
putting it in the suspended state and transferring it to disk.
Reasons for process suspension
1. Swapping : OS needs to release required main memory to bring in a process that is
ready to execute.
2. Timing : Process may be suspended while waiting for the next time interval.
3. Interactive user request : Process may be suspended for debugging purpose by user.
4. Parent process request : To modify the suspended process or to coordinate the activity of various
descendants.
In a queueing diagram, rectangular boxes represent the queues; the arrows indicate the flow of
processes in the system.
Queues are of two types: the ready queue and a set of device queues. A newly arrived process is put in
the ready queue, where processes wait for allocation of the CPU. Once the CPU is assigned to a
process, the process executes. While executing, one of several events could occur.
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new sub-process and wait for its termination.
3. The process could be removed forcibly from the CPU as a result of an interrupt and be put back in
the ready queue.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the job queue and loads them into memory for
execution and CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming
is stable, then the average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems
have no long-term scheduler. When a process changes state from new to ready, the long-term scheduler
is used.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance
with the chosen set of criteria. It performs the change of a process from the ready state to the running
state. The CPU scheduler selects a process among the processes that are ready to execute and allocates
the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and so reduces the
degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot
make any progress towards completion. In this condition, to remove the process from memory and
make space for other processes, the suspended process is moved to secondary storage. This is called
swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.
A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to
discuss in this chapter −
First-Come, First-Served (FCFS) Scheduling
Shortest-Job-Next (SJN) Scheduling
Priority Scheduling
Shortest Remaining Time
Round Robin(RR) Scheduling
Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority
running process anytime when a high priority process enters into a ready state.
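The non-preemptive case can be illustrated with a small FCFS calculation. This is a sketch; the process names, arrival times, and burst times below are made up for illustration.

```python
def fcfs_schedule(processes):
    """FCFS scheduling sketch. `processes` is a list of (name, arrival, burst)
    tuples, assumed sorted by arrival time. Returns a dict mapping each
    process name to its (waiting time, turnaround time)."""
    clock = 0
    result = {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)   # CPU may be idle until the job arrives
        clock = start + burst         # runs to completion (non-preemptive)
        result[name] = (start - arrival, clock - arrival)
    return result

jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
print(fcfs_schedule(jobs))  # {'P1': (0, 5), 'P2': (4, 7), 'P3': (6, 14)}
```

Here P2 arrives at time 1 but must wait until P1 finishes at time 5, giving it a waiting time of 4 and a turnaround time of 7.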
Given: Table of processes, and their Arrival time, Execution time, and priority. Here we are
considering 1 is the lowest priority.
What is Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps track
of which instruction to execute next, system registers which hold its current working variables, and a
stack which contains the execution history.
A thread shares with its peer threads some information, such as the code segment, data segment, and
open files. When one thread alters a code-segment memory item, all other threads see that. A thread is
also called a lightweight process. Threads provide a way to improve application performance through
parallelism. Threads represent a software approach to improving operating-system performance by
reducing the overhead; a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing network
servers and web server. They also provide a suitable foundation for parallel execution of applications
on shared memory multiprocessors. The following figure shows the working of a single-threaded and a
multithreaded process.
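The sharing described above can be sketched in Python: each thread has its own flow of control and stack, but both append into one shared data segment. The tags and counts are arbitrary.

```python
import threading

shared = []  # data segment shared by every thread in the process

def worker(tag, count):
    # Each thread is a separate flow of control with its own stack and
    # program counter, but it writes into the shared data segment.
    for i in range(count):
        shared.append((tag, i))

threads = [threading.Thread(target=worker, args=(t, 3)) for t in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # all 6 items from both threads land in one list
```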
Types of Thread
Threads are implemented in the following two ways: user-level threads and kernel-level threads.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread library
contains code for creating and destroying threads, for passing message and data between threads, for
scheduling thread execution and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris
is a good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the
entire process. There are three multithreading models.
Many to One Model
Many user-level threads are mapped to a single kernel-level thread. If the user-level thread libraries are
implemented on an operating system that does not support kernel threads, the many-to-one model is
used.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to run when a
thread makes a blocking system call, and it allows multiple threads to execute in parallel on
multiprocessors. The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one model.
Race Condition
A race condition is a situation that may occur inside a critical section. This happens when the result of
multiple thread execution in critical section differs according to the order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
Critical Section
The critical section is a code segment where shared variables can be accessed. Atomic action is
required in a critical section, i.e., only one process can execute in its critical section at a time. All the
other processes have to wait to execute in their critical sections.
The critical section is given as follows:
do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
The critical section problem needs a solution to synchronise the different processes. The solution to the
critical section problem must satisfy the following conditions −
Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any time. If any
other processes require the critical section, they must wait until it is free.
Progress
Progress means that if a process is not using the critical section, then it should not stop any other
process from accessing it. In other words, any process can enter a critical section if it is free.
Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait
endlessly to access the critical section.
Semaphore
A semaphore is a signalling mechanism and a thread that is waiting on a semaphore can be signalled by
another thread. This is different than a mutex as the mutex can be signalled only by the thread that
called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the
process keeps waiting (loops) until S becomes positive.
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}
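The wait/signal pair can be sketched in Python. This is an illustrative implementation, not how a kernel does it: the busy wait of the pseudocode is replaced by a condition variable so waiting threads sleep, and signal wakes one of them.

```python
import threading

class CountingSemaphore:
    """Sketch of a counting semaphore with wait (P) and signal (V)."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                  # P: block while S <= 0, then S--
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):                # V: S++ and wake one waiting thread
        with self._cond:
            self._value += 1
            self._cond.notify()

# usage: a binary semaphore (initial value 1) protecting a shared counter
sem = CountingSemaphore(1)
counter = 0

def bump(times):
    global counter
    for _ in range(times):
        sem.wait()
        counter += 1     # critical section: one thread at a time
        sem.signal()

workers = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter)  # 4000: every increment happened under mutual exclusion
```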
If a process requests an instance of a resource type, the allocation of any instance of the type will
satisfy the request. If it will not, then the instances are not identical, and the resource type classes have
not been defined properly. A process must request a resource before using it, and must release the
resource after using it. A process may request as many resources as it requires to carry out its
designated task.
Under the normal mode of operation, a process may utilize a resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must wait until it
can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource
Deadlock Characterization
1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting process
must be delayed until the resource has been released.
2. Hold and wait: There must exist a process that is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily
by the process holding it, after that process has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting processes such that P0 is waiting for
a resource that is held by P1, P1 is waiting for a resource that is held by P2, …., Pn-1 is waiting for a
resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
1. Deadlock Prevention
Condition for deadlock prevention:
Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For example, a printer cannot be
simultaneously shared by several processes. Sharable resources, on the other hand, do not require
mutually exclusive access, and thus cannot be involved in a deadlock.
No Preemption
If a process that is holding some resources requests another resource that cannot be immediately
allocated to it, then all resources it currently holds are preempted; that is, these resources are
implicitly released. The preempted resources are added to the list of resources for which the process is
waiting. The process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting.
Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a total ordering of all
resource types and to require that each process requests resources in an increasing order of
enumeration. Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each resource type a
unique integer number, which allows us to compare two resources and determine whether one precedes
another in our ordering. Formally, we define a one-to-one function F: R → N, where N is the set of
natural numbers.
2. Deadlock Avoidance
Deadlock-prevention algorithms prevent deadlocks by restraining how requests can be made. The
restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that
deadlocks cannot hold. Possible side effects of preventing deadlocks by this method, however, are low
device utilization and reduced system throughput.
Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in some order
and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence.
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each
Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the
resources held by all the Pj, with j < i. In this situation, if the resources that process Pi needs are not
immediately available, then Pi can wait until all Pj have finished. When they have finished, Pi can
obtain all of its needed resources, complete its designated task, return its allocated resources, and
terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple
instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable
to such a system, but is less efficient than the resource-allocation graph scheme. This algorithm is
commonly known as the banker's algorithm.
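The heart of the banker's algorithm, the safety check, can be sketched in Python. The five-process, three-resource numbers below are a textbook-style example chosen for illustration, not data from this document.

```python
def is_safe(available, max_need, allocation):
    """Banker's algorithm safety check (a sketch). Returns a safe sequence
    of process indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend Pi runs to completion and returns its resources
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None

available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 4, 0, 2]
```

The returned list is a safe sequence: granting the processes their maximum demands in that order never exhausts the available resources.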
3. Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a
deadlock situation may occur. In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether a deadlock has occurred.
• An algorithm to recover from the deadlock.
Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated
processes.
• Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great
expense, since these processes may have computed for a long time, and the results of these partial
computations must be discarded and probably recomputed.
• Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable
overhead, since after each process is aborted a deadlock-detection algorithm must be invoked to
determine whether any processes are still deadlocked.
Mono-Programming
In this scheme, only one program is allowed to run at a time, with memory shared between the program
and the OS. When the user provides a command, the OS copies the requested program from disk to
memory and executes it. When the program completes, the OS displays a prompt character and waits
for the next command. When the next command is provided, it loads the new program into memory,
overwriting the previous one.
Multi-Programming
In multiprogramming, multiple programs reside in main memory at a time. There are two memory
management techniques: contiguous and non-contiguous. In the contiguous technique, the executing
process must be loaded entirely in main memory. The contiguous technique can be divided into:
Fixed (or static) partitioning, and variable (or dynamic) partitioning.
Memory Fragmentation
When holes are given to other processes, they may again be partitioned; the remaining holes get
smaller, eventually becoming too small to hold new processes. Waste of memory occurs: memory
fragmentation.
External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
First fit and best fit suffer from external fragmentation. External fragmentation can be reduced by
memory compaction.
Internal Fragmentation – allocated memory may be slightly larger than the requested memory; this
size difference is memory internal to a partition that is not being used.
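The first-fit and best-fit hole-selection policies mentioned above can be sketched as follows; the hole sizes are made up for illustration.

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough for the request."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole that still fits the request,
    i.e. the choice that leaves the smallest leftover fragment."""
    best = None
    for i, size in enumerate(holes):
        if size >= request and (best is None or size < holes[best]):
            best = i
    return best

holes = [100, 500, 200, 300, 600]   # free-hole sizes in KB (made up)
print(first_fit(holes, 212))        # 1: the 500 KB hole is first to fit
print(best_fit(holes, 212))         # 3: 300 KB leaves the smallest leftover
```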
Virtual Memory
Virtual memory is a memory-management capability of an operating system that uses hardware and
software to allow a computer to compensate for physical memory shortages by temporarily transferring
data from random access memory (RAM) to disk storage.
Page table
A page table is the data structure used by a virtual memory system in a computer operating system to
store the mapping between virtual addresses and physical addresses. The page table is kept in main
memory.
Page Faults
When a page referenced by the CPU is not in main memory, a page fault occurs. When a page fault
occurs, the required page has to be fetched from secondary memory into main memory. Page faults
are handled by the operating system.
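A page-table lookup and a page fault can be sketched as follows; the page size and table contents are illustrative assumptions.

```python
PAGE_SIZE = 4096

def translate(vaddr, page_table):
    """Translate a virtual address using a flat page table (a sketch).
    `page_table` maps page number -> frame number; a missing entry
    means the page is not resident, i.e. a page fault."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault on page {page}")  # OS would fetch it
    return frame * PAGE_SIZE + offset

table = {0: 5, 1: 2}                 # pages 0 and 1 are resident
print(translate(4100, table))        # page 1, offset 4 -> 2*4096 + 4 = 8196
```

Referencing an address on a page with no frame, e.g. page 2 here, raises the page-fault error, modelling the trap the OS would handle.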
Segmentation
Segmentation is a non-contiguous memory-management technique in which memory is divided into
variable-size parts. Each part is known as a segment, which can be allocated to a process. The details
about each segment are stored in a table called the segment table. The segment table mainly contains
the following information about each segment:
1. Base: the base address of the segment. 2. Limit: the length of the segment.
Segmentation example
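As a segmentation example, base/limit translation through a segment table can be sketched as follows; the table values are made up.

```python
def seg_translate(seg, offset, segment_table):
    """Segmented address translation sketch. `segment_table` is a list of
    (base, limit) pairs, one per segment, as described above."""
    base, limit = segment_table[seg]
    if offset >= limit:
        # offset beyond the segment's length: the hardware would trap
        raise MemoryError(f"offset {offset} exceeds limit {limit}")
    return base + offset

table = [(0, 500), (1000, 400)]      # (base, limit) per segment (made up)
print(seg_translate(1, 100, table))  # 1000 + 100 = 1100
```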
Handling I/O
Programmed I/O
Programmed I/O is the simplest type of I/O technique for the exchange of data or any type of
communication between the processor and external devices.
Interrupt driven I/O
Interrupt-driven I/O is a way of controlling input/output activity whereby a peripheral or terminal that
needs to make or receive a data transfer sends a signal. The CPU issues a command to the I/O module
and then proceeds with its normal work until it is interrupted by the I/O device on completion of the
transfer.
I/O using DMA
Direct Memory Access (DMA) is a technique for transferring data between main memory and an
external device without passing it through the CPU. The CPU issues a command to the DMA module
by sending the necessary information to it.
Disk Scheduling
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Multiple
I/O requests may arrive from different processes, and only one I/O request can be served at a time by
the disk controller. Thus the other I/O requests need to wait in the waiting queue and be scheduled.
1. First come, first served (FCFS)
FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in
the order they arrive in the disk queue.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is: 98, 183, 41, 122, 14, 124, 65, 67
2. Shortest seek time first (SSTF)
Requests having shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are scheduled
according to their calculated seek time.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is:
98, 183, 41, 122, 14, 124, 65, 67
3. Elevator (SCAN)
It is also called the elevator algorithm. In this algorithm, the disk arm moves in a particular direction
till the end, satisfying all the requests coming in its path, and then it turns back and moves in the
reverse direction, satisfying requests coming in its path.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is:
98, 183, 41, 122, 14, 124, 65, 67
4. Circular Scan
In the C-SCAN algorithm, the disk arm moves in a particular direction servicing requests until it
reaches the last cylinder; then it jumps to the first cylinder of the opposite end without servicing any
request, and continues moving in the same direction, servicing the remaining requests.
Example: Suppose that a disk has 200 cylinders, numbered 0 to 199. The drive is currently serving a
request at cylinder 53 and the previous request was at cylinder 45. The queue of pending requests, in
FIFO order, is:
98, 183, 41, 122, 14, 124, 65, 67
Unit-7 File system
File naming
Descriptive file names are an important part of organizing, sharing, and keeping track of data files.
Develop a naming convention based on elements that are important to the project.
File naming best practices:
Files should be named consistently
File names should be short but descriptive (<25 characters) (Briney, 2015)
Avoid special characters or spaces in a file name
Use capitals and underscores instead of periods or spaces or slashes
File Structure
A file can be structured as:
• Byte sequence
• Record sequence
• Tree
In a byte sequence, a file is an unstructured sequence of bytes. The OS sees only bytes; any meaning
must be supplied by user-level programs.
In a record sequence, a file is a sequence of fixed-length records, each with the same internal structure.
In a tree, a file consists of a tree of records, possibly of varying length, each containing a key field at a
fixed position in the record. The key field is used for sorting the tree.
File Type
Many operating systems support several types of files.
Regular file: Contains user information. These may hold text, databases or executable programs. The
user can apply various operations to such files, such as adding to, modifying, deleting from, or
removing the entire file.
Directory file: System file for maintaining the structure of the file system.
Character special file: Related to I/O; used to model serial I/O devices such as terminals, printers
and networks.
Block special file: Used to model disks.
File Access
File access mechanism refers to the manner in which the records of a file may be accessed. The
information in a file can be accessed in several ways. The most commonly used methods are:
• Sequential access
• Direct access
Sequential access: In sequential access, the records are accessed in some sequence, i.e., the
information in the file is processed in order, one record after the other. This access method is the
most primitive one.
Direct access: Files whose bytes or records can be read in any order. This method is based on the disk
model of a file, since disks allow random access to any block. It is used for immediate access to large
amounts of information: when a query concerning a particular subject arrives, we compute which block
contains the answer and then read that block directly to provide the desired information.
File Attributes
A file has a name and data. Moreover, it also stores meta-information such as the file creation date and
time, current size, last modified date, etc. All this information is called the attributes of a file.
Here, are some important File attributes used in OS:
Name: It is the only information stored in a human-readable form.
Identifier: Every file is identified by a unique tag number within a file system known as an identifier.
Location: Points to file location on device.
Type: This attribute is required for systems that support various types of files.
Size: Attribute used to display the current file size.
Protection: This attribute assigns and controls the access rights of reading, writing, and executing the
file.
Time, date and security: It is used for protection, security, and also used for monitoring
Directory Structure
A directory is a node containing information about files. Both the directory structure and the files
reside on disk. Directories can have different structures.
Path name
Absolute path name
The path from the root directory to the file. It always starts at the root directory and is unique, e.g. /usr/bin/
Relative path name
Used in conjunction with the concept of the "current working directory". It is specified relative to the
current directory; it is more convenient than the absolute form and achieves the same effect.
Unit-9
Distributed Operating system
A distributed system is a collection of autonomous computer systems capable of communicating with
each other; that is, a collection of loosely coupled processors interconnected by a communication
network. Processors are also called nodes, computers, machines, hosts, etc. Generally, one host at one
site, the server, has a resource that another host at another site, the client (or user), would like to use.
In this system the processors do not share memory or a clock, but each processor has its own local
memory.
Reasons
Resource sharing – a user at one site may be able to use the resources available at another.
Computation speedup – a distributed system allows us to distribute a computation among the various
sites and run its parts concurrently.
Reliability – detect and recover from site failure, transfer functions, reintegrate the failed site.
Communication – message passing, i.e., processes at different sites can communicate with each other
to exchange information. A user may initiate a file transfer or communicate with another user via
electronic mail.
2. Message Passing
The message passing model allows multiple processes to read and write data to a message queue
without being directly connected to each other. Messages are stored in the queue until their recipients
retrieve them. Message passing is the basis of most inter-process communication in distributed systems.
Clock Synchronization
In a distributed system the internal clocks of the several computers may differ from each other. Even
when initially set accurately, real clocks will drift apart over time because they count time at slightly
different rates. Clock synchronization is the mechanism for synchronizing all the computers in a
distributed system: it is the process of setting all the cooperating systems of the distributed network to
the same logical or physical clock.
Page Replacement Algorithm
FIFO (First In First Out)
#include<stdio.h>
#include<conio.h>
void main()
{
    int i,j,n,page[50],frameno,frame[10],move=0,flag,count=0;
    float rate;
    printf("Enter the number of pages\n");
    scanf("%d",&n);
    printf("Enter the page reference numbers\n");
    for(i=0;i<n;i++)
        scanf("%d",&page[i]);
    printf("Enter the number of frames\n");
    scanf("%d",&frameno);
    for(i=0;i<frameno;i++)
        frame[i]=-1;
    printf("Page reference string\tFrames\n");
    for(i=0;i<n;i++)
    {
        printf("%d\t\t\t",page[i]);
        flag=0;
        for(j=0;j<frameno;j++)
        {
            if(page[i]==frame[j])
            {
                flag=1;
                printf("No replacement\n");
                break;
            }
        }
        if(flag==0)
        {
            frame[move]=page[i];
            move=(move+1)%frameno;
            count++;
            for(j=0;j<frameno;j++)
                printf("%d\t",frame[j]);
            printf("\n");
        }
    }
    rate=(float)count/(float)n;
    printf("Number of page faults is %d\n",count);
    printf("Fault rate is %f\n",rate);
}

Disk Scheduling
FCFS algorithm
#include<stdio.h>
#include<math.h>
#include<conio.h>
void main()
{
    int i,n,m,sum=0,h;
    printf("Enter the size of disk\n");
    scanf("%d",&m);
    printf("Enter number of requests\n");
    scanf("%d",&n);
    printf("Enter the requests\n");
    // creating an array of size n
    int a[20];
    for(i=0;i<n;i++)
    {
        scanf("%d",&a[i]);
    }
    for(i=0;i<n;i++)
    {
        if(a[i]>m)
        {
            printf("Error, Unknown position %d\n",a[i]);
        }
    }
    printf("Enter the head position\n");
    scanf("%d",&h);
    // head will be at h at the starting
    int temp=h;
    printf("%d",temp);
    for(i=0;i<n;i++){
        printf("-> %d ",a[i]);
        // calculating the difference for the head movement
        sum+=abs(a[i]-temp);
        // head is now at the next I/O request
        temp=a[i];
    }
    printf("\n");
    printf("Total head movements = %d\n",sum);
    getch();
}

Disk scheduling
SCAN algorithm
#include<stdio.h>
#include<math.h>
#include<conio.h>
int main()
{
    int queue[20],n,head,i,j,seek=0,max,diff,temp,queue1[20],queue2[20],temp1=0,temp2=0;
    float avg;
    printf("Enter the max range of disk\n");
    scanf("%d",&max);
    printf("Enter the initial head position\n");
    scanf("%d",&head);
    printf("Enter the size of queue request\n");
    scanf("%d",&n);
    printf("Enter the queue of disk positions to be read\n");
    for(i=1;i<=n;i++)
    {
        scanf("%d",&temp);
        if(temp>=head)
        {
            queue1[temp1]=temp;
            temp1++;
        }
        else
        {
            queue2[temp2]=temp;
            temp2++;
        }
    }
    // sort queue1 (requests above the head) in ascending order
    for(i=0;i<temp1-1;i++)
    {
        for(j=i+1;j<temp1;j++)
        {
            if(queue1[i]>queue1[j])
            {
                temp=queue1[i];
                queue1[i]=queue1[j];
                queue1[j]=temp;
            }
        }
    }
    // sort queue2 (requests below the head) in descending order
    for(i=0;i<temp2-1;i++)
    {
        for(j=i+1;j<temp2;j++)
        {
            if(queue2[i]<queue2[j])
            {
                temp=queue2[i];
                queue2[i]=queue2[j];
                queue2[j]=temp;
            }
        }
    }
    // build the service order: head, ascending upper half, disk edge,
    // descending lower half, then cylinder 0
    for(i=1,j=0;j<temp1;i++,j++)
        queue[i]=queue1[j];
    queue[i]=max;
    for(i=temp1+2,j=0;j<temp2;i++,j++)
        queue[i]=queue2[j];
    queue[i]=0;
    queue[0]=head;
    for(j=0;j<=n+1;j++)
    {
        diff=abs(queue[j+1]-queue[j]);
        seek+=diff;
        printf("Disk head moves from %d to %d with seek %d\n",queue[j],queue[j+1],diff);
    }
    printf("Total seek time is %d\n",seek);
    avg=seek/(float)n;
    printf("Average seek time is %f\n",avg);
    getch();
    return 0;
}