Operating System
1) Serial Processing:
A serial processing operating system executes all instructions in sequence: the instructions given by the user are executed in FIFO (First In, First Out) order. Instructions entered first are executed first, and instructions entered later are executed later. A program counter keeps track of which instruction to execute next.
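The FIFO order described above can be sketched with a simple queue; the instruction names here are made up for illustration.

```python
from collections import deque

# Minimal sketch of serial (FIFO) execution: instructions run strictly
# in the order they were entered, like a program counter stepping ahead.
instructions = deque(["LOAD A", "ADD B", "STORE C"])
executed = []
while instructions:
    executed.append(instructions.popleft())  # first in, first out
print(executed)  # ['LOAD A', 'ADD B', 'STORE C']
```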
2) Batch Processing:
Batch processing is similar to serial processing, but in batch processing, jobs of a similar type are first prepared and stored on cards, and the cards are then submitted to the system for processing. The system performs the operations on the instructions one by one, and the user cannot supply any input while a batch runs; the operating system increments its program counter to execute the next instruction. The main problem is that the jobs prepared for execution must all be of the same type, and a job that requires any kind of user input cannot be handled.
3) Multi-Programming:
In multiprogramming, we can execute multiple programs on the system at the same time, and the CPU never sits idle, because whenever one program waits, another can run. Multiprogramming operating systems do not use cards, because processes are entered on the spot by the user. The operating system also handles allocation and de-allocation of memory, providing memory space to all running and all waiting processes, so there must be proper management of all the running jobs.
4) Real Time System:
In real-time systems, the response time is fixed in advance: the time to display results after processing is bounded by the processor or CPU. Real-time systems are used in places where a fast, timely response is required, for example in reservation systems, where the CPU must act on a request within the fixed time. There are two types of real-time system:
1) Hard Real Time System: the deadline is fixed and no moment of the processing time can be changed; the CPU must process the data as it is entered.
2) Soft Real Time System: some deadlines can slip; after a command is given, the CPU may perform the operation slightly later (for example, after a few microseconds).
5) Distributed System:
Distributed means that data is stored and processed at multiple locations: data is stored on multiple computers placed at different locations, and in a network these computers are connected to each other. If we want to take some data from another computer, we use the distributed processing system, and we can also move data from our location to another location. Data is shared between many users, and the input and output devices can also be accessed by multiple users.
6) Multiprocessing: Generally a computer has a single processor, that is, just one CPU for processing instructions, and running multiple jobs on it slows processing down. To increase processing speed we use multiprocessing: two or more CPUs in a single system. If one CPU fails, another CPU can take over its work and provide backup. With multiprocessing we can execute many jobs at a time, because the operations are divided among the CPUs. If the first CPU completes its work before the second, the remaining work of the second CPU is divided between the first and the second.
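Dividing operations among CPUs, as described above, can be sketched with a small process pool; the worker function, process count, and inputs are all illustrative.

```python
from multiprocessing import Pool

# Sketch of multiprocessing: the work (squaring each number) is divided
# among two worker processes instead of running on one CPU.
def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        results = pool.map(square, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]
```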
Single-User, Single-Tasking
This type of operating system only has to deal with one person at a time, running one user application at a time.
An example of this kind of operating system would be found on a mobile phone: there can only be one user using the mobile, and that person is only using one of its applications at a time.
Single-User, Multi-Tasking
The operating system is designed mainly with a single user in mind, but it can deal with many applications running at the same time. For example, you might be writing an essay while searching the internet, downloading a video file and listening to a piece of music.
Windows
Linux
Mac OS X
The difference compared to the single-user, single-tasking operating system is that it must now handle many different applications all running at the same time.
The memory available is also very different, for example it is quite normal to have Gigabytes of
RAM available on a personal computer which is what allows so many applications to run.
Multi-User, Multi-Tasking
This kind of operating system can be found on Mainframe and Supercomputers.
They are highly sophisticated and are designed to handle many people running their
programmes on the computer at the same time.
Examples of this kind of operating system include various versions of UNIX, Linux, IBM's z/OS,
OS/390, MVS and VM.
When a program is being executed in memory, this is called a 'process'. Many people may be using the same process at the same time; each person is then running a 'thread' of execution within the process.
1) Process Management
2) Main-Memory Management
3) File Management
The five major activities of an operating system with regard to file management are:
4) I/O-System Management
5) Secondary-Storage Management
The three major activities of an operating system in regard to secondary storage management are:
6) Networking
7) Protection System
Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.
Controlling access to the system
Program execution
I/O operations
Communication
Error Detection
Resource Allocation
Protection
Program execution
Following are the major activities of an operating system with respect to program execution.
I/O Operation
The operating system manages the communication between user programs and device drivers. Following are the major activities of an operating system with respect to I/O operation.
An I/O operation means a read or write operation with any file or any specific I/O device.
The operating system provides access to the required I/O device when required.
Following are the major activities of an operating system with respect to file management.
The operating system gives programs permission to operate on files.
Communication
Following are the major activities of an operating system with respect to communication.
Error handling
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the memory hardware. Following are the major activities of an operating system with respect to error handling.
Resource Management
Following are the major activities of an operating system with respect to resource management.
Protection
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources defined by a computer system. Following are the major activities of an operating system with respect to protection.
The OS ensures that external I/O devices are protected from invalid access attempts.
Process
A process is a program in execution. The execution of a process must progress in a sequential fashion. A process is defined as an entity which represents the basic unit of work to be implemented in the system.
Process State
As a process executes, it changes state. A process may be in one of the following states: new, ready, running, waiting, or terminated.
Process scheduler
The process scheduler is the part of the operating system that decides which process runs at a given point in time. It usually has the ability to pause a running process, move it to the back of the ready queue and start a new process; such a scheduler is known as a preemptive scheduler, otherwise it is a cooperative scheduler.
On some systems, the long-term scheduler may be minimal or absent; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short-term scheduler is faster than the long-term scheduler.
A running process may become suspended if it makes an I/O request. Suspended processes cannot make any progress towards completion, so to remove such a process from memory and make space for another, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Waiting time – the amount of time a process has been waiting in the ready queue
Response time – the amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)
Scheduling algorithms
1) First Come First Serve (FCFS)
Jobs are executed in the order they arrive. Waiting time of each process (start time - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
2) Shortest Job First (SJF)
The processor should know in advance how much time each process will take. Waiting times:
P0: 3 - 0 = 3
P1: 0 - 0 = 0
P2: 16 - 2 = 14
P3: 8 - 3 = 5
3) Priority Scheduling
Processes with the same priority are executed on a first come, first served basis. Priority can be decided based on memory requirements, time requirements or any other resource requirement. Waiting times:
P0: 9 - 0 = 9
P1: 6 - 1 = 5
P2: 14 - 2 = 12
P3: 0 - 0 = 0
4) Round Robin
Each process is executed for a given time period; once that period elapses, the process is preempted and another process executes for its time period. Context switching is used to save the states of preempted processes. Waiting times:
P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P3: (9 - 3) + (17 - 12) = 11
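The first waiting-time table above follows arrival order, so it can be reproduced with a short first-come-first-served sketch. The arrival times (0, 1, 2, 3) and burst times (5, 3, 8, 6) are assumed values chosen so the arithmetic matches the table; the source does not list them.

```python
# First-come-first-served waiting times: each job waits from its arrival
# until the CPU finishes all earlier jobs (waiting = start - arrival).
def fcfs_waiting_times(arrivals, bursts):
    clock, waits = 0, []
    for arrival, burst in zip(arrivals, bursts):
        clock = max(clock, arrival)   # CPU may idle until the job arrives
        waits.append(clock - arrival)
        clock += burst
    return waits

print(fcfs_waiting_times([0, 1, 2, 3], [5, 3, 8, 6]))  # [0, 4, 6, 13]
```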
Why Threads?
Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a print
server.
2. Because threads can share common data, they do not need to use
interprocess communication.
3. By their very nature, threads can take advantage of multiprocessors.
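Reason 2 above can be sketched in Python: two threads update one shared counter directly, with a lock keeping each update atomic, and no interprocess communication is involved. All names here are illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Each thread updates the same shared counter directly."""
    global counter
    for _ in range(increments):
        with lock:          # keep the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```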
Levels of Threads
1) User-Level Threads
2) Kernel-Level Threads
3) System-Level Threads
User-Level Threads
User-level threads are implemented in user-level libraries rather than via system calls, so thread switching does not need to call into the operating system or cause an interrupt to the kernel.
Advantages:
Thread switching is fast, since it does not require kernel-mode privileges.
User-level threads can run on any operating system.
Disadvantages:
If one thread performs a blocking system call, the entire process is blocked.
User-level threads cannot take advantage of multiprocessing.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads; no runtime system is needed. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system, and it also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.
Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
Kernel-level threads are slow and inefficient; thread operations can be hundreds of times slower than user-level thread operations, since the kernel must manage and schedule threads as well as processes.
Process Synchronization
Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions. Process synchronization is required when one process must wait for another to complete some operation before proceeding. It was introduced to handle problems that arise when multiple processes execute concurrently. Some of the problems are discussed below.
1) Critical Section
A Critical Section is a code segment that accesses shared variables and has to be executed as an
atomic action. It means that in a group of cooperating processes, at a given point of time, only
one process must be executing its critical section. If any other process also wants to execute its
critical section, it must wait until the first one finishes.
The general idea is that in a number of cooperating processes, each has a critical
section of code, with the following conditions and terminologies:
Only one process in the group can be allowed to execute in their critical
section at any one time.
The code preceding the critical section, and which controls access to the
critical section, is termed the entry section. It acts like a carefully controlled
locking door.
The code following the critical section is termed the exit section. It generally
releases the lock on someone else's door, or at least lets the world know that
they are no longer in their critical section.
The rest of the code not included in either the critical section or the entry or
exit sections is termed the remainder section.
General structure of a typical process Pi
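Since the figure for this structure is not reproduced here, the following is a hedged Python sketch of the entry, critical, exit, and remainder sections; the function and variable names are illustrative, with a lock playing the role of the entry/exit sections.

```python
import threading

lock = threading.Lock()   # controls access to the critical section
log = []                  # shared data touched only inside the section

def process(i, rounds=2):
    for _ in range(rounds):
        lock.acquire()    # entry section: gain permission to enter
        log.append(i)     # critical section: access shared variables
        lock.release()    # exit section: release the lock for others
        pass              # remainder section: everything else

threads = [threading.Thread(target=process, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # 4
```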
A solution to the critical section problem must satisfy the following three
conditions:
1. Mutual Exclusion - Only one process at a time can be executing in their
critical section.
2. Progress - If no process is currently executing in their critical section,
and one or more processes want to execute their critical section,
processes cannot be blocked forever waiting to get into their critical
sections.
3. Bounded Waiting - There exists a limit on how many other processes
can enter their critical sections after a process requests entry into
its critical section and before that request is granted.
2) Semaphores
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes, using the value of a simple integer variable to synchronize the progress of interacting processes. This integer variable is called a semaphore. It is basically a synchronizing tool that is accessed only through two standard atomic operations, wait and signal, designated by P() and V() respectively.
Properties of Semaphores
1. Simple
5. Can permit multiple processes into the critical section at once, if desirable.
Types of Semaphores
1. Binary Semaphore
It is a special form of semaphore used for implementing mutual exclusion, hence it is often
called Mutex. A binary semaphore is initialized to 1 and only takes the value 0 and 1 during
execution of a program.
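A binary semaphore can be sketched with Python's threading.Semaphore initialized to 1: acquire plays the role of wait/P() and release the role of signal/V(). The counters below are illustrative instrumentation showing that at most one thread is ever inside the critical section.

```python
import threading

sem = threading.Semaphore(1)   # initialized to 1 -> binary semaphore (mutex)
in_critical = 0
max_seen = 0

def worker():
    global in_critical, max_seen
    sem.acquire()              # wait / P(): decrement, block if already 0
    in_critical += 1
    max_seen = max(max_seen, in_critical)
    in_critical -= 1
    sem.release()              # signal / V(): increment, wake a waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_seen)  # 1: mutual exclusion held
```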
2. Counting Semaphores
A counting semaphore can take any non-negative integer value. It is used when a resource has multiple instances, and is initialized to the number of instances available.
Limitations of Semaphores
Deadlock
A deadlock is a condition that occurs when two processes are each waiting for the other to complete before proceeding; the result is that both processes hang. Deadlocks occur most commonly in multitasking and client/server environments. Ideally, the deadlocked programs, or the operating system, should resolve the deadlock, but this doesn't always happen.
A deadlock is also called a deadly embrace.
Solutions to deadlock
There are several ways to address the problem of deadlock in an operating system.
1. Ignore deadlock
The text refers to this as the Ostrich Algorithm: just hope that deadlock doesn't happen. If deadlock does occur, it may be necessary to bring the system down, or at least manually kill a number of processes, but in most situations deadlock is rare enough that this is acceptable.
2. Deadlock detection and recovery
Let deadlock occur, detect it, and then recover, for example by aborting one of the deadlocked processes or preempting its resources.
3. Deadlock avoidance
This works only if the system knows in advance what requests for resources a process will make, which is an unrealistic assumption. The text describes the banker's algorithm but then points out that it is essentially impossible to implement because of this assumption.
4. Deadlock prevention
Deadlock can be prevented by negating one of the four necessary conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.
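One common prevention tactic is breaking circular wait by imposing a global lock order: if every thread that needs both resources acquires them in the same order, no cycle of waiting can form. The sketch below is illustrative (the broken version would have some threads acquire lock_b before lock_a).

```python
import threading

# Two shared resources, each guarded by its own lock.
lock_a = threading.Lock()
lock_b = threading.Lock()

def use_both(i):
    # Always acquire in the fixed order (lock_a, then lock_b),
    # never the reverse, so circular wait is impossible.
    with lock_a:
        with lock_b:
            pass  # work with both resources here

threads = [threading.Thread(target=use_both, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all threads finished without deadlock")
```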
Few Terms:
1) Thrashing
When referring to a computer, thrashing (or disk thrashing) describes the hard drive being overworked by moving information between system memory and virtual memory excessively. Thrashing usually occurs when the system does not have enough memory, the system swap file is not properly configured, or too much is running at the same time and system resources are low.
When thrashing occurs you will notice the hard drive constantly working and a decrease in system performance. Thrashing is bad for the hard drive because of the amount of work it has to do, and if left unfixed it can cause early hard drive failure.
To resolve hard drive thrashing, a user can do any of the below.
1. Increase the amount of RAM in the computer.
2. Decrease the number of programs being run on the computer.
3. Adjust the size of the swap file.
2) Page Fault
An interrupt that occurs when a program requests data that is not currently in real memory.
The interrupt triggers the operating system to fetch the data from a virtual memory and load it
into RAM.
An invalid page fault or page fault error occurs when the operating system cannot find the data
in virtual memory. This usually happens when the virtual memory area, or the table that maps
virtual addresses to real addresses, becomes corrupt.
3) Paging
Paging is a memory management technique in which memory is divided into fixed-size pages. Paging is used for faster access to data: when a program needs a page, it is available in main memory, as the OS copies a certain number of pages from the storage device into main memory. Paging allows the physical address space of a process to be noncontiguous.
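With fixed-size pages, a logical address splits into a page number and an offset within the page. The sketch below assumes a 4 KB page size, which is a common but here assumed value.

```python
# Split a logical address into (page number, offset) for 4 KB pages.
PAGE_SIZE = 4096  # assumed page size in bytes

def split_address(addr):
    return addr // PAGE_SIZE, addr % PAGE_SIZE

# Address 10000 lands in page 2 at offset 1808 (10000 - 2*4096).
print(split_address(10000))  # (2, 1808)
```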
4) Segmentation
Segmentation is a memory management technique in which memory is divided into variable-sized chunks that can be allocated to processes. Each chunk is called a segment. A table stores the information about all such segments; on x86 systems this is the Global Descriptor Table (GDT), and a GDT entry is called a global descriptor.
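A segment-table lookup can be sketched as base-plus-offset translation with a limit check; the segment numbers and their base/limit values below are made up for illustration.

```python
# Toy segment table: segment number -> (base address, limit).
segments = {0: (1000, 400), 1: (5000, 200)}

def translate(segment, offset):
    """Translate a logical (segment, offset) pair to a physical address."""
    base, limit = segments[segment]
    if offset >= limit:
        raise MemoryError("offset exceeds segment limit")
    return base + offset

print(translate(0, 100))  # physical address 1100
```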
5) Fragmentation
Fragmentation refers to the condition of a disk in which files are divided into pieces scattered around the disk. Fragmentation occurs naturally when you use a disk frequently, creating, deleting, and modifying files; at some point, the operating system needs to store parts of a file in noncontiguous clusters. This is entirely invisible to users, but it can slow down the speed at which data is accessed, because the disk drive must search through different parts of the disk to put together a single file.
6) Semaphore
In Unix systems, semaphores are a technique for coordinating or synchronizing activities in
which multiple processes compete for the same operating system resources.
OR
A semaphore is a variable or abstract data type that is used for controlling access, by
multiple processes, to a common resource in a parallel programming or a multi user
environment.
7) Starvation
Starvation is a resource management problem where a process does not get the resources it
needs for a long time because the resources are being allocated to other processes.
The solution to starvation is to include the process of Aging.
8) Aging
Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time, so that they are eventually served and do not starve.
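Aging gradually raises the priority of long-waiting processes so none of them starves. A minimal sketch, assuming the common convention that a lower priority number means higher priority:

```python
# Each scheduling tick, every still-waiting process gets a small boost
# (its priority number drops), so a long waiter eventually runs first.
def age(priorities, boost=1):
    return {pid: prio - boost for pid, prio in priorities.items()}

waiting = {"P1": 5, "P2": 9}
print(age(waiting))  # {'P1': 4, 'P2': 8}
```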
9) DMA
Stands for "Direct Memory Access." DMA is a method of transferring data from the
computer's RAM to another part of the computer without processing it using the CPU.
While most data that is input or output from your computer is processed by the CPU,
some data does not require processing, or can be processed by another device. In these
situations, DMA can save processing time and is a more efficient way to move data from
the computer's memory to other devices.
10) Process
A process is an instance of a program running in a computer. In Unix and some other operating systems, a process is started when a program is initiated.
A process can initiate a subprocess, which is called a child process (and the initiating process is sometimes referred to as its parent).
Processes can exchange information or synchronize their operation through several methods of interprocess communication (IPC).
11)Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a
backing store and then brought back into memory for continued execution.
The major time-consuming part of swapping is transfer time; the total transfer time is directly proportional to the amount of memory swapped.
12) Critical Region
A critical region is a simple mechanism that prevents multiple threads from accessing, at the same time, code protected by the same critical region. The code fragments could be different, and in completely different modules, but as long as the critical region is the same, no two threads should call the protected code at the same time.